Handbook of College Reading and Study Strategy Research
The most comprehensive and up-to-date source available for college reading and study strategy practitioners and administrators, the Third Edition of the Handbook of College Reading and Study Strategy Research reflects and responds to changing demographics as well as politics and policy concerns in the field since the publication of the previous edition. In this thorough and systematic examination of theory, research, and practice, the Handbook offers information to help college reading teachers make better instructional decisions, justification for programmatic implementations for administrators, and a complete compendium of both theory and practice to better prepare graduate students to understand the parameters and issues of this field. The Handbook is an essential resource for professionals, researchers, and students as they continue to study, research, learn, and share more about college reading and study strategies. Addressing current and emerging theories of knowledge, subjects, and trends impacting the field, the Third Edition features new topics, such as disciplinary literacy, social media, and gaming theory.

Rona F. Flippo is Professor of Education at the University of Massachusetts Boston, College of Education and Human Development, USA.

Thomas W. Bean is Professor of Literacy/Reading and the Rosanne Keeley Norris Endowed Chair at Old Dominion University, USA.
Handbook of College Reading and Study Strategy Research Third Edition
Edited by Rona F. Flippo and Thomas W. Bean
Third edition published 2018
by Routledge
711 Third Avenue, New York, NY 10017
and by Routledge
2 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN

Routledge is an imprint of the Taylor & Francis Group, an informa business

© 2018 Taylor & Francis

The right of Rona F. Flippo and Thomas W. Bean to be identified as the authors of the editorial material, and of the authors for their individual chapters, has been asserted in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

First edition published by LEA 2000
Second edition published by Routledge 2009

Library of Congress Cataloging-in-Publication Data
A catalog record has been requested for this book

ISBN: 978-1-138-64267-6 (hbk)
ISBN: 978-1-138-64268-3 (pbk)
ISBN: 978-1-315-62981-0 (ebk)

Typeset in Bembo by codeMantra
Contents
Foreword (Norman A. Stahl) viii
Preface xi
Acknowledgments xvi
Contributors xvii

Part I: Framework (Eric J. Paulson) 1
1 History (Norman A. Stahl and James R. King) 3
2 College Reading (Eric J. Paulson and Jodi Patrick Holschuh) 27
3 Policy Issues (Tara L. Parker) 42
4 Student Diversity (Theodore S. Ransaw and Brian J. Boggs) 61
5 Social Media (Barbara J. Guzzetti and Leslie M. Foley) 74

Part II: Reading Strategies (Sonya L. Armstrong) 87
6 Disciplinary Reading (Thomas W. Bean, Kristen Gregory, and Judith Dunkerly-Bean) 89
7 Vocabulary (Michelle Andersen Francis and Michele L. Simpson) 98
8 Comprehension (Jodi Patrick Holschuh and Jodi P. Lampi) 118
9 Reading and Writing (Sonya L. Armstrong, Jeanine L. Williams, and Norman A. Stahl) 143
10 Gaming and College Reading (Janna Jackson Kellinger) 168

Part III: Study Skills and Strategies (Dolores Perin and Kristen Gregory) 179
11 Academic Preparedness (Dolores Perin) 181
12 Strategic Study-Reading (Patricia I. Mulcahy-Ernt and David C. Caverly) 191
13 Linguistically Diverse Students (Christa de Kleine and Rachele Lawton) 215
14 Study and Learning Strategies (Claire Ellen Weinstein and Taylor W. Acee) 227
15 Test Preparation and Test Taking (Rona F. Flippo, Victoria Appatova, and David M. Wark) 241

Part IV: Programs and Assessment (David R. Arendale) 279
16 Bridge Programs (David R. Arendale and Nue Lor Lee) 281
17 Program Management (Karen S. Agee, Russ Hodges, and Amarilis M. Castillo) 293
18 Program Assessment (Jan Norton and Karen S. Agee) 315
19 Student Assessment (Tina Kafka) 326
20 Reading Tests (Rona F. Flippo, Sonya L. Armstrong, and Jeanne Shay Schumm) 340

Compendium: Commercially Available Reading Tests Reviewed 367
Afterword (Hunter R. Boylan) 381
Author Index 385
Subject Index 395
Foreword
Norman A. Stahl, Professor Emeritus, Literacy Education; Program Affiliate, Center for the Study of Language and Literacy, Northern Illinois University, DeKalb, IL, USA
With the issuance of this third edition of the Handbook of College Reading and Study Strategy Research, and across the past two editions (Flippo & Caverly, 2000, 2009), along with its predecessor volumes published by the International Reading Association (Flippo & Caverly, 1991a, 1991b), this work has achieved not only the status of a seminal volume, as it has always been known, but also that of a source that has stood the test of time to become one of the most influential works, if not the most important work, of the past century for the postsecondary literacy profession. Furthermore, it can be argued that the Handbook of College Reading and Study Strategy Research will continue to serve as the scholarly benchmark for authors and editors striving to present “state-of-the-art” publications, not only for the college reading and study strategy fields but also for their kindred specializations in developmental education and learning assistance.

While the use of the label “Handbook” has created a cottage industry for publisher upon publisher serving the big tent of educational research and beyond, the Handbook of College Reading and Study Strategy Research retains and furthers its original goal of being the go-to source for in-depth coverage of theory, research, and praxis from a literature base that spans multiple disciplines and countless eras. Such a proposition crosses the past two decades, as two of our postsecondary literacy heroes, Martha Maxwell (2000) and Frank L. Christ (2009), penned similar position statements in the Forewords of the first and second editions, respectively, of this volume.

This edition of the Handbook of College Reading and Study Strategy Research continues to serve as a work that must be read by all postsecondary reading specialists who deliver coursework in college reading or learning strategies, whether the instructional venue is a stand-alone course, a linked or corequisite course, an Integrated Reading and Writing (IRW) course, or a workshop.
As all theory is inherently practical, and impactful research, whether quantitative or qualitative, provides the foundation for best practice, those serving in instructional roles should be the first and primary audience for this work. Maxwell noted in the Foreword for the first edition of this volume that individuals teaching college reading and study strategy coursework were more likely than not to be trained in P-12 pedagogy rather than in the theory, research, and best practice for this field. Such is more than likely to continue to be the case as the third edition is being released. Moreover, new delivery systems, such as student success courses and IRW programs, draw upon faculty (whether tenure track or adjunct) with little or no training in the field. Hence, this Handbook continues to be the first and best source for the personal professional development of the instructional force. Furthermore, knowledge of and competency with the content within this text must serve as the de facto professional standards for those delivering instruction in the college reading and study strategy arena.
The second audience for the Handbook of College Reading and Study Strategy Research will be the cadre of graduate students at schools such as Northern Illinois University and Texas State University undertaking doctoral studies, or those at San Francisco State University and California State University, Fullerton undertaking master’s-level coursework specifically focused on the college reading and study strategy field. These are the individuals who will be the next generation of researchers, curriculum designers, and authors of chapters in future editions of this text. The Handbook, as it covers our past and our present, and hypothesizes about our future, must serve as the foundational text for many if not all of the courses our future leaders will encounter.

Beyond our colleagues in college reading and study strategy academic units and learning assistance programs are those individuals who have demonstrated tremendous impact on the field in the years since the second edition of the Handbook was released. These individuals are administrators, both in state and federal governmental units, and policy makers from private foundations, nonprofit organizations, and think tanks, as well as funded research centers, that have great impact on such agencies. Given the many technical papers that have influenced the field as of late, many of which never underwent peer review or eventual publication in impactful journals, one questions whether individuals from such groups actually read the excellent state-of-the-art literature summaries in the past editions of the Handbook. Perhaps so; more likely, perhaps not. Yet with or without such foundational knowledge of the field, they have greatly influenced how local administrations and trustees have looked at curriculum and instruction, and allocated resources for local programs.
That being said, for the future of the field, members of the profession must be fully cognizant of our literature base and then share the content found within this third edition with the aforementioned stakeholders, whether through traditional text, personal meetings, and professional development sessions or via technology as podcasts, blogs, wikis, etc. In an era when the call for college and career readiness becomes a mantra for virtually every secondary school in the country, the Handbook will serve as a most important resource as colleagues from high schools and postsecondary institutions develop national class Alliance Programs, such as that between Elgin Community College and its primary feeder schools. The content within this text provides an extensive literature base that can provide direction for healthy and productive discussions, leading to and supporting both partnerships and programs that cross the existing borders. More so than in past decades, the relationship between theory, research, and praxis from the secondary literacy movement and from the college literacy field must be viewed as a two-way street, without the historical but artificial roadblocks that do not reflect the developmental stage and personal progression of the traditional high school to college clientele.

Finally, this Handbook is for the scholars, both those who have contributed these chapters and those who author articles in the field’s journals, deliver papers and workshops at our conferences, and author the textbooks used by our students.
Just as it should be expected that our field’s scholars will be fully cognizant of the content in the various volumes of the seminal texts in literacy (e.g., the Handbook of Reading Research, Theoretical Models and Processes of Reading), so it must be expected that those viewed as the scholars of the college reading and learning field know of and draw from the theory, research, and practice covered across all three of the Handbook volumes and the two predecessor texts. A thorough knowledge of how the field, along with its varied topical areas, has evolved across our recent history is a scholarly imperative, not only for one’s own professional work but also in the training of future generations of scholars and practitioners. Furthermore, being aware of the topics within these texts provides one with an understanding of the ebb and flow of the topics, trends, and issues impacting the field, as we see with the introduction of topics such as disciplinary literacy, social media, and gaming theory in the newest edition of the Handbook.

Admittedly, this Handbook should not be considered bedtime reading. It requires careful reading and perhaps rereading of chapters. In some cases, it also requires the perusal of chapters found in earlier versions of the work. The editors and their team members have put together a source that
provides both breadth and depth of coverage of the field. It becomes the personal onus of members of the field to carefully read the various chapters and then ponder how the theory, research, and best practice, as covered in each chapter, can influence their professional endeavors as scholars or practitioners; offer them suggestions for curricular and instructional design and reform; lead them to undertake program evaluation; or assist them in interacting with our various stakeholders, both on and off campus. In an era of transition in the field, with changes in policy and practice, the Handbook serves as a primary touchstone.

The last statement of this Foreword comes in part from its inherent wisdom but also in a sense to honor one of our own. In the closing section of the Foreword of the first edition of the Handbook of College Reading and Study Strategy Research, Martha Maxwell (2000) stated, “In this ever-changing world of higher education, where new theories of knowledge and new technologies are emerging, this handbook helps keep educators abreast of the developments that have occurred and helps focus expectations on those changes yet to come” (p. x). There is no better summation of the potentials open to you from this text as both a reader and a professional serving in the college reading and study strategy field.
References

Christ, F. L. (2009). Foreword. In R. F. Flippo & D. C. Caverly (Eds.), Handbook of college reading and study strategy research (2nd ed., pp. vii–viii). New York, NY: Routledge/Taylor and Francis.
Flippo, R. F., & Caverly, D. C. (Eds.). (1991a). Teaching reading & study strategies at the college level. Newark, DE: International Reading Association.
Flippo, R. F., & Caverly, D. C. (Eds.). (1991b). College reading & study strategy programs. Newark, DE: International Reading Association.
Flippo, R. F., & Caverly, D. C. (Eds.). (2000). Handbook of college reading and study strategy research. Mahwah, NJ: Lawrence Erlbaum Associates.
Flippo, R. F., & Caverly, D. C. (Eds.). (2009). Handbook of college reading and study strategy research (2nd ed.). New York, NY: Routledge/Taylor and Francis.
Maxwell, M. (2000). Foreword. In R. F. Flippo & D. C. Caverly (Eds.), Handbook of college reading and study strategy research (pp. vii–xi). Mahwah, NJ: Lawrence Erlbaum Associates.
Preface
As we wrote the Preface for this new edition of the Handbook of College Reading and Study Strategy Research (third ed.), we could not help but look back and reflect on the issues, concerns, and projections indicated in the preface for the previous edition (published in 2009). It was interesting to note that at that time, the field of college reading and study strategy instruction was said to be in transition and was undergoing the effects of changing workforce needs, an increasingly diverse college population, the expansion of literacy requirements, and growing calls for accountability.

So, what has changed? The reality and conundrum is that the transition of the field and the issues and concerns within the field have continued to grow and evolve, leaving us with more of the same, plus the ever-growing pressures related to funding, accountability, and diversity needs. We predict, in fact, that this continuum will be ever present, and we will need to do what we have always done: cope, adjust, study, research, write, teach, and think of new ideas and procedures that will enable us to continue.

However, we do believe that there is at least one more thing that could be very helpful and enabling: What if we (who are concerned about college reading and study strategy research and issues) reach across the divide and work in tandem with those who are concerned with secondary reading and study strategy research and issues? In an article in the Journal of Adolescent & Adult Literacy (Flippo, 2011), published shortly after the second edition of this handbook came out, this idea was proposed. The emphasis was that if we take an interest in, observe, read the research of, and actually work with those who teach and research our students when they are in middle and high school (and maybe in elementary school as well), we will learn from each other—in fact, we are all often dealing with the same or at least similar issues and concerns, and together, we can learn and do more.
For example, the pressures and issues, including theoretical, empirical, content, and funding concerns regarding the Common Core State Standards (2010), go across all grade levels, including postsecondary (e.g., Hiebert & Mesmer, 2013; Smith, Appleman, & Wilhelm, 2014). With this in mind, long-standing users of these handbooks (first, second, and now third editions) will note that the newest chapter authors, and, in fact, one of the editors of this Handbook, who historically are thought of as secondary reading researchers, have written chapters in this Handbook at our invitation. They and all the authors bring a wealth of knowledge and understanding of the needs of our students and the issues we all are facing. We recommend that you consider exploring these and other partnerships across the not so invisible divide.

In addition, we believe that it is important to consider generational issues as we work with college students. For example, a recent book by San Diego State University psychology professor and widely published researcher Dr. Jean M. Twenge (2017) examines iGen students born after 1995 who spend significant amounts of time tethered to their smartphones. Based on a detailed analysis of large-scale survey data bolstered by interviews with iGen students, Professor Twenge charts a generation that is increasingly disconnected from human interaction and deliberation in the college classroom. She recommends acknowledging this reality and increasing the use of
interactive textbooks that are shorter than traditional tomes as well as using short video clips to stimulate engagement and discussion. Indeed, in a 2017 Chronicle of Higher Education survey of employers’ “wish lists” for college graduates, at the very top of their list, they noted communication skills, followed by technical and problem-solving skills.

Thus, as Editors of this new Handbook, we believe it is important for us to look at the news media as well as the literacy and educational research profession at large to see what these were publishing as we prepared this new edition. We believe that the Handbook’s readers and users will find this of mutual interest. First, let’s take a look at some of the more telling news headlines:

“US warns schools on assessment testing” (January 29, 2016)
“As city offers no-fee SATs, questions about preparation for low-income students” (Taylor, March 2, 2016)
“UMass Boston puts adjuncts on notice: Up to 400 may be let go as school struggles with budget deficit” (Krantz, June 3, 2016)
“In college, the 6-year plan will cost you” (Douglas-Gabriel, June 22, 2016)
“SAT subject tests losing favor among N.E. colleges: Three top schools make them optional” (Krantz, August 22, 2016)
“Colleges seek diversity ideal, but pick different paths to do it: Socioeconomics becomes factor in admissions” (Yee, August 6, 2017)
“Colleges move to close gender gap on science” (Korn, September 26, 2017)
“Colleges prepare to fight against GOP’s tax plan: Say students would bear the brunt of higher expenses” (Fernandes, November 4, 2017)

Likewise, the profession has published a number of articles and position papers that we should consider as well:

“Using high-stakes assessments for grade retention and graduation decisions” [Position statement] (International Reading Association, 2014)
“Just let the worst students go: A critical case analysis of public discourse about race, merit, and worth” (Zirkel & Pollack, 2016)
“Rights of postsecondary readers and learners” (Angus & Greenbaum, 2017)
“Improving admission of low-SES students at selective colleges: Results from an experimental simulation” (Bastedo & Bowman, 2017)
“Empowering first-year students” (Hollander, 2017)
“Toward equity and diversity in literacy research, policy, and practice: A critical, global approach” (Morrell, 2017)
“University-partnered new school designs: Fertile ground for research-practice partnerships” (Quartz et al., 2017)

Finally, we would be remiss if we did not look again at the classic work of Martha Maxwell (1995/1996), in which she reviews the “then” current research and insights concerning college reading. Maxwell, at the time, indicated that there is “increasing evidence that college developmental reading courses [did not meet] all of the goals that institutions expect of them” (p. 41).
We hope that users of this Handbook will find that in spite of the difficulties, demands, funding issues, social justice problems, and other factors that all of those currently in this field must work hard to overcome, we are making a difference, and we have good research to support it, but we must continue to do more to meet our goals and the goals of our college students.
Overview of the Volume

This new third edition is divided into four parts. Each has a Section Editor with expertise in the content of that particular section. In Part I: Framework, Section Editor Eric J. Paulson and
the chapter authors provide us with the broader context of College Reading and Study Strategy Research. Paulson suggests that this section helps us answer the question “What should we consider when considering college reading and study strategies?” The chapters in this section provide material for such consideration. In Chapter 1, History, Norman A. Stahl and James R. King review the history of the field and its scholarship to provide an awareness of where we came from and what has been done before. In Chapter 2, College Reading, Eric J. Paulson and Jodi Patrick Holschuh combine the foundational, theoretical, and instructional aspects of literacy instruction within postsecondary contexts. In Chapter 3, Policy Issues, Tara L. Parker shares important understandings concerning developmental education and the policies and practices at all levels that, in turn, influence it. In Chapter 4, Student Diversity, Theodore S. Ransaw and Brian J. Boggs provide the foundation for building more understanding of diverse student populations. Chapter 5, Social Media, authored by Barbara J. Guzzetti and Leslie M. Foley, discusses both literacy and social media, and how these can intersect in this digital age.

Next, in Part II: Reading Strategies, Section Editor Sonya L. Armstrong and the chapter authors focus on current thinking about approaches to enhance college reading instruction. In Chapter 6, Disciplinary Reading, Thomas W. Bean, Kristen Gregory, and Judith Dunkerly-Bean discuss the impact of a disciplinary literacies perspective on college reading and the possibilities of collaborations with faculty across campus to inform reading instruction in various disciplines. In Chapter 7, Vocabulary, Michelle Andersen Francis and Michele L. Simpson discuss vocabulary, an important component of college reading instruction. Jodi Patrick Holschuh and Jodi P. Lampi, authors of Chapter 8, Comprehension, discuss influences on comprehension and the need for the use of generative strategies that enhance both cognition and metacognition. In Chapter 9, Integrated Reading and Writing, Sonya L. Armstrong, Jeanine L. Williams, and Norman A. Stahl discuss the current reemergence of the Integrated Reading and Writing (IRW) movement. Next, in Chapter 10, Gaming and College Reading, author Janna Jackson Kellinger poses challenges to traditional understandings of “texts” and the new possibilities offered by gaming.

In Part III: Study Skills and Strategies, Section Editors Dolores Perin and Kristen Gregory and the chapter authors focus on topics concerning study skills and strategies in various contexts, with recommendations for practitioners and future research. In Chapter 11, Academic Preparedness, author Dolores Perin reviews concepts of academic readiness, with a focus on reading and writing skills. In Chapter 12, Strategic Study-Reading, by Patricia I. Mulcahy-Ernt and David C. Caverly, the focus is a new framework for understanding college students’ academic learning strategies from a sociocultural perspective. In Chapter 13, Linguistically Diverse Students, authors Christa de Kleine and Rachele Lawton review research on working with linguistically diverse students in college, a large and growing population. Next, Chapter 14, Study and Learning Strategies, by Claire Ellen Weinstein and Taylor W. Acee, discusses research on strategic and self-regulated learning to promote the success of college students. Then, in Chapter 15, Test Preparation and Test Taking, authors Rona F. Flippo, Victoria Appatova, and David Wark provide a historic as well as current review of research on test preparation and test performance, test-wiseness and test-taking skills, coaching to prepare students for tests, and test anxiety and approaches for dealing with it.

Then, in Part IV: Programs and Assessment, Section Editor David R. Arendale and the chapter authors provide programmatic approaches for reading and study strategies, the assessment of students and types of assessments, and the reading tests available and used by college programs. Arendale indicates that “(T)he chapters in this section are essential to the national conversation about who is ready for college, how to determine this, and what programs and services students need.” Chapter 16, Bridge Programs, authored by David R. Arendale and Nue Lor Lee, reviews academic skill and social programs to improve the transition into the college experience by a diverse student population. In Chapter 17, Program Management, Karen Agee, Russ Hodges, and Amarilis Castillo explore effective management of programmatic approaches to support student academic
success. Then, in Chapter 18, Program Assessment, authors Jan Norton and Karen Agee provide a continuation of this topic with an emphasis on the assessment of the programs. Next, in Chapter 19, Student Assessment, Tina Kafka identifies assessments to differentiate the learning environment for individual students. Finally, in Chapter 20, Reading Tests, the last chapter in this section and in the Handbook, authors Rona F. Flippo, Sonya L. Armstrong, and Jeanne Shay Schumm discuss and review the reading tests used with college students and the issues surrounding them. They conclude with a compendium of reviewed tests that could be used for test selection purposes. This includes the publisher information, components, weaknesses, and strengths of each reviewed test.

In conclusion, you will note that at the end of each chapter, rather than just including a reference list, we also call out suggested readings: an asterisk (*) in front of a reference indicates that the chapter authors particularly recommend it for further reading. We hope that you will find this helpful.

Rona F. Flippo
Thomas W. Bean
References and Suggested Readings

*Angus, K. B., & Greenbaum, J. (2017). Rights of postsecondary readers and learners. Oak Creek, WI: College Reading and Learning Association Board of Directors.
*Bastedo, M. N., & Bowman, N. A. (2017). Improving admission of low-SES students at selective colleges: Results from an experimental simulation. Educational Researcher, 46(2), 67–77.
Douglas-Gabriel, D. (2016, June 22). In college, the 6-year plan will cost you. The Boston Globe, p. C6. (From the Washington Post)
Fernandes, D. (2017, November 4). Colleges prepare to fight against GOP’s tax plan: Say students would bear the brunt of higher expenses. The Boston Globe, pp. 1, 10.
*Flippo, R. F. (2011). Featured commentary. Transcending the divide: Where college and secondary reading and study research coincide. Journal of Adolescent & Adult Literacy, 54(6), 396–401.
*Hiebert, E. H., & Mesmer, H. A. E. (2013). Upping the ante of text complexity in the Common Core State Standards: Examining its potential impact on young readers. Educational Researcher, 42(1), 44–51.
*Hollander, P. W. (2017, November). Empowering first-year students. NEA Higher Education Advocate, 35(5), 6–9.
*International Reading Association. (2014). Using high-stakes assessments for grade retention and graduation decisions [Position statement]. Newark, DE: Author.
Korn, M. (2017, September 26). Colleges move to close gender gap in science. The Wall Street Journal, p. A6.
Krantz, L. (2016, June 3). UMass Boston puts adjuncts on notice: Up to 400 may be let go as school struggles with budget deficit. The Boston Globe, pp. B1, B8.
Krantz, L. (2016, August 22). SAT subject tests losing favor among N.E. colleges: Three top schools make them optional. The Boston Globe, pp. A1, A7.
*Maxwell, M. (1995/1996). New insights about teaching college reading: A review of recent research. Journal of College Reading and Learning, 27(1), 34–42.
*Morrell, E. (2017). Toward equity and diversity in literacy research, policy, and practice: A critical, global approach. Journal of Literacy Research, 49(3), 454–463.
National Governors Association Center for Best Practices & Council of Chief State School Officers. (2010). Common Core State Standards for English language arts & literacy in history/social studies, science, and technical subjects. Washington, DC.
*Quartz et al. (2017). University-partnered new school designs: Fertile ground for research-practice partnerships. Educational Researcher, 46(3), 143–146.
Smith, M. W., Appleman, D., & Wilhelm, J. D. (2014). Uncommon core: Where the authors of the standards go wrong about instruction—and how you can get it right. Thousand Oaks, CA: Corwin Literacy.
Taylor, K. (2016, March 2). As city offers no-fee SATs, questions about preparation for low-income students. The New York Times, p. A23.
Twenge, J. M. (2017). iGen: Why today’s super-connected kids are growing up less rebellious, more tolerant, less happy—and completely unprepared for adulthood. New York, NY: Atria Books.
US warns schools on assessment testing. (2016, January 29). The Boston Globe, p. A2. (Associated Press)
*What graduates need to succeed: College and employers weigh-in. (2017, November 10). The Chronicle of Higher Education, p. A36.
Yee, V. (2017, August 6). Colleges seek diversity ideal, but pick different paths to it. Boston Sunday Globe, p. A14.
*Zirkel, S., & Pollack, T. M. (2016). “Just let the worst students go”: A critical case analysis of public discourse about race, merit, and worth. American Educational Research Journal, 53(6), 1522–1555.
Acknowledgments
This third edition of the Handbook of College Reading and Study Strategy Research was possible because of the work, dedication, cooperation, and contributions of our many colleagues who took active and important roles, and our publisher, Routledge/Taylor & Francis Group. Our work on this Handbook was enriched because of the many years of significant research in the field of college reading and study strategy instruction, as well as in content area reading, that our colleagues (some still with us and others who have passed on), and those who came before them, undertook and shared. Plus, we would be remiss if we did not call attention to and thank the various professional associations that for decades have supported and promoted research in reading and study strategies instruction and assessment, for the venues these associations have provided all of us over the years.

Therefore, our sincerest appreciation and thanks go out to Eric Paulson, Sonya L. Armstrong, Dolores Perin, and David R. Arendale, our Section Editors, and to the authors of all twenty chapters in this Handbook; to Norman Stahl, author of the Foreword, and to Hunter Boylan, author of the Afterword for this third edition; to Kristen Gregory for her major role in handling copyediting queries and page proofing; to Judith Dunkerly-Bean, for her expert consultation; to Naomi Silverman, our Acquisitions Editor, who recognized the importance of this Handbook to the field and supported our proposal for the inclusion of content area expertise, urging us (Rona and Tom) to do a new edition; to Karen Adler, our current Acquisitions Editor, for her continued support and encouragement; to Garr Cranney and Alton Raygor (Rona’s mentors in the field of college reading and study strategies); to Lyndon Searfoss and Harry Singer (Tom’s mentors in the field of content area reading instruction); and to the original National Reading Conference (now the Literacy Research Association), the International Reading Association (now the International Literacy Association), the Western College Reading Association (now the College Reading and Learning Association), the College Reading Association (now the Association of Literacy Educators and Researchers), the National Association for Developmental Education, the American Reading Forum, and the National Council of Teachers of English—all significant, both historically and theoretically, to our current understandings in the field of reading and study strategies at all levels.

Finally, thanks to all of the readers and users of the first and second editions who supported the need for and encouraged the development of this new Handbook of College Reading and Study Strategy Research (third ed.). We hope you like what we have done with it, and we hope to see you at the various conferences we all take part in. These affiliations, the networking we all do, and your continued work and research in the fields of college reading and study strategies and content area literacy are what keep this work alive, rich, and developing. Thank you all!

Rona and Tom
Contributors
Taylor W. Acee, Ph.D., Texas State University, San Marcos, TX
Karen S. Agee, Ph.D., University of Northern Iowa, Cedar Falls, IA
Victoria Appatova, Ph.D., University of Cincinnati Clermont College, Batavia, OH
David R. Arendale, Ph.D., University of Minnesota, Minneapolis, MN
Sonya L. Armstrong, Ed.D., Texas State University, San Marcos, TX
Hunter Boylan, Ph.D., Appalachian State University, Boone, NC
Thomas W. Bean, Ph.D., Old Dominion University, Norfolk, VA
Brian J. Boggs, Ph.D., Michigan State University, East Lansing, MI
Amarilis M. Castillo, M.A., Texas State University, San Marcos, TX
David C. Caverly, Ph.D., Texas State University, San Marcos, TX
Christa de Kleine, Ph.D., Notre Dame of Maryland University, Baltimore, MD
Judith M. Dunkerly-Bean, Ph.D., Old Dominion University, Norfolk, VA
Rona F. Flippo, Ed.D., University of Massachusetts Boston, Boston, MA
Leslie Foley, Ph.D., Grand Canyon University, Phoenix, AZ
Michelle Andersen Francis, Ph.D., West Valley College, Saratoga, CA
Kristen Gregory, Ph.D., Old Dominion University, Norfolk, VA
Barbara J. Guzzetti, Ph.D., Arizona State University, Tempe, AZ
Russ Hodges, Ed.D., Texas State University, San Marcos, TX
Jodi Patrick Holschuh, Ph.D., Texas State University, San Marcos, TX
Tina Kafka, Ed.D., Independent Consultant
Janna Jackson Kellinger, Ph.D., University of Massachusetts Boston, Boston, MA
James R. King, Ed.D., University of South Florida, Tampa, FL
Jodi P. Lampi, Ph.D., Northern Illinois University, DeKalb, IL
Rachele Lawton, Ph.D., Community College of Baltimore County, Baltimore, MD
Nue Lor Lee, M.A., University of Michigan, Ann Arbor, MI
Patricia I. Mulcahy-Ernt, Ph.D., University of Bridgeport, Bridgeport, CT
Jan Norton, M.A., University of Iowa, Iowa City, IA
Tara L. Parker, Ph.D., University of Massachusetts Boston, Boston, MA
Eric J. Paulson, Ph.D., Texas State University, San Marcos, TX
Dolores Perin, Ph.D., Teachers College, Columbia University, New York, NY
Theodore S. Ransaw, Ph.D., Michigan State University, East Lansing, MI
Jeanne Shay Schumm, Ph.D., University of Florida, Coral Gables, FL
Michele L. Simpson, Ed.D., University of Georgia, Athens, GA
Norman A. Stahl, Ph.D., Northern Illinois University, DeKalb, IL
David M. Wark, Ph.D., University of Minnesota, Minneapolis, MN
Claire Ellen Weinstein, Ph.D., The University of Texas at Austin, Austin, TX
Jeanine L. Williams, Ph.D., University of Maryland University College, Baltimore, MD
Part I
Framework
Eric J. Paulson
Texas State University
As I write this section introduction, I am reminded of the use of the word “framework” as a metaphor for a number of things – it can be the structure upon which a house is built, a lens through which political events are understood, the theoretical basis upon which a research study is planned. At its most basic level, a framework is something that allows a thing to be and to be understood. Answering even a seemingly straightforward question in our field – “what is reading,” for example – requires identification and application of a theoretical framework before the question can even be understood. That is, a framework helps us decide what we should consider when seeking to answer questions in our field – or what the questions even should be. In fact, in the context of this book, one could think of a framework as providing both the impetus and the material to aid in answering the question “What should we consider when considering college reading?” And the chapters in this section provide copious material for such consideration. In their chapter titled History, Norman A. Stahl and James R. King ensure that perspectives on college reading scholarship are not limited to the temporally local but include awareness of where we came from and what has been done before. In their chapter, College Reading, Eric J. Paulson and Jodi P. Holschuh weave together foundational, theoretical, and instructional domains of literacy instruction in postsecondary contexts. Tara L. Parker’s chapter, Policy Issues, provides important understandings about how developmental education is influenced by, and exists within, the policies and practices observed at local, regional, state, and other levels. In Theodore S. Ransaw and Brian J. Boggs’s chapter, Student Diversity, the authors provide information on a range of important aspects of diversity in postsecondary education. 
And finally, in their chapter titled Social Media, Barbara Guzzetti and Leslie Foley discuss the intersection of literacy and social media, and ways of understanding participatory literacy in a digital age. With such an impressive breadth and depth of chapters, I anticipate that this framework section provides an abundance of things to consider when considering college reading research.
1
History
Norman A. Stahl
Northern Illinois University
James R. King
University of South Florida
College reading has been an established field within reading research and pedagogy for over a century. In fact, according to Manzo (1983), college reading is both a generator of new ideas and a repository for considerable wisdom. Yet, to this day, college reading receives scant respect compared to other subfields of literacy. It is ironic then that many noteworthy scholars in reading research and pedagogy (see Israel & Monaghan, 2007; Robinson, 2002) wrote about college readers and/or college reading and study strategy instruction (e.g., Guy Buswell, William S. Gray, Nila B. Smith, Ruth Strang, Miles Tinker, George Spache, Francis Robinson) such that much of our historical, if not foundational, understandings of basic reading processes rest on research conducted with college readers. It is equally ironic that our professional associations (e.g., International Literacy Association (ILA), Literacy Research Association (LRA), Association of Literacy Educators and Researchers (ALER), and College Reading and Learning Association (CRLA)) were founded with major instigation from college reading professionals. Given this legacy, it remains a paradox why the specialization of college reading is an intellectual pariah, confined to the liminal spaces of the discipline of Reading/Literacy. In any quest for parity in the reading profession, the onus continues to be on current and future college reading professionals to learn of the field's contributions to reading research and pedagogy (Armstrong, 2012; Stahl, Boylan, Collins, DeMarais, & Maxwell, 1999). That being the case, the purpose of this chapter is to provide postsecondary reading specialists with opportunities to learn of the field's rich heritage. In addition, the chapter discusses one's responsibility to help the field of college reading grow in stature by undertaking historical work.
Resources for Historical Study of Literacy Instruction
The history of any field can be viewed through the many lenses provided by primary and secondary historical sources. Too often, reading educators have relied solely on Smith's American Reading Instruction (1934b, 1965, 1986, 2002). Such limited source selection is tunnel vision that raises two questions. The first question is "Does a distinct body of historical resources exist for the field of Reading?" The answer to this question is "yes." Important works on the history of literacy are increasingly available as both book-length texts and articles in impactful journals (Stahl & Hartman, 2011). The second question is "Does such a body of historical resources exist for the more specific area of college reading research and praxis?" As this third Handbook edition comes
to press, the answer continues to be only a qualified “yes” as this affirmation relies not only on the field of literacy but also on the allied fields of developmental education and learning assistance. In an earlier call to undertake historical research in college reading, Stahl, Hynd, and Henk (1986) proposed that three categories of historical materials were available for study. The first category included chronicles synthesizing numerous primary and secondary sources (e.g., Leedy, 1958). The second category was comprised of summaries or time lines that highlighted major events or trends in the field (e.g., Maxwell, 1979). The third category was made up of texts and monographs that had earned a place of historical importance in the field (e.g., Ahrendt, 1975). In reviewing the extant historically oriented sources, it was obvious that the literature was sparse. Furthermore, Stahl et al. (1986) suggested that this dearth of materials might explain why college reading specialists tended to overlook the field’s history when designing curricula, developing programs, writing texts, and conducting research. In retrospect, the lack of supportive literature may also be related to low prestige. Now, three decades after Stahl et al. (1986), it is useful to revisit the corpus of resources available to researchers and practitioners who are interested in the history of college reading and study strategy instruction. In reviewing these works, two of the categories (historical chronicles, and historical summaries and time lines) will be redeployed, along with a category from the first edition of this Handbook for historical writings that investigate specific topics (e.g., study strategies), specific historical eras, and organizational/institutional histories. Finally, in this current chapter, we discuss the methods of interpretive biography (Denzin, 1989), including oral histories, autobiographies, and biographies of leaders in the field. 
This organizational scheme reveals the field’s breadth of historical knowledge as well as its place within the larger field of literacy theory, research, and praxis.
Historical Chronicles
The first category of historical sources is comprised of doctoral dissertations drawing extensively on primary and secondary sources. In all but one case, the historical work was but one component in each dissertation, again indicating the lack of specific focus on historical accounts of literacy. Six of the studies (Bailey, 1982; Blake, 1953; Heron, 1989; Leedy, 1958; Shen, 2002; Straff, 1986) focus directly on college reading instruction. A seventh study (Brier, 1983) investigates academic preparedness for higher education. A seminal, historical work for the field is the dissertation undertaken by Leedy (1958). Through the extensive use of primary sources along with secondary sources (total n = 414), Leedy traced the role of reading, readers, reading materials, and reading/learning programs in American higher education from 1636 to 1958. From this massive undertaking, Leedy (1958) put forth two important conclusions. First, the college reading improvement programs circa 1958 were the result of a slow but orderly evolution in the recognition of the importance of reading's role in postsecondary education. Second, reading programs were implemented over the years because both students and representatives of the institutions recognized that ineffective reading and study skills created problems in academic achievement. Leedy's historical work is to college reading as American Reading Instruction (Smith, 2002) is to the overall field of Reading – not surprising as Nila B. Smith served on Leedy's dissertation committee. An analysis of Leedy's work is found in Stahl (1988). Four other dissertations provide major historical reviews or historical analyses of the literature in the field. Blake (1953) examined the historical, social, and educational forces that promoted the growth of college reading and study skills programs during the first 50 years of the 20th century.
Blake’s work was part of an analysis of the program at the University of Maryland, as augmented with a national survey of programs. Straff (1986) undertook a historical analysis of selected literature on college reading (n = 74 sources) to determine what research, theory, and praxis was covered from 1900 to 1980. The
intent of this inquiry was to provide a foundation for future program development. His overall findings were similar to Leedy's (1958): (1) College reading programs grew at a slow and deliberate pace over that 80-year period, and (2) this purposeful growth reflected disparate, local needs in contrast to a coordinated national movement. Straff also stated that the field had grown in both quantity and quality. He concluded that the literature had matured from simple acknowledgment of reading/study problems in higher education, to discussion of the implementation of programs, to research on the effectiveness of programs. Still, this literature review led Straff to believe that over the first eight decades of the 20th century, there was little credible research on program rationales, instructional objectives, student populations, curricula, staffing, reading behaviors, funding sources, and shifts in societal priorities, suggesting that there was little upon which to base recommendations for program development in college reading. Heron (1989) considered the historical context for then-current postsecondary reading requirements, the particular needs of at-risk college readers, and the instructional levels and approaches employed by 89 college reading programs. Her research analyzed resources dating from 1927, which she reviewed through the lens of Chall's developmental reading theory (Chall, 1983).
The study led to multiple conclusions, including that (1) the reading requirements in higher education had increased dramatically over the history of American higher education; (2) reading proficiency in college was dependent upon reading skills and strategies as well as domain-specific knowledge; (3) reading problems of college students spanned Chall’s developmental stages, and these deficiencies were compounded by lack of knowledge and language of the academic discourses; (4) programs could be categorized by Chall’s development levels; and (5) historically, lower-level programs emphasizing diagnosis and skills (Chall’s stages one–three) were decreasing in number, whereas higher-level programs emphasizing content strategies and critical reading (stages three and four) were increasing in number. Bridge programs, such as the developmental education model (stages one through four), were also increasing in number but more slowly than those designated as the higher-level programs. Heron also noted that published reports containing appropriate qualitative descriptions of instructional techniques as well as acceptable quantitative measures of the effectiveness of instructional methods were uncommon. Within this category of historical chronicles, we also include the dissertation undertaken by Bailey (1982). Bailey’s critical analysis summarized, classified, and evaluated 170 research studies from 31 different journals published between 1925 and 1980. While this work cannot be called a true historical study, it does provide an extensive annotated bibliography and is, therefore, an important reference source for the college reading field. Furthermore, researchers interested in reading rate, technology (precomputer), teaching methods, test-taking skills, note-taking, textbook study methods, listening, instructional materials, vocabulary, physical factors, comprehension, or combined methods may find Bailey’s categorical analysis of the research to be of value. 
Shen (2002) provided a historical survey of the field, beginning with our progenitors prior to 1900. She then traversed five eras, with attention directed to the social context impacting college reading as well as the psychological theories and reading research during each respective time period. The three purposes of the content analysis were to (1) examine the physical and content features of the texts, (2) trace the changes in textbooks, and (3) determine the relationships between text features and the development of theory, research, and practice. Shen's analyses of 88 college reading and study strategy texts led to 10 conclusions: (1) Authors tended to be experts in their respective fields; (2) textual features did not increase in relation to the size of the book; (3) texts had more commonalities than differences; (4) the number of physical features in texts expanded across the eras; (5) common physical features across the eras included introductions, heads and subheads, indexes, student exercises/questions, illustrations, and charts; (6) common topics included attention, dictionary use, test-taking skills, vocabulary
mastery, reading rate, note-taking, and mathematics; (7) text features' prominence varied during different eras; (8) early college reading texts introduced many skills/strategies found in texts currently on the market; (9) some textbooks integrated the era-oriented research and best practice; and (10) topics in the texts tended to draw from psychology and education. Finally, Brier (1983, 1984) undertook a historical narrative that explored the actions undertaken by the newly formed Vassar College and an equally new Cornell University between 1865 and 1890 to meet the academic needs of underprepared, college-aged students. This dissertation draws from primary sources to document the controversy that developed when both institutions enrolled a sizable number of students requiring preparatory instruction, often in basic skills, in order to achieve academic success. While Vassar College responded by developing a preparatory program, Cornell University referred students elsewhere for assistance. Brier demonstrates conclusively that issues associated with modern open-door and special admissions programs have been of concern in higher education for well over a century. The study also underscores the historical nature of the devaluing of college reading by some and the meeting of the challenge by others. (See Arendale, 2001, 2010 and White, Martirosyan, & Wanjohi, 2009, 2010a, 2010b for additional coverage of preparatory programs.) Before moving on to another classification of texts, we would be remiss if we did not cover Smith's dissertation (1934a), which later evolved into four editions of American Reading Instruction (Smith, 1934b, 1965, 1986, 2002). It was an important contribution for the era in which it was released, and reprintings continue to have great impact (Stahl, 2002). College reading instruction is integrated into Smith's discussions.
Still, finding information about the history of college reading often requires a working knowledge of each era's scholarship on the college reading field as well as the situated relationship the field had with other reading specializations, such as secondary school reading and adult reading, along with shared topics, such as eye-movement research or linguistic/literacy interfaces. The individual strengths of the documents in the category of historical chronicles are found in the depth and/or breadth of coverage by each author on the particular topic. As a whole, the documents draw from era-based primary sources. Researchers of both historical topics and the historical roots of current topics will find these sources most useful. The Smith (2002) text is readily available in libraries. The dissertations are available in print or digital format through ProQuest. Older dissertations are often available via interlibrary loan.
Historical Summaries and Time Lines
The sources in this category include chronological representations of watershed events in the history of college reading. These works appear as chapters or sections in comprehensive books or in edited texts focusing on the fields of college reading, learning assistance, or developmental education as well as parts of yearbook chapters and/or journal articles that are more specific in nature. These chapters and articles cannot be expected to contain the same depth of coverage for each historical era as that found in the dissertation studies. Another issue to consider is that many of these works, such as Spache (1969), were written with the purpose of providing a historical survey along with a state-of-the-art review or speculative discussions about the profession's future. These works are categorized as college reading, learning assistance, and developmental education.
College Reading
The historical works focusing on college reading are limited. During the height of the National Reading Conference's (NRC's, now the LRA) and the College Reading Association's (CRA's, now ALER) influence on college reading in the 1960s and early 1970s, Lowe authored two papers providing college reading professionals with concise histories of the field. In his first paper, he
(1967a) analyzed 49 surveys of college reading programs, ranging from Parr's survey (1930) to Thurstone, Lowe, and Hayden's (1965) work. Lowe pointed out that over the years, programs had grown in number and size, and this growth paralleled an emergence of greater professionalism in the field. Lowe's (1970) second paper, which evolved from his dissertation (Lowe, 1967b), traces the field's history from the founding of the Harvard University program in 1915 to the late 1960s. Focusing on each decade, Lowe examines the growth of programs in the field along with curricular trends and instructional innovations. Not for another 20 years did a wide-ranging historical chronicle of the college reading field appear. Wyatt (1992) draws upon secondary sources as well as primary sources in the fields of college reading, developmental education, learning assistance, and higher education to provide a chronological discussion of the underprepared college reader and writer since the early 1800s. Woven throughout the article is the description of how a number of "prestigious" institutions (e.g., Harvard University, Yale University, the University of California, Stanford University) responded to their respective students' reading needs. Finally, in a period before the digital age, annotated bibliographies were helpful sources of information both current and historical. The International Reading Association (now ILA) issued an annotated bibliography series, including Kerstiens's work (1971) on junior/community college reading and Berger and Peebles's bibliography (1976) on reading rate research and programs for secondary and postsecondary students. A decade later, Cranney (1983a, 1983b) released two annotated bibliographies detailing valuable sources about the field's contributions. With the advent of search engines, such sources are often thought to be obsolete.
Learning Assistance
The evolution of the learning assistance movement has been covered by a number of writers. Enright (1975) provides a frequently referenced history of the origins of the learning assistance center (LAC), which proposes that the movement went through five eras: (1) the age of clinical aspiration: Programs become scientific (1916–1940), (2) the age of disenchantment: Remedial reading is not the answer (1940–1950), (3) the age of integration: Programs treat the whole student (1950–1960), (4) the age of actualization: Good ideas become realities (1960–1970), and (5) the age of systematization: The LAC is organized (1970–1980). This work, based on an extensive literature review, illustrates how learning assistance is intertwined with the history of college reading and that it developed a broader orientation in which college reading was an integral component. Enright and Kerstiens (1980) revisited the history of the LAC in the historically important but short-lived New Directions for College Learning Assistance sourcebook series. The authors provide an overview of historical events from 1850 to 1940 and then move into a decade-by-decade review of the evolution of the LAC. They demonstrate that over the decades, the terminology describing the reading/learning programs evolved along with changes in philosophy and instructional method. Drawing heavily from secondary sources, Lissner (1990) discusses the LAC's evolution from 1828 through the latter 1980s. The 19th century is described as the prehistory of learning assistance as it focused on compensatory designs, such as preparatory programs and tutoring schools. The 20th century is presented as an evolution of learning assistance through the Depression, World War II, the GI Bill era, the Sputnik era, the open admissions era, and the learning center movement.
An important conclusion emanating from Lissner is that learning centers originated as one of a long series of responses to two themes in higher education: the recurring perception that students entering college were less prepared for academics than the preceding academic generation, and the reality that new segments of the population had increased opportunities to attend college. Maxwell's (1979) classic text Improving Student Learning Skills contained a detailed outline of events and trends, illustrating how colleges had been concerned with students' qualifications for
academic work since the 1850s. Given the importance of her work to the field of learning assistance, it is not surprising that many of the historical works, literacy centered and otherwise, that appeared after its publication used her outline as a foundation. Maxwell's (1997) thoroughly revised edition of this text provides rich narratives combined with personal anecdotes based on her 50 years of leadership in the field. Included as well is information from historical sources on topics such as at-risk students, tutoring, learning centers, writing instruction, and reading instruction. With the turn of this century, Arendale (2004) authored a historical perspective on the origins and growth of the LAC. After providing an overview of the LAC mission, drawing heavily from the pioneering work of Frank Christ, Arendale makes the case that the LACs were a product of factors influencing postsecondary education as a whole. These factors include changes in federal policies and economic resources, dramatic growth in enrollment, increased diversity in the student population, and a dissatisfaction with the existing approaches to promoting retention. Further, Arendale documented the growth of professionalism that came with the founding of the Western College Reading Association (now the CRLA), the Midwest College Learning Center Association (now the National College Learning Center Association (NCLCA)), and the Annual Institute for College Learning Center Directors (the Winter Institute). [A chronology of the evolution of the Winter Institute can be found at Learning Support Centers in Higher Education (LSCHE) (n.d.b).]
Developmental Education
The history of developmental education cannot be separated from the history of college reading instruction. The two fields are mutually entwined. Cross (1971) provided one of the first historical discussions on the still-evolving field of developmental education. Indeed, the tenuousness of the new developmental education label is observed in Cross's use of the term remedial in juxtaposition with developmental in the chapter's title. The historical discussion is directed at two themes: (1) causes of poor academic performance and (2) historical trends in the evaluation of remediation in higher education. In discussing how poor academic performance had been viewed, Cross proposed that there was a predominant vantage held by educators in each of five eras, respectively defined and roughly delimited as (1) poor study habits – pre-1935, (2) inadequate mastery of basic academic skills – late 1930s through early 1940s, (3) low academic ability or low intelligence – postwar 1940s through early 1960s, (4) psychological and motivational blocks to learning – mid-1960s, and (5) sociocultural factors related to deprived family and school backgrounds – latter 1960s through 1976. Cross's analyses conclude that educators in each succeeding era saw the problems associated with lack of success in college as having greater complexity than in the preceding eras and that solutions tended to be additive over the years. In looking at the trends in the evaluation of remedial programs, Cross notes that the 1940s and 1950s were a period of relatively unsophisticated methodological analysis of program effectiveness. Evaluation in the 1960s focused on the emotional defenses of both the programs of the era and the students entering higher education through such programs. In the 1970s, evaluation was concerned with the degree to which programs helped students meet academic goals.
A number of articles (Boylan, 1988, 1990; Boylan & White, 1987, 1994; Jones & Richards-Smith, 1987) on the history of developmental education come from the National Center for Developmental Education. These articles show how developmental education services have been provided to college students since 1630. Specific attention is directed toward each academic generation's understanding of nontraditional students as they were served by the new categories of postsecondary institutions or institutions with evolving missions. The authors argue that it is the nation's way to induct newer groups of students into higher education, label them in a pejorative manner,
and then watch them overcome the stereotypes and their “lack of preparation” to become functional members of the ever-evolving traditional class of students. The cycle continues with the enrollment of new groups of nontraditional students. Jones and Richards-Smith (1987) present a particularly important chronicle investigating historically black colleges as providers of developmental education services. Roberts’ (1986) and Tomlinson’s (1989) summaries of the trends in developmental education from the mid-1800s to the modern era parallel many of the historical sources mentioned in this section. Both authors concur that programs have grown from being isolated, narrowly conceived, and inadequately funded to being more integrated, broadly conceptualized, and regularly funded campus entities. Tomlinson provides a useful graphic presentation of the changes in the terminology used to identify developmental education-style programs as well as the labels for the students receiving such services during three different eras (1860s–1890s, 1900s–1940s, 1950s–1989). Carpenter and Johnson (1991) provide another brief historical summary that closely mirrors the discussions provided by Roberts and Tomlinson. Bullock, Madden, and Mallery (1990) cover the growth of developmental education starting in the Pre-Open Admissions Era (prior to 1965), moving to the Equality and Access Era (1965–1980) and continuing through the Accountability Era (1980–1989). So as to adequately situate the field in the larger milieu, each section covers (1) the social milieu for the time, (2) the era’s impact on American education, (3) the university setting, and (4) the place of developmental education in the university setting. The work of Casazza and Silverman (1996) and later Casazza (1999) and Casazza and Bauer (2006) combines the events common to the fields of learning assistance and developmental education with events shaping higher education. 
Casazza and Silverman (1996) argue that the tension created between each generation’s traditionalists’ viewpoints and reformists’ philosophies has promoted gradual change in education. Given this premise, three eras were identified. The first era (1700–1862) is characterized by the tensions that evolved from the development of a new American system built upon democratic ideals, while the educational touchstones for those times were the classical colleges of Europe. A second era (1862–1960) stressed the tensions that evolved as higher education continued to open, or be forced to open, its portals to a more diverse clientele. Finally, the third era (1960–2000) looks at the tensions that existed in the movement to provide support services to an increasingly diverse body of students. As Casazza and Silverman review each era, they strive to answer three key questions: (1) What is the purpose of postsecondary education? (2) Who should attend college? and (3) What should the curriculum look like? In this work, it is important to note that the authors show that learning assistance and developmental education do not operate in a vacuum. Rather, they are imbricated in the culture and the events that shape higher education.

Over the past decade, Arendale has carried the mantle of telling the history of developmental education. His now classic work “A Memory Sometimes Ignored” (2002b) not only provides the story of the early history of developmental education, beginning with preparatory programs in the 1800s, but also offers a cogent argument as to why the field has a pariah status in texts authored by higher education historians. He clearly shows that higher education histories and institutional histories focus on great leaders, political issues, and growth of infrastructure. When students are of interest, it is through the lens of white males as opposed to women, students of color, or students from lower-status economic or academic castes.
Arendale concludes that the story of higher education requires a deeper and more diverse study of developmental education and its students, even if the inclusion of such topics proves to be uncomfortable. In the second work in his trilogy, Arendale (2002a) intertwines six phases of developmental education’s history with the history of higher education. These phases are presented as a chronology that highlights both the common instruction of the time and the students most likely to have been served in developmental education: (1) mid-1600s to 1820s (tutoring that served privileged white males),
Norman A. Stahl and James R. King
(2) 1820s to 1860s (precollege preparatory academies and tutoring that served privileged white males), (3) 1860s to mid-1940s (remedial classes within college preparatory programs and tutoring that served mostly white males), (4) mid-1940s to early 1970s (remedial education classes integrated within the college, tutoring, and compensatory education, serving traditional white males, nontraditional students, and federal legislative priority groups, such as first-generation college students, the economically disadvantaged, and diverse student groups), (5) early 1970s to mid-1990s (developmental education, learning assistance, tutoring, and supplemental instruction programs that served returning students as well as those from previously mentioned groups), and (6) mid-1990s to the present (developmental education with expansion into enrichment activities, classes, and programs, serving the previous groups along with students wishing to gain greater breadth and depth of content knowledge).

Throughout his discussion, Arendale interrelates the six phases with the economic, social, and political movements and events that influenced, if not promoted, each respective phase. The author concludes that developmental education grew and expanded not because of a carefully conceived plan but rather as an exigent response to the expanding needs of a population that grew more ethnically, culturally, and economically diverse over the years.

In a third work, Arendale (2005) approaches the history of developmental education through an analysis of the labels that have represented the field as it has redefined itself over the years. He begins with Academic Preparatory Programs (early 1800s through the 1850s) and then moves through Remedial Education (1860s–1960); Compensatory Education (1960s); Learning Assistance (late 1960s–2005); Developmental Education (1970s–2005); and, finally, ACCESS Programs (the European ACCESS network).
Arendale offers a prognosis for the future and suggests that the field must articulate its mission to others in higher education as well as to those outside the field (also see Arendale, 2006). Arendale’s landmark scholarship over the first decade of the 21st century culminated with the release of Access at the Crossroads: Learning Assistance in Higher Education (2010), in which he covered a range of topics impacting learning assistance from a “big tent” perspective. The text, through its coverage of the history of learning assistance, serves as a “bully pulpit,” as it was disseminated to a wider higher education audience with a greater likelihood of enlightening and perhaps influencing those who possessed little knowledge of the history and contributions of the field. [Arendale’s chapter has been reprinted in Boylan and Bonham (2014).]

Boylan and Bonham (2007) provide a chronicle of the field from the birth of the National Center for Developmental Education in 1976 through landmark events such as the founding of the National Association for Remedial/Developmental Studies in Postsecondary Education in 1976 (the progenitor of the National Association of Developmental Education), the release of the first issue of the Journal of Developmental and Remedial Education (now the Journal of Developmental Education) in 1978, the first Kellogg Institute for Training and Certification of Developmental Educators in 1980, the advent of the CRLA Tutor Training Certificate in 1989, the first Technology Institute for Developmental Educators in 1999, and the first inductees into the Fellows program of the American Council of Developmental Education Associations (ACDEA). The articles in this category provide summaries of where the field has been and, in several cases, interesting speculations about where the field was expected to move.
Many of these works can be found in academic libraries, both as published articles and as archived versions in the ERIC (Educational Resources Information Center) document collection or on JSTOR (Journal Storage). More recently, such publications can be found on open-source websites, on sites such as ResearchGate, or on personal home pages. A weakness of the materials in this category is a degree of redundancy from article to article. Because of this redundancy, there is a blurring of the distinctions between college reading, learning assistance, and developmental education. It is true that there is much common history between the fields, and it is also true that there are modern interrelationships as well. Still, there are
differences in breadth of mission and in underlying philosophy. Reaching common ground is important, but so is the systematic identification of differences. Perhaps this redundancy, particularly in the more recent articles, is due to an overreliance on certain sources (e.g., Brier, 1984; Maxwell, 1979). The bottom line, however, may very well be that the field is saturated with historical surveys, and writers should turn to more focused topics, as presented in the next category.
Historical Topics, Eras, and Institutional Histories

As a field reaches a developmental stage in which its history is valued and there is an academic commitment to more fully understanding the contributions of individuals and colleges within historical contexts, the studies begin to focus on specific topics or specific eras. In the case of the topical papers, these articles were often logical historical outgrowths of popular research trends or theoretical postulates from the era in which the piece was authored. In other instances, the papers were part of an ongoing line of historical work by an author or an authoring team. In the case of era-focused articles, the authors present works that, when organized into a concerted whole, tend to define the era(s). Comparisons to other eras, both historical and present, may be integrated into the works as well. Finally, as our associations and institutions come of age, there are a growing number of works that focus on organizational history. In the paragraphs that follow, we begin by addressing studies that are of a topical nature, follow with work focused on historical eras, and then review organizational or institutional histories.
Topical Studies

During the latter 1980s and early 1990s, there was an initial interest in the relationship between reading and writing as modes of learning. Quinn’s (1995) article traced the impact of important pedagogical influences, instructional trends, theories, and research upon the integration of reading and writing instruction in higher education from the turn of the century to the mid-1990s. The article drew upon historical work in the fields of writing across the curriculum, reading and writing instruction in grades K-12, college reading instruction, content field reading instruction, and reading research. Quinn showed that interest in the reading-writing connection arose on several occasions over the 20th century, but it was with the 1980–1990s discussions of reading and writing as powerful tools for promoting thinking and learning that the integration of the two fields evolved into a powerful instructional model. Ironically, this curricular model did not fully flower in the community college until the coming of the developmental education reform movement with its emphasis on acceleration.

Learning strategies (also known as work methods, work-study methods, study methods, study skills, and study strategies during different eras) have been the topic of several historical texts. Stahl (1983) and Stahl and Henk (1986) traced the growth of textbook study systems through the development of Robinson’s Survey Q3R (SQ3R). Specific attention was given to the birth of study systems through their relationship to the scientific management theory (i.e., Taylorism) up to the advent of World War II. In addition, these studies covered the initial research underlying the design of SQ3R and analyzed the research undertaken with the system through the late 1970s. Finally, the authors detailed over 100 clones of SQ3R up through the 1980s.
They found that at the time of its introduction in the postwar period, SQ3R was a most effective sobriquet and organizing mechanism for a set of well-accepted reading strategies based on era-appropriate theory and reading research. In discussing the same topic, Walter Pauk (1999) presented a historical narrative of how SQ3R was developed. While this work covers some of the same ground, it also provides insight into the decades following World War II from an individual who single-handedly defined the study skills field in the 1950s, 1960s, and 1970s.
In another historical text, Stahl, King, and Eilers (1996) examined learning strategies such as the Inductive Outline Procedure, the Self-Recitation Study Method, SQ3R, the Block Method, and the Bartusch Active Method, which have been lost to varying degrees from the literature. The authors suggested that, to be accepted by students and instructors, a study strategy must be perceived as (1) efficient by the user, regardless of the innovative theory or research validation underlying it; (2) associated with advocates or originators viewed as professional elites in the field; and (3) in line with the tenor of the field, past or present.

Textbooks and workbooks published for the field of college reading and study strategy instruction also merit historical analysis. Two articles focus on this topic. The first (Stahl, Simpson, & Brozo, 1988) used historical contexts to examine content analyses of college reading texts published since 1921. The data from specific content analyses and the observed trends in the body of professional literature suggested that no consensus existed across texts as to what constituted effective study strategies. Research evidence for most of the techniques was not present. Both the scope and validity of the instructional methods and the practice activities were limited. The transfer value of many practice activities was questionable. Overall, the content analyses issued since 1941 suggested that there had been a reliance on impressionistic evidence rather than research when designing college reading textbooks.

The task of conducting a historical analysis of instructional materials for college reading instruction has been limited in the past because an authoritative compilation of instructional materials was not available.
Early attempts at developing such a resource (Bliesmer, 1957; Narang, 1973) provide an understanding of two historical eras, but both were limited in breadth over the years as well as in depth across editions for specific texts. Hence, Stahl, Brozo, and Hynd (1990) undertook the compilation of an exhaustive list of texts pertaining to college reading instruction. These authors also detailed the archival activities undertaken to develop the list. By employing texts published in the 1920s as an example, the authors explained how the resource list might be employed in conducting research or designing a curriculum. The final compilation contained 593 bibliographic entries for books printed between 1896 and 1987, along with the dates for each identified edition of a respective text. The full bibliographic list is available in the technical report that accompanied the article (Stahl, Brozo, & Hynd, 1989). More recently, this work influenced the dissertation undertaken by Shen (2002).

Walvekar (1987) investigated 30 years of evolving practices in program evaluation as these impacted the college reading and learning assistance fields. For instance, Walvekar showed how three forms of program evaluation (humanistic evaluation, systematic evaluation, and research) were, in fact, responses to larger issues associated with the “open door” at community colleges, the expanded diversity of students at universities, and the call for greater retention of all college students in the 1970s. Overall, Walvekar felt that the evaluation practices were undeveloped in the 1960s, inadequate through the early 1980s, and still evolving as of 1987.

Mason (1994) provides a comparative study in a historical context of seven college reading programs, founded in most cases in the 1920s or 1930s at elite institutions (Harvard University, Hamline University, Amherst College, the University of Chicago, Syracuse University, the University of Pennsylvania, and the University of Iowa).
In comparing and contrasting instructional programs, institutional mandates, academic homes, assessment procedures, and staff qualifications across the institutions, the author reported that as much variation as commonality existed across the programs.
Era Studies

There are four era-focused studies that can be found in the literature. These works cover the post-World War II scene. A professional field does not operate in a vacuum.
The field of college reading has been influenced by a number of events, such as the mid-century civil rights movement and the community college boom years. One of the historical events to influence the field was the passage of the Servicemen’s Readjustment Act of 1944 or, as it is best known, the GI Bill of Rights. Bannier (2006) investigated the legacy of the GI Bill on colleges and universities as well as its current impact on developmental education. The author traced the roots of the GI Bill back to a lack of action when the veterans came home from World War I. This led to the realization that for political, economic, and even social reasons, a similar lack of action could not be the case with returning vets from World War II. The GI Bill’s impact on curriculum, instruction, and enrollment trends was tremendous. Variations of the legislative acts to serve the vets from the Korean War, the Vietnam conflict, and more recent military actions are covered as well (also see Rose, 1991).

Another valuable era-focused work is Kingston’s (1990, 2003) discussion of the programs of the late 1940s through the 1960s. This narrative rests in part on the insights, experiences, and knowledge of Kingston as an important leader in the field during the period in question, and as such, the work might be categorized as a self-study (Denzin, 1989). Kingston covered changes and innovations in assessment, elements of the curriculum, and instructional programs. Finally, Kingston discussed the birth of the Southwest Reading Conference and the CRA as well as the Journal of Developmental Reading and the Journal of the Reading Specialist, which were to serve the field during the period and the years after.

In a similar self-study, Martha Maxwell (2001) wrote historically regarding the impact of the GI Bill. With the returning service men and women from World War II, through the 1970s, the doors of higher education swung open to this new cadre of nontraditional students.
Maxwell’s narrative provides recollections about the students, the programs serving them, the professional associations that evolved to serve the fledgling profession, and technology introduced into the curriculum (speed reading) as she encountered each while at various institutions and in varied professional positions.

A study that overlaps the periods covered by Kingston and Maxwell is the work by Mallery (1986), which compared two eras: the 1950s through the mid-1960s and the years 1965–1980. In the period before 1965, programs’ orientation and organization depended on their home departments and instructional methods. The demarcation point between the two eras came when the “new students” began to make their presence felt in the college reading programs with the advent of the War on Poverty. The influx of federal dollars into higher education led to underrepresented student populations gaining admission to postsecondary institutions in numbers not seen before. Concerns about retention were framed in a glib aphorism that the “open door” was becoming a “revolving door” in the 1970s. Questions also began to arise as to the training that was desirable for college reading specialists. Instructional philosophies differed from college to college, and instructional activities included diagnosis, individualization, content area reading, and study skills instruction. The previously discussed work by Bullock et al. (1990) is an outgrowth of this work.
Organizational and Institutional Histories

As an extension of the era-based studies, it is practical to discuss the contributions of different organizations. Each organization, within its own historical time, provided fundamental leadership for the field of college reading and learning research and instruction.

Historical research and content analyses of the various organizations’ conference yearbooks and journal articles (Singer & Kingston, 1984; Stahl & Smith-Burke, 1999; Van Gilder, 1970) discuss how the NRC’s (now the LRA’s) origins and maturity reflected the growth and development of college reading in the nation. The NRC held its first meeting as the Southwest Reading Conference for Colleges and Universities in April 1952 with the goal of bringing southwestern-based college reading instructors together
to discuss issues impacting GI Bill-era programs. The content of the conference yearbooks during the organization’s first five years focused on administrative procedures, student selection processes, and mechanical equipment for college reading programs. During the next four years, the papers grew in sophistication as the presenters began to focus on research, evaluation, and the interaction of college reading with other academic fields. Speed of reading became less of a topic of import as greater interest was directed at comprehension and reading flexibility. Over the 1960s, the membership began to face a crisis, both developmental and generational, between those who were interested in the pedagogy of college reading and those who were more directly concerned with research on the psychology of reading and learning. The outcome, with hindsight, was rather predictable. As the years have passed, the LRA has become a premier forum for literacy research. While topics on college reading can still be found on the yearly program, such presentations do not approach the proportional representation that manifested during the organization’s formative years.

By the late 1950s, there was a growing interest in the eastern United States in starting an organization that would serve much the same purpose as the Southwest Reading Conference did in the west. Hence, in 1958, individuals from Pennsylvania met at Temple University to discuss the possibility of forming an association for individuals from the northeast and the mid-Atlantic states who taught and administered college reading programs. After contacting faculty from across the region, the decision was made to hold the first conference of the CRA at LaSalle College in October 1958. Over the years, CRA (Alexander & Strode, 1999) (now the ALER) would broaden its mission to include emphasis on adult literacy, literacy clinics, and teacher education.
Its journal, initially titled the Journal of the Reading Specialist (now Literacy Research & Instruction), regularly published articles and columns pertaining to the college reading and study skills field. The ALER organizational history (Linek et al., 2010a, 2010b) is a tour de force, serving as a model for other organizations, as it includes decade-by-decade histories as well as sections of biographies and recollections of leaders, oral histories, and presidential and keynote addresses, with many of these documents focusing on college reading instruction.

In the 1970s, the onus of leadership in college reading was assumed by a new group in its formative years, the CRLA. O’Hear (1993), writing in the 25th anniversary edition of the Journal of College Reading and Learning, examined what had been learned about the field through the articles published in this journal and in the earlier conference yearbooks of the CRLA (known first as the Western College Reading Association and then as the Western College Reading and Learning Association). O’Hear proposed that after the enrollment of the “new students” in the latter 1960s, the field evolved from blind reliance on a deficit model driven by standardized tests and secondary school instructional techniques and materials. It evolved into a model in which students’ needs were better understood and more likely to be approached by instruction based on current learning theory and reading research. This article, along with works by Kerstiens (1993), Mullen and Orlando (1993), and Agee (2007), provides important perspectives on CRLA’s many contributions.

Boylan (2002) provided a historical narrative of the origins of the ACDEA (now the CLADEA). This unique umbrella council attempts to bring together the leadership of developmental education and learning assistance organizations so as to harness the power of synergy rather than allow competition or jealousies to hamper a common pedagogical mission across the field.
Boylan covers the birthing process from 1996, the council’s inter-organizational communication and informal mediation roles, and the development of the ACDEA and CLADEA Fellows Program.

A final organizational history (Dvorak with Haley, 2015) provides a historical perspective on the contributions of the NCLCA. Within this work, the authors cover the association’s history by discussing its organizational structure, the contributions of selected officers, the NCLCA conference and its summer institute, the journal The Learning Assistance Review, the Learning Center Leadership Certification, and the NCLCA’s designation of Learning Centers of Excellence.
Organizational histories that have not been published as texts or articles can be found at the websites for each of the CLADEA member associations: the National Association for Developmental Education (www.nade.net), the National College Learning Center Association (nclca.wildapricot.org), the Association for the Tutoring Profession (www.myatp.org), the Association of Colleges for Tutoring & Learning Assistance (actla.info), and the CRLA (www.CRLA.net).

The articles and texts that are classified as topical works, era-based narratives, or organizational histories focus on depth of issue rather than the broad sweep found in articles on the subfields. Still, several cautions should be noted. First, a single author or team of coauthors has researched most of the topics within these works; hence, there is little in the way of alternative viewpoints upon which to base conclusions. Second, the era studies tend to focus on the times in which the author(s) were professionally active. While these “lived studies” are important, there is a danger of individuals trying to set themselves within history through personal interpretations. Furthermore, there is a need for era studies that go beyond the rather recent past. Finally, institutional studies have a tendency to paint a positive picture of the organization under study. Such a work, particularly when commissioned by the organization, must be read with an open mind.
Oral History, Autobiography, and Biography

Norman Denzin authored Interpretive Biography in 1989. To this day, the text serves as a seminal source for those planning to undertake a range of biographical endeavors, including oral history, life history, biography, memorials, autobiography, and case histories, among others. As our field ages with the respective aging of those who have served over the years, opportunities for undertaking interpretive biography have grown. Works focusing on oral history by Stahl, King, Dillon, and Walker (1994) and Stahl and King (2007) provide guidelines for projects that have been undertaken by members of the profession.

One of the first oral histories to appear was the interview with Martha Maxwell (Piper, 1998). Piper traces Maxwell’s 50-year career at the American University, the University of Maryland, and the University of California, Berkeley, as she served the fields of college reading, learning assistance, and developmental education [also see “Martha Maxwell” (2000) and Massey & Pytash (2010)].

Several years later, Casazza and Bauer (2006) produced a major oral history endeavor that preserves the perspectives of four groups of individuals who have impacted, or have been impacted by, developmental education over the years. The individuals interviewed can be classified as pioneers who worked to open the doors of higher education to diverse populations, leaders of the professional organizations, practitioners serving nontraditional students, and students who entered higher education through access programs. Across 30 interviews of the elites (e.g., David Arendale, Hunter Boylan, K. Patricia Cross, Martha Maxwell, and Mike Rose) who have left a published legacy and of the non-elites who will not have left an extensive legacy of publication and presentation, there emerged four common themes.
These include (1) the power in having a belief in students; (2) the struggle between providing access to those who may be underprepared for postsecondary education and holding standards in learning; (3) the importance of institutional commitment to developing and supporting access, as well as the integration of support services into the mainstream of the mission and goals of the institution; and (4) the value of having a purposeful repertoire of strategies, both academic and personal, that promotes student success. From the themes that emerged came both recommendations and action steps that can promote access to higher education and ensure that the experience is meaningful for students.

As an outgrowth of the aforementioned project, Bauer and Casazza recrafted selected interviews so as to present intimate portraits of three of the field’s enduring pioneers: K. Patricia Cross (Bauer & Casazza, 2005), Mike Rose (Bauer & Casazza, 2007), and Martha Maxwell (Casazza & Bauer, 2004). Through this oral history series in the Journal of Developmental Education, readers are
able to learn of major events and seminal works that influenced key players in our field. More recently, in the organizational history of ALER, oral histories covering the lives of college reading specialists Maria Valeri-Gold (Mahoney, 2010) and Norman Stahl (King, 2010) were published. Both oral histories cover many of the contributions to research and praxis provided by the programs in Georgia over the latter 20th century.

The autobiographic account can also have tremendous impact on the field of developmental education. Lives on the Boundary by Mike Rose (1989) is an example of autobiography, as we see its author overcome the effects of being branded a remedial learner as a youngster in South Los Angeles and later become a leading advocate of quality education for all students. It is through Rose’s exploration of the self that we, as readers, are able to participate vicariously in the shared life so as to understand and become sympathetic to the argument he puts forward.

The profession has a growing number of biographies of individuals who shaped the field or who undertook research with college readers, influencing instruction over the past century (Israel & Monaghan, 2007; Taylor, 1937), in addition to the Reading Hall of Fame’s (RHF’s) website at www.readinghalloffame.org. Brief biographies of Walter Pauk (Johri & Sturtevant, 2010a) and Martha Maxwell (Johri & Sturtevant, 2010b) are found in the ALER organizational history.

Finally, a kindred source to biography is the memorial. A professional field that has come of age begins to post memorials for the field’s elites at the time of their respective passings. Such memorials serve as a type of interpretive biography with historical merit. They are found in the files for deceased members of the RHF on its website as well as in the memorial section of the website for the LSCHE at www.lsche.net/?page_id=1438.
Along with growing interest in qualitative research, there has been a concomitant growth in oral histories and life histories that preserve the most important historical artifact, the human memory. Although the selectivity of the human memory over the years does influence the artifact, oral history interviews provide the field with valuable insights that could have been lost to the sands of time. The growth in both autobiography and biography provides important resources, and these are fruitful areas for future work.
The Field in History

A logical question naturally resurfaces at this point in our discussion: is there a body of historical scholarship that informs us about the field? The answer is multifaceted. First and foremost, it can be acknowledged that there is a documented history, particularly at the survey level, of the field of college reading and its allied fields of learning assistance and developmental education. Hence, there is little excuse for college reading professionals, including graduate students in programs that focus on developmental education, not to have a sense of the field’s history through these readily available texts. It is not enough for the future leaders and researchers in the field simply to know of our history from a distance via historical surveys. They must adopt a philosophy that leads them to seek out and read, widely and critically, historically important texts such as those covered in Strang (1938, 1940) and Stahl (2014) and those included in Armstrong, Stahl, and Boylan (2014) and Flippo (2017).

Second, it is evident that historically oriented texts are growing in both number and sophistication. In 1986, Stahl, Hynd, and Henk were able to identify nine texts that covered historical topics about college reading and learning assistance. The current chapter includes over 100 resources with the same historical mission. The burgeoning interest in history is due in part to the field’s coming of age with a committed cadre of scholars who have not abandoned college reading or developmental education for what might have been considered “greener pastures” in teacher education. Also, with an established, if not graying, professoriate in our field, there has been a growing desire to know our place and roots in the profession, and perhaps to define our role or legacy in the history of the field.
History
Since the original version of this chapter appeared, a number of sources focusing on historical method in literacy have been published (Gray, 2007; Monaghan & Hartman, 2000; Stahl & Hartman, 2011; Stahl & King, 2007; Zimmer, 2007). It is clear today that historical research methods in literacy have matured over the years. A greater number of studies attempt to be more than simple chronological surveys of past events. The work is becoming more focused on specific topics and defined eras, as well as more articulate about its own processes. There are numerous opportunities for members of the profession to become involved in preserving the historical legacy of the field of college reading. Hence, we now turn to the role each of us can play in the history of the field.
Doing History

While all persons make history, and are part of history on a day-to-day basis, most individuals naively assume that history represents only the scope of events at national or international levels. Hence, the history of our profession is generally viewed through broad historical chronicles, chronicles that pay scant attention to the field of college reading. History is also erroneously thought of as the story of men of wealth and status. Hence, the thought of being a historian and undertaking historiography, even at a personal or a local level, can seem a most daunting task. Still, we believe that each college reading specialist can be, and certainly should be, a historian of what Kyvig and Marty (2010) call “nearby history.”
Nearby History

What, then, is “nearby history” or “local history” (Kammen, 2014), and what is its value to our profession? As an outgrowth of the turbulence and social upheavals of the 1960s, the advent of social history brought both academic and practical value to the detailed study of specific institutions and communities. We hold that college reading programs and LACs are integral parts of a larger institution, and that the professionals delivering services, along with the students who receive them, form a defined community in a postsecondary institution worthy of concerted and careful study. It is important to ask questions about (1) the conditions that led to the origins of a program, (2) the purposes of the program at various stages in its evolving life, (3) the dynamics of the program’s relationships with other academic units, (4) the milestones over the years, (5) the unique program features over time, (6) the traditions incorporated into the design of the unit, (7) the distinctive qualities for which the reading program comes to be known, and (8) how the pride of community is promoted. In pursuing answers to these questions, we gain vital information on the history of that program. Furthermore, we have a solid foundation upon which to build for the future or to handle pedagogical issues and institutional situations currently impacting the program. Hence, there is good reason for historical method to be utilized to preserve the accomplishments and heritage of specific college reading programs and learning assistance centers.
Four examples of historical narratives of programs include Christ’s (1984) account of the development of the LAC at California State University, Long Beach, Arendale’s (2001, 2002c) description of the historical development of the Supplemental Instruction program at the University of Missouri-Kansas City as it grew into an international pedagogical movement, Spann’s (1996) historical narrative of the National Center for Developmental Education at Appalachian State University, and the LSCHE’s (n.d.a) description of the 40-year evolution of the LSCHE resource system. While it is not the purpose of this chapter to cover the methods and techniques of historiography, we would be remiss if we did not note that there exists a range of documents at the disposal of college reading programs and learning centers that open the doors to the study of an academic
Norman A. Stahl and James R. King
unit’s history. These documents include published texts of wide circulation (e.g., scholarly books, institutional histories, journal and yearbook articles, course texts/workbooks, dissertations, reading tests, government reports, LISTSERV archives); documents of local distribution (e.g., campus newspapers, college catalogs, campus brochures, training manuals); unpublished documents (e.g., strategic plans, yearly reports, accreditation documents, evaluation reports, faculty service reports); and media/digital products (e.g., photographs, videos, movies, software, www homepages) from the program’s files or institutional archives. Likewise, artifacts such as tachistoscopes, controlled readers, reading films, reading accelerators, and software may seem like obsolete junk that has been shunted to forgotten storage closets. Yet these artifacts have as much value in learning a program’s history as old texts or archives of students’ work from past generations. The history of a college reading program as an entity, along with the history of the academic community that instantiates that program, can be preserved through the collection of autobiographies and oral history narratives of current and former faculty and administrators as well as current and former students. The autobiographic account or autoethnography can deepen understanding of the self as a professional, and it can also inform the workings of an entire program. Life history and oral history can play an equally important role in preserving the history of a college reading program. With the more established programs, faculty might first undertake oral history interviews with retired faculty who served with the program in past years. Second, life history interviews with former students might provide interesting narratives that suggest the ways in which the program played a part in their development as college students and mature readers.
Finally, life history narratives of current faculty and staff will provide an interesting picture of the personal histories that underlie the current pedagogical philosophy of the program. The history of a program can be disseminated in a number of ways. The audience for this activity may be internal to the institution, or it may be an external body of reading professionals, legislators, or community members. Written forms of dissemination include scholarly books; articles in state or national journals or conference yearbooks; and chapters in institutional histories, whether released in traditional publication venues or through the growing number of open-source texts. The historical study of a program (e.g., Walker, 1980) or an oral history project focusing on individuals associated with a program or professional organization (Casazza & Bauer, 2006; King, 1990) can be a most appropriate but often overlooked thesis or dissertation topic. Program histories can also find avenues for dissemination through conference presentations. In fact, this type of dissemination may be the only method of preserving for the historical record the contributions and stories of national-class programs and faculties, such as those in the General College at the University of Minnesota and at Georgia State University (Johnson, 2005; Singer, 2002), that were lost to political winds. Forms of digital media housed on a website that can be used to highlight a program’s history include streaming videos, PowerPoint presentations, podcasts, blogs, artifact/document displays, and open-source documents.
Historical Research for the Profession

We now shift the discussion to historical topics with more nationally oriented foci. In the first edition of the Handbook, we built upon Stahl et al.’s (1986) 10 avenues, which provided options for undertaking historical research into the college reading and learning assistance profession. In the second edition, two additional avenues were added to the discussion. Now, at this juncture, we suggest that the 12 avenues for research continue to serve as important options for the field’s historical endeavors, for four reasons. First, given the depth of each topic, there are many valid and valuable opportunities for research by either the neophyte or the more experienced literacy historian. Second, given the breadth of the field, there is still great need to undertake historical research in each of the areas. Third, undertaking any of the suggestions may result in history becoming
immediately relevant for the researcher. Equally relevant, although at a later time, the individual who reads the articles or attends any conference sessions that are the product of the historian’s endeavors will also benefit. Finally, since the release of the last edition of the Handbook, the number of graduate programs training college reading specialists and developmental educators has grown. Hartman, Stahl, and King (2016) have made the case that all doctoral students in the literacy field should undertake an experience with historical scholarship before being granted candidate status. These 12 avenues for historical study are presented in Table 1.1. Each topic is followed by a focus question. Then, in column three, there are references previously published on the topic, historical studies providing guidance for future research, or resources for historical work on the topic. Each topic provides a rich opportunity for research.

Table 1.1 Doing the History of College Reading and Learning Assistance

Avenue for Research: Judging the impact of a historical event
Question to Guide Research: How have pedagogical, sociological, and economic events and trends at the national and international levels impacted the field?
Key Sources: Arendale (2010), Bullock et al. (1990)

Avenue for Research: Focusing on an era
Question to Guide Research: What was the impact of influential theories, research, individuals, institutions, and instructional texts for a defined era?
Key Sources: Boylan and Bonham (2007), Brier (1983, 1984), Kingston (1990), Pauk (1999)

Avenue for Research: Assessing the impact of influential individuals (the elite)
Question to Guide Research: What were the critical contributions and influences of key leaders over the years (e.g., Francis Robinson, Alton Raygor, Oscar Causey, Frances Triggs)?
Key Sources: Flippo, Cranney, Wark, and Raygor (1990), Israel and Monaghan (2007), Stahl (1983)

Avenue for Research: Consulting the experienced
Question to Guide Research: What can we learn about the history of the field through the oral histories and autobiographies of leaders (e.g., Walter Pauk, Martha Maxwell, Frank Christ, Mike Rose)?
Key Sources: Bauer and Casazza (2007), Kerstiens (1998), Piper (1998), Rose (1989)

Avenue for Research: Tracing changes in materials
Question to Guide Research: How have published instructional materials changed or evolved over the years due to theory, research, or pedagogical trends?
Key Sources: Leedy (1958), Shen (2002), Stahl et al. (1988)

Avenue for Research: Observing changes across multiple editions
Question to Guide Research: What can a case study of a particular text across multiple editions inform us about the field or programs that used it (e.g., How to Study in College by Walter Pauk)?
Key Sources: Shen (2002), Stahl et al. (1989, 1990)

Avenue for Research: Judging innovation and movements
Question to Guide Research: How do innovations in instruction and curriculum measure up to the records of precursors? How do innovations stand the test of time?
Key Sources: Stahl et al. (1996)

Avenue for Research: Appraising elements of instrumentation
Question to Guide Research: How have formal and informal measures of assessment changed or influenced practice over the years?
Key Sources: Flippo and Schumm (2009), Van Leirsburg (1991)

Avenue for Research: Focusing on an institution
Question to Guide Research: How has instruction or research that took place in a particular college impacted the field?
Key Sources: Johnson (2005), Singer (2002), Walker (1980)

Avenue for Research: Tracking and evaluating an idea or a problem
Question to Guide Research: How has a particular issue (e.g., labeling programs) impacted the field over the years?
Key Sources: Arendale (2005)

Avenue for Research: Doing history and creating and preserving a legacy
Question to Guide Research: What is the art of the literacy historian? How should we preserve texts, tests, hardware, and software of instruction from previous generations for future generations?
Key Sources: Hartman et al. (2016), Monaghan and Hartman (2000), Stahl and Hartman (2011)
The historical works of the college reading field can be disseminated through a range of activities. Conferences sponsored by CRLA, NADE, and NCLCA welcome historical papers. All of the journals in the field of literacy research and pedagogy, learning assistance, or developmental education publish historical works pertaining to college reading. The Journal of Developmental Education publishes biographically oriented interviews, such as that with Walter Pauk (Kerstiens, 1998), and oral histories. Still, it must be noted that historically focused manuscripts are not submitted to journal editors on a regular basis, and hence such work appears only infrequently. Finally, the History of Literacy Innovative Community Group of the LRA has supported the study of individuals of historical importance to college reading in sessions at the annual LRA conference.
Final Thoughts

Like our nation, the field of college reading and learning research and instruction is in the midst of a period of great flux, leading to ever so many questions about the future. Given an uncertain and perhaps tenuous future, the history of the field has a most important role. Here, we turn to the sage wisdom of Franklin D. Roosevelt (1941) as he addressed his fellow citizens at an earlier time, when the nation faced the onset of a most perilous period in our history:

A nation must believe in three things. It must believe in the past. It must believe in the future. It must, above all, believe in the capacity of its own people so to learn from the past that they can gain in judgment in creating their own future.

Substituting the field of college reading and learning for nation in the quote above can provide important guidance to all professionals in the field. Hence, we remain constant in our belief that the value of studying literacy history is great (Hartman et al., 2016; Moore, Monaghan, & Hartman, 1997). The options for historical research are many, yet researchers’ uptake of these options is most uncommon. We are even stronger in our shared belief that each of us must be a historian, and each of us must be a student of history. The conduct of historical work in the field of college reading is alive and growing in a positive manner. In an era of reform in which the futures of many programs are at best tenuous, it is ever more important for professionals in the field to understand that we have been making history for over a century. We should be learning and interpreting our history through classes, journal articles, and conference presentations, and we should be doing history at both the nearby and national levels on a regular basis. Simply put, we should remain cognizant that our understanding of our past will define and direct our future.
References and Selected Readings

Agee, K. (2007). A brief history of CRLA. Retrieved from www.crla.net/index.php/membership/about-us
Ahrendt, K. M. (1975). Community college reading programs. Newark, DE: International Reading Association.
Alexander, J. E., & Strode, S. L. (1999). History of the College Reading Association, 1958–1998. Commerce, TX: A&M University, Commerce.
Arendale, D. R. (2001). Effect of administrative placement and fidelity of implementation of the model on effectiveness of Supplemental Instruction programs. Dissertation Abstracts International, 62, 93.
Arendale, D. R. (2002a). Then and now: The early history of developmental education: Past events and future trends. Research and Teaching in Developmental Education, 18(2), 3–26.
*Arendale, D. R. (2002b). A memory sometimes ignored: The history of developmental education. The Learning Assistance Review, 7(1), 5–13.
Arendale, D. R. (2002c). History of Supplemental Instruction (SI): Mainstreaming of developmental education. In D. B. Lundell & J. L. Higbee (Eds.), Histories of developmental education (pp. 15–28). Minneapolis, MN: Center for Research on Developmental Education and Urban Literacy, University of Minnesota.
Arendale, D. R. (2004). Mainstreamed academic assistance and enrichment for all students: The historical origins of learning assistance centers. Research for Education Reform, 9(4), 3–21.
Arendale, D. R. (2005). Terms of endearment: Words that define and guide developmental education. Journal of College Reading and Learning, 35(2), 66–81.
Arendale, D. R. (2006). Developmental education history: Recurring trends and future opportunities. The Journal of Teaching and Learning, 8(1), 6–17.
*Arendale, D. R. (2010). Access at the crossroads: Learning assistance in higher education. ASHE Higher Education Report, 35(6). San Francisco, CA: Jossey-Bass.
Armstrong, S. L. (2012). The impact of history on the future of college reading: An interview with Norman A. Stahl. Journal of Developmental Education, 35(3), 24–27.
Armstrong, S. L., Stahl, N. A., & Boylan, H. R. (Eds.). (2014). Teaching developmental reading: Practical, historical, and theoretical background readings (2nd ed.). Boston, MA: Bedford/St. Martin’s.
Bailey, J. L. (1982). An evaluation of journal published research of college reading study skills, 1925–1980 (Unpublished doctoral dissertation). University of Tennessee, Knoxville, TN.
Bannier, B. (2006). The impact of the GI Bill on developmental education. The Learning Assistance Review, 11(1), 35–44.
Bauer, L., & Casazza, M. E. (2005). Oral history of postsecondary access: K. Patricia Cross, a pioneer. Journal of Developmental Education, 29(2), 20–22, 24–25.
Bauer, L., & Casazza, M. E. (2007). Oral history of postsecondary access: Mike Rose, a pioneer. Journal of Developmental Education, 30(3), 16–18, 26, 32.
Berger, A., & Peebles, J. D. (1976). Rates of comprehension. Newark, DE: International Reading Association.
Blake, W. S. (1953). A survey and evaluation of study skills programs at the college level in the United States and possessions (Unpublished doctoral dissertation). University of Maryland, College Park, MD.
Bliesmer, E. P. (1957). Materials for the more retarded college reader. In O. S. Causey (Ed.), Techniques and procedures in college and adult reading programs: 6th yearbook of the Southwest Reading Conference (pp. 86–90). Fort Worth, TX: Texas Christian University Press.
Boylan, H. R. (1988). The historical roots of developmental education, Part III. Research in Developmental Education, 5(3), 1–4. Retrieved from http://eric.ed.gov, ED. #341–434.
Boylan, H. R. (1990). The cycle of new majorities in higher education. In A. M. Frager (Ed.), College reading and the new majority: Improving instruction in multicultural classrooms (pp. 3–12). Oxford, OH: College Reading Association.
Boylan, H. R. (2002). A brief history of the American Council of Developmental Education Associations. In D. B. Lundell & J. L. Higbee (Eds.), Histories of developmental education (pp. 11–14). Minneapolis, MN: Center for Research on Developmental Education and Urban Literacy, University of Minnesota.
Boylan, H. R., & Bonham, B. S. (2007). 30 years of developmental education: A retrospective. Journal of Developmental Education, 30(3), 2–4.
Boylan, H. R., & Bonham, B. S. (2014). Developmental education: Readings on its past, present, and future. Boston, MA: Bedford/St. Martin’s.
Boylan, H. R., & White, W. G. (1987). Educating all the nation’s people: The historical roots of developmental education (Part 1). Research in Developmental Education, 4(4), 1–4. (Reprinted in Developmental education: Readings on its past, present, and future, pp. 5–18, by H. R. Boylan & B. S. Bonham, Eds., 2014, Boston, MA: Bedford/St. Martin’s)
Boylan, H. R., & White, W. G. (1994). Educating all the nation’s people: The historical roots of developmental education – condensed version. In M. Maxwell (Ed.), From access to success (pp. 3–7). Clearwater, FL: H & H Publishing.
Brier, E. M. (1983). Bridging the academic preparation gap at Vassar College and Cornell University, 1865–1890 (Unpublished doctoral dissertation). Columbia University Teachers College, New York, NY.
*Brier, E. (1984). Bridging the academic preparation gap: An historical view. Journal of Developmental Education, 8(1), 2–5. (Reprinted in Developmental education: Readings on its past, present, and future, pp. 11–18, by H. R. Boylan & B. S. Bonham, Eds., 2014, Boston, MA: Bedford/St. Martin’s)
Bullock, T. L., Madden, D. A., & Mallery, A. L. (1990). Developmental education in American universities: Past, present, and future. Research & Teaching in Developmental Education, 6(2), 5–74.
Carpenter, K., & Johnson, L. L. (1991). Program organization. In R. F. Flippo & D. C. Caverly (Eds.), College reading and study strategy programs (pp. 28–69). Newark, DE: International Reading Association.
Casazza, M. E. (1999). Who are we and where did we come from? Journal of Developmental Education, 23(1), 2–4, 6–7.
Casazza, M. E., & Bauer, L. (2004). Oral history of postsecondary access: Martha Maxwell, a pioneer. Journal of Developmental Education, 28(1), 20–22, 24, 26. (Reprinted in Teaching developmental reading: Historical, theoretical, and practical background readings, pp. 9–18, by S. L. Armstrong, N. A. Stahl, & H. R. Boylan, Eds., 2014, Boston, MA: Bedford/St. Martin’s)
*Casazza, M. E., & Bauer, L. (2006). Access, opportunity, and success: Keeping the promise of higher education. Westport, CT: Greenwood Publishing Group.
Casazza, M. E., & Silverman, S. (1996). Learning assistance and developmental education: A guide for effective practice. San Francisco, CA: Jossey-Bass.
Chall, J. S. (1983). Stages of reading development. New York, NY: McGraw-Hill.
Christ, F. L. (1984). Learning assistance at California State University-Long Beach, 1972–1984. Journal of Developmental Education, 8(2), 2–5.
Cranney, A. G. (1983a). Two decades of adult reading programs: Growth, problems, and prospects. Journal of Reading, 26(5), 416–423.
Cranney, A. G. (1983b). Two decades of college-adult reading: Where to find the best of it. Journal of College Reading and Learning, 16(1), 1–5.
*Cross, K. P. (1971). Beyond the open door: New students to higher education. San Francisco, CA: Jossey-Bass.
Denzin, N. K. (1989). Interpretive biography. Newbury Park, CA: Sage.
Dvorak, J., with Haley, J. (2015). Honoring our past: The founding of NCLCA. The Learning Assistance Review, 20(2), 7–11.
*Enright, G. (1975). College learning skills: Frontierland origins of the Learning Assistance Center. In College learning skills: Today and tomorrowland. Proceedings of the eighth annual conference of the Western College Reading Association (pp. 81–92). Las Cruces, NM: Western College Reading and Learning Association. (ERIC Document Reproduction Service No. ED105204) [Reprinted in Maxwell, M. (1994). From access to success (pp. 31–40). Clearwater, FL: H & H Publications.]
Enright, G., & Kerstiens, G. (1980). The learning center: Toward an expanded role. In O. T. Lenning & R. L. Nayman (Eds.), New directions for college learning assistance (pp. 1–24). San Francisco, CA: Jossey-Bass.
Flippo, R. F. (Ed.) (2017). Reading: Major themes in education (Vols. I–IV). London and New York: Routledge.
Flippo, R. F., Cranney, A. G., Wark, D., & Raygor, B. R. (1990). From the editor and invited guests – In dedication to Al Raygor: 1922–1989. Forum for Reading, 21(2), 4–10.
Flippo, R. F., & Schumm, J. C. (2009). Reading tests. In R. F. Flippo & D. C. Caverly (Eds.), Handbook of college reading and study strategy research (pp. 408–464). New York, NY: Routledge.
Gray, A. (2007). Searching for biographical sources: An archivist’s perspective. In S. E. Israel & E. J. Monaghan (Eds.), Shaping the reading field: The impact of early pioneers, scientific research, and progressive ideas (pp. 421–426). Newark, DE: International Reading Association.
Hartman, D. K., Stahl, N. A., & King, J. R. (2016, December). When topical currency reflects a myopic professional zeitgeist: A case in the defense of the history of literacy research. Presented at the annual conference of the American Reading Forum, Sanibel, FL.
Heron, E. B. (1989). The dilemma of college reading instruction: A developmental analysis (Unpublished doctoral dissertation). Harvard University, Cambridge, MA.
Israel, S. E., & Monaghan, E. J. (2007). Shaping the reading field: The impact of early reading pioneers, scientific research, and progressive ideas. Newark, DE: International Reading Association.
Johnson, A. B. (2005). From the beginning: The history of developmental education and the pre-1932 General College idea. In J. L. Higbee, D. B. Lundell, & D. R. Arendale (Eds.), The General College vision: Integrating intellectual growth, multicultural perspectives, and student development (pp. 39–59). Minneapolis, MN: General College, University of Minnesota.
Johri, A., & Sturtevant, E. G. (2010a). Walter Pauk: A brief biography. In W. M. Linek, D. D. Massey, E. G. Sturtevant, L. Cochran, B. McClanahan, & M. B. Sampson (Eds.), College Reading Association legacy: A celebration of fifty years of literacy leadership (Vol. 1, p. 171). St. Cloud, MN: Association of Literacy Educators and Researchers.
Johri, A., & Sturtevant, E. G. (2010b). Martha Maxwell: A brief biography. In W. M. Linek, D. D. Massey, E. G. Sturtevant, L. Cochran, B. McClanahan, & M. B. Sampson (Eds.), College Reading Association legacy: A celebration of fifty years of literacy leadership (Vol. 1, pp. 155–157). St. Cloud, MN: Association of Literacy Educators and Researchers.
Jones, H., & Richards-Smith, H. (1987). Historically black colleges and universities: A force in developmental education, Part II. Research in Developmental Education, 4(5). (Retrieved from http://eric.ed.gov, ED. #341–434)
Kammen, C. (2014). On doing local history (3rd ed.). Lanham, MD: Rowman & Littlefield.
Kerstiens, G. (1971). Junior-community college reading/study skills. Newark, DE: International Reading Association.
Kerstiens, G. (1993). A quarter-century of student assessment in CRLA publications. Journal of College Reading and Learning, 25(2), 1–9.
Kerstiens, G. (1998). Studying in college, then & now: An interview with Walter Pauk. Journal of Developmental Education, 21(3), 20–24. (Reprinted in Teaching developmental reading: Historical, theoretical, and practical background readings, pp. 33–42, by S. L. Armstrong, N. A. Stahl, & H. R. Boylan, Eds., 2014, Boston, MA: Bedford/St. Martin’s)
King, J. R. (1990). Heroes in reading teachers’ tales. International Journal of Qualitative Studies in Education, 4(1), 45–60.
King, J. R. (2010). Norman A. Stahl. In W. M. Linek, D. D. Massey, E. G. Sturtevant, L. Cochran, B. McClanahan, & M. B. Sampson (Eds.), College Reading Association legacy: A celebration of fifty years of literacy leadership (Vol. 1, pp. 499–506). St. Cloud, MN: Association of Literacy Educators and Researchers.
Kingston, A. J. (1990). A brief history of college reading. Forum for Reading, 21(2), 11–15.
Kingston, A. J. (2003). A brief history of college reading. In E. J. Paulson, M. E. Laine, S. A. Biggs, & T. L. Bullock (Eds.), College reading research and practice: Articles from the Journal of College Literacy and Learning (pp. 7–12). Newark, DE: International Reading Association.
Kyvig, D. E., & Marty, M. A. (2010). Nearby history (3rd ed.). Lanham, MD: Alta Mira.
Learning Support Centers in Higher Education. (n.d.a). LSCHE – a “nearby history.” Retrieved from www.lsche.net/?page_id=3954
Learning Support Centers in Higher Education. (n.d.b). History of the institutes. Retrieved from www.lsche.net/?page_id=3259
*Leedy, P. D. (1958). A history of the origin and development of instruction in reading improvement at the college level (Unpublished doctoral dissertation). New York University, New York, NY.
Linek, W. M., Massey, D. D., Sturtevant, E. G., Cochran, L., McClanahan, B., & Sampson, M. B. (2010a). College Reading Association legacy: A celebration of fifty years of literacy leadership (Vol. 1). St. Cloud, MN: Association of Literacy Educators and Researchers.
Linek, W. M., Massey, D. D., Sturtevant, E. G., Cochran, L., McClanahan, B., & Sampson, M. B. (2010b). College Reading Association legacy: A celebration of fifty years of literacy leadership (Vol. 2). St. Cloud, MN: Association of Literacy Educators and Researchers.
Lissner, L. S. (1990). The learning center from 1829 to the year 2000 and beyond. In R. M. Hashway (Ed.), Handbook of developmental education (pp. 127–154). New York, NY: Praeger.
Lowe, A. J. (1967a). Surveys of college reading improvement programs: 1929–1966. In G. B. Schick & M. May (Eds.), Junior college and adult reading – Expanding fields, 16th yearbook of the National Reading Conference (pp. 75–81). Milwaukee, WI: National Reading Conference. (ERIC Document Reproduction Service No. ED011230)
Lowe, A. J. (1967b). An evaluation of a college reading improvement program (Unpublished doctoral dissertation). University of Virginia, Charlottesville, VA.
*Lowe, A. J. (1970). The rise of college reading, the good and bad and the indifferent: 1915–1970. Paper presented at the College Reading Association Conference, Philadelphia, PA. (ERIC Document Reproduction Service No. ED040013)
Mahoney, L. (2010). Maria Valeri-Gold. In W. M. Linek, D. D. Massey, E. G. Sturtevant, L. Cochran, B. McClanahan, & M. B. Sampson (Eds.), College Reading Association legacy: A celebration of fifty years of literacy leadership (Vol. 2, pp. 527–534). St. Cloud, MN: Association of Literacy Educators and Researchers.
Mallery, A. L. (1986). College reading programs 1958–1978. In D. Lumpkin, M. Harshberger, & P. Ransom (Eds.), Evaluation in reading: Learning, teaching, administering. Sixth yearbook of the American Reading Forum (pp. 113–125). Muncie, IN: Ball State University. (ERIC Document Reproduction Service No. ED290136)
Manzo, A. V. (1983). College reading: Past and present. Forum for Reading, 14, 5–16.
Martha Maxwell: An oral history. (2000). In J. L. Higbee & P. L. Dwinell (Eds.), The many faces of developmental education (pp. 9–14). Warrensburg, MO: NADE. (Reprinted in The profession and practice of learning assistance and developmental education: Essays in memory of Dr. Martha Maxwell, J. L. Higbee, Ed., 2014, Boone, NC: Appalachian State University, NCDE & CLADEA)
Mason, R. B. (1994). Selected college reading improvement programs: A descriptive history. New York, NY: Author. (ERIC Document Reproduction Service No. ED366907)
Massey, D. D., & Pytash, K. E. (2010). Martha Maxwell. In W. M. Linek, D. D. Massey, E. G. Sturtevant, L. Cochran, B. McClanahan, & M. B. Sampson (Eds.), College Reading Association legacy: A celebration of fifty years of literacy leadership (Vol. 1, pp. 397–400). St. Cloud, MN: Association of Literacy Educators and Researchers.
Maxwell, M. (1979). Improving student learning skills. San Francisco, CA: Jossey-Bass.
*Maxwell, M. (1997). Improving student learning skills: A new edition. Clearwater, FL: H&H Publishers.
Maxwell, M. (2001). College reading fifty years later. In W. M. Linek, E. G. Sturtevant, J. A. R. Dugan, & P. Linder (Eds.), Celebrating the voices of literacy (pp. 8–13). Commerce, TX: College Reading Association.
Monaghan, E. J., & Hartman, D. K. (2000). Undertaking historical research in literacy. In M. Kamil, P. Mosenthal, P. D. Pearson, & R. Barr (Eds.), Handbook of reading research (Vol. III, pp. 109–121). Mahwah, NJ: Erlbaum.
Moore, D. W., Monaghan, E. J., & Hartman, D. K. (1997). Conversations: Values of literacy history. Reading Research Quarterly, 32(1), 90–102.
Mullen, J. L., & Orlando, V. P. (1993). Reflections on 25 years of the Journal of College Reading and Learning. Journal of College Reading and Learning, 25(2), 25–30.
Narang, H. L. (1973). Materials for college and adult reading improvement programs. Reading World, 12(3), 181–187.
O'Hear, M. (1993). College reading programs: The last 25 years. Journal of College Reading and Learning, 25(2), 17–24.
Parr, F. W. (1930). The extent of remedial reading work in state universities in the United States. School and Society, 31, 547–548.
Pauk, W. (1999). How SQ3R came to be. In J. R. Dugan, P. E. Linder, & E. G. Sturtevant (Eds.), Advancing the world of literacy: Moving into the 21st century. The 21st yearbook of the College Reading Association (pp. 27–35). Commerce, TX: College Reading Association. (Reprinted in W. M. Linek, D. D. Massey, E. G. Sturtevant, L. Cochran, B. McClanahan, & M. B. Sampson (Eds.). (2010). College Reading Association legacy: A celebration of fifty years of literacy leadership (Vol. 1, pp. 172–179). St. Cloud, MN: Association of Literacy Educators and Researchers).
Piper, J. (1998). An interview with Martha Maxwell. The Learning Assistance Review, 3(1), 32–39.
Quinn, K. B. (1995). Teaching reading and writing as modes of learning in college: A glance at the past; a view to the future. Reading Research and Instruction, 34(4), 295–314.
Roberts, G. H. (1986). Developmental education: An historical study. (ERIC Document Reproduction Service No. ED276395).
Robinson, R. D. (2002). Classics in literacy education: Historical perspective for today's teachers. Newark, DE: International Reading Association.
Roosevelt, F. D. (1941). Remarks at the dedication of the Franklin D. Roosevelt library. Retrieved from www.fdrlibrary
Rose, A. D. (1991). Preparing for veterans: Higher education and the efforts to accredit the learning of World War II servicemen and women. Adult Education Quarterly, 42(1), 30–45.
Rose, M. (1989). Lives on the boundary. New York, NY: Penguin.
Shen, L. B. (2002). Survey of college reading and study skills textbooks (Unpublished doctoral dissertation). University of Pittsburgh, Pittsburgh, PA.
Singer, H., & Kingston, A. (1984). From the Southwest Reading Conference to the National Reading Conference: A brief history from 1952–1984. In J. A. Niles & L. A. Harris (Eds.), Changing perspectives on research in reading/language processing and instruction. Thirty-third yearbook of the National Reading Conference (pp. 1–4). Rochester, NY: National Reading Conference.
Singer, M. (2002). Toward a comprehensive learning center. In D. B. Lundell & J. L. Higbee (Eds.), Histories of developmental education (pp. 65–71). Minneapolis, MN: Center for Research on Developmental Education and Urban Literacy, University of Minnesota.
Smith, N. B. (1934a). A historical analysis of American reading instruction (Unpublished doctoral dissertation). Teachers College, Columbia University, New York, NY.
Smith, N. B. (1934b). American reading instruction. New York, NY: Silver-Burdett.
Smith, N. B. (1965). American reading instruction (Rev. ed.). Newark, DE: International Reading Association.
Smith, N. B. (1986). American reading instruction. Prologue by L. Courtney, FSC, and epilogue by H. A. Robinson. Newark, DE: International Reading Association.
Smith, N. B. (2002). American reading instruction (Special Edition). Newark, DE: International Reading Association.
Spache, G. D. (1969). College-adult reading: Past, present, and future. In G. B. Schick & M. May (Eds.), The psychology of reading behavior, 18th yearbook of the National Reading Conference (pp. 188–194). Milwaukee, WI: National Reading Conference.
Spann, M. G. (1996). National Center of Developmental Education: The formative years. Journal of Developmental Education, 20(2), 2–6.
History
Stahl, N. A. (1983). A historical analysis of textbook-study systems (Unpublished doctoral dissertation). University of Pittsburgh, Pittsburgh, PA.
Stahl, N. A. (1988, January–March). Historical titles in reading research and instruction: A history of the origin and development of instruction in reading improvement at the college level. Reading Psychology, 9, 73–77.
Stahl, N. A. (2002). Epilogue. In N. B. Smith, American reading instruction (Special Edition) (pp. 413–418). Newark, DE: International Reading Association.
Stahl, N. A. (2014). Selected references of historical importance to the field of college reading and learning. In S. L. Armstrong, N. A. Stahl, & H. R. Boylan (Eds.), Teaching developmental reading: Historical, theoretical, and practical background readings (pp. 42–50). Boston, MA: Bedford/St. Martin's.
*Stahl, N. A., Boylan, H., Collins, T., DeMarais, L., & Maxwell, M. (1999). Historical perspectives: With hindsight we gain foresight. In D. B. Lundell & J. L. Higbee (Eds.), Proceedings of the first intentional meeting on future directions in developmental education (pp. 13–16). Minneapolis, MN: The Center for Research on Developmental Education and Urban Literacy, University of Minnesota.
Stahl, N. A., Brozo, W. G., & Hynd, C. R. (1989). The development and validation of a comprehensive list of primary sources in college reading instruction with full bibliography (College Reading and Learning Assistance Technical Report No. 88-03). DeKalb, IL: Northern Illinois University. (ERIC Document Reproduction Service No. ED307597).
Stahl, N. A., Brozo, W. G., & Hynd, C. R. (1990, October). The development and validation of a comprehensive list of primary sources in college reading instruction. Reading Horizons, 31(1), 22–34.
*Stahl, N. A., & Hartman, D. K. (2011). Doing historical research on literacy. In N. K. Duke & M. H. Mallette (Eds.), Literacy research methodologies (2nd ed., pp. 213–241). New York, NY: Guilford Press.
Stahl, N. A., & Henk, W. A. (1986). Tracing the roots of textbook-study systems: An extended historical perspective. In J. A. Niles (Ed.), Solving problems in literacy: Learners, teachers and researchers – 35th yearbook of the National Reading Conference (pp. 366–374). Rochester, NY: National Reading Conference.
Stahl, N. A., Hynd, C. R., & Henk, W. A. (1986). Avenues for chronicling and researching the history of college reading and study skills instruction. Journal of Reading, 29(4), 334–341.
Stahl, N. A., & King, J. R. (2007). Oral history projects for the literacy profession. In S. E. Israel & E. J. Monaghan (Eds.), Shaping the reading field: The impact of early reading pioneers, scientific research, and progressive ideas (pp. 427–432). Newark, DE: International Reading Association.
Stahl, N. A., King, J. R., Dillon, D., & Walker, J. (1994). The roots of reading: Preserving the heritage of a profession through oral history projects. In E. G. Sturtevant & W. Linek (Eds.), Pathways for literacy: 16th yearbook of the College Reading Association (pp. 15–24). Commerce, TX: College Reading Association.
Stahl, N. A., King, J. R., & Eilers, V. (1996). Postsecondary reading strategies: Rediscovered. Journal of Adolescent and Adult Literacy, 39(5), 368–379.
Stahl, N. A., Simpson, M. L., & Brozo, W. G. (1988, Spring). The materials of college reading instruction: A critical and historical perspective from 50 years of content analysis research. Reading Research and Instruction, 27(3), 16–34.
Stahl, N. A., & Smith-Burke, M. T. (1999). The National Reading Conference: The college and adult reading years. Journal of Literacy Research, 31(1), 47–66.
Straff, W. W. (1986). Comparisons, contrasts, and evaluation of selected college reading programs (Unpublished doctoral dissertation). Temple University, Philadelphia, PA.
Strang, R. (1938 & 1940). Problems in the improvement of reading in high school and college. Lancaster, PA: Science Press.
Taylor, E. A. (1937). Controlled reading. Chicago, IL: University of Chicago Press.
Thurston, E. L., Lowe, A. J., & Hayden, L. (1965). A survey of college reading programs in Louisiana and Mississippi. In E. L. Thurston & L. E. Hafner (Eds.), The philosophical and sociological bases of reading – 14th yearbook of the National Reading Conference (pp. 110–114). Milwaukee, WI: National Reading Conference.
Tomlinson, L. M. (1989). Postsecondary developmental programs: A traditional agenda with new imperatives (ASHE-ERIC Higher Education Report 3). Washington, DC: Clearinghouse on Higher Education.
Van Gilder, L. L. (1970). A study of the changes within the National Reading Conference (Unpublished doctoral dissertation). Marquette University, Milwaukee, WI.
Van Leirsburg, P. J. (1991). The historical development of standardized reading tests in the United States, 1900–1991 (Unpublished doctoral dissertation). Northern Illinois University, DeKalb, IL.
Walker, M. M. J. (1980). The reading and study skills program at Northern Illinois University, 1963–1976 (Unpublished doctoral dissertation). Northern Illinois University, DeKalb, IL.
Walvekar, C. C. (1987). Thirty years of program evaluation: Past, present, and future. Journal of College Reading and Learning, 20(1), 155–161.
White, W., Martirosayin, R., & Wanjohi, R. (2009). Preparatory programs in nineteenth-century Midwest land grant colleges, Part 1. Research in Developmental Education, 23(1), 1–5.
White, W., Martirosayin, R., & Wanjohi, R. (2010a). Preparatory programs in nineteenth-century Midwest land grant colleges, Part 2. Research in Developmental Education, 23(2), 1–6.
White, W., Martirosayin, R., & Wanjohi, R. (2010b). Preparatory programs in nineteenth-century Midwest land grant colleges, Part 3. Research in Developmental Education, 23(3), 1–7.
*Wyatt, M. (1992). The past, present, and future need for college reading courses in the U.S. Journal of Reading, 36(1), 10–20.
Zimmer, J. E. (2007). Hints on gathering biographical data. In S. E. Israel & E. J. Monaghan (Eds.), Shaping the reading field: The impact of early reading pioneers, scientific research, and progressive ideas (pp. 417–420). Newark, DE: International Reading Association.
2 College Reading

Eric J. Paulson and Jodi Patrick Holschuh
Texas State University
This chapter focuses on select fundamental aspects of college developmental reading. The metaphorical frame of terrain is used to convey the breadth and depth of those fundamental aspects. While a complete survey of all aspects of college reading would entail volumes, this chapter reviews three terrains of college reading: the foundational terrain, the theoretical terrain, and the instructional terrain. The unique positioning of college developmental reading within institutions of higher education offers potential for impacting a large number of postsecondary learners. These considerations, including implications and further research recommendations, are discussed. Throughout this chapter, primary research and early discussion of pertinent topics that span the breadth of scholarship on college reading are referenced.
The Foundational Terrain of College Reading

Recently, college developmental reading—and developmental education in general—has been enjoying exposure and discussion in both educational and political outlets at unprecedented levels. While generally praising the principled intentions and ideals embedded in critical reading support for struggling college students, the discussion has not been entirely positive (e.g., Bailey, Jaggars, & Scott-Clayton, 2013; Complete College America, 2012). As part of this discussion, calls for eliminating developmental courses, including college reading, are regularly heard; Levin and Calcagno (2008) note that "the 'remediation crisis' has surely become one of the most controversial issues in higher education in recent times" (p. 181). Although this is indeed a topic of considerable debate, neither the topic of postsecondary developmental education nor complaints about students' lack of preparedness are new.

Politicians, pundits, and some educators who lament the current state of education in the U.S. often focus their questions around how U.S. education could have gotten so bad that so many college students now need developmental reading coursework. But that is the wrong question for several reasons. One reason is that developmental reading courses are not new to postsecondary education. In fact, there is a rich history of postsecondary literacy instruction in the U.S. Another reason is that reading instruction is entirely appropriate at every educational level, including elementary, middle, secondary, and postsecondary. As the following sections discuss, college reading courses have been historically prevalent, and they are available because they are needed and useful.
Historically Prevalent

Although Martino and Hoffman (2002) noted that "The number of college students experiencing difficulty with reading comprehension and study strategies is surprisingly high" (p. 310), we might emphasize that any level of difficulty can be considered high as educators strive for successful educational experiences for all their students. But the level at which this is "surprising" may need to be rethought, as the need for college reading instruction is not new. Over 100 years ago, in assessing college students' literacy proficiencies, Copeland and Rideout (1901) complained that "at one extreme of this class of Freshmen are the illiterate and inarticulate, who cannot distinguish a sentence from a phrase, or spell the simplest words" (p. 2) and that "so few of them have been brought up to read anything at all, or would now start to read of their own accord, that an acquaintance with a few books must be forced upon them" (p. 63). Triggs (1941) wrote four decades later that "research has established beyond a doubt that students entering college vary greatly in reading proficiency" (p. 371), and in the middle of the 20th century, Barbe (1952) estimated that "twenty percent of entering college students read less efficiently than did the average eighth-grade pupil" (p. 229). These complaints—some dating back more than a century—provide some context to fears that the need for college developmental reading is recent and unprecedented.

Stahl and King (2009; see also Chapter 1) noted that college reading has been an established subfield of reading pedagogy since the early 1900s, with evidence of reading assistance classes existing before that and developmental education services in general dating to 1630. Where there is developmental education, there is almost certainly developmental reading instruction, especially since "The history of developmental education cannot be separated from the history of college reading instruction. 
The two fields are mutually entailed" (Stahl & King, 2009, p. 9). As an example of this historical relationship, Boylan (2003) observed that one of the reasons for the establishment of the first college preparatory department over 150 years ago at the University of Wisconsin was to provide postsecondary reading instruction for its students. Indeed, most 19th-century students in American colleges could not meet what those colleges termed "basic skills" in reading, and at some universities, there were more preparatory students than non-preparatory students; by 1889, 335 out of 400 universities in the U.S. had preparatory departments (Wyatt, 1992). By the 1930s, Ivy League universities were among the many institutions providing developmental reading programs (Maxwell, 1997).

Research on reading in college soon followed. Dissertations focused on college reading appeared in the first decades of the 20th century (e.g., Anderson, 1928), and research designed to inform the improvement of college reading was commonplace by that time (e.g., Pressey & Pressey's "an experiment in teaching college students to read" [1930, p. 211]). There were enough college reading programs, and enough studies on them, that halfway through the 20th century, Robinson (1950) was able to do what amounts to an early meta-analysis of college reading studies, covering over 100 research reports.

Literacy studies focused on the current generation report similar levels of reading proficiency at the college level. National Assessment of Educational Progress (NAEP) data from 2015 indicate that only 37 percent of high school seniors are considered "college-ready" in reading (Camera, 2016; NAEP, 2016). In addition, the ACT college entrance test indicated that in 2016, less than half—44 percent—of incoming students were prepared for the reading requirements of a typical first-year college course (ACT, 2016). 
In an analysis of 57 institutions providing data as part of Achieving the Dream’s Community Colleges Count Initiative, Bailey, Jeong, and Cho (2010) found that 33 percent of the students in those schools were referred to developmental reading courses. At the turn of the most recent century, 42 percent of all first-year students in public two-year colleges were enrolled in developmental courses, and approximately half of those students—20 percent—were enrolled in developmental reading (Parsad, Lewis, & Greene, 2003). Note that the 20 percent figure those researchers quote is the same percentage noted by Barbe
(1952) half a century earlier. An exact count of students needing developmental reading courses varies widely, depending on the database analyzed as well as the local characteristics of each institution, but what is clear is that, historically, a substantial proportion of the college population has been served by developmental reading. As the next section makes clear, the need for college reading instruction has not diminished.
Necessity of College Reading Classes

Despite developmental reading courses' being intertwined with the history of postsecondary education, reading as a college course does not necessarily enjoy broad acceptance—or even awareness—from the general public or at times even from fellow educators. However, traditionally, many educators have realized the need for continued literacy instruction in the middle grades (International Reading Association, 2006; Moore & Stefanich, 1990; Wilhelm, 1997) and in secondary contexts (Allington, 1994; International Reading Association, 2006; Pressley, 2004). What is clear is that literacy instruction is an accepted part of the entire K-12 experience. We would extend this thinking to the postsecondary context. Coursework in college-level reading is an important part of a postsecondary educational context as well.

Several researchers argue that there is a need for advanced literacy instruction for all students beyond what is traditionally offered in precollege educational settings (Alexander, 2005; Shanahan & Shanahan, 2008). One assumption about literacy development is that proficiencies automatically evolve as readers advance through school—that as long as students have acquired adequate basic reading skills, they will be able to read anything successfully. However, this notion is not accurate. Early reading gains do not necessarily push students toward more advanced literacy achievement without continued instruction (Shanahan & Shanahan, 2008). Williamson (2008) found that there is a continuum of college readiness when it comes to learning from text because the literacy demands are more challenging in college-level texts. Students who were proficient readers of high school texts may still experience difficulty because their reading strategies are not appropriate for the types of texts they encounter in college. There is a clear need for expanded literacy instruction and support at postsecondary levels.
The Theoretical Terrain of College Reading

Viewing literacy as a social practice—one typified by the specific context in which the literacies are found and valued—is critically important when considering college developmental reading. Regrettably, to the extent that college reading textbooks reflect the type of teaching going on in the classroom (Wood, 1997), college developmental reading practice has traditionally been characterized by a focus on word-attack strategies and discrete skill-building, which does not support literacy as a social practice. A recent study that examined literacy demands for first-year community-college students bore this unfortunate premise out, noting that there was "considerable evidence suggesting that many of the deficits of secondary school language arts instruction are being replicated rather than remedied in community college teaching" (National Center on Education and the Economy [NCEE], 2013, p. 24).

Our theoretical perspective calls such an approach into question, particularly because the literacy practices of academic disciplines are wide-ranging social practices (Lea & Street, 2006; New London Group, 1996). Such social practice perspectives are not typically found in developmental reading textbooks, which often emphasize a general or generic comprehension approach. This generic comprehension approach is typified by focusing on the types of questions one might be asked on an exam—literal versus inferential questions, finding the main idea in an out-of-context passage, or defining vocabulary—and it does not take specialized knowledge or disciplinary social practices into account. This transmission model is not supported
by research. Developmental education professionals should be ready to prepare students for a variety of texts and text complexity, including text coherence, organization, disciplinary conventions, and sentence structure rather than focusing on discrete skills (Shanahan, Fisher, & Frey, 2012). We view postsecondary literacy instruction not as a set of technical skills to learn but as a constructive series of connections that takes place within the context of college. That is, this instruction takes place in a social network in which students must be able to critically examine their role in the network and how to navigate this aspect of society. Discussions of literacy processes must also include discussions of language processes (e.g., Kucer, 2014). Gee’s (2005) work provides a platform on which language processes as related to literacy instruction can be understood. The following section presents an overview of a Discourse view of literacy appropriate to the postsecondary context.
The Role of Discourse

Approaching literacy as social practice is connected to Gee's (2005) concept in linguistics of "big D discourse" (delineated as "Discourse" with a capital "D" here) as opposed to "little d discourse." "Little d discourse" refers to written and oral speech acts, propositions, syntactic arrangements, and a myriad of other aspects of language production: the bits and pieces of language that make up a string of verbal or written text. "Big D Discourse" encompasses those aspects of language as well but also refers to everything else that marks the user of the language as an authentic member of a group and how language is used by members of different groups. That is, big D Discourse ("Discourse" from this point forward) includes not only knowing what to say but when to say it, how to say it, in what context it is appropriate, and so on.

For example, think of all the different ways to describe a sporting event. Depending on the audience—fellow sports fan, spouse, child, grandparent, stranger on the subway, a person from a country where the game is not played, and so on—the way the speaker describes the event would change. Everything from the vocabulary used, the shared knowledge accessed, the emphasis on different parts of the description, and the structure and purpose of the description itself could vary. Gee's concept of Discourse provides insight into how the words in each description may be technically correct while their meaning and communicative force remain linked to specific contexts and audiences.

Applied to college developmental reading instruction, students must understand the Discourse of the academy and become proficient users of that Discourse. Every strategy, technique, discussion, reading act, or writing act is placed within the context of the academy and the students' lives, and this knowledge is not transmitted to students; instead, they are apprenticed into college academic literacy Discourses. 
The kinds of Discourses found in college are usually what Gee (2005) would term “secondary Discourses” in contrast to primary Discourses. A primary Discourse is one often acquired early in life, usually in contexts centered on family and peer groups, and is usually a nonspecialized Discourse. In some ways, primary Discourses are what our everyday identities are constructed from, which can change throughout our lifetimes. Secondary Discourses are distinguished from primary Discourses in that they are usually found in institutions or disciplines that exist in a more public, wider community sphere. College Discourses are specialized, secondary Discourses that carry with them expectations of identity construction and “belonging” in the institution. One role of developmental education has been to increase students’ awareness and control of the secondary Discourses that they encounter in college (Paulson, 2012). Viewing reading through a Discourse lens allows for an understanding of literacy instruction, not as isolated bits of skills to be learned but rather as a focus on when, where, why, and how to apply different aspects of effective reading knowledge and tools. Knowledge of the academic reading and writing expectations across the entire university or community college and how those
expectations are realized in each of the student’s classes becomes an important point of reference for the student’s understandings of academic literacy. This view expands the concept of developmental education beyond a single “one shot” college developmental reading course, which may necessitate instructional reconceptualizations, as we discuss later.
The Instructional Terrain of College Reading

An important aspect of increasing the effectiveness of developmental reading is tied to pedagogical choices. However, much of the instruction in developmental reading courses has traditionally centered on a transmission model of teaching isolated reading skills, such as selecting the main idea, identifying fact and opinion statements, and other subskills (Armstrong & Newman, 2011; Maxwell, 1997), despite calls for a more strategic or process-based approach (Simpson, Stahl, & Francis, 2004). Research results on skills-based instruction show little to no improvement in students' reading ability upon completion of these remedial courses (Merisotis & Phipps, 2000). Such an approach cannot adequately prepare students because the tasks of college vary widely across disciplines and purposes, and students are expected to engage and interpret text of increasing difficulty (Attewell, Lavin, Domina, & Levey, 2006; Eckert, 2011). Thus, the goal of instruction is not to fill a deficit, but to teach new literacy strategies that can accommodate the increase in literacy demands in unfamiliar, specialized Discourse milieus. Effective reading instruction is not a monolithic concept, however, and there is a host of important elements to consider when implementing classroom practice.
Multifaceted Instruction

Although the research literature presents few findings that show any improvement from a transmission, skills-based approach to reading, results of research on strategic reading, in which the focus includes but is not limited to the social, cognitive, metacognitive, and affective processes involved in academic reading, have been more encouraging (Alexander & Jetton, 2000; Caverly, Nicholson, & Radcliffe, 2004; Gee, 2004; Kucer, 2014; Pawan & Honeyford, 2009). Each of these foci illuminates a different aspect of reading, and it is through considering all of them that we gain what we think of as a multifaceted perspective on what it means to read in college. In this section, we consider each element briefly, while still emphasizing that it is their continual interaction that explains the complexity of reading and reading instruction.
Multifaceted Instruction: Social

The instructional outcome of the social practice perspective described earlier might be best thought of as an apprenticeship model. There are a variety of pedagogical models that can be aligned with such an approach. One such model is the New London Group's (1996) integration of four interrelated, nonhierarchical, nonlinear factors. The first of these is Situated Practice, in which "immersion in a community of learners engaged in authentic versions" (p. 84) of appropriate academic practices characterizes daily actions in the classroom. Situated Practice emphasizes the contextualized nature of mastery learning. Students would use context and experience to help them make sense of ideas. However, Situated Practice cannot stand on its own as an approach; students' background experiences vary greatly and may not lead to a metacognitive awareness of strategic learning.

Overt Instruction—the second factor described by the New London Group (1996)—is a useful complement to Situated Practice. Overt Instruction, in which metacognition is a core function of learners gaining control and conscious awareness of their learning, moves students toward mastery. It is important to note that Overt Instruction does not imply out-of-context
presentation and reproduction of discrete skills, but rather deliberate focus on learners understanding both the “how” and “what” of strategic learning. Critical Framing, the third factor, is concerned with how learners frame their expanding proficiencies “in relation to the historical, social, cultural, political, ideological, and value-centered relations” (p. 86) of the disciplinary area. Critical Framing can also aid learners in developing approaches for Situating Practice by providing a way to critique previous assumptions by thinking about them in new ways. Finally, through engaging in practice of the first three factors, the fourth factor, Transformed Practice, is made possible. A focus of this factor is in transferring these critical, situated masteries of practice to new situations in a recursive manner. Transformed Practice gets at the crucial educational aspect of, for example, college students’ ability to understand the traditions and rhetoric of different disciplines and work effectively within each. Using Transformed Practice, students construct new understandings from multiple contexts. For example, they may draw ties between ideas in their anthropology, psychology, business, and philosophy classes as they understand how and why homelessness (or another issue) occurs. The authors argue that through the juxtaposition and appropriate use of these four factors, students are able to achieve the two goals for literacy learning explicit in their model: “creating access to the evolving language of work, power, and community, and fostering the critical engagement necessary for them to design their social futures and achieve success through fulfilling employment” (New London Group, 1996, p. 60).
Multifaceted Instruction: Cognitive

Cognitive views of reading processes also center on the complex nature of reading. Specifically, they focus on the interactive nature of knowledge, taking into consideration factors such as interest, strategies, domain specificity, and task. Implicit in this approach is self-regulation of cognition, which implies a pedagogical shift to foster student responsibility for planning, decision-making, and reflection (Mulcahy-Ernt & Caverly, 2009). This view of cognition must also include the importance of its situated nature: that "reading, as well as other acts of cognition, is always situated" (Purcell-Gates, 2012, p. 467). Situated cognition relates social, behavioral, and cognitive perspectives of knowledge and learning (Clancey, 1997) in which students work in communities of practice where learning is viewed as active participation and interaction (Barab, Warren, del Valle, & Fang, 2006; Lave & Wenger, 1991). Thus, in the developmental reading classroom, students learn best when learning is scaffolded and based in real-world tasks, and when students are encouraged to generate solutions to problems (Brown, Collins, & Duguid, 1989).

To prepare students for the variety of disciplines, texts, and tasks they will encounter, instruction necessitates less focus on specific skills and more emphasis on the underlying processes needed to become a flexible reader by learning and understanding how, when, where, and why to use a variety of task-appropriate strategies that promote comprehension (Chapter 8; RAND Reading Study Group, 2002; Simpson & Nist, 2000). As we discuss in a subsequent section, explicit instruction of these processes—selecting, organizing, synthesizing, elaborating—may be more important and effective than teaching specific strategy heuristics (see also Chapter 8).
Multifaceted Instruction: Metacognitive

Metacognitive reading processes are those that encourage students to understand and regulate their own cognitive abilities and skills (Paris, Lipson, & Wixson, 1983; Sperling, Howard, Staley, & DuBois, 2004). There is more to metacognitive reading than any one individual reading strategy or action. Readers who can reflect metacognitively about reading are able to detect contradictions or inconsistencies in text, can pull out important information, and can select different
College Reading
strategies depending on the text and the discipline (Alexander, 2005; Pintrich, 2002). Metacognitive readers understand that active reading consists of predicting, questioning, clarifying, and summarizing (Pressley, 2002). They also understand that they are responsible for monitoring their cognition and strategy use while reading (Winne, 2005). Metacognitive knowledge has been shown to be a significant predictor of reading comprehension; however, students do not automatically develop useful metacognitive strategies with time or age (Baker, 2008). Pintrich (2002) noted that there are a “number of students who come to college having very little metacognitive knowledge; knowledge about different strategies, different cognitive tasks, and particularly, accurate knowledge about themselves” (p. 223). However, there is some compelling evidence that metacognition can be developed through instruction. Pressley (2000) noted that reading strategy instruction promotes metacognition when instruction includes an explanation and model of the strategy, offers opportunities for students to practice the strategy, and encourages reflection after reading. Metacognitive reading instruction focuses on comprehension monitoring, elaborating, and regulating strategies (Pressley, Gaskins, & Fingeret, 2006). Metacognitive reading can also be developed as students gain control of the strategies they use. Research has indicated that students can begin to question the influences of their own values and beliefs on their text interpretation as they become more adept at strategy use (Eckert, 2011). Additionally, beliefs about literacy are part of the student knowledge base that educators should ideally take into account when planning instruction. This is important because how students conceptualize literacy can affect how they approach texts and reading tasks as well as the strategies they use while reading (Paulson & Mason-Egan, 2007; Schraw & Bruning, 1996). 
Unfortunately, students’ conceptualizations about literacy learning are often unclear or unarticulated, with potentially hindering consequences (Hardin, 2001). If students understand reading in one way but evaluate it (or are evaluated) in a way that runs counter to their conceptualizations, an inconsistency develops in how they participate in the literacy practices that contribute to their own reading development. Fortunately for college developmental reading instructors, conceptualizations are neither set in stone nor wholly external to the classroom, and can be shaped by the pedagogical environment, including classroom language (e.g., see Paulson & Kendall Theado, 2015). In the developmental reading classroom, conceptualizations of reading are found along several spectra of views, including product and process, positive and negative affect, and various metaphorical frames that range from a journey to a sport to a relationship (Paulson & Armstrong, 2011). Understanding the range of conceptualizations by students, and working to generate understandings of literacy that lend themselves to effective practices, can be part of the metacognitive discussions instructors have with their students in developmental reading classes. In addition to the general conceptualizations about literacy that students hold, they also bring a multitude of beliefs about specific concepts and disciplines (Nist & Holschuh, 2005) to each learning situation. These beliefs, which are impacted by prior domain knowledge and their general literacy conceptualizations, influence comprehension at all levels and may influence student interaction with text. Suppose a student holds a belief that everything contained in a textbook or on a printed page must be true. That student would have trouble reconciling multiple explanations or theories, which may result in comprehension difficulties (Schommer, 1994; Shanahan & Shanahan, 2012).
Experts and novices have beliefs about text that cause them to respond to and interpret text in different ways (Hynd-Shanahan, Holschuh, & Hubbard, 2004; Reisman & Wineburg, 2008; Wineburg, 1991). For example, expert readers believe that science text is approached differently from history text (Nist & Holschuh, 2005). However, many beginning college students do not share these beliefs and, for example, may be unable to see the subtexts in history texts that are readily apparent to expert readers (Wineburg, 1991). Wineburg (1991) argues that for students to be able to detect subtext, an important literacy skill for reading history, they must have a particular epistemology of text—they must believe that these subtexts actually exist. Although
Eric J. Paulson and Jodi Patrick Holschuh
many students enter developmental reading classrooms with relatively unsophisticated conceptions of knowledge, it is encouraging that beliefs about text can be positively impacted through instruction that includes providing background knowledge, modeling, making explicit ties to strategy selection, and opportunities for practice (Holschuh & Hubbard, 2013; Nist & Holschuh, 2005; Reisman & Wineburg, 2008).
Multifaceted Instruction: Affective

An increasing body of literature focuses on the influence of affect on reading proficiency. Affective influences are tied to identity, as students must understand themselves as learners who can negotiate the complex, multifaceted literacy demands of college that involve much more than knowledge of specific, isolated skills (Paulson & Armstrong, 2010). Although there are many dimensions of the affective component, we address two major influences that instruction can shape: self-schemas about reading and motivation for reading. Self-schemas, which are general characterizations individuals ascribe to themselves that are derived from past experiences (Ng, 2005; Pintrich & Garcia, 1994), are domain and context-specific and are related to competency beliefs in that individuals have varying reactions to different domains based on past experiences (Linnenbrink & Pintrich, 2003; Ng, 2005). For example, a student who has experienced success in writing courses and low achievement in mathematics courses will have a more positive self-schema and higher self-efficacy about writing. Thus, affective influences can impact motivation for learning “by providing critical feedback to the self about the self’s thoughts, intentions, and behavior” (Tangney, 2003, p. 384). College instructors often feel frustrated by the difficulty of motivating students to learn (Hofer, 2002; Svinicki, 1994), and some research has indicated that reading comprehension is directly tied to motivation through engagement (Guthrie et al., 2007). Motivation can impact comprehension, but it also appears that setting the conditions for motivation can increase reading comprehension, especially for informative texts (Guthrie et al., 2007).
Such conditions include giving students some choices on text and task (Turner & Patrick, 2008), setting reading goals based on content rather than skill building, and emphasizing a mastery approach to learning from text (Guthrie et al., 2006; Linnenbrink & Pintrich, 2003). Additionally, Bray, Pascarella, and Pierson (2011) found that having a variety of reading experiences (e.g., assigned and unassigned reading, library research experiences) was tied to growth in reading comprehension and to positive attitudes toward reading in the first three years of college.
Beyond Heuristics

A question many college reading instructors have asked themselves centers on whether certain reading strategies are more effective than others, and the field has various empirical studies that focus on the efficacy of specific strategies (Caverly, Nicholson, & Radcliffe, 2004; Martino, Norris, & Hoffman, 2001; Perin, Bork, Peverly, & Mason, 2013; Snyder, 2002). Of course, problems manifest themselves when we reify a particular strategy as being equally useful to all students or even equally useful in all reading situations. A specific strategy—like the nearly ubiquitous SQ3R—used successfully by one student in one course will not necessarily be effective when used by another student in another course, due to differences in text material, background knowledge, course focus, academic task demands, and a host of other contextual reasons. Beyond the utility of the strategy itself, if the student does not have metacognitive awareness of how and why the strategy works, when and in what circumstances to employ it, and how to adapt it for different purposes and different texts and situations, its effectiveness will vary widely. In many circumstances, efficacy may not be attributed to the strategy itself, but to how it is understood and
employed by students. If students are not aware of the purpose of the strategy, how to employ the strategy in a variety of contexts, or how the goals of the instructor, course, student, and text author intersect, its effectiveness will diminish (Paulson & Bauer, 2011). Compounding issues of strategy misapplication is the fact that there are easily hundreds of reading strategies available in publications, on the Internet, in professional development courses, and elsewhere. Instructors’ decisions about which strategies to teach their students can be complicated. In fact, perhaps the question is not “what strategy is best” but rather “what aspects of reading strategies are useful.” That is, it is important to focus not just on which specific strategies should be recommended but also on the broad elements of effective strategies. To that end, Simpson and Nist (2000) reviewed the major research foundations for common processes within strategies and grouped these aspects of strategies into four major categories: question generation and answer elaboration, text summarization, student-generated elaborations, and organizing strategies. Note that these categories are not mutually exclusive; that is, a given strategy could be question generation and answer elaboration by itself, or a strategy could incorporate question generation and answer elaboration as one of the steps of a more wide-ranging strategy (such as in some versions of SQ3R—which is likely why that strategy, and others that are similarly widespread, continue to find new audiences). Evaluating the potential effectiveness of a strategy—either a found strategy or one created by instructor or students—through the lens of these four broad descriptions of effective elements of reading strategies promotes understanding of a strategy’s important aspects and whether it is likely to be useful.
A strategy that does not appear to incorporate any of these elements may still be useful in certain contexts, but it is less likely to have a solid research base underlying it; it is important to examine what such a strategy purports to do and whether there is any basis for those expectations.
Recommendations for Practice

Research indicates some general instructional principles that show promise. For example, using active, student-centered instructional approaches has been demonstrated to be effective with learners in developmental contexts (Boylan, 2003; Simpson & Nist, 2000). Using contextual, real-world texts rather than short, manipulated paragraphs helps students transfer their learning to their non-developmental classes (Simpson & Nist, 2000). Peer collaboration and a focus on mastery learning have been tied to student engagement and motivation (Turner & Patrick, 2008). Culturally responsive teaching, which includes using cultural knowledge, prior experiences, and examples from many cultures (Gay, 2000), has shown potential for increasing student motivation and learning. We are encouraged by the direction the field is heading in terms of these approaches. Following from a multifaceted perspective on literacy that relies heavily on understanding literacy as social processes, any instructional outcomes aligning with that perspective must include certain pedagogical elements. Two of those elements are the nature of literacy processes as Discourse processes and the contextualized specificity of disciplinary foci involved in reading and writing. In the next section, we discuss additional areas of promise for instruction, especially in terms of how they can be intertwined: integrating reading and writing with disciplinary literacy (DL).
Integrating Reading and Writing with Disciplinary Literacy

A DL perspective emphasizes the knowledge, abilities, and unique tools that people within a discipline use to participate in the discipline (Shanahan & Shanahan, 2012). DL makes the assumption that reading and writing tasks and processes differ based upon the demands, foci, and
epistemology of the discipline. The aim is to identify the reading and writing distinctions among the disciplines and create instruction to help students successfully negotiate the literacy demands across disciplines (Shanahan & Shanahan, 2012). It is also tied to pedagogical content knowledge in that it involves ways teachers can construct teaching and learning with texts in their disciplines (Moje, 2007). This instruction seeks to make the disciplinary differences in reading and writing conventions explicit for students (Shanahan & Shanahan, 2008). DL allows students to engage in deep learning within a specific context and involves reading, writing, and communication. DL tasks allow students to experience rigor and cognitively demanding work in ways that are supported. As noted by McConachie and Apodaca, “Embedding DL routines and relevant, challenging tasks into lessons are fundamental components of making equity and excellence attainable for every student” (2009, p. 166). Where that instruction is embedded, however, is important. Currently, the vehicle for apprenticing developmental education students with a DL focus is developmental reading and writing coursework. Traditionally, the separate and stand-alone nature of reading courses and writing courses allowed only very artificial connections to disciplinary conventions and even fewer opportunities to engage with those disciplines in any real way. In the last several years, modes of instruction at the course level focused on integrated reading and writing have opened up new possibilities for more authentic DL instruction. Although the integration of developmental reading and writing at the course level is a current focus at institutional and state levels, the theoretical foundation for integrated reading and writing instructional models is not new, having been a part of the overall literacy field for decades (e.g., Shanahan & Lomax, 1988; Smith, Jensen, & Dillingofski, 1971). 
Early work focusing on integrated reading and writing in postsecondary contexts has also been useful (Bartholomae & Petrosky, 1986; see also Chapter 9), and programs in California and Ohio have focused on integrating reading and writing in a variety of ways in developmental education contexts since the late 1990s (e.g., Goen & Gillotte-Tropp, 2003; Goen-Salter, 2008; Laine, 1997). Other programs that have received national attention have approached integration from an acceleration framework. For example, the California Acceleration Project views integrating reading and writing not only as pedagogically appropriate in the college context, but also as one way to shorten the time students spend in developmental education (see Hern, 2012; Hern & Snell, 2010). Moving students through preparatory sequences efficiently is important, as long as students’ literacy experiences are not artificially truncated due to time; however, the primary reason that integrating reading and writing is beneficial lies more in their shared social, cognitive, and language bases and their pedagogical interrelatedness; that is, at its core is the perspective that as modes of language, they are inextricably related (Parodi, 2007) and should be approached as such in the classroom. As Purcell-Gates notes, “reading and writing are both social and mental acts situated within specified sociocultural contexts” (2012, p. 469). The integration of reading and writing is supported by social-constructivist models of learning, in which reading and writing are both viewed “as social and cultural tools for acquiring and practicing learning” (Quinn, 1995, p. 306). This does not imply that teaching writing is sufficient to automatically result in gains in reading, or vice versa (Shanahan, 1984), but it does mean that the two should be focused on continuously, throughout a course of study, on every assignment and every text.
This integrated view of literacy may benefit the field, as reading and writing taught together have the potential for greater gains in overall student learning. Constructing an integrated reading and writing developmental education classroom that teaches students intertwined reading and writing strategies to negotiate different DL goals could be highly impactful. From a content standpoint, for example, students could read several accounts of a historical event and write an interpretation of what happened. To do this, they would need to be able to pull out the most important information in each text; summarize and synthesize across texts; and write in a genre appropriate for the task, purpose, and audience.
From a structural standpoint, we might look toward effective corequisite or paired-course schemes, where a developmental support class—reading, or math, for example—is paired with a college credit class—history, or algebra, for example—so that students take both the developmental education course and the disciplinary course simultaneously (Edgecombe, 2011). This has been done effectively with reading coursework especially: taking the effective “Reading Paired with History” structure and reinventing it as Integrated Reading & Writing Paired with History (or psychology, or biology, etc.) is a starting point. In sum, students would be working on the reading, writing, and vocabulary skills and strategies currently used in many developmental education reading and writing classrooms, but they would be engaging in literacy practices for meaningful purposes, which has been found to be an important consideration for student engagement in learning (Hull & Moje, 2012).
Conclusions and Moving Forward

We conclude with some thoughts on future research and instruction. Over a decade ago, Simpson, Stahl, and Francis (2004) called for an approach that focuses on factors that contribute to growth or change over a period of time, or the “why” questions. This is in contrast to the “what” questions that are often asked, such as retention in a course, standardized exam scores, or grade point averages. Many developmental reading program evaluations currently focus solely on the “what” questions, but longitudinal, sustained assessment will yield a more complete view of the impact of our efforts. Such studies may focus on instruction, pedagogy, content, or affect, any and all of which would benefit the field. Additionally, the field must consider policy issues (see Chapter 3). How does reading connect to departments, institutions, and national policies? How do the lenses we use to view college reading impact our approaches to instruction? There is a good deal of research needed on policy at college, state, and national levels. Finally, we need to examine the impact of integrated approaches to reading and writing. We suspect that instructional models that creatively integrate effective approaches to reading with effective approaches to writing through a multifaceted literacy lens will benefit students more than an approach that merely assembles and combines current reading and writing course curricula. We also suspect that creating course objectives that include college success beyond developmental education coursework will be more successful as well. Continued focus on research is needed to determine best practices.
Note

1 A version of this chapter was previously commissioned and distributed as a white paper by the College Reading and Learning Association (Holschuh & Paulson, 2013).
References and Suggested Readings

ACT. (2016). The Condition of College & Career Readiness 2016: National. Iowa City, IA: Author. Retrieved from www.act.org/content/dam/act/unsecured/documents/CCCR_National_2016.pdf
*Alexander, P. A. (2005). The path to competence: A lifespan developmental perspective on reading. Journal of Literacy Research, 37(4), 413–436.
Alexander, P. A., & Jetton, T. L. (2000). Learning from text: A multidimensional and developmental perspective. In M. L. Kamil, P. B. Mosenthal, P. D. Pearson, & R. Barr (Eds.), Handbook of reading research (Vol. 3, pp. 285–310). Mahwah, NJ: Lawrence Erlbaum Associates.
Allington, R. (1994). The schools we have. The schools we need. The Reading Teacher, 48(1), 14–29.
Anderson, E. M. (1928). Individual differences in the reading ability of college students. (Unpublished doctoral dissertation). University of Missouri-Columbia, Columbia, MO.
Armstrong, S. L., & Newman, M. (2011). Teaching textual conversations: Intertextuality in the college reading classroom. Journal of College Reading and Learning, 41(2), 6–21.
Attewell, P., Lavin, D., Domina, T., & Levey, T. (2006). New evidence on college remediation. The Journal of Higher Education, 77(5), 886–924.
Bailey, T., Jaggars, S. S., & Scott-Clayton, J. (2013). Characterizing the effectiveness of developmental education: A response to recent criticism. Retrieved from http://ccrc.tc.columbia.edu/media/k2/attachments/responseto-goudas-and-boylan.pdf
Bailey, T., Jeong, D. W., & Cho, S. W. (2010). Referral, enrollment, and completion in developmental education sequences in community colleges. Economics of Education Review, 29(2), 255–270.
Baker, L. (2008). Metacognition in comprehension instruction: What we have learned since NRP. In C. C. Block, S. R. Parris, & L. M. Morrow (Eds.), Comprehension instruction: Research-based best practices (2nd ed., pp. 65–79). New York, NY: Guilford.
Barab, S., Warren, S. J., del Valle, R., & Fang, F. (2006). Coming to terms with communities of practice. In J. A. Pershing (Ed.), Handbook of human performance technology (pp. 640–664). San Francisco, CA: John Wiley & Sons.
Barbe, W. B. (1952). The effectiveness of work in remedial reading at the college level. Journal of Experimental Psychology, 43(4), 229–307.
*Bartholomae, D., & Petrosky, A. (1986). Facts, artifacts, and counterfacts: Theory and method for a reading and writing course. Portsmouth, NH: Heinemann.
Boylan, H. (2003). Developmental education: What’s it about? In N. A. Stahl & H. Boylan (Eds.), Teaching developmental reading: Historical, theoretical, and practical background readings (pp. 1–10). Boston, MA: Bedford/St. Martin’s.
Bray, G. B., Pascarella, E. T., & Pierson, C. T. (2011). Postsecondary education and some dimensions of literacy development: An exploration of longitudinal evidence. Reading Research Quarterly, 39(3), 306–330. doi:10.1598/RRQ.39.3.3
*Brown, J. S., Collins, A., & Duguid, P. (1989). Situated cognition and the culture of learning. Educational Researcher, 18(1), 32–42.
Camera, L. (2016). High school seniors aren’t college ready. U.S. News & World Report, April 27, 2016. Retrieved from www.usnews.com/news/articles/2016-04-27/high-school-seniors-arent-college-readynaep-data-show
Caverly, D. C., Nicholson, S. A., & Radcliffe, R. (2004). The effectiveness of strategic reading instruction for college developmental readers. Journal of College Reading and Learning, 35(1), 25–49.
Clancy, W. J. (1997). Situated cognition: On human knowledge and computer representations. Cambridge, UK: Cambridge University Press.
Complete College America. (2012). Remediation: Higher education’s bridge to nowhere. Retrieved from http://completecollege.org/docs/CCA-Remediation-final.pdf
Copeland, C. T., & Rideout, H. M. (1901). Freshman English and theme-correcting in Harvard College. New York, NY: Silver, Burdett, and Company.
Eckert, L. S. (2011). Bridging the pedagogical gap: Intersections between literary and reading theories in secondary and postsecondary literacy instruction. Journal of Adolescent & Adult Literacy, 52(2), 110–118. doi:10.1598/JAAL.52.2.2
Edgecombe, N. (2011). Accelerating the academic achievement of students referred to developmental education. (CCRC Working Paper No. 30). Retrieved from http://ccrc.tc.columbia.edu/media/k2/attachments/accelerating-academic-achievement-students.pdf
Gay, G. (2000). Culturally responsive teaching: Theory, research, & practice. New York, NY: Teachers College Press.
*Gee, J. P. (2004). Reading as situated language: A sociocognitive perspective. In N. J. Unrau & R. B. Ruddell (Eds.), Theoretical models and processes of reading (5th ed., pp. 116–132). Newark, DE: International Reading Association.
*Gee, J. P. (2005). An introduction to discourse analysis: Theory and method (2nd ed.). New York, NY: Routledge.
Goen-Salter, S. (2008). Critiquing the need to eliminate remediation: Lessons from San Francisco State. Journal of Basic Writing, 27(2), 81–105.
Goen, S., & Gillotte-Tropp, H. (2003). Integrating reading and writing: A response to the basic writing “crisis.” Journal of Basic Writing, 22(2), 90–113.
Guthrie, J. T., Hoa, A. L. W., Wigfield, A., Tonks, S. M., Humenick, N. M., & Littles, E. (2007). Reading motivation and reading comprehension growth in the later elementary years. Contemporary Educational Psychology, 32(3), 282–313. doi:10.1016/j.cedpsych.2006.05.004
Guthrie, J. T., Wigfield, A., Humenick, N. M., Perenevich, K. C., Taboada, A., & Barbosa, P. (2006). Influences of stimulating tasks on reading motivation and comprehension. The Journal of Educational Research, 99(4), 232–246.
Hardin, V. B. (2001). Transfer and variation in cognitive reading strategies of Latino fourth-grade students in a late-exit bilingual program. Bilingual Research Journal, 25(4), 539–561.
Hern, K. (2012). Acceleration across California: Shorter pathways in developmental English and math. Change: The Magazine of Higher Learning, 44(3), 60–68.
Hern, K., & Snell, M. (2010). Exponential attrition and the promise of acceleration in developmental English and math. Retrieved from http://3csn.org/developmental-sequences/
*Hofer, B. (2002). Motivation in the college classroom. In W. J. McKeachie (Ed.), McKeachie’s teaching tips: Strategies, research and theory for college and university teachers (12th ed., pp. 118–127). Wilmington, MA: D.C. Heath.
Holschuh, J. P., & Hubbard, B. P. (2013, February). Student responses to epistemic nudging in developmental education courses. Paper presented at the annual meeting of the National Association of Developmental Education, Denver, CO.
Holschuh, J. P., & Paulson, E. J. (2013). The terrain of college developmental reading. The College Reading & Learning Association. Retrieved from www.crla.net/publications.htm
Hull, G., & Moje, E. B. (2012, January). What is the development of literacy the development of? Paper presented at the Understanding Language Conference, Stanford, CA.
Hynd-Shanahan, C. R., Holschuh, J. P., & Hubbard, B. P. (2004). Thinking like a historian: College students’ reading of multiple historical documents. Journal of Literacy Research, 36(2), 141–176.
International Reading Association. (2006). Standards for middle and high school literacy coaches. Retrieved from www.reading.org/downloads/resources/597coaching_standards.pdf
Kucer, S. (2014). Dimensions of literacy: A conceptual base for teaching reading and writing in school settings (4th ed.). New York, NY: Routledge.
Laine, M. N. (1997). Unmuted voices: The role of oral language in development perceptions regarding reading and writing relationships of college developmental students. (Unpublished doctoral dissertation). University of Cincinnati, Cincinnati, OH.
Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge, UK: Cambridge University Press.
*Lea, M. R., & Street, B. V. (2006). The “academic literacies” model: Theory and applications. Theory into Practice, 45(4), 368–377.
Levin, H., & Calcagno, J. C. (2008). Remediation in the community college: An evaluator’s perspective. Community College Review, 35(3), 181–207.
Linnenbrink, E. A., & Pintrich, P. R. (2003). The role of self-efficacy beliefs in student engagement and learning in the classroom. Reading & Writing Quarterly, 19(2), 119–137.
Martino, N. L., & Hoffman, P. R. (2002). An investigation of reading and language abilities of college freshmen. Journal of Research in Reading, 25(3), 310–318.
Martino, N. L., Norris, J., & Hoffman, P. (2001). Reading comprehension instruction: Effects of two types. Journal of Developmental Education, 25(1), 2–10.
Maxwell, M. (1997). Improving student learning skills: A new edition. Clearwater, FL: H & H Publishers.
McConachie, S. M., & Apodaca, R. E. (2009). Embedding disciplinary literacy. In S. M. McConachie & A. R. Petrosky (Eds.), Content matters: A disciplinary literacy approach to improving student learning (pp. 163–196). San Francisco, CA: Jossey-Bass.
Merisotis, J. P., & Phipps, R. A. (2000). Remedial education in colleges and universities: What’s really going on? The Review of Higher Education, 24(1), 67–85.
*Moje, E. B. (2007). Chapter 1: Developing socially just subject matter instruction: A review of the literature on disciplinary literacy teaching. Review of Research in Education, 31(1), 1–44. doi:10.3102/0091732X07300046
Moore, D. W., & Stefanich, G. P. (1990). Middle school reading: A historical perspective. In G. G. Duffy (Ed.), Reading in the middle school (pp. 3–15). Newark, DE: International Reading Association.
Mulcahy-Ernt, P. I., & Caverly, D. C. (2009). Strategic study-reading. In R. F. Flippo & D. C. Caverly (Eds.), Handbook of college reading and study strategy research (2nd ed., pp. 177–198). New York, NY: Routledge.
NAEP. (2016). The nation’s report card: Mathematics and reading at grade 12. Retrieved from https://nationsreportcard.gov/reading_math_g12_2015/
National Center on Education and the Economy. (2013). What does it really mean to be college and work ready? The English literacy required of first year community college students. Retrieved from www.ncee.org
*New London Group. (1996). A pedagogy of multiliteracies: Designing social futures. Harvard Educational Review, 66(1), 60–92.
Ng, C. H. (2005). Academic self-schemas and their self-congruent learning patterns: Findings verified with culturally different samples. Social Psychology of Education, 8(3), 303–328.
Nist, S. L., & Holschuh, J. P. (2005). Practical applications of the research on epistemological beliefs. Journal of College Reading and Learning, 35(2), 84–92.
Paris, S. G., Lipson, M. Y., & Wixson, K. K. (1983). Becoming a strategic reader. Contemporary Educational Psychology, 8(3), 293–316.
Parodi, G. (2007). Reading-writing connections: Discourse-oriented research. Reading and Writing, 20(3), 225–250.
Parsad, B., Lewis, L., & Greene, B. (2003). Remedial education at degree-granting post-secondary institutions in fall 2000. Washington, DC: National Center for Educational Statistics, Institute for Educational Science, U.S. Department of Education. Retrieved from http://nces.ed.gov/pubs2004/2004010.pdf
Paulson, E. J. (2012). A discourse mismatch theory of college learning. In K. Agee & R. Hodges (Eds.), Handbook for training peer tutors and mentors (pp. 7–10). Mason, OH: Cengage Learning.
Paulson, E. J., & Armstrong, S. L. (2010). Postsecondary literacy: Coherence in theory, terminology, and teacher preparation. Journal of Developmental Education, 33(3), 2–9.
Paulson, E. J., & Armstrong, S. L. (2011). Mountains and pit bulls: Students’ metaphors for college reading and writing. Journal of Adolescent & Adult Literacy, 54(7), 494–503.
Paulson, E. J., & Bauer, L. (2011). Goal setting as an explicit element of metacognitive reading and study strategies for college readers. NADE Digest, 5(3), 41–49.
Paulson, E. J., & Kendall Theado, C. (2015). Locating agency in the classroom: A metaphor analysis of teacher talk in a college developmental reading class. Classroom Discourse, 6(1), 1–19.
Paulson, E. J., & Mason-Egan, P. (2007). Retrospective Miscue Analysis for struggling postsecondary readers. Journal of Developmental Education, 31(2), 2–13.
Pawan, F., & Honeyford, M. (2009). Academic literacy. In R. F. Flippo & D. C. Caverly (Eds.), Handbook of college reading and study strategy research (2nd ed., pp. 26–46). New York, NY: Routledge.
Perin, D., Bork, R. H., Peverly, S. T., & Mason, L. H. (2013). A contextualized curricular supplement for developmental reading and writing. Journal of College Reading and Learning, 43(2), 8–38.
Pintrich, P. R. (2002). The role of metacognitive knowledge in learning, teaching, and assessing. Theory into Practice, 41(4), 219–225.
*Pintrich, P. R., & Garcia, T. (1994). Self-regulated learning in college students: Knowledge, strategies, and motivation. In P. R. Pintrich, D. R. Brown, & C. E. Weinstein (Eds.), Student motivation, cognition, and learning: Essays in honor of Wilbert J. McKeachie (pp. 113–134). Hillsdale, NJ: Erlbaum.
Pressey, L. C., & Pressey, S. L. (1930). Training college freshmen to read. The Journal of Educational Research, 21(3), 203–211.
Pressley, M. (2000). What should comprehension instruction be the instruction of? In M. L. Kamil, P. B. Mosenthal, P. D. Pearson, & R. Barr (Eds.), Handbook of reading research (Vol. 3, pp. 545–563). Mahwah, NJ: Lawrence Erlbaum Associates.
*Pressley, M. (2002). Metacognition and self-regulated comprehension. In A. E. Farstrup & S. J. Samuels (Eds.), What research has to say about reading instruction (3rd ed., pp. 291–309). Newark, DE: International Reading Association.
Pressley, M. (2004). The need for research on secondary literacy education. In T. L. Jetton & J. A. Dole (Eds.), Adolescent literacy research and practice (pp. 415–432). New York, NY: Guilford.
Pressley, M., Gaskins, I., & Fingeret, L. (2006). Instruction and development of reading fluency in struggling readers. In S. J. Samuels & A. E. Farstrup (Eds.), What research has to say about fluency instruction (pp. 4–23). Newark, DE: International Reading Association.
Purcell-Gates, V. (2012). Epistemological tensions in reading research and a vision for the future. Reading Research Quarterly, 47(4), 465–471. doi:10.1002/RRQ.031
Quinn, K. B. (1995). Teaching reading and writing as modes of learning in college: A glance at the past; a view to the future. Reading Research and Instruction, 34(4), 295–314.
RAND Reading Study Group. (2002). Reading for understanding: Towards an R&D program in reading comprehension. Retrieved from www.rand.org/multi/achievementforall/reading/readreport.html
Reisman, A., & Wineburg, S. (2008). Teaching the skill of contextualizing in history. The Social Studies, 99(5), 202–207.
Robinson, H. A. (1950). A note on the evaluation of college remedial reading courses. Journal of Educational Psychology, 41(2), 83–96.
Schommer, M. (1994). An emerging conceptualization of epistemological beliefs and their role in learning. In R. Garner & P. A. Alexander (Eds.), Beliefs about text and instruction with text (pp. 25–40). Hillsdale, NJ: Erlbaum.
Schraw, G., & Bruning, R. (1996). Readers’ implicit models of reading. Reading Research Quarterly, 31(3), 290–305.
Shanahan, T. (1984). The reading-writing relation: An exploratory multivariate analysis. Journal of Educational Psychology, 76(3), 466–477.
College Reading
*Shanahan, T., Fisher, D., & Frey, N. (2012). The challenge of challenging text. Educational Leadership, 69(6), 58–62.
Shanahan, T., & Lomax, R. (1988). A developmental comparison of three theoretical models of the reading-writing relationship. Research in the Teaching of English, 22(2), 196–212.
*Shanahan, T., & Shanahan, C. (2008). Teaching disciplinary literacy to adolescents: Rethinking content-area literacy. Harvard Educational Review, 78(1), 40–61.
Shanahan, T., & Shanahan, C. (2012). What is disciplinary literacy and why does it matter? Topics in Language Disorders, 32(1), 7–18.
*Simpson, M. L., & Nist, S. L. (2000). An update on strategic learning: It’s more than textbook reading strategies. Journal of Adolescent & Adult Literacy, 43(6), 528–541.
Simpson, M. L., Stahl, N. A., & Francis, M. A. (2004). Reading and learning strategies: Recommendations for the 21st century. Journal of Developmental Education, 28(2), 2–4, 6, 8, 10–12, 14.
Smith, R. J., Jensen, K. M., & Dillingofski, M. S. (1971). The effects of integrating reading and writing on four variables. Research in the Teaching of English, 5(2), 179–189.
Snyder, V. (2002). The effect of course-based reading strategy training on the reading comprehension skills of developmental college students. Research and Teaching in Developmental Education, 18(2), 37–41.
Sperling, R. A., Howard, B. C., Staley, R., & DuBois, N. (2004). Metacognition and self-regulated learning constructs. Educational Research and Evaluation, 10(2), 117–139.
Stahl, N. A., & King, J. R. (2009). A history of college reading. In R. F. Flippo & D. C. Caverly (Eds.), Handbook of college reading and study strategy research (2nd ed., pp. 3–25). New York, NY: Routledge.
Svinicki, M. D. (1994). Research on college student learning and motivation: Will it affect college instruction? In P. R. Pintrich, D. R. Brown, & C. E. Weinstein (Eds.), Student motivation, cognition, and learning: Essays in honor of Wilbert J. McKeachie (pp. 331–342). Hillsdale, NJ: Erlbaum.
Tangney, J. P. (2003). Self-relevant emotions. In M. R. Leary & J. P. Tangney (Eds.), Handbook of self and identity (pp. 384–400). New York, NY: Guilford Press.
Triggs, F. O. (1941). Remedial reading. The Journal of Educational Research, 12(7), 371–377.
Turner, J. C., & Patrick, H. (2008). How does motivation develop and why does it change? Reframing motivation research. Educational Psychologist, 43(3), 119–131. doi:10.1080/00461520802178441
Wilhelm, J. D. (1997). You gotta be the book: Teaching engaged and reflective reading with adolescents. Urbana, IL: National Council of Teachers of English.
Williamson, G. L. (2008). A text readability continuum for postsecondary readiness. Journal of Advanced Academics, 19(4), 602–632.
Wineburg, S. S. (1991). On the reading of historical texts: Notes on the breach between school and academy. American Educational Research Journal, 28(3), 495–519.
Winne, P. H. (2005). Key issues in modeling and applying research on self-regulated learning. Applied Psychology: An International Review, 54(2), 232–238.
Wood, N. V. (1997). College reading instruction as reflected by current reading textbooks. Journal of College Reading and Learning, 27(3), 79–95.
Wyatt, M. (1992). The past, present, and future need for college reading courses in the U.S. Journal of Reading, 36(1), 10–20.
3 Policy Issues
Tara L. Parker
University of Massachusetts Boston
Much of the United States’ societal strength and economic growth depends on its success in developing an educated citizenry (Dewey, 1916; Grossman, 2006). Prospective students representing a multitude of diverse racial, ethnic, cultural, and socioeconomic (SES) groups often turn to higher education, hoping a baccalaureate degree will bring them much closer to achieving their own American dreams. Many of these prospective students, however, are turned away as they are told they are unqualified, ineligible, and underprepared for the rigor of higher education. Remedial and developmental education1 are tools designed to assist these students not only in accessing higher education but also in earning a degree (Parker, Sterk Barrett, & Bustillos, 2014). Over the past two decades, however, state policymakers have placed developmental education under considerable scrutiny as students often arrive on college campuses underprepared to meet the rising standards and expectations of postsecondary education. A recent Education Commission of the States report (Gianneschi & Fulton, 2014) showed that 39 states have state- or system-wide policies related to developmental education. Instead of holding institutions accountable for the barriers institutional policies often present for students, policymakers appear to be more concerned about how many state dollars are spent on developmental education. Higher education leaders often blame secondary school educators for underpreparedness; elementary and secondary schools blame each other, as well as the students and their parents; and developmental education becomes the policy scapegoat, with no educational sector accepting responsibility for it. Critics of college remediation charge that it wastes taxpayer dollars on teaching what was ostensibly taught during high school. 
Further, they often perceive college remediation as a back-alley approach to college, granting access to students who are often considered unqualified for a four-year institution. A glance at recent newspaper editorials, headlines, and educational reports across the U.S. shows that the country is still grappling with what to do with developmental courses. Some columnists, editorialists, and nonprofit group reports have called college remediation a “bridge to nowhere” (Complete College America, 2012), an “education trap” (Hanford, 2016), and a “black hole” (Fain, 2012) that represents a “hidden cost” (Douglas-Gabriel, 2016) for students to “play catch up” (Guerrero, 2016) in higher education. Policymakers opposed to developmental education point to high enrollments in developmental education courses and low standardized test scores to construct an academic and economic crisis. Even a cursory review of the history of higher education reveals that concerns regarding underpreparedness are not a new phenomenon. On the contrary, even the most prestigious colleges and
universities accepted students who did not meet admissions standards throughout most of their histories. These institutions not only admitted students considered underprepared but also took on the responsibility of providing academic support to meet their academic needs. In 1869, for example, Harvard president Charles W. Eliot advanced this view in his inaugural speech, maintaining that colleges are obligated to provide supplementary instruction to students whose elementary schools failed to provide them with the tools they needed to succeed in college (Spann, 2000). While some colleges and universities have affirmed their commitment to serving a diverse group of students, including those with varying academic needs, many of today’s colleges and universities have proven ready to abandon their institutional commitment to serving students whose test scores indicate a need for more academic support. External and internal demands for “academic excellence” and efficiency within colleges and universities have led some institutions to pursue students with perceived higher academic achievement (Parker & Richardson, 2006). Colleges and universities may, therefore, choose to raise their admissions standards and seek to enroll the students with the highest SAT scores. Students who cannot meet these inflated admissions standards are then potentially barred not only from the most selective schools but also from many “less selective” four-year institutions that use the SAT averages of incoming freshmen to increase their prestige. To illustrate this point further, Astin (2000) argued: “when we define our excellence in terms of the test scores of our entering freshmen – the high scoring student being viewed here as a ‘resource’ that enhances our reputation – we set our sense of excellence in direct conflict with our desire to promote educational opportunities for those groups in our society whose test scores put them at a competitive disadvantage.” 
Here, Astin argued that educating students who were considered underprepared was “the most important educational problem” facing American higher education of the day (p. 130). Seventeen years later, evidence suggests that serving students who are underprepared remains one of the most important issues facing higher education. Indeed, educational reports show that prospective college students from an array of academic backgrounds still represent varying degrees of college-readiness. The National Assessment of Educational Progress (NAEP) found, for example, that in 2015, only 25 percent of high school seniors scored at or above proficiency in math, and only 37 percent scored proficient in reading. Similarly, only 38 percent of graduating seniors who took the ACT in 2016 met ACT college-readiness benchmarks in three of the four subjects (English, reading, math, and science). Moreover, 34 percent of graduating seniors did not meet the benchmarks in any of the four subjects, suggesting that they will be underprepared for college courses and may need developmental support in their first year of college. More specifically, readiness for college-level reading has remained low for over a decade (ACT, 2016). Even when accounting for benchmarks that were revised in 2013, reading rates have remained stagnant; in 2016, 44 percent of high school graduates met the reading benchmark, down from 46 percent the previous year. Racial and ethnic differences in reading also point to the ongoing significance of the issue. While 55 percent of whites reached the reading benchmark in 2016, only 30 percent of Latinos and 19 percent of African American students reached the same goal. Equally important, the gap in reading scores between Latinos and whites has remained relatively unchanged since 1992, while the gap between blacks and whites actually increased during the same time period. 
Given these indicators, it is unsurprising that 50 percent of students enrolled in at least one developmental education class during their college careers (Radford & Horn, 2012). Further, the National Center for Education Statistics (NCES) reported that 68 percent of two-year college students and 40 percent of four-year students took at least one developmental course in their college careers (Chen, 2016).
Developmental education, therefore, is germane to the unending quest for equality in higher education because it helps to repair leaks in the educational “pipeline.” In addition, the courses support students of all academic abilities who may be required to enroll due to a problematic assessment and placement process. While it is perceived by some to be a necessary tool to promote equal opportunity, developmental education is often associated by the press and policymakers with a negative stigma that suggests that students who enroll in remediation are in some way “deficient” (Gandara & Maxwell-Jolly, 1999). Callan (2001) argued that students requiring only short-term developmental assistance were actually perceived to have the same developmental needs as “functional illiterates.” Ironically, most students needing remediation require only one or two courses (Attewell, Lavin, Domina, & Levey, 2006). Perhaps even more importantly, many students who could do well in college-level classes without remediation are inaccurately required to enroll in developmental classes (48 percent and 18 percent at public two- and four-year institutions, respectively), while some students who are much less prepared for college (25 percent and 23 percent at public two- and four-year institutions, respectively) do not take any developmental classes (Chen, 2016). Belfield and Crosta (2012) found that a third of community college test takers were “either assigned to developmental education, despite being predicted to get at least a B in college-level English, or assigned to college-level English, despite being predicted to fail the course” (p. 39). Though the evidence suggests a misalignment between high school and college admissions, and inadequate assessment and placement strategies, many policymakers only question the utility and benefits of developmental programs. 
Developmental education, though, is just one part of a much larger, multifaceted process that theoretically provides the tools necessary for students to successfully complete a college degree. Developmental education is thus often regarded as a critical piece, not in “fixing” the student but rather in fixing the academic pipeline. Course offerings and support services may begin to address social inequalities in elementary and secondary schools, college preparation and eligibility, admissions and enrollment, and degree completion. Students who complete developmental education coursework are also likely to succeed in earning a baccalaureate degree, leading to potential benefits after college related to employment and positive contributions to society. Despite the obvious need for remedial education and its purported significance, policymakers in several states have raised concerns regarding the cost and pervasiveness of college remediation. In fact, the need for remediation is often cited as the problem, yet policy solutions rarely address this fundamental aspect of the issue. In Virginia and Florida, for example, lawmakers argued that college remediation duplicated skills that should have been learned in high school and was thus a waste of resources. In the mid-1990s, they proposed charging high schools for the true cost of “remediating” high school graduates at the college level (Benning, 1998; Merisotis & Phipps, 2000). Eventually, both states reconsidered and, instead, limited college remediation to community colleges. In 2013, Florida took an additional step and made enrollment in developmental education voluntary. South Dakota legislators do not allocate any public funding, including financial aid, for the instruction of developmental coursework (Turnblow, 2006). At least 19 other states or higher education systems have reduced or eliminated developmental education in baccalaureate degree programs (Parker et al., 2014). 
In California, for example, the California State University (CSU) system limits the time a student may take remedial courses and may disenroll students who require more time (CSU Board of Trustees, 1997). In contrast, the City University of New York (CUNY) restricts remediation to its associate’s programs (CUNY Board of Trustees, 1999). Students whose placement exams suggest the need for remediation are thus prohibited from even entering a baccalaureate program. As other universities and states are considering or will consider similar policies, developmental education’s position in higher education, as well as the role of postsecondary education in general, is called into question.
Purpose of the Chapter

This chapter examines the current policy environment facing college remedial education. To illustrate the politicization of remediation that has occurred over the past two decades, CUNY is used as a case study. As one of the nation’s largest and most diverse public universities, CUNY offers important lessons for institutions struggling to provide access to higher education in a policy environment that increasingly demands accountability and efficiency. This chapter also reviews the ways in which other states have similarly sought to address policy issues related to developmental education. The chapter concludes with implications for policy and practice as well as specific avenues for future research. Before discussing the politicization of remediation, however, it is important to first briefly discuss college remediation’s history and its current status in American postsecondary education (see Chapter 1).
Learning from the Past

In 1879, Harvard admitted 50 percent of its first-year students as conditional admits who also needed to receive academic support to be successful in their coursework (Cassaza, 1999). In fact, most colleges provided a “sizable proportion of their curricula to preparatory or remedial courses” (Thelin, 2004, p. 96) to help develop academic skills. Today’s college campuses are more racially, ethnically, linguistically, and economically diverse than those of centuries past; yet concern regarding student preparation and expanded developmental courses and services should not be solely attributed to changing demographics. Indeed, it is a part of the mission and history of higher education to educate for the public good. Harvard, for example, has provided tutoring to students since its inception in 1636. By the 1870s, it instituted entrance exams to respond to applicants’ “bad spelling, incorrectness as well as inelegance of expression in writing, [and] ignorance of the simplest rules of punctuation” (Wyatt, 1992, p. 12). When half of its incoming students failed the entrance exam, Harvard leaders blamed preparatory schools, grammar schools, teachers, students, and parents. Still, the university tinkered with admissions exams to allow access to students whose scores would otherwise exclude them from the college due to inadequate academic preparation (Karabel, 2005). Harvard, as well as other prestigious institutions, such as Princeton and Yale, had to find ways to not only admit but educate students in need of academic support. In his 1869 inaugural address as President of Harvard College, Charles W. Eliot urged, “Whatever elementary instruction the schools fail to give, the college must supply” (Addresses at the Inauguration of Charles William Eliot as President of Harvard College, October 19, 1869, 1869, p. 23). Certainly, the history of developmental education is not limited to Harvard and other prestigious universities. 
Many of the colleges of the 19th century had low admissions standards that were still not met by entering students (Wyatt, 1992). By 1890, only 27 states had compulsory education, and most schools prepared students for life, not higher education. As a result, most colleges had to offer some type of preparatory services. During this same time period, fewer than 20 percent of the nearly 400 American higher education institutions were without a preparatory program (Wyatt, 1992). Approximately 40 percent of first-year college students participated in preparatory instruction (Merisotis & Phipps, 2000). In the early 20th century, colleges began to pay more attention to college reading and study skills, offering handbooks, how-to-study courses, and reading programs. By the 1930s, the majority of higher education institutions offered developmental coursework in reading and centers for study skills (Crowe, 1998). When the once highly selective CUNY established an open admissions policy in 1970, its success (or failure) rested on adequate financial support for remedial and other academic assistance services. Even the University of California (UC), Berkeley, required more than half of its entering freshmen to take a remedial writing course in 1979, nearly a century after the UC campus developed the nation’s first developmental writing course (Wyatt, 1992).
College Remediation Today

The Hechinger Report examined remediation at 911 two- and four-year institutions and estimated that approximately 96 percent of these offered at least one remedial course, suggesting a prevalent need for remediation (Butrymowicz, 2017). Nearly a quarter of those institutions enrolled more than half of their first-year students in developmental courses. Of students who began college in 2003, 68 percent of those in two-year colleges and 40 percent of those enrolled in four-year colleges took at least one remedial course by 2009 (Chen, 2016). While students enrolled at less selective four-year institutions are more likely to take a remedial course, more than 27 percent of students at highly selective institutions also took at least one remedial course during their academic careers. When controlling for academic preparedness, studies found that four-year colleges and universities are considerably less likely to enroll students in developmental classes than two-year colleges (Attewell et al., 2006). In other words, students who begin at a two-year college are more likely to place into a developmental course than if they (with the same academic preparedness) started at a four-year college or university. Similarly, students of the same academic backgrounds are more likely to take a developmental course at a public college or university than at a private institution. These findings suggest that higher education policymakers, practitioners, and scholars need to rethink not only how students are assessed but how students who take developmental education courses are portrayed. To further illustrate this point, students who take (whether required or optional) remedial courses cannot be easily characterized. While many policymakers and others portray students who take developmental courses as underprepared due to weak academic skills, the reality is that students of all academic backgrounds may find themselves in remedial courses. 
Interestingly, many students with limited academic preparedness never enroll in remediation, while others with higher levels of academic performance actually do enroll in remediation courses (Attewell et al., 2006). More specifically, while students with weak academic preparedness were more likely to take a remedial course between 2003 and 2009, 25 percent of students with indicators of weak academic preparation at public two-year and 23 percent at public four-year institutions never took a remedial course during their academic careers. Chen (2016) estimates that 28 percent and 18 percent of students with strong academic preparation at public two- and four-year institutions, respectively, enrolled in at least one developmental course. Students who take remedial courses are also demographically diverse. They may be recent high school graduates and first-time freshmen, or they may be students who entered college at least one year after completing high school. Both groups may enter college with little or varying experience with a rigorous high school curriculum, often cited as an indicator of college-readiness (Adelman, 2004). They may also be adult students who have been away from school for a number of years. More than 60 percent of adults 24 and over enrolled in an average of three developmental courses between 2003 and 2009 (Chen, 2016). Students of color, particularly African American and Latino students, are more likely to take at least one remedial course during their academic careers. In two-year institutions, 78 percent of African Americans and 75 percent of Latinos enrolled in remedial coursework, compared to 68 percent of Asian Americans and 64 percent of whites. At four-year colleges and universities, 66 percent of African Americans and 53 percent of Latinos took developmental coursework, while 36 percent of whites and 30 percent of Asian Americans did so. 
Low-income students and first-generation college students were also more likely to enroll in remedial courses, though 59 percent of students from the highest income quartile and 65 percent of students whose parents hold a bachelor’s degree or higher also enrolled in developmental coursework. The lack of a clear profile of students who are considered underprepared, or of a universal definition of what constitutes college-readiness, makes it difficult to understand the myriad factors that contribute to the need for college remediation, a question that few policymakers even attempt to address. Instead, many argue that increases
in remedial education offerings brought an “influx” of students who are underprepared, thereby negating the history of higher education and the reality of remediation’s role in it. Some researchers (Merisotis & Phipps, 2000; Phipps, 1998) suggest that developmental education is actually more prevalent than reported, as many institutional leaders may decline to report developmental courses due to the negative stigmas attached to them and concerns about state funding (Parker et al., 2014). Campuses also assess students differently to determine who requires remediation and who does not (Merisotis & Redmond, 2002). As four-year college administrators attempt to maximize prestige, few want to be associated with underprepared students and/or remediation. While the quality of colleges increasingly depends on the quality of students who enter (as indicated by test scores) and the manner in which students leave (i.e., graduation rates), remedial courses threaten to weaken college ratings. Remedial coursework is often considered to be “below ‘college level’” (Phipps, 1998, p. vi), suggesting administrators may fear hurting their institution’s reputation by merely conceding that they accept students who may not have initially met their admissions requirements. Remediation thus remains at the margins of higher education, slowly losing ground with college administrators and state policymakers who equate remediation with low standards, quality, and prestige. Perhaps as a result of the perceived stigma placed on students enrolled in developmental education, the distrust by parents, and the marginalization of remediation, remedial courses and so-called remedial students are often left in an indeterminate state. The higher education community is not the only group complaining about underpreparation. A growing number of businesses and occupations now require skilled workers. Increases in knowledge-based jobs thus require at least some postsecondary education. 
Employers, however, argue that too few high school and college graduates have the skills considered necessary to succeed in the workforce, fueling charges of a crisis in academic preparation. Large and small companies across the nation demand an educated workforce and better-skilled college graduates. In 2000, for example, 38 percent of job applicants “lacked the required reading skills for the jobs in which they applied” (ACT, 2006, p. 5). In fact, Greene’s (2000) Michigan study reported that one company rejected 70 percent of its applicants due to insufficient math or reading skills. Greene further estimated that the lack of adequate reading, writing, and math skills costs U.S. businesses and postsecondary institutions $16 billion per year. Indeed, an educated workforce may help companies meet their goals, while a less skilled workforce may slow productivity and innovation. Moreover, the U.S., and thus its workforce, is increasingly diverse. As historically underrepresented racial/ethnic groups change the nation’s demography, they continue to face challenges associated with educational opportunity and success. States that fail to educate significant portions of their populations, particularly students of color, jeopardize the promotion of equal opportunity and the promise of democracy, and subsequently fail to reap the benefits that an educated citizenry may provide (Callan, 1994; Ratliff, Rawlings, Ards, & Sherman, 1997). In fact, a 2006 Measuring Up report estimated the U.S. lost more than $199 billion due to significant racial/ethnic group disparities in education and income levels (Hunt, Carruthers, Callan, & Ewell, 2006). Developmental education, cited by many policymakers as the root of the academic problem, is referred to by others as a pathway to higher education access and, subsequently, to greater employment opportunities. 
With much at stake, it is increasingly important to examine ongoing political debates on the issue of remediation in colleges and universities.
The Politicization of Developmental Education

To develop an educated workforce, and to maintain the competitiveness of the U.S., many policymakers have a renewed interest in developmental education, albeit an often negatively socially constructed one. In many higher education policy discussions, college remediation has taken the blame for inadequate academic preparation of high school graduates and college students as well
as the misalignment between high school graduation requirements and changing higher education admissions standards and expectations. Policymakers often recognize the social inequalities in American public high schools but place most of their attention on the role of colleges and universities when it comes to college-readiness. Hostility toward college remediation, for example, suggests that educational experiences in high school are irreversible (Attewell et al., 2006). College remediation is often viewed as wasting valuable resources, such as students’ and instructors’ time, and taxpayers’ money. In response to these concerns, a number of state policymakers proposed to reduce, phase out, or abolish developmental education programs, particularly in four-year institutions. An estimated 34 states, including Massachusetts, California (CSU), New York (CUNY), South Dakota, Virginia, Florida, Oklahoma, and Colorado, have taken steps or proposed to limit or eliminate college remediation. Often citing cost constraints and the unfairness of “paying double” for students who did not learn necessary skills, policymakers have fueled a national debate, most intensely since the mid-1990s, on an issue that previously received little academic and political attention. Most states that have attempted to address the issue of college remediation have restricted formal developmental course offerings to community colleges. Making community colleges primarily responsible for remediation has not removed developmental education from public policy discussions. On the contrary, even within community colleges, policymakers and institutional leaders have shown concern for the delivery of instruction and its effectiveness. Indeed, a number of reforms have taken place in recent years in efforts to ostensibly increase effectiveness. Still, national and state policy debates are often argued with little use of or attention to empirical studies of developmental education. 
Often, remediation debates have been based on ideology (Shaw, 1997; Tolbert, 2017) and anecdotal evidence. The CUNY case illustrates how these debates manifested in one university system.
Controversy at CUNY

CUNY, the nation’s largest urban university, is also one of the most diverse. With a large proportion of students of color, immigrants, and first-generation and low-income college students, the university had by the late 1990s long symbolized the rewards and challenges of an open-access admissions policy. Critics of developmental education, led by Rudy Giuliani, then Mayor of New York, tried to link it to the affirmative action debate (Arendale, 2001), contending that developmental programs permitted “unqualified” and presumably undeserving students to enter baccalaureate programs (MacDonald, 1998). A special task force appointed by Giuliani presented a highly publicized, scathing report on the status of CUNY. In the report, “An Institution Adrift” (Schmidt et al., 1999), the task force argued that CUNY spent more than $124 million on remediation. Further, the task force argued that 78 percent of incoming CUNY freshmen required remediation in at least one subject in 1997 and that more than 50 percent required remediation in reading specifically. It argued that these students offered little return on the public investment, as illustrated by low graduation rates: Though CUNY had launched the nation’s first affirmative action program for minority students in 1966, both the university and the city continued to be rocked by racial disturbances. So in 1970, CUNY undertook to change its demographics on a far larger scale, through what came to be known as “open admissions.” …CUNY dismantled its entrance requirements; unprepared students would be admitted and given whatever remedial training they needed… CUNY’s experiment in large-scale remedial education may now be declared a failure. (MacDonald, 1994, p. 10)
Policy Issues
Supporters of college remediation, however, argued that the Mayor’s report was flawed because it overestimated the cost of remediation and failed to recognize the diversity of the university’s students. As one of the most racially, ethnically, and socioeconomically diverse universities in the nation, CUNY is attended by many students who graduated from New York City public schools, were members of underrepresented racial and ethnic groups, and worked full time. Supporters of remediation argued that developmental programs symbolize access and opportunity to a bachelor’s degree (Arenson, 1999). They contended that developmental education provides students considered underprepared with the tools they need to succeed academically in college. Supporters of developmental education further argued that for many students, remediation opens college doors that would otherwise remain closed. Despite these arguments related to access and educational opportunity, few were surprised when the CUNY Board of Trustees voted in 1999 to eliminate developmental courses from the four-year colleges. The CUNY Board, composed primarily of Mayor Giuliani’s appointees, operated in a charged political milieu. Mayoral leadership and sharp criticism in the New York press proved to be compelling factors in the Board’s decision. Before implementing the ban, however, CUNY had to secure approval from the New York State Board of Regents, which was concerned about the potential impact on diversity. As one of the conditions for initial approval, the Regents required CUNY to provide evidence that the change in policy had not adversely affected either enrollment or the representation of students of color. CUNY administrators provided the minimal evidence required, the Regents approved, and the issue remained controversial throughout the university system.
CUNY continued its historic struggle to achieve both access and excellence during a time when developmental courses, and students requiring developmental education, were barred from the system’s four-year colleges. Parker (2005), however, found that many students, particularly students of color, failed to achieve the minimum scores required for admission to a four-year college. Further, students eligible for admission were said to still demonstrate underpreparedness, even those who had passed the New York State Regents (exit) exam, calling claims of improved quality into question. While the exit exam was intended in part to raise educational standards and assess student learning, the CUNY case suggests that high-stakes tests are not reliable predictors of college-readiness or success. The CUNY case also illustrates political debates that occurred, and continue to occur, in states across the country. By aligning actors opposed to remediation, critics publicly denounced the university by citing inefficiencies in the system. High rates of remediation and low graduation rates at CUNY colleges were used to argue that developmental education was not only creating an “institution adrift” but also harming students who were “bogged down” in courses that did not lead to a degree. Those opposed to remediation suggested that improved educational quality would come only when remediation was purged from the four-year colleges. As was true in the CUNY case, national debates on remediation usually center on four policy areas: assessment and placement, outcomes, cost, and the location of remediation. Over the past decade, federal and state policymakers have expressed concern that remediation is ineffective and decreases educational attainment. Remediation is thus seen as too costly. Many state policymakers suggest that remediation does not belong in college-level programs.
Instead, many view community colleges as a more appropriate avenue to resolve remediation concerns of four-year colleges. The following discussion highlights the positions within each of these areas, how these issues played out at CUNY, and how these issues unfolded more recently in other states and university systems.
Assessment and Placement of Developmental Education

When remediation was first introduced to the state policy agenda 20 years ago, some policymakers (as well as some researchers and institutional leaders) argued that remediation would no longer be needed because of the expectation that raised academic standards in high schools would drastically improve student preparedness (Arendale, 2001). Beginning in 1996, for instance, New York State required all high school seniors to pass a Regents exam in order to graduate. As the CUNY case demonstrates, however, many students continued to need academic support to succeed in college. Raising graduation standards in high schools did not automatically change the social inequalities that continued to plague public schools. Indeed, public concerns related to teacher preparation and turnover, crowded classrooms, inadequate textbooks, and the lack of Advanced Placement (AP) courses cannot be addressed by the implementation of an “exit” exam or other overnight strategies. The Board of Trustees at CSU, as another example, voted in 1997 to require all incoming first-time, full-time freshmen to take placement exams in math and English. Students who placed into developmental classes were given one year to complete the courses and demonstrate proficiency in the corresponding subject (math or English). Students who did not pass the placement exam after one year were subject to disenrollment. The goal of the CSU executive order, Executive Order 665, was to reduce the need for developmental education on CSU campuses to 10 percent of the incoming student body. At the time of implementation, nearly half of all incoming students needed remediation in at least one subject. As of Fall 2016, more than 28 percent of incoming CSU freshmen needed remediation. While this is an improvement, CSU has fallen short of its goal 10 years after its target date. California’s Legislative Analyst’s Office (LAO) suggested that CSU’s placement exam may be part of the problem and considered investigating that issue further.
The LAO also suggested that university eligibility policies and the quality of public high school education should be considered. State legislatures are also increasingly involved in the assessment and placement of students, a role previously reserved for institutional faculty and staff. It is estimated that 12 states and 17 higher education systems have state assessment and placement policies (Fulton, 2012), primarily directed toward community colleges. Even more states (34) reported placement into developmental mathematics, and 33 reported placement into developmental English. States also reported placement rates by gender (12), race/ethnicity (17), and age (9) (Gianneschi & Fulton, 2014). As these figures suggest, policies vary across states. South Carolina, for instance, requires two-year colleges to assess students to determine placement in college-level or developmental courses; the minimum scores required for each, however, are left to the individual institutions. If community and technical college students scored below a minimum on the reading assessment, for example, they would be referred to adult basic education even before enrolling in developmental courses. Perhaps because four-year colleges in South Carolina are banned from offering developmental courses, they are exempt from the assessment/placement policy (Parker et al., 2014). Some four-year institutions nonetheless choose to use placement exams. The vast policy variation across institutions and institutional types suggests that some students who are strategic and not bound by location could move around to avoid developmental education, even if they clearly need it. North Carolina’s, on the other hand, was the only statewide policy in Parker et al.’s (2014) case study of five states that affected both four- and two-year public institutions. Only community colleges, however, had to set a minimum placement exam score to determine placement into developmental or college-level courses.
As in South Carolina, four-year colleges and universities in North Carolina had greater latitude in deciding whether to assess students and how to place them in college-level classes. In addition, Parker et al. found that some four-year universities permitted students to make multiple attempts at passing a placement exam. Interestingly, Florida’s state legislature voted to end developmental education for many students on Florida College System (the state’s community college and state college system) campuses. Students who attended and graduated from one of Florida’s high schools were therefore considered “college-ready” and exempt from taking a placement exam altogether. Students who were not exempt had to take a placement exam but had options as to how to proceed with developmental instruction and academic support. While the long-term impacts of this policy are still being evaluated, it is safe to say that remediation, in some form, will remain necessary for many students for years to come. When considering public policies that seek to address the assessment and placement of developmental education, it is important to note that most placement tests do not accurately place students. In other words, students whose test scores suggest a need for remediation often choose not to take the course, especially if the course is in reading or writing (Smith, 2015), and may still do well in college-level courses. At the same time, some students whose test scores do not suggest a need for developmental support do enroll, may have negative experiences, and may be more likely to drop out (Scott-Clayton & Rodríguez, 2012). Placement exams, like developmental courses and services more generally, require consistent monitoring and evaluation. Understanding the validity, reliability, and predictive power of placement exams, however, may be a “moot point,” as Saxon and Morante (2014) suggest that most scholars advocate against using a single test to determine course placement. Without adequate and effective assessment and placement strategies, it is difficult to draw conclusions when examining the persistence and completion rates of students enrolled in developmental courses.
Developmental Education Outcomes

Early studies of college remediation outcomes primarily examined grade point averages (GPAs), student retention, and degree completion. Although results were replicated in a number of studies (Boylan & Saxon, 1999; Kraska, 1990; McCormick, Horn, & Knepper, 1996; Parsad, Lewis, & Greene, 2003; Seybert & Soltz, 1992), findings related to effectiveness remain inconclusive and should be considered with caution. More recent studies, particularly those of the last 10 years, have shifted with the nation’s preoccupation with degree productivity and states’ interest in accountability by focusing more on persistence, passing college-level courses, and degree completion or transfer. The authors of a recent joint statement from the National Center for Developmental Education and the National Association for Developmental Education (2015) argue that many of the negative critiques of remediation are “exaggerated.” They go on to state: there are many professionals doing an outstanding job of teaching remedial courses and getting excellent results that are not reflected when large sample data is aggregated. These professionals should not have their efforts denigrated by those who understand neither the available research nor the challenges involved in teaching underprepared students. In fact, it is their efforts that have led to many of the innovations now being promoted in the developmental education reform movement. Another clear deficiency in the remediation literature is the lack of research on the impact of developmental education at four-year colleges and universities, as recent studies have examined developmental education only at community colleges. The potential long-term effect on students who begin in or transfer to a baccalaureate program is therefore difficult to assess. An additional shortcoming is that student outcomes are rarely disaggregated by race and ethnicity.
Despite being disproportionately enrolled in developmental programs, African American and Latina/o students, until recently, seldom had their outcomes analyzed. In spite of these limitations, remediation studies offer important contributions to the policy debate and provide a foundation for future research. At the same time, despite policymakers’ apparent interest in outcomes, only 12 states reported student success in developmental courses, and only 17 states reported retention, persistence, success in college-level courses, and/or degree completion for students who took developmental courses. New research and the lack of state reports suggest that the focus will remain on developmental education courses rather than on the issue of underpreparation, which is more likely to influence educational outcomes (Tolbert, 2017). The type of developmental coursework is also an important consideration in terms of educational outcomes. Students enrolled in developmental reading, for example, are less likely to earn a degree than students of similar academic ability who did not enroll in remediation (Adelman, 2004; Attewell et al., 2006). Attewell and his colleagues, however, found that 40 percent of students who took developmental reading courses still earned a four-year degree. They also suggested that students in two-year colleges who took developmental writing improved their chances of earning a degree. Other studies that examined the impact of developmental education on degree completion are inconclusive. As previously indicated, lower graduation rates have been linked to underpreparation in high school rather than to participation in college developmental courses. In other words, graduation rates of students considered underprepared are generally lower than those of students with more academic preparation. In one study of the Ohio state system, Bettinger and Long (2006) tried to account for this preparation and academic ability bias. They found that students who were similarly underprepared and took developmental courses were less likely to “stop out” and more likely to complete their degrees after four years. This conflation of developmental education with academic preparation, and its impact on outcomes, highlights the danger of using effectiveness as a rationale for limiting funding for developmental coursework.
Cost

As higher education institutions face increased accountability demands from state legislatures and decreased public resources, goals related to efficiency and quality gain in importance. Remediation is often on the receiving end of criticism about ways to “trim the fat off” higher education budgets. Yet only seven states report their developmental education costs, and only three of those do so on an annual basis. This suggests either that costs are difficult to define or that cost is not as important as recent policies might imply. Indeed, few scholars agree on the impact of enrollment in college remediation on various academic and employment outcomes. As the previous section indicates, it is difficult to draw conclusions about the benefits and/or disadvantages of participating in developmental coursework. Saxon (2016) argues that the problem is one of inconsistently applied models used to calculate the costs of developmental education. Without standard meanings and definitions of cost, remediation has become an easy target for higher education leaders to criticize and discredit in the name of quality and efficiency. Those who supported ending remediation at CUNY, and more recently in states across the nation, often did so in the name of increased accountability, educational quality, and efficiency. By pinning poor institutional academic performance on developmental coursework, CUNY could quickly demonstrate its responsiveness to political demands for educational quality simply by removing developmental courses and students from the four-year colleges. Little consideration was therefore given to students who could benefit from a four-year college education despite standardized test scores that fell below admissions requirements. Similarly, many colleges and universities throughout the U.S. voluntarily removed developmental courses from their curricula in response to policymaker concerns about cost and efficiency (Parker et al., 2014).
Faced with the persistent developmental needs of high school graduates, lawmakers must grapple not only with whether to offer remediation but also with who should pay for it. Clearly, deciding who should pay is linked to deciding whom to blame. While many states have implemented high school exit exams, students still enter institutions of higher education needing to develop college-level skills. At CUNY, some students who obtained the required test scores still had trouble with reading, writing, and math. It might be argued, then, that CUNY successfully changed its image of maintaining low standards but failed to redress persistent social inequalities or improve student retention. Other institutions and states (New Jersey, Florida, and South Carolina, to name just a few) witnessed similar occurrences. High schools are thus often blamed for not properly preparing students for college. As a result, policymakers in Florida, Massachusetts, New Jersey, and other states have, in the past, considered plans to charge high school districts for developmental education (Arendale, 2001). In 1989, Oklahoma’s legislature required the state’s Board of Regents and Department of Education to track the performance of schools in the state, which resulted in reporting and evaluation of the remediation rates of graduates from each high school. Oklahoma public state college students requiring remediation pay an additional fee for developmental courses and may not use state financial aid for them. Between 2012 and 2015, students in Oklahoma paid $3.37 million in such fees (Oklahoma State Regents of Higher Education, 2015). In Colorado, students paid between 64 percent and 66 percent of the cost of developmental coursework (including those who applied their financial aid award) in 2013 (Fain, 2014). Arendale (2001) also found that some states proposed to charge individual students the “true cost” of a developmental course, which could be up to three times as much as “college-level” courses. Similarly, prior to limiting remediation to CUNY community colleges, the Mayor’s task force on CUNY argued that privatizing remediation would save the University money, a proposal that was fiercely contested by the University’s faculty senate.
Proposals to charge students in these ways call the idea of “paying double” into question. Indeed, while opponents of remediation argue that the taxpayer is double billed for secondary education and postsecondary developmental classes, little consideration is given to the students who may pay more for skills not learned, or perhaps not even taught, in high school. Parents of high school graduates, and the graduates themselves, often pay their taxes and their children’s tuition, and, if remediation is no longer available, may pay the cost of reduced educational opportunities. In Florida, for example, students who chose not to take developmental education following the state’s 2013 decision to make developmental courses voluntary were more likely to fail a college-level course, thereby not only discouraging them but also burdening them financially. Moreover, some researchers suggest that concerns regarding cost are exaggerated. In one of only a few national studies, Breneman and Haarlow (1998) concurred with a previous study showing that developmental education costs equated to approximately 1 percent of public higher education institutions’ budgets. When examining the cost of remediation per full-time equivalent (FTE) student, researchers at the Institute for Higher Education Policy (IHEP) found that remediation costs per FTE were actually lower than the costs of other core instruction, such as English and math (Merisotis & Phipps, 2000; Phipps, 1998). While some policymakers suggest that developmental education simply costs too much, Breneman and Haarlow (1998) argued that the benefits outweigh the costs, particularly if denying access is the alternative. Remediation may therefore be a wise investment, particularly if it provides access to a college education that ultimately contributes to the public good (Phipps, 1998).
Astin (2000) argued that effective developmental education “would do more to alleviate our most serious social and economic problems than almost any other action we could take” (p. 130). Similarly, some scholars warn that the social costs of not offering remediation will have a dramatic impact on the nation’s ability to compete in a global arena (Breneman & Haarlow, 1998; Long, 2005; Saxon, 2016). Long (2005) cautions that “lower levels of education are associated with higher rates of unemployment, government dependency, crime and incarceration.” The cost of eliminating developmental education, therefore, is likely to be much higher than the expense of the programs.
The Location of Developmental Education

Some policymakers and other higher education leaders suggest that developmental education is most cost-effective when contained in two-year colleges. As a result, at least 14 states, and many more colleges, universities, and higher education systems, have limited developmental education to public community colleges (Parker, 2012; Parker et al., 2014). Some states, like North Carolina, do not have formal policies limiting remediation, but the persistent interest in the subject by state legislators in the 1990s and beyond was enough to push some institutions to remove all evidence of developmental course offerings from their curricula. These tactics resulted in a 45 percent decrease in developmental education in the state. Some four-year college administrators who felt the pressure to produce and increase efficiency seemed to welcome the change because it freed them to pursue more prestigious avenues, such as improving educational quality. Indeed, CUNY four-year college administrators argued that community colleges were better equipped to serve students considered underprepared and to offer developmental services (Parker, 2005), and in some cases they chose to pursue students considered high academic achievers (Parker & Richardson, 2006). Policies that limit remediation to community colleges assume that two-year institutions benefit students who are underprepared by relieving the pressure of “catching up” during their first year of college. These policies, however, fail to consider that students who begin at community colleges are less likely to earn a baccalaureate degree (Bailey & Weininger, 2002; Bernstein & Eaton, 1994; Long & Kurlaender, 2009). Thus, Phipps (1998) argues that remediation is an “inappropriate function of community colleges” (p. v). As open admissions institutions, community colleges are often obligated to offer developmental courses. Remediation, however, is not their only purpose.
Most community colleges are expected to maintain multiple missions, ranging from college preparation to vocational education to associate degree completion. States may therefore use the community college as a means to balance college access demands: states can maintain or increase the selectivity of four-year colleges by diverting some students to the less expensive two-year college (Wellman, 2002). Referring students designated as needing remediation to community colleges has important implications, particularly because there is no established standard that determines college-readiness, academic preparation, and/or requirements for remediation. Instead, each state, higher education system, and/or institution may have different definitions and measures of what developmental education means on a particular campus (Merisotis & Phipps, 2000). Further, some scholars found that, accounting for academic preparation, students were more likely to require remediation at community colleges than at four-year colleges (Attewell et al., 2006; Merisotis & Phipps, 2000). This finding is counterintuitive, since many people consider community colleges less academically rigorous than their four-year counterparts. It also suggests that labeling students as remedial is problematic and may unnecessarily limit students to a community college when they might benefit from entering a four-year college. When CUNY voted to limit remediation to its community colleges, it was a compromise reached after critics proposed privatizing remediation. The Mayor’s advisory task force charged with recommending changes for the university proposed to outsource remediation to for-profit educational organizations, local private and independent postsecondary institutions, or community-based organizations. The proposal to send remediation to the for-profit sector, along with the other suggestions, was rejected.
Placing remediation in the CUNY two-year colleges appeared to be the University’s best alternative. Since then, CUNY has opened a new community college that does not offer any developmental courses; instead, the college embeds developmental support in all of its classes. CUNY has sought other ways to live with its new political reality by developing innovative strategies for delivering developmental instruction, much of which is reflected in other states and university systems.
Developmental Education Reforms

Changes in developmental education policy and increased research in the area over the past few years have led to a number of reforms in the instructional delivery of developmental coursework. In reforming developmental education, state policymakers generally seek to achieve at least three goals: decrease enrollment in developmental courses, reduce the amount of time students spend in developmental courses, and improve success rates in college-level courses. Many reforms focus primarily on community colleges and are a clear response to increased attention by state legislatures. Colleges often seek to decentralize developmental education and/or integrate academic support with college-level classes, while others promote accelerated models of developmental education, like CUNY’s Accelerated Study in Associate Programs (ASAP). Some states have combined developmental writing and reading courses in community colleges into one English course, similar to most four-year colleges; North Carolina and Virginia, for example, streamlined two courses (English and reading) into one English course to accelerate students into college-level courses (Kalamkarian, Raufman, & Edgecombe, 2015). Other institutions take a corequisite approach to reduce time in remedial courses: Front Range Community College in Colorado and Miami Dade College in Florida, for example, add time to college-level courses to develop reading strategies concurrently (Vandal, 2014). Accelerated developmental education models may indeed be promising for these purposes, but no single strategy will adequately address the issue of underpreparation. It is also not clear whether these models work for all students, as different groups of students (including students of color and low-income students) are often not accounted for in much of the available research.
Limiting reforms to what is most efficient in terms of time may limit their effectiveness for the many students who might otherwise succeed. The pedagogy of accelerated programs, and the ways different models and pedagogical practices affect students of various racial and ethnic backgrounds, must be examined. Similarly, a number of colleges have begun to “mainstream” or decentralize remediation. By removing “stand-alone” remediation programs and courses, colleges, in theory, hold all academic departments accountable for meeting developmental needs. Most recently, institutions are choosing to place students considered underprepared in enriched courses in various subjects, where students may benefit from an additional lab hour or supplemental academic support. Soliday (2002) argues that a mixed approach is best, in which developmental courses are only one step in a larger process of developing skills. Developmental education instructors might support students by building coalitions with faculty in the disciplines. While the evidence on the effectiveness of accelerated developmental education and enriched courses is inconclusive, Perin (2002) suggests that any developmental strategy depends on the college’s commitment to improving services that develop the skills of students who are underprepared.
Conclusions

Despite a long history at some of the nation’s oldest and most prestigious institutions, developmental education remains at the margins of higher education in the U.S. Policymakers suggest that accountability, efficiency, educational quality, degree completion, and student success are at risk due to high levels of college remediation across the nation. Further, many state and university system-level policymakers have discussed and debated the issue with little consideration of the available research and/or little public involvement. The case of CUNY, for example, suggests that the issue had political motivations, with little evidence of improvement in either the educational quality or the efficiency of the University. CUNY’s reputation, however, seemed to improve, as the University is no longer distracted by headlines labeling it “Remedial U” (Parker, 2005).
Tara L. Parker
The CUNY case thus illustrates some of the fundamental problems within many developmental education policy debates across the country. Efforts to improve reputation and prestige are often masked by arguments for improved educational quality. Perhaps as a result, college remediation has become the scapegoat for many of the challenges facing higher education. Many of the arguments for and against developmental education tend to perpetuate the myth that colleges and universities cannot maintain access and excellence at the same time. Critics of developmental education fault such courses and services as catering to an unqualified student body. By failing to demonstrate the significance of providing wide access to higher education and its social and economic benefits, remediation advocates failed to address the perhaps more politically compelling arguments related to educational quality and reform. Definitions of underpreparedness and remediation are arbitrary; policies that reduce or eliminate developmental education may therefore unnecessarily prevent students who could benefit from a four-year college experience from pursuing a degree, in part because we do not have adequate measures to assess placement or predict success. Until the K-12 system is improved and developmental needs are more accurately and adequately measured to identify students who truly require college preparatory support, it is imperative that colleges and universities continue to offer all necessary academic supports. Indeed, despite claims otherwise, evidence that eliminating remediation improves educational quality is lacking. Moreover, when state or university systems eliminate remediation, they often do so without consideration that the need for remediation still exists. To date, high schools have not successfully and consistently met the challenge of preparing our youth for college.
While this chapter did not evaluate the effectiveness of remediation programs per se, concerns related to the delivery of developmental instruction should be addressed. Ending developmental education or its funding does more than eliminate specific courses, services, or programs. It may exclude thousands of students who might otherwise benefit from college in general and a four-year college in particular. The key, then, appears to be finding ways to increase preparedness by meeting students’ needs while at the same time reducing the very need for developmental education. Too often, policy discussions never get to this level. Today, too many high schools do not maintain the human, fiscal, and academic resources needed to prepare students for higher education. Thus, public colleges and universities continue to have an obligation to “accept students where they are” and provide the support necessary for them to excel and complete a baccalaureate degree. This should be the measure of educational quality: the institution’s ability to educate students, not a student’s ability to pass a standardized exam for admission. Policymakers must begin to understand this. Ending college remediation is a quick fix that may boost prestige but does not promote access or student success. If higher education turns its back on students who are often arbitrarily deemed underprepared for admission and are thus denied the opportunity to “prove themselves,” the result may be a failure to educate significant proportions of diverse populations. As the U.S. is an increasingly diverse society, such policies may ultimately weaken the social and economic benefits of an educated citizenry.
Recommendations
In this section, the issues central to the developmental education policy debates presented in this chapter are considered in order to offer a number of recommendations for practice and policy. Recommendations to improve the delivery and outcomes of remediation are directed to the instructors and program directors who, on a daily basis, provide support, guidance, and skill development to underprepared students. These actors play a key role in informing policy to maintain, evaluate, and/or change current remediation policies. Recommendations for policy are directed toward
Policy Issues
institutional and state higher education policymakers who have considered or may consider limiting remediation. Current reductions in developmental education seem to assume that what occurs in developmental classrooms is so ineffective that enrollment in such courses decreases a student’s chance of completing a degree. The truth is that very little is known about what goes on inside the developmental classroom or what “works” long-term. Instructors of developmental education must therefore seek new ways to share their successes with policymakers. Studying pedagogy and student learning is key to moving remediation from the margins of higher education and removing the negative stigmas attached to it. Future policy decisions regarding remediation should consider the capacity of the state’s public high schools to meet the challenges of preparing students if developmental education is not an option. State policymakers who decide to eliminate developmental education courses must find ways of continuing to accommodate less well prepared students who may still be able to benefit from a four-year college education. States should continue to encourage and support collaboration between high schools and higher education. K-16 initiatives may help to reduce the potential of colleges moving too far ahead of high schools in terms of admissions and classroom expectations. Educational gaps between racial/ethnic and SES groups will continue to grow if states require colleges and universities to limit access while high schools cannot meet the challenges. Finally, states and university systems should reconsider access and excellence so that one goal does not automatically oppose the other. Instead, higher education leaders should use developmental programs (courses, support services, etc.) to support the educational mission of colleges and universities.
Instead of placing emphasis on admitting the most qualified students, policymakers should refocus efforts on improving preparation of all students and providing them with the tools necessary to be successful in college. Relegating students who do not meet admissions requirements to community colleges where their chances of obtaining a baccalaureate degree are reduced is ill-advised public policy.
Avenues for Future Research
Directions for future research relate to filling gaps in the literature on college remediation as well as providing policymakers with evidence on which to base informed policy. Areas for future research include the impact of developmental education on educational outcomes and the consequences of not providing it, including a better understanding of what states are doing and to what effect. Research on the impact of developmental education (including more recent reforms) on educational outcomes must not only continue to examine persistence and degree completion, as previously mentioned, but must also disaggregate data by race and ethnicity. As students of color disproportionately enroll in developmental courses, they are also likely to be disproportionately impacted by changes in remediation policy. Future research should examine the ways racial and ethnic groups are impacted by policies that eliminate or reduce remediation. Additional research should also improve our understanding of the consequences (or benefits) of reserving community colleges for remediation by excluding remedial instruction, and students who are or are perceived to be underprepared, from four-year colleges. Finally, future research should consider the impact of underpreparation as opposed to the impact of enrollment in developmental courses. This distinction must be made clear to state policymakers. Only when the relationship between underpreparation and developmental education, and the subsequent effects on access, degree completion, and other outcomes, is better understood will progress be made to better prepare students and move toward ending the need for developmental education.
Note
1. Despite differences in meaning, the terms remedial and developmental education have often been conflated by researchers and policymakers (Boylan, Calderwood, & Bonham, 2017; Parker et al., 2014). As a result, it is difficult to distinguish the two terms, particularly in the context of public policy. This chapter, therefore, will use the terms interchangeably with the acknowledgment and regret that it may perpetuate the problem of ignoring the practical differences between the two. Please see Boylan et al. (2017) or Parker et al. (2014) for a more detailed discussion of the difference between the two terms.
References and Suggested Readings
ACT. (2006). Reading between the lines: What the ACT reveals about college readiness in reading. Iowa City, IA: ACT. ACT. (2016). The condition of college and career readiness. Retrieved from www.act.org/content/dam/act/unsecured/documents/CCCR_National_2016.pdf Addresses at the Inauguration of Charles William Eliot as President of Harvard College, October 19, 1869. (1869). Cambridge, MA: John Wilson and Son. Adelman, C. (2004). Principal indicators of student academic histories in postsecondary education, 1972–2000. Washington, DC: US Department of Education, Institute of Education Sciences. Arendale, D. (2001). Trends in developmental education. Kansas City: University of Missouri-Kansas City. Arenson, K. W. (1999, January 6). Hearing brings out city university’s staunchest defenders. New York Times. Retrieved from www.nytimes.com Astin, A. W. (2000). The civic challenge of educating under-prepared students. In T. Ehrlich (Ed.), Civic responsibility and higher education (pp. 124–146). Washington, DC: ACE, Oryx Press. *Attewell, P., Lavin, D., Domina, T., & Levey, T. (2006). New evidence on college remediation. Journal of Higher Education, 77(5), 886–924. Bailey, T., & Weininger, E. B. (2002). Performance, graduation, and transfer of immigrants and natives in City University of New York community colleges. Educational Evaluation and Policy Analysis, 24(4), 359–377. Belfield, C., & Crosta, P. (2012). Predicting success in college: The importance of placement tests and high school transcripts (CCRC Working Paper No. 42). New York, NY: Community College Research Center. Benning, V. (1998, November 23). Va. wants freshman to have a ‘warranty’; College remediation a concern. Washington Post, p. A01. Bernstein, A. R., & Eaton, J. S. (1994). The transfer function: Building curricular roadways across and among higher education institutions. In M. J. Justiz, R. Wilson, & L. G. Bjork (Eds.), Minorities in higher education (pp. 215–260).
Phoenix, AZ: American Council on Education. Bettinger, E., & Long, B. T. (2006). Institutional responses to reduce inequalities in college outcomes: Remedial and developmental courses in higher education. In S. Dickert-Conlin & R. Rubenstein (Eds.), Economic inequality and higher education: Access, persistence and success (pp. 69–100). New York, NY: Russell Sage Foundation Press. Boylan, H. R., Calderwood, B. J., & Bonham, B. (2017). College completion: Focus on the finish line. Retrieved from https://ncde.appstate.edu/sites/ncde.appstate.edu/files/College%20Completion%20w%20pg.%201%20per%20bjc%20suggestion.pdf Boylan, H. R., & Saxon, D. P. (1999). Outcomes of remediation. Retrieved from www.ncde.appstate.edu/reserve_reading/Outcomes_of_Remediation.htm Breneman, D. W., & Haarlow, W. N. (1998). Remedial education: Costs and consequences. Washington, DC: Thomas B. Fordham Foundation. Butrymowicz, S. (2017). We don’t know how many students in college aren’t ready for college. That matters. Retrieved from http://hechingerreport.org/dont-know-many-students-college-arent-ready-college-matters/ Callan, P. M. (1994). Equity in higher education: The state role. In M. J. Justiz, R. Wilson & L. G. Bjork (Eds.), Minorities in higher education (pp. 334–346). Phoenix, AZ: American Council on Education and The Oryx Press. Callan, P. M. (2001). Reframing access and opportunity: Problematic state and federal higher education policy in the 1990s. In D. E. Heller (Ed.), The states and public higher education policy (pp. 83–99). Baltimore, MD: Johns Hopkins University Press. Cassaza, M. E. (1999). Who are we and where did we come from? Journal of Developmental Education, 23(1), 2–7.
Chen, X. (2016). Remedial coursetaking at U.S. public 2- and 4-year institutions: Scope, experiences, and outcomes (NCES 2016-405). U.S. Department of Education. Washington, DC: National Center for Education Statistics. Retrieved from http://nces.ed.gov/pubsearch Complete College America. (2012). Remedial education: Higher education’s bridge to nowhere. Retrieved from www.completecollege.org/docs/CCA-Remediation-final.pdf Crowe, E. (1998). Statewide remedial education policies. Denver, CO: State Higher Education Executive Officers. CSU Board of Trustees. (1997). Executive Order 665: Determination of competence in English and Mathematics: California State University. CUNY Board of Trustees. (1999). Minutes of the Meeting of the Board of Trustees of the City University of New York, January 25. Long Island City, NY: City University of New York. Dewey, J. (1916). Democracy and education: An introduction to the philosophy of education. New York, NY: Macmillan. Douglas-Gabriel, D. (2016). Remedial classes have become a hidden cost of college. Washington Post. Retrieved from www.washingtonpost.com/news/grade-point/wp/2016/04/06/remedial-classes-have-become-a-hidden-cost-of-college/ Fain, P. (2012). Standardized tests that fail. Inside Higher Ed. Retrieved from www.insidehighered.com/news/2012/02/29/too-many-community-college-students-are-placing-remedial-classes-studies-find Fain, P. (2014). Complex problem, complex solution. Inside Higher Ed. Retrieved from www.insidehighered.com/news/2014/06/19/early-success-colorados-broad-set-remedial-reforms Fulton, M. (2012). Using state policies to ensure effective assessment and placement in remedial education. Denver, CO: Education Commission of the States. Gandara, P., & Maxwell-Jolly, J. (1999). Priming the pump: Strategies for increasing the achievement of underrepresented minority undergraduates. New York, NY: College Board. Gianneschi, M., & Fulton, M. (2014).
A cure for remedial reporting chaos: Why the U.S. needs a standard method for measuring preparedness for the first year of college. Denver, CO: Education Commission of the States. Greene, J. P. (2000). Remedial education: How much Michigan pays when students fail to learn basic skills. Midland, MI: Mackinac Center for Public Policy. Grossman, M. (2006). Education and nonmarket outcomes. In E. Hanushek & F. Welch (Eds.), Handbook of the economics of education (pp. 577–633). Maryland Heights, MO: Elsevier. Guerrero, R. (2016). Colleges looking to accelerate catch-up process to get students on track. Yakima Herald. Retrieved from www.yakimaherald.com/news/education/colleges-looking-to-help-accelerate-catch-up-process-to-get/article_4f8ba63c-cca7-11e5-a638-d71490538e45.html Hanford, E. (2016). Stuck at square one: The remedial education trap. MPR News. Retrieved from www.apmreports.org/story/2016/08/18/remedial-education-trap Hunt, J. B., Jr., Carruthers, G., Callan, P. M., & Ewell, P. T. (2006). Measuring up 2006: The state-by-state report card for higher education. Washington, DC: The National Center for Public Policy and Higher Education. Kalamkarian, H., Raufman, J., & Edgecombe, N. (2015). Statewide developmental education reform: Early implementation in Virginia and North Carolina. New York, NY: Community College Research Center. Karabel, J. (2005). The chosen: The hidden history of admission and exclusion at Harvard, Yale, and Princeton. Boston, MA: Houghton Mifflin Co. Kraska, M. F. (1990). Comparative analysis of developmental and nondevelopmental community college students. Community/Junior College Quarterly of Research and Practice, 14(1), 13–20. Long, B. T. (2005, Fall). The remediation debate: Are we serving the needs of underprepared college students? National CrossTalk, 13, 11–12. Long, B. T., & Kurlaender, M. (2009). Do community colleges provide a viable pathway to a baccalaureate degree? Educational Evaluation and Policy Analysis, 31(1), 30–53.
MacDonald, H. (1994, Summer). Downward mobility. City Journal, 4, 10–20. MacDonald, H. (1998, Winter). CUNY could be great again. City Journal, 8, 65–70. McCormick, A. C., Horn, L. J., & Knepper, P. (1996). A descriptive summary of 1992–93 bachelor’s degree recipients 1 year later, with an essay on time to degree. Baccalaureate and beyond longitudinal study. Statistical analysis report. Washington, DC: National Center for Education Statistics. *Merisotis, J., & Phipps, R. A. (2000). Remedial education in colleges and universities: What’s really going on? The Review of Higher Education, 24(1), 67–85. Merisotis, J., & Redmond, C. (2002). Developmental education and college opportunity in New England: Lessons for a national study of state and system policy impact. Washington, DC: Institute of Higher Education Policy, New England Research Center for Higher Education. National Assessment of Educational Progress. (2015). The Nation’s Report Card. Retrieved from http://nces.ed.gov/nationsreportcard/
National Center for Developmental Education. (2015). Remediation: Reports of its failure are greatly exaggerated: An NCDE/NADE statement on research and developmental education. Retrieved from http://ncde.appstate.edu/node/103 Oklahoma State System of Higher Education. (2015). Degrees of progress: The state of higher education in Oklahoma. Oklahoma City: Oklahoma State Regents for Higher Education. *Parker, T. L. (2005). Changing the rules for access and equity: The elimination of remedial education (Unpublished doctoral dissertation). New York University, New York, NY. *Parker, T. L. (2012). The role of minority serving institutions in redefining and improving developmental education. Atlanta, GA: Southern Education Foundation. *Parker, T. L., & Richardson, R. C. (2006). Ending remediation at CUNY: Implications for access and excellence. Journal of Educational Research and Policy Studies, 5(2), 1–22. *Parker, T. L., Sterk Barrett, M., & Bustillos, L. (2014). The state of developmental education: Higher education and public policy priorities. New York, NY: Palgrave-Macmillan. Parsad, B., Lewis, L., & Greene, B. (2003). Remedial education at degree-granting postsecondary institutions in fall 2000: Statistical analysis report. Washington, DC: National Center for Education Statistics. *Perin, D. (2002). The location of developmental education in community colleges: A discussion of the merits of mainstreaming vs. centralization. Community College Review, 30(1), 27–45. Phipps, R. A. (1998). College remediation: What it is, what it costs, what’s at stake. Washington, DC: The Institute for Higher Education Policy. Ratliff, C. A., Rawlings, H. P., Ards, S., & Sherman, J. (1997). State strategies to address diversity and enhance equity in higher education. Denver, CO: State Higher Education Executive Officers. Radford, A. W., & Horn, L. (2012). Web tables: An overview of classes taken and credits earned by beginning postsecondary students.
Washington, DC: National Center for Education Statistics (NCES). *Saxon, D. (2016). Developmental education: The cost literature and what we can learn from it. Community College Journal of Research and Practice, 41(8), 494–506. doi:10.1080/10668926.2016.1202875 *Saxon, D., & Morante, E. (2014). Effective student assessment and placement: Challenges and recommendations. Journal of Developmental Education, 37(3), 24–31. Schmidt, B. C., Badillo, H., Brady, J. Q., MacDonald, H., Ohrenstein, M., Roberts, R. T., et al. (1999). The City University of New York: An institution adrift. New York: The Mayor’s Advisory Task Force on the City University of New York. Scott-Clayton, J., & Rodríguez, O. (2012). Development, discouragement, or diversion? New evidence on the effects of college remediation (NBER Working Paper No. 18328). Cambridge, MA: National Bureau of Economic Research. Seybert, J. A., & Soltz, D. F. (1992). Assessing the outcomes of developmental courses at Johnson County Community College. Overland Park, KS: Johnson County Community College. *Shaw, K. M. (1997). Remedial education as ideological battleground: Emerging remedial education policies in the community college. Educational Evaluation and Policy Analysis, 19(3), 284–296. Smith, A. (2015). Legislative fixes for remediation. Inside Higher Ed. Retrieved from www.insidehighered.com/news/2015/05/08/states-and-colleges-increasingly-seek-alter-remedial-classes Soliday, M. (2002). The politics of remediation: Institutional and student needs in higher education. Pittsburgh, PA: University of Pittsburgh Press. Spann, M. G., Jr. (2000). Remediation: A must for the 21st-century learning society. Denver, CO: Education Commission of the States. Thelin, J. R. (2004). A history of American higher education. Baltimore, MD: The Johns Hopkins University Press. Tolbert, A. (2017). Discourses of developmental English education: Reframing policy debates (Unpublished doctoral dissertation). Turnblow, K. (2006, September 7).
Board of regents clears up report. Capital Journal. Retrieved from http://capjournal.com/main.asp?Search=1&ArticleID=15833&SectionID=2&SubSectionID=2&S=1 Vandal, B. (2014). Promoting gateway course success: Scaling corequisite academic support. Complete College America. Retrieved from http://completecollege.org/wp-content/uploads/2014/06/Promoting-Gateway-Course-Success-Final.pdf Wellman, J. V. (2002). State policy and community college-baccalaureate transfer. San Jose, CA: National Center for Public Policy and Higher Education. Wyatt, M. (1992). The past, present, and future need for college reading courses in the U.S. Journal of Reading, 36(1), 10–20.
4 Student Diversity
Theodore S. Ransaw and Brian J. Boggs
Michigan State University
Understanding and articulating the need for diversity in higher education is vitally important (Higbee, 2009), and embracing the need for diversity could not be more necessary. In general, the population of the U.S. is becoming increasingly heterogeneous. Over half of all children under age five are children of color, and by 2044, people of color will be the majority population (National Equity Alliance, 2016). College enrollment of students between 25 and 34 years old increased 51 percent between 1997 and 2011, and is projected to increase 20 percent between 2011 and 2022 (Hussar & Bailey, 2014). Additionally, American college students are “less likely to be white, male, traditional college-age (18–24 years old), U.S.-born, and Christian than in previous decades” (Bowman, 2013, p. 875). In short, America and American college students are more diverse than ever before. Understanding and respecting such diversity is important. Including and acknowledging multiple forms of identity, while treating and accepting all identities equally, can lead to a better understanding of differences and can promote opportunities to interact with students from diverse backgrounds (see Bowman, 2013). One of the most promising approaches to literacy education in diverse classrooms is the framework of Culturally Sustaining Pedagogy (CSP). CSP, most recently advanced by Paris and Alim (2017), builds on Ladson-Billings’s (1995a) Culturally Relevant Pedagogy (CRP).
Ladson-Billings (1994) describes culturally relevant teachers as educators who practice eight principles: (i) effective communication of high expectations, (ii) a pedagogy of active teaching methods, (iii) treatment of the teacher as facilitator, (iv) inclusion of students who are culturally and linguistically diverse, (v) awareness of cultural sensitivity, (vi) reshaping of the curriculum to meet the cultural needs of the students, (vii) student-controlled classroom discourse, and (viii) implementation of small group instruction. Ladson-Billings (2014) views CSP as an extension of her CRP work or, as she describes it, “the place where the beat drops” (p. 76). Additionally, Dominguez (2017) calls the practice of CSP Culturally Sustaining Revitalizing Pedagogy (CSRP). CSRP is more a mind-set that values cultural differences than a new teaching pedagogy. One example is the way it treats the cultural strengths marginalized communities have developed in coping with oppression, such as the diaspora, as a jumping-off point for a pedagogy that immerses teachers in a world where nonwhite perspectives and non-colonial norms are wells of resilience and fortitude, not deficits.
By breaking the habit of denying students and families their languages, literacies, cultures, and histories in order to achieve in schools, CSP seeks to “perpetuate and foster – to sustain – linguistic, literate, and cultural pluralism as part of schooling for positive social transformation” (Alim & Paris, 2017, p. 1). CSRP disrupts the othering “practices of deep-seated, uninterrogated assumptions, values, and beliefs of cultural normativity that perpetuate coloniality” (Dominguez, 2017, p. 227). For example, CSP rejects asking the question “How can ‘we’ get ‘these’ working-class kids of color to speak/write/be more like middle class White ones, rather than critiquing the White gaze itself that sees, hears, and frames students of color in everywhichway as marginal and deficient” (Alim & Paris, 2017, p. 3). The white gaze is a result of whiteness that is oppressive, restrictive, and passed down with the intent of diminishing communities of color (Alim & Paris, 2017). Once students view themselves through their own lenses, they increase their self-efficacy and engage “critical abilities to reinterpret situations and organizations as containing possibilities for change” (Anyon, 2009, p. 394). The following sections will provide overviews of information pertinent to understanding diversity issues related to age, disability, first-time and first-generation students, gender, language, race, sexual orientation, and spirituality.
Age
Older students are a steadily growing population in American higher education institutions. College enrollment of students between 18 and 24 years old is projected to increase 9 percent between 2011 and 2022, while college enrollment of students 35 years and older is projected to increase 23 percent over the same period (Hussar & Bailey, 2014). Because age and part-time status typically go together, older college students are often categorized as “nontraditional.” Nontraditional students are diverse and include different genders, first-generation status, diverse racial and ethnic identities, multiple notions of spirituality, and various income levels. “Nontraditional students are much more likely than traditional students to leave postsecondary education without a degree” (Choy, 2002, p. 12). There are situational, dispositional, and institutional barriers to higher education for all students (Cross, 1981). However, Hyland-Russell and Groen (2011) suggest that adult learners often view their barriers to postsecondary education, material and nonmaterial, as related to poverty. For example, many low-income adult learners experience simultaneous and overlapping barriers, including lack of stable income and being on a fixed income (Hyland-Russell & Groen, 2011). Additionally, adult learners typically support themselves and other family members, making efforts to avoid wage loss while going to college a necessity (Windisch, 2016). Choy (2002) identifies working to support a family, balancing child care and class schedules, and balancing time, energy, and financial resources as barriers for many nontraditional students. Additionally, older adult students may not feel they are smart enough for college and may not even feel they belong in a college space where they are frequently surrounded by young students who speak a different generation’s language and often do not seem to value an older generation’s perspective.
Hyland-Russell and Groen (2011) suggest the following to help adult learners persist: a responsive educational system that recognizes the individual needs of older and nontraditional learners, such as clear and transparent partnership agreements; valuing all participants, their perspectives, and their contributions; supports to address material and noneconomic barriers to learning; sustained funding that provides for paid support staff; and trained staff to address concepts of self as learner and student.
Disability
Despite the fact that 69 percent of fourth graders and 60 percent of eighth graders with disabilities score below basic levels in reading (NCES, 2014), 60 percent of young adults with disabilities
enrolled in postsecondary education within eight years of leaving high school, with 71 percent enrolled full time (Newman et al., 2011). That is, although many K-12 students with disabilities score below basic reading levels, they still enroll in higher education institutions with the expectation that they can persist to graduation. Colleges and universities should strive to support all students, including students with disabilities. However, like many students who are members of diverse groups, students with disabilities face inimitable challenges in obtaining a higher education degree. For example, people with disabilities often have unique residential independence, financial independence, and emotional concerns related to higher education experiences (Newman et al., 2011). However, not all students who were identified as having a disability in high school disclose that disability to their postsecondary institution; 63 percent of students who reported they had a disability in high school did not report their disability when they enrolled in postsecondary school (Newman et al., 2011). Looking for a fresh start after high school, many students do not reveal they have a disability (Herbert et al., 2014) because they want to be seen as active participants in their own destinies and enabled to define themselves on their own terms in college (May & LaMont, 2014). A unique subgroup, college students with disabilities of all ages have a distinct culture with their own languages, values, and codes (Gilson & DePoy, 2001). Many people with disabilities already celebrate their disability culture with pride and do not see themselves as part of the “able” community at all (Dupré, 2012).
Students with disabilities start college with many of the same issues as other first-year college students, including effectively communicating their needs, understanding how to be self-regulating and evaluate their own performance, becoming aware of their own strengths and weaknesses, and understanding behavioral outcomes related to internal versus external perceived control (Herbert et al., 2014). Resisting language and perspectives that view a disability as a deficit, and instead seeing a person with a disability as part of a diverse community, is a useful approach when considering ways to help students with disabilities persist (May & LaMont, 2014). Other helpful strategies include focusing on the positive abilities of the student as well as thinking of a disability as something an entire class can support, rather than seeing educational success as the responsibility of the individual alone (May & LaMont, 2014). Embracing the entire student, a key component of CSP, liberates both teachers and students from deficit thinking (Alim & Paris, 2017).
First-Generation Students
First-generation students are those whose parents did not have prior postsecondary experience (NCES, 2017b). First-generation college students are more likely to apply for and take out student loans than their continuing-generation peers, and are more likely to take developmental coursework than students whose parents held a bachelor’s or advanced degree (Chen, 2005). First-generation college students also have lower persistence and graduation rates than their peers. For example, a recent review of U.S. Department of Education data shows that 48 percent of first-generation students stayed on track for graduation within three years after beginning college, as opposed to 53 percent of students whose parents attended some college. Likewise, 67 percent of students whose parents earned a bachelor’s degree stayed on track (Cataldi, Bennett, & Chen, 2018). Both first-time college students and first-generation college students are likely to be less informed about the college process. In addition, first-generation college students are more likely to be underprepared for college, more likely to come from a lower-economic background, and more likely to drop out (The Institute for Higher Education Policy, 2012). Additionally, Jehangir (2009) asserts that many first-generation college students struggle with discovering the unwritten rules and expectations in academia. This phenomenon is often called
Theodore S. Ransaw and Brian J. Boggs
the null curriculum. The null curriculum can be defined as a focus on what is not provided or mentioned in schooling (Flinders, Noddings, & Thornton, 1986). For example, in a recent interview for PBS News Hour, first-generation college student Jennine Crucet said that she heard that professors have office hours, but no one told her what having office hours actually meant (PBS News Hour, 2016). First-generation college students may also experience time constraints with regard to campus activities, since they are more likely to have to work to support themselves in college. Fostering a sense of belonging through student engagement has proven to be a successful method for increasing retention of both first-time and first-generation college students. Student engagement, once college experiences are taken into account – living on campus, enrollment status, working off campus (Kuh, Cruce, Shoup, Kinzie, & Gonyea, 2008) – plays a significant role in college success and the persistence of first-year college students.
Gender

Male college enrollment is projected to increase by 11 percent (from 7.5 million to 8.3 million students), and female enrollment is projected to increase by 16 percent (from 9.5 million to 11.0 million students), between 2015 and 2026 (NCES, 2017a). In addition, while women are earning more degrees than men, by percentage black American women are the most educated group in the U.S. Approximately half of all black women between the ages of 18 and 24 are now pursuing degrees at postsecondary institutions (Taylor, 2014), and 9.7 percent of all black women are enrolled in college (Catalyst, 2004; Taylor, 2014). Beliefs about gender and degree attainment are important issues in higher education, as attitudes toward postsecondary education can vary for students as well as for college and university staff. Differing attitudes and perspectives about college completion and related job prospects are directly related to the type of classes students take to prepare for college. For example, female high school students are more likely than male students to have planned for a college degree or higher and are also more likely to be pressured to go to college when they enroll (Kleinfeld, 2009). Ross et al. (2012) would agree, arguing that once enrolled, men's persistence at the college level is weaker than that of women. However, women still trail men at top-tier institutions and in science, technology, engineering, and mathematics (STEM) fields (Buchmann, 2009). Higher rates of male enrollment at selective universities occur despite the fact that girls typically have higher grades, are more likely than boys to take math and science in K-12, and are therefore better prepared for college (Buchmann, 2009). With the exception of highly selective institutions, male college graduation rates are on the decline, and by 2020, men will represent only 41 percent of college enrollees (Budd, 2010).
Harris and Harper (2008) and Marrs and Sigler (2012) argue that encouraging men to break the gendered perception that seeking help is a weakness, and encouraging them to participate in activities that promote healthy identity development, may help address the male gender gap in college. Budd (2010) suggests making college campuses gender friendly, incentivizing men to attend college with the same energy directed at women but with messaging targeted to male audiences, and encouraging men to participate in college recruitment activities and campus visits.
Language

Enrollment of college English Language Learners (ELLs) is steadily increasing, and many universities have increased efforts to recruit them (Harrison & Shi, 2016). However, while U.S. higher education goes to great lengths to attract ethnic minority and international students, increasing diversity and tuition revenue, these students are not always provided with the language fluency
Student Diversity
and English writing competency support they need to persist once they actually enroll in college (Matsuda, 2012). The International English Language Testing System (IELTS) and the Test of English as a Foreign Language (TOEFL) serve as tools for college acceptance but do not help students flourish through to graduation (Matsuda, 2012). Robertson and Lafond (2016) advocate for ELL college students to find mentors to increase their persistence, as well as to find help in the precollege application process to find the right college. Almon (2015) suggests that helping ELL students with the linguistic intricacies of college procedures is one solution. However, what is not discussed is valuing the diversity that ethnic minority and international students bring to an institution. Higher education may value the content of student voice, but institutions do not embrace all of the ways students use the language of their voice. For example, Rosa and Flores (2017) recount the experience of Estela, a second-generation Chicana who grew up speaking English as well as Spanish. Despite the fact that the professors in Estela's doctoral program said that her writing was quite competent, she suffered from the racialization of her language and the claim that her Spanish was limited. "That heritage speakers with highly nuanced language skills are positioned as less skillful than their counterparts 'who have acquired the language exclusively in the classroom' is precisely the power of raciolinguistic ideologies as they apply to conceptions of heritage language issues" (Rosa & Flores, 2017, p. 184). In other words, despite being a living example of a fluent lifelong English-Spanish bilingual, Estela is not afforded the opportunity to speak either English or Spanish with the ethnic delivery effective in her community.
CSP encourages educators to eliminate deficit thinking and resist the notion, common in academia, that heritage language students who can read, write, interpret, and speak fluently in multiple languages may seem superior to some and limited to others (Rosa & Flores, 2017). Differing languages, and dialects within languages, are not "ill-literacies" (Alim & Paris, 2017, p. 10) but healthy, valuable, and necessary parts of non-colonial language that foster non-colonial ideas. As colleges and universities spend time teaching ELL students how to read and write correct English, as much time should be spent teaching students when to use their unique cultural and linguistic skills for agency.
Race/Ethnicity

Seventy-five percent of whites and fifty-nine percent of Hispanics believe college applicants should be judged only on merit, while blacks are divided in their views (Jones, 2013). Additionally, two-thirds of Americans believe that college applicants should be admitted only on merit, even if it means that fewer minorities would be admitted (Jones, 2013). Clinedinst and Koranteng (2017) note that, regarding admissions decisions, 3.8 percent of colleges said race/ethnicity was a considerable influence, 14.8 percent said race/ethnicity was given moderate influence, 17.1 percent said race/ethnicity was given limited influence, and 64.3 percent said race/ethnicity was not an influence. Race is, after all, the most noticed and most prevalent signifier used to discuss diversity. However, it is important to consider race from paradigms in addition to skin color and to consider the constraints that coincide with being a racial minority, which can include lack of access to housing, a livable income, and quality schooling, as well as poor nutrition. To do this, higher education educators should consider critical race theory (CRT). CRT has the same roots as culturally sustaining pedagogy: critical pedagogy (CP). CRT can best be characterized as "Reject[ing] economic determinism and focus[ing] on the media, culture, language, power,
desire, critical enlightenment, and critical emancipation" to examine why the world exists as it does (Denzin & Lincoln, 2000, p. 160). Originally conceived by Derrick Bell (1992), CRT challenges the idea that race is not an issue with regard to education and acknowledges that barriers, such as school policies, are often obstacles to opportunity for people of color. CRT shifts away from perspectives that frame communities of color as culturally deficient (Yosso, 2005), one of its many similarities to CSP. With regard to CRT and education, Ladson-Billings and Tate (1995) argue that race is a factor in inequities, that U.S. society is based on property rights, and that through the lenses of race and property we can see the system of school inequities. Boggs and Dunbar argue that the idea of race is locked within the limited use of the terms "White" and "whiteness." They state that being "White" is the "unreflected-upon standard from which all other racial identities vary" (Boggs & Dunbar, 2015, p. 44). This means that as a result of colonialism, both domestic and foreign, whiteness has come to represent a way of being beyond just a color or race; whiteness has been normalized and serves as a backdrop against which all other races and cultures are juxtaposed (Boggs & Dunbar, 2015). Alim and Paris (2017) would agree, asserting that the concept of whiteness as a measure of what is and is not considered an ethnicity has become so pervasive that it has negatively influenced all levels of society throughout history, including schooling. While there are numerous student organizations that advocate for change to improve racial climates at colleges and universities, one organization, the website The Demands, stands out. Started after the murder of Mike Brown on August 9, 2014, the website TheDemands.org was created by a national collaborative of activists who fight to end racism and police violence in America.
Currently, 80 colleges and universities have posted lists of demands for racial equity and social justice. Those demands include mandatory racial sensitivity/cultural competency classes, the establishment of or renewed funding for multicultural centers, curriculum design that includes diversity, an increase in tenure-stream faculty of color, support for undocumented students, clear and effective policies that prevent and outline punishment for racist and discriminatory practices and microaggressions, campus accountability for "safe" classrooms and residence halls, and student-appointed advisory boards (Thedemands.org, 2016). By demanding and implementing a list of changes to eliminate racism on college campuses, contributors to TheDemands.org are actively engaged in the work of CRT while confronting white hegemony in the marginalization of culturally relevant/sustaining pedagogy (Sleeter, 2014).
Identity/Sexual Orientation/LGBTQIA

College research on enrollment, employment, and achievement statistics for LGBTQIA (Lesbian, Gay, Bisexual, Transgender, Queer/Questioning, Intersex, and Asexual-Agender-Aromantic) students and faculty is sparse (Anderson, 2016). Few colleges and universities collect sexual identity information from their students or staff, yet knowing the sexual identity of staff is especially important for LGBTQIA students who look to LGBTQIA staff for support. Adding to the difficulty of examining and collecting institutional data about non-cisgender sexual identity is the fact that researchers use different acronyms to describe their research and participants. For example, Lesbian, Gay, Bisexual, and Transgender (LGBT) is common in the literature, Rankin (2003) employs Gay, Lesbian, Bisexual, and Transgender (GLBT), and Anderson (2016) utilizes LGBTQIA. These various terms reflect the culturally sustaining pedagogical premise that identities are fluid and constantly changing (Alim & Paris, 2017). One study of students from 14 campuses found that 43 percent considered their campus climate to be homophobic and that more than a third of undergraduate students had experienced harassment (Rankin, 2003). However, the same study found that 66 percent of respondents said the climate
of their classrooms or workplaces was accepting of GLBT people, and 17 percent thought it was not (Rankin, 2003). "Perhaps this report captures a moment in time that represents a crossroads for many colleges and universities; institutions that, having taken preliminary measures towards equal access for GLBT people in academia, must now take the steps necessary to truly close the gap between stated institutional commitments and the realities experienced by GLBT individuals" (Rankin, 2003, p. vi). One suggestion to improve LGBTQIA students' college experience is to hire trans-identified personnel, or train trans-knowledgeable admission personnel and other staff, in order to better understand the unique experiences of LGBTQIA students and to become allies (Anderson, 2016; Beemyn et al., 2014; Rankin, 2003). Changing enrollment policies to allow students to use names other than their legal birth names, providing gender-inclusive restrooms, promoting and providing information about nontraditional sexual identities to cisgender students, and providing safe showers were also suggested (Beemyn et al., 2014). Relatedly, having a physical safe place, such as an LGBTQIA student center or meeting space, can create a sense of student emotional well-being. CSP advocate San Pedro (2017) encourages educators to employ "sacred truth places" (p. 101) as opportunities to create and sustain reflexive practices. Likewise, students in Anderson's (2016) study asserted that safe places where LGBTQIA students can congregate were crucial to supporting LGBTQIA students; in addition, providing a place for resources to support students who were allies was helpful.
Spirituality

Religion is an influential element in America. For example, the Pew Forum on Religion and Public Life (2007) reports that 88 percent of Americans believe either in God or in a universal spirit. Additionally, 83 percent of Americans consider religion important in their lives; 81 percent pray, 72 percent attend religious services at least a few times a year, and most indicated that they believe in miracles and heaven (Pew Forum on Religion and Public Life, 2007). It is reasonable to think that this religious diversity holds true for college students as well. Astin (2016) would agree, asserting that 90 percent of college students who consider themselves to be religious also identify as spiritual; however, 30 percent of college students who say they are "spiritual" do not view themselves as "religious" (Astin, 2016). Students who describe themselves as spiritual represent a significant population in college, and they hold worldviews unique to their perspectives that directly influence their campus experience. For example, since many students who identify as spiritual but not religious typically do not engage in religious traditions, they seldom participate in campus interfaith programs (Astin, 2016). Religion is often kept out of school and not embraced on college campuses. However, for an institution to truly embrace the diversity of all its students, those who are religious must be included, as must those who are spiritual but not aligned with a particular religion. When students hear that "all are welcome" at interfaith events, nonreligious students can sometimes assume they are unwelcome. The obvious solution for supporting religious and nonreligious college students is to recognize that religion and spirituality are important elements of many college students' lives and academic experiences. Yet a more meaningful initiative would be to include spirituality in campus activities that are already successful.
Astin (2016) suggests spiritual study-abroad programs and other campus activities, such as contemplation, meditation, or self-reflection activities, that enhance and facilitate student spiritual development. The importance of safe and sacred spaces for students of color cannot be overestimated. Sacred places that include both physical and emotional
spaces where students can tell or write their stories in educational settings validate who they are and who they are becoming (San Pedro, 2017). San Pedro (2017) asserts that for educators who use CSRP, "students' cultures and languages shared through stories are not trivialized, minimized, or locked in the past; they are dynamic, shifting, and evolving in healing and revitalizing ways" (p. 113). Cokley, Garcia, Hall-Clark, Tran, and Rangel (2012) and others (Alim & Paris, 2017; San Pedro, 2017) suggest that postsecondary institutions should include safe places for spirituality and religiousness from a health-benefit perspective, since participation in one or both can increase student well-being.
Implications for Practice

Throughout this chapter, the authors have attempted to provide sketches of statistics, content, and suggestions related to issues of diversity for colleges and universities. Understanding the barriers that diverse students encounter gives institutions the potential to serve them better. In this section, pedagogical approaches and other suggestions are presented. One way to help improve persistence among America's increasingly diverse college student population is to address common problems that a wide variety of college students face from the moment they set foot on a college campus. With regard to reading, Lei, Rhinehart, Howard, and Cho (2010) recommend utilizing student background knowledge and experience to inform lectures and class discussions and providing homework and classwork related to reading topics and practice exercises. In addition, they note that college instructors may find that providing learning aids – such as study guides, handouts, learning packets, chapter/section review questions, chapter/section summary statements, self-quizzes, and suggested readings – is a useful strategy. Bean, Readence, and Dunkerly-Bean (2017) suggest seven principles for effective studying: attention, goal orientation, organization, rehearsal, time on task, depth of processing, and spaced note-taking. The first principle, attention, encourages students to realize that comprehension and learning/memory are not the same, and asks that students spend time separately on memorizing necessary material, such as a formula, and on comprehending how it can be used. Goal orientation centers on students focusing on the most effective goal when studying. Far too often, students are focused on getting through the reading rather than on comprehending and remembering the material. Staying focused on the specific goal of comprehending what is being read, instead of just reading the material, is key to effective studying.
Organization strategies are used to practice mental organization skills, including the interrelated habits of putting information into meaningful patterns, chunking information into manageable bits, and using acronyms to increase retention. Rehearsal, as the word suggests, encourages practicing. However, a more nuanced perspective on study strategies involves frequent, spaced, and brief periods of study of a specific topic, also known as distributed practice. The time-on-task principle advises students simply to remember that time on task – typically short, spaced-out, focused studying – is an effective way to study because it increases the amount of narrowly focused time students spend. This concurs with Marrs and Sigler's (2012) research, which asserts that periodically stopping and reviewing material while studying, as well as using a systematic studying approach, increases academic achievement. Time on task and the related increase in memory retention are more efficient than trying to study in long blocks of time or cramming before a test. The depth-of-processing principle notes that the brain retains information better the deeper the level of thinking, the more complex the task, and the higher the degree of analysis. New vocabulary words and their definitions are likely to be remembered longer and with greater accuracy if the word is introduced with such elements as context, morphemic analysis, and etymology (Bean et al., 2017). Finally, students should study in ways that match the way the information will be
tested (e.g., multiple choice, essay, or a project). For example, making a graph to help understand the order of events in a story is helpful, but not as helpful as having the student reproduce the writing structure of the story.
Recommendations for Future Research

This chapter highlighted the fact that colleges and universities are becoming more diverse than ever and provided recommendations for meeting the increasing, as well as complex, needs of students. Age, disabilities, first-generation students, gender, language, race, identity, and spirituality were viewed as individually important but also as integrated aspects influencing postsecondary education. Increasing diversity is challenging the institution of higher education with regard to teacher and student demographics, as well as how instruction is received and how students implement what they have learned. However, just as postsecondary students influence what works for them, colleges and universities should respond with new and innovative strategies to better serve them. Some might call meeting the diverse needs of students through the framework of CSP just good teaching practice (Ladson-Billings, 1995b). Indeed, CSP is part of the pedagogy of good teaching practices and is the mindful pedagogy that meets the needs of the colonized and not just the colonizer. In the words of Anyon (2009), CP is not enough; educators must "communicate to students knowledge about the injustices that mark society and constrain their life" (p. 389). It is imperative that students understand that circumstances that most often affect people of color, such as poverty, inequitable education, and substandard living conditions, are systemic and not personal failings. Students of color suffer from lower educational attainment than white students by design, not because they aren't trying hard enough (Anyon, 2009). CSP "is the challenge and paradigm shift… that helps educators to contest the ways coloniality lives in and through schooling" (Dominguez, 2017, p. 233). In light of the interconnectedness of CRP, CRSP, and CSP, we offer the following three suggestions.
Future research and practices that would best support diverse students in higher education include facilitating cognitive justice; understanding diglossia, that is, the function of colonization in language education; and helping to support underserved students through empowering information, including financial literacy classes.
Facilitating Cognitive Justice

The conscious act of acknowledging and embracing multiple ways of thinking and treating individual ways of knowing equally is called cognitive justice. It allows diverse people to be experts in their own lives, informed by their own sense of being, based on their unique cultures and experiences (Hoppers, 2009). Cognitive justice requires establishing "new evaluation and appraisal criteria" (Hoppers, 2009, p. 611). Facilitating positive student engagement based on cognitive justice has the potential to redefine higher education and, more importantly, to include aspects of diversity that have rarely been considered before. What would research on cognitive justice look like? It would investigate and then implement the use of cultural artifacts to educate with objects familiar to the cultures of the students being served, as well as examine and then utilize sociocultural and historical contexts to teach problem-solving skills, literacy, and scholarly writing (Makoelle, 2014).
Embracing Diglossia in Student Voice

Student voice is conceptualized in terms of organized student leadership, such as student organizations and student council; informal individual and group student conversations; acknowledgment of unique learning styles; encouragement of student reflection; asking students their thoughts
on improving instruction and curriculum; and inclusion of the needs of students with special needs (Fielding, 2004). However, student voice also needs to be thought of in terms of valuing and embracing the linguistic voices of students. All too often, higher education forces students to give up their cultural expressions in their speaking and writing. What good do college reading and writing classes accomplish if students are not able to communicate effectively with those in the communities they serve? There have been debates in higher education concerning whether indigenous languages should be translated into English so that the knowledge and information are accessible to those who can effect the most change, or whether writing in indigenous languages frees thinking from cultural oppression (Wafula, 2002). However, literacy is not just about understanding the language of reading and writing for comprehension: literacy also includes understanding and appreciating different dialects as well as pronunciations. Diglossia, or the appreciation of "high-and-low" languages (Saville-Troike, 1989), includes, rather than restricts, dialogue in a way that allows students from many cultural, ethnic, and socioeconomic statuses (SES) to contribute. Including diglossia in literacy research has the potential to provide students with information about when and when not to use particular language choices. In short, a diglossic approach would give students agency in their voices, unrestricted by ivory-tower perspectives that can impede and dampen the sustaining and improving of their communities. Since language is never neutral, more research needs to be done that examines the lived experiences of postsecondary literacy and language choices with regard to heuristic impact.
Offering Financial Literacy Classes

Applying for student loans has become part of the college experience, and the accompanying student loan debt is increasing, two facts that are unlikely to change anytime soon. Acquiring staggering amounts of student loan debt is common even for the offspring of college-educated parents, but it can be especially troubling for first-generation college students. In addition, navigating the student loan process presents additional difficulties for ELL students. Lee and Mueller (2014) assert that student loan debt literacy is critical to citizenship, as well as a critical component of college completion, and that financial literacy courses could be taught as part of required orientation or freshman-year classes for all students. Many colleges and universities have financial literacy workshops, and some even have entire departments focused on the topic. Lee and Mueller (2014) also recommend both quantitative and qualitative studies to examine the student loan decision process; where, when, and how along the process students begin to incur debt; measures of student loan debt literacy competency before and after student loan debt literacy training; and financial literacy gaps that affect student populations. The sections in this overview – age, disability, first-time and first-generation students, gender, language, race, sexual orientation, and spirituality – are all related to diversity in some way. Combined, they help add context to CSP and to how culture can bind us together in the classroom. In the words of Grant and Sleeter (2011), "all students have culture" (p. 132), whether it is the culture they speak, the way they use language, or their SES background. It is acknowledging our similarities while being mindful of our differences that brings us together in the classroom and in life.
References and Suggested Readings

Alim, H. S., & Paris, D. (2017). What is culturally sustaining pedagogy and why does it matter? In D. Paris & H. S. Alim (Eds.), Culturally sustaining pedagogies: Teaching and learning for justice in a changing world (pp. 1–24). New York, NY: Teachers College Press.
Almon, C. (2015). College persistence and engagement in light of a mature English language learner (ELL) student's voice. Community College Journal of Research and Practice, 39(5), 461. doi:10.1080/10668926.2013.850757.
Anderson, J. A. (2016). Finding purpose: Identifying factors that motivate lesbian, gay, bisexual, and transgender college student engagement at a two-year institution (Unpublished doctoral dissertation). University of Minnesota, Minneapolis.
Anyon, J. (2009). Critical pedagogy is not enough: Social justice education, political participation, and the politicization of students. In M. W. Apple, W. Au, & L. A. Gandin (Eds.), The Routledge international handbook of critical education (pp. 389–395). New York, NY: Routledge.
Astin, A. W. (2016). "Spirituality" and "religiousness" among American college students. About Campus, 20(6), 16–22. doi:10.1002/abc.21222.
Bean, T. W., Readence, J. E., & Dunkerly-Bean, J. (2017). Content area literacy: An integrated approach (11th ed.). Dubuque, IA: Kendall/Hunt.
Beemyn, G. B., Jones, A. J., Hinesley, H., Martin, C., Dirks, D. A., Bazarsky, D., … Robinson, L. (2014). Suggested best practices for supporting trans students. New York, NY: Consortium of Higher Education LGBT Resource Professionals.
Bell, D. (1992). Faces at the bottom of the well. New York, NY: Basic Books.
Boggs, B. J., & Dunbar, C. (2015). An interpretive history of urban education and leadership in an age of perceived racial invisibility. In M. Khalifa, N. W. Arnold, A. F. Osanloo, & C. M. Grant (Eds.), Handbook of urban educational leadership (pp. 43–57). Lanham, MD: Rowman & Littlefield.
Bowman, N. A. (2013). How much diversity is enough? The curvilinear relationship between college diversity interactions and first-year student outcomes. Research in Higher Education, 54(8), 874–894.
Buchmann, C. (2009). Gender inequalities in the transition to college. Teachers College Record, 111(10), 2320–2346.
Budd, A. (2010). Missing men: Addressing the college gender gap. Higher Ed Admissions. Retrieved from http://higheredlive.com/missing-men/
Cataldi, E. F., Bennett, C. T., & Chen, X. (2018).
Stats in brief: First-generation students: College access, persistence, and postbachelor's outcomes. National Center for Education Statistics.
Catalyst. (2004). Advancing African-American women in the workplace: What managers need to know. New York, NY: Catalyst Publications.
Chen, X. (2005). First-generation students in postsecondary education: A look at their college transcripts (NCES 2005-171). U.S. Department of Education. Washington, DC: National Center for Education Statistics.
Choy, S. (2002). Nontraditional undergraduates (NCES 2002-012). Washington, DC: National Center for Education Statistics, U.S. Department of Education.
Clinedinst, M., & Koranteng, A. (2017). State of college admissions. Arlington, VA: The National Association for College Admission Counseling. Retrieved from www.nacacnet.org/news--publications/publications/state-of-college-admission
Cokley, K., Garcia, D., Hall-Clark, B., Tran, K., & Rangel, A. (2012). The moderating role of ethnicity in the relation between religiousness and mental health among ethnically diverse college students. Journal of Religion and Health, 51(3), 890–907.
Cross, P. (1981). Adults as learners: Increasing participation and facilitating learning (1st ed.). San Francisco, CA: Jossey-Bass.
Denzin, N., & Lincoln, Y. (2000). Handbook of qualitative research (2nd ed.). Thousand Oaks, CA: Sage Publications.
Dominguez, M. (2017). "Se hace puentes al andar": Decolonial teacher education as a needed bridge to culturally sustaining and revitalizing pedagogies. In D. Paris & H. S. Alim (Eds.), Culturally sustaining pedagogies: Teaching and learning for justice in a changing world (pp. 226–246). New York, NY: Teachers College Press.
Dupré, M. (2012). Disability culture and cultural competency in social work. Social Work Education, 31, 168–183.
Fielding, M. (2004). 'New wave' student voice and the renewal of civic society. London Review of Education, 2(3), 197–217.
Flinders, D., Noddings, N., & Thornton, S. (1986).
The null curriculum: Its basis and practical implications. Curriculum Inquiry, 16, 133–42. Gilson, S. F., & DePoy, E. (2001). Theoretical approaches to disability content in social work education. Journal on Social Work Education, 38, 153–165. Grant, C., A. & Sleeter, C. (2011). Doing multicultural education for achievement and equity (2nd ed). New York: Routledge. Harris, F., & Harper, S., R. (2008). Masculinities go to college: Understanding male identity socialization and gender role conflict. In J. Lester (Ed.), Gendered perspectives in community colleges. New Directions for Community Colleges (No. 142, pp. 25–35). San Francisco, CA: Jossey-Bass. Harrison, J., & Shi, H. (2016). English language learners in higher education: An exploratory conversation. Journal of International Students, 6(2), 415–430.
71
Theodore S. Ransaw and Brian J. Boggs
Herbert, J. T., Hong, B. S. S., Byun, S., Welsh, W., Kurz, C. A., & Atkinson, H. A. (2014). Persistence and graduation of college students seeking disability support services. Journal of Rehabilitation, 80(1), 22–32. Higbee, J. L. (2009). Student diversity. In R. Flippo & D. Caverly (Eds.), Handbook of college reading and study strategies research (2nd ed., pp. 67–94). New York, NY: Routledge, Taylor & Francis Group. *Hoppers, C. A. (2009). Education, culture and society in a globalizing world: Implications for comparative and international education. Compare: A Journal of Comparative and International Education, 39(5), 601–614. doi:10.1080/03057920903125628. Hussar, W. J., & Bailey, T. M. (2014). Projection of education statistics to 2022. (NCES 2016-013). U.S. Department of Education, National Center for Education Statistics. Washington, DC: U.S. Government Printing Office. Hyland-Russell, T., & Groen, J. (2011). Marginalized non-traditional adult learners in Canada: Beyond economics. The Canadian Journal for the Study of Adult Education, 24(1), 61–79. Jehangir, R. R. (2009). Cultivating voice: First-generation students seek full academic citizenship in multicultural learning communities. Innovative Higher Education, 34(1), 33–49. doi:10.1007/s10755-008-9089-5. *Jones, J. M. (2013). Considering race in college admissions: Sixty-seven percent say decisions should be based solely on merit. Gallup. Retrieved from www.gallup.com/poll/163655/reject-considering-race-college- admissions.aspx Kleinfeld, J. (2009). No map to manhood: Male and female mindsets behind the college gender gap. Gender Issues, 26(3/4), 171–182. Kuh, G. D., Cruce, T. M., Shoup, R., Kinzie, J., & Gonyea, R. M. (2008). Unmasking the effects of student engagement on first-year college grades and persistence. The Journal of Higher Education, 79(5), 540–563. doi:10.1353/jhe.0.0019. Ladson-Billings, G. (1994). The dreamkeepers: Successful teachers of African American children. 
San Fransisco, CA: Jossey-Bass. Ladson-Billings, G. (1995a). Toward a theory of culturally relevant pedagogy. American Educational Research Journal, 32(3), 465–491. *Ladson-Billings, G. (1995b). But that’s just good teaching: The case for culturally relevant pedagogy. Theory Into Practice, 34(3), 159–165. Ladson-Billings, G. (2014). Culturally relevant pedagogy 2.0: The remix. Harvard Educational Review, 84(1), 74–88. *Ladson-Billings, G., & Tate, W. F. (1995). Toward a critical race theory of education. Teachers College Record, 97(1), 47–68. *Lee, J., & Mueller, J. (2014). Student loan debt literacy: A comparison of first-generation and continuinggeneration college students. Journal of College Student Development, 55(7), 714–719. doi:10.1353/csd.2014.0074. Lei, S. A., Rhinehart, P. J., Howard, H. A., & Cho, J. K. (2010, Spring). Strategies for improving reading comprehension among college students. Reading Improvement, 47(1), 30–42. Makoelle, T. (2014). Cognitive justice: A road map for equitable inclusive learning environments. International Journal of Education and Research, 2(7), 505–518. Marrs, H., & Sigler, E. A. (2012). Male academic performance in college: The possible role of study strategies. Psychology of Men & Masculinity, 13(2), 227–241. doi:10.1037/a0022247. Matsuda, P. K. (2012). Let’s face it: Language issues and the writing program administrator. Writing Program Administration, 36(1), 141–163. May, B., & LaMont, E. (2014). Rethinking learning disabilities in the college classroom: A multicultural perspective. Social Work Education, 33(7), 959–975. doi:10.1080/02615479.2014.895806. National Equity Alliance. (2016). Data summaries. Retrieved from http://nationalequityatlas.org/data-summaries/ Michigan/ NCES. (2014). The nation’s report card mathematics and reading 2013: Trends in 4th and 8th grade NAEP reading and mathematics achievement-level results, by status as students with disabilities (SD). Institute of Education Sciences, U.S. 
Department of Education. Retrieved from www.nationsreportcard.gov/ reading_math_2013/#/ NCES. (2017a). Undergraduate enrollment. National Center for Education Statistics. Retrieved from: https://nces.ed.gov/programs/coe/indicator_cha.asp. NCES. (2017b, September). First generation and continuing-generation college students: A complete comparison of High school and post secondary experiences. Retrieved from: https://nces.ed.gov/pubsearch/ pubsinfo.asp?pubid=2018009.
72
Student Diversity
Newman, L., Wagner, M., Knokey, A.-M., Marder, C., Nagle, K., Shaver, D., Wei, X., with Cameto, R., Contreras, E., Ferguson, K., Greene, S., & Schwarting, M. (2011). The post-high school outcomes of young adults with disabilities up to 8 years after high school: A report from the national longitudinal transition study-2 (NLTS2) (NCSER 2011–3005). Menlo Park, CA: SRI International. Paris, D., & Alim, S. (Eds.). (2017). Sustaining pedagogies: Teaching and learning for justice in a changing world. New York, NY: Teachers College Press. PBS News Hour. (2016, September). Why first-generation students need mentors who get them. Retrieved from www.youtube.com/watch?v=B1O9Lvv8lv4&feature=youtu.be Pew Forum on Religion and Public Life. (2007). U.S. religious landscape survey. Washington, DC: Pew Research Center. Rankin, S. R. (2003). Campus climate for gay, lesbian, bisexual, and transgender people: A national perspective. New York, NY: The National Gay and Lesbian Task Force Policy Institute. Robertson, K., & Lafond, S. (2016). Getting ready for college: What ELL students need to know. All about adolescent literacy. Retrieved from www.adlit.org/article/28377/ Rosa, J., & Flores, N. (2017). Do you hear what I hear: Raciolinguistic ideologies and culturally sustaining pedagogies. In D. Paris & H. S. Alim (Eds.), Culturally sustaining pedagogies: Teaching and learning for justice in a changing world (pp. 175–190). New York, NY: Teachers College Press. Ross, T., Kena, G., Rathbun, A., KewalRamani, A., Zhang, J., Kristapovich, P., & Manning, E. (2012). Higher education: Gaps in access and persistence study. (NCES 2012–046). U.S. Department of Education, National Center for Education Statistics. Washington, DC: Government Printing Office. San Pedro, T. J. (2017). “This stuff interests me.” Re-centering indigenous paradigms in colorizing schooling spaces. In D. Paris & H. S. Alim (Eds.), Culturally sustaining pedagogies: Teaching and learning for justice in a changing world (pp. 
99–116). New York, NY: Teachers College Press. Saville-Troike, M. (1989). The ethnography of communication. New York, NY: Basil Blackwell. Sleeter, C. E. (2014). Confronting the marginalization of culturally responsive pedagogy. Urban Education 47(3) 562–584. Taylor, A. V. ( July 2014). Go sisters go! Black women are the most educated group in the United States. Naturally, moi. Retrieved from http://naturallymoi.com/2014/07/go-sisters- go-black-women-the-most-educatedgroup-in-the-united-states/#.Vj0lH7_0eag Thedemands.org. (2016). The demands. Retrieved from Thedemands.org The Institute for Higher Education Policy. (2012). Supporting first-generation college students through classroom-based practices. A report by The Institute for Higher Education Policy. Washington, DC. Wafula, R. M. (2002). “My audience tells me in which tongue I should sing”: The politics about language in African literature. In S. G. Obeng & B. Hartford (Eds.), Political independence with linguistic servitude: The politics about languages in the developing world (pp. 95–108). New York, NY: Nova Science Publishers. Windisch, H. C. (2016). How to motivate adults with low literacy and numeracy skills to engage and persist in learning: A literature review of policy interventions. International Review of Education, 62(3), 279–297. doi:10.1007/s11159-016-9553-x. Yosso, T. (2005). Whose culture has capital? A critical race theory discussion of community cultural wealth. Race, Ethnicity and Education, 8(1), 69–91.
73
5 Social Media

Barbara J. Guzzetti, Arizona State University
Leslie M. Foley, Grand Canyon University
This chapter provides a discussion of the changing nature of literacy for millennial youth enrolled in college classes, particularly in relation to their take-up of social media. In discussing this, we define and elaborate on the construct of literacy in a digital age and present the recent research on college students’ needs and abilities in relation to digital media for social networking, research that mostly focuses on writing or producing texts rather than simply reading or consuming them. Our review points to the paucity of research on the needs and abilities of developmental literacy learners with social media for teaching and learning, particularly for advancing reading and writing, and calls for additional lines of inquiry to address this gap in the extant literature.

Today’s college students are known as the E-generation (Jones & Flannigan, 2006) or as “digital natives” (Prensky, 2001) who have grown up with digital media. They are accustomed to using online media, such as websites, to locate information and spend more time accessing information on digital media than acquiring information from traditional print texts (Hsu & Wang, 2011). Youth report learning more from their own explorations online than they do in their classrooms (Magnifico, 2005). These young adults have become facile users of social media, also known as participatory media: those digital media that have interactive capabilities and allow for content creation, social interaction, collaboration, and deliberation (Bryer & Zavattaro, 2011). Social media include networking platforms like Facebook, as well as wikis, podcasts, blogs, and virtual worlds. Their users are enabled to become “produsers,” or those who both consume and produce new media (Bruns, 2006) by reading and writing online. They are active members of a participatory culture (Jenkins, 2006) as they use and create Web 2.0 technologies to communicate, share information, and network with others.
A survey of students at 127 colleges and universities in the USA and Canada, supplemented by focus group interviews, revealed that about 40 percent of these students produced and posted videos to video-sharing sites like YouTube and updated wikis, while about a third of them contributed to blogs (Smith & Caruso, 2010). The rapid and widespread take-up of these participatory media has resulted in expanded definitions of literacy and of what counts as being literate in 21st-century society (Jones & Flannigan, 2006). The construct of “literacy” has shifted from being viewed as a singular ability to read and write in print contexts to “literacies,” or those multiple skills and abilities that are required for communication and participation in contemporary society, representing diverse ways of making meaning (Knobel & Lankshear, 2007; Lankshear & Knobel, 2008). They are often referred to as
“the new literacies” or those that are both chronologically new and constitute socially recognized ways of communicating, generating, and negotiating meaningful content (Knobel & Lankshear, 2007; Lankshear & Knobel, 2011). The National Council of Teachers of English (1998–2017) has defined these literacies to include cultural and communicative practices that encompass not only developing traditional forms of print literacy, such as reading and writing, but also developing proficiency and fluency with the tools of technology. These continually emerging digital forms of communication represent social capital, technologically savvy abilities that are transformational and critical for equitable participation in a global society.
Digital Literacies in Higher Education

“Digital” is the most recent descriptor used in higher education to refer to new information and communication media, including social media (Goodfellow, 2011). Digital literacies are considered transformational for higher education, since one of the major missions of academia is promoting practices that are critical for communication (Goodfellow, 2011). Scholars (e.g., Gee, 2003; Kress, 2003) argue that since communication is now multimodal and diverse, it is useful not to restrict literacy education to simply developing generic competencies that are transferable to varying contexts but to prepare students with a variety of competencies for myriad communicative purposes. This directive is consistent with the demands placed on college students as they navigate the technological ecosystem of their institutions: they register online for classes, apply for financial aid online, complete multimedia assignments, and conduct research using electronic resources (Goode, 2010). College students need to become adept at drawing on complex hybrid textual genres using a range of technologies and at integrating these practices into their course work, more complex tasks than those required of students in the pre-digital era (Lea & Jones, 2011). These are the kinds of new literate skills and abilities that college students must develop to function and communicate in a digital world.
The Digital Divide

Surveys, such as those conducted by the Pew Internet and American Life Project (Fox & Livingston, 2007; Horrigan, 2006; Pew Research Center, 2017), have indicated that demographic characteristics influence the take-up of digital media by young adults, who represent a large section of the population embracing new media. Gender, race, socioeconomic status, primary language, geographical location, (dis)ability, educational level, and generational characteristics are associated with variable access to and use of new digital technologies, creating a digital divide (Prensky, 2001). These characteristics are associated with if and how digital media are taken up and are further complicated by young people’s sociocultural influences and their access to quality K-12 education.
Update on the Digital Divide

Because this research identifying a digital divide has been critiqued for being superficial and for not indicating how young adults use digital technologies, a second level of research has focused on usage rates, dispositions toward digital technologies and social media, and the skills and abilities associated with digital media. For example, Ching, Basham, and Jang (2005) surveyed college students, discovering that high-income males who had early access to a computer were the most likely to use digital media for construction, entertainment, and communication, and that home context trumps school experiences in terms of building digital knowledge and abilities.
Students from low-income homes, students from rural areas, and people of color have been found to have limited Internet access, hindering their engagement in digital media outside of school (Pew Research Center, 2017). Just over 1 in 10 American adults are currently smartphone-only Internet users, lacking broadband access at home, a scenario especially common among young adults, non-Whites, and low-income students (Pew Research Center, 2017). Yet 90 percent of young adults use social media, while Whites, Hispanics, and African Americans have adopted social media at the same brisk pace, accelerating from 6 percent for women and 8 percent for men in 2005 to 68 percent of women and 66 percent of men in 2015 (Perrin, 2015).

Case study research supplemented these surveys with in-depth interviews of college students, indicating that these young adults form “technology identities,” or beliefs about themselves and their relationships to digital tools and media that either reinforce or limit their learning opportunities, social interactions, and future career or study plans (Goode, 2010). For example, a low-income Latina student, characterized as digitally challenged and not fluent in new digital media but eager to learn, was offered no digital support or learning opportunities in college and consequently continued to perceive herself as on the wrong side of the digital divide. Her male peer with extensive digital knowledge and skill, who grew up with a computer at home and a father in the technology industry, became fluent enough to access other forms of digital media in his academic and social life but expressed little interest in learning more about digital technologies. Another student from an affluent background inherited a series of computers from his electrical-engineer father, achieved rich digital knowledge, and characterized himself as a “technophile” whose digital abilities and understandings have shaped his sense of self and his career endeavors.
These cases illustrate the digital divide present in the higher education system, a divide that reflects broader social and economic inequities and highlights the role of colleges and universities in perpetuating rather than resisting the inequalities associated with it (Goode, 2010). These issues include disparities in access, in skills and use, and in the contexts in which college students use new media. Low-income college students, as well as those of color, many of whom are first-generation college and developmental literacy students, are particularly marginalized in their take-up of new digital technologies and tools (Goode, 2010). Technology and digital literacy demands widen the gap in college preparation for students in need of basic literacy instruction (Relles & Tierney, 2016). Extant research has indicated the need for colleges and universities to address these disparities in access to and take-up of new digital media in order to shrink the digital divide. Such efforts can assist students in developing the robust technology identities required for college success and career pathways (Goode, 2010). For many students, the lack of a strong technology identity influences their trajectories for future study and careers, particularly their participation in the science, technology, engineering, and mathematics (STEM) fields that have historically lacked, and continue to lack, women and people of color (Bidwell, 2015; Camera, 2015).
Social Media for Teaching and Learning in Colleges and Universities

The use and production of digital and social media for educational purposes are still relatively new endeavors in higher education classrooms. Therefore, most studies focusing on digital literacies as an inclusion in literacy education have focused on K-12 students (Hsu & Wang, 2011). Little research has been conducted on digital media for teaching and learning at the college and university level. As other researchers found almost a decade ago (Caverly, Peterson, Delaney, & Starks-Martin, 2009), few studies have been conducted on the impact of new digital or social media with undergraduates in developmental literacy classes.
Hence, this review focuses on the implementation of participatory media with college and university students and, where possible, with those enrolled in developmental literacy courses. We particularly highlight studies conducted since 2008, following the publication of the last edition of this text. In doing so, we note the most popular social media used in higher education, as evidenced by reports in the professional literature on teaching and learning.
The Call for Digital Literacies in Basic Literacy Instruction

In 2009, a report published by the National Council of Teachers of English, Writing in the 21st Century (Yancey, 2009), challenged educators to consider how technology has transformed authoring by creating powerful new practices that converge reading and writing with the digital world. A “digital imperative” was issued to shift toward a pedagogy aimed at furthering students’ digital literacy as a dominant model for composition instruction (Clark, 2010). As a basic English instructor at the community-college level, Clark (2010) called for updates to the epistemology and pedagogy that shape the teaching and learning process:

The traditional essayistic literacy that still dominates composition classes is outmoded and needs to be replaced by an intentional pedagogy of digital rhetoric that emphasizes the civic importance of education, the cultural and social imperative of the now, and the cultural software that engages students in the interactivity, collaboration, ownership, authority and malleability of texts. Today, the composition classroom should immerse students in analyzing digital media, in exploring the world beyond the classroom, in crafting digital personae, and in creating new and emerging definitions of civic literacy.
(p. 28)

Reframing literacy instruction to include the participatory spaces of social media will be important in capturing the potential of these sites for appropriate 21st-century pedagogies (Vie, 2008). Selfe and Hawisher (2004) argued that educators cannot ignore the increasingly expansive networked environments that students use to communicate, or they run the risk of their curricula no longer holding relevance for those students.
While instructors have made advances in implementing the digital technologies of blogs, wikis, and course management platforms, other digital technologies, like video games and social media, have received less academic attention, despite their potential value. This oversight is unfortunate as social media can provide “many teachable moments for instructors who wish to talk with students about audience, discourse communities, intellectual property, and the tensions between public and private writing” (Vie, 2008, p. 22). Social media offer an authentic audience, a readership composed of interested others, the ability to peer edit or comment on a written post, and the discussion of ideas. Social media, such as Reddit (www.reddit.com) or Facebook (www.facebook.com), readily provide opportunities for readers’ feedback and extensions of the author’s written ideas, and assist in forming communities around topics and issues of common interest. Social media represent contemporary ways of reading and writing in digital environments and therefore have a place in literacy instruction for today’s youth. Despite this compelling rationale for providing instruction in digital reading and writing, few college and university faculty have incorporated social media into their literacy instruction (Chen & Bryer, 2012; Clark, 2010). This phenomenon is consistent with findings from The Faculty Survey of Student Engagement (2010), a survey of 4,600 faculty members from 50 colleges and universities, revealing that over 80 percent of college and university faculty had never used social media, such as blogs, wikis, or virtual worlds, in their teaching. Consequently, students have rarely used social media for educational or learning purposes (Chen & Bryer, 2012).
Students’ Needs in Literacy with the Changing Nature of Texts

Community-college instructors who have incorporated Web 2.0 and multimodal strategies in their instruction in Basic Reading/Literacy or Basic Writing courses point to the need to develop and refine students’ reading, writing, and critical thinking skills and abilities for participation in 21st-century global society (Clark, 2010; Klages & Clark, 2009). They note that while digital natives may be fluent in their adoption of new digital media, they have the same need for learning reading, writing, and critical thinking skills that has characterized traditional basic literacy instruction. Although youth may be adept at locating and accessing information online, they are less proficient at reading and thinking critically about these texts, deconstructing or questioning online texts, and producing digital information (Clark, 2010).
Contemporary Critical Thinking and Reading Abilities

The nature of the texts that compose social media has changed the demands placed on readers as consumers and/or producers of these texts. New kinds of literate skills and abilities are needed for critical thinking, including critical literacy, or the ability to critically question and deconstruct a text for messages that promote one point of view and marginalize others. These are skills and abilities that will need to be modeled and directly taught to developmental literacy students. The thinking skills and abilities associated with new media allow for creative communication and connection to others in a digital world. Researchers have identified new literate skills and abilities associated with new media, particularly social media (e.g., Jenkins, 2006; Van Someren, 2008). Some of the new kinds of critical thinking and reading abilities associated with comprehending and producing digital texts include judgment, or the ability to determine whether the information in a text is reliable. Students will need to be taught to examine the sources of the information being presented, to look for biases, and to judge the authors’ credibility. Students will also need to be taught the skill of appropriation, or the ability to remix and combine various texts to create a new text. In addition, instructors will need to teach transmedia navigation, or the ability to switch back and forth between a visual text, such as a digital photograph or graphic embedded in a text, and the text of digital words, and to combine these elements to comprehend a text (Jenkins, 2006; Van Someren, 2008). This is a type of visual literacy that promotes the ability to read different types of visuals in different contexts (Gee, 2003). Other new literate skills and abilities include negotiation, or the ability to enter different online spaces and understand the norms that prevail in those spaces and influence text production.
For example, Facebook features allow for comments, questions, or the posting of digital photos or videos as responses to reading a poster’s text. Students will need to be taught to play with texts, or to develop the ability to juxtapose digital texts to experiment, test new possibilities in creating original texts, and problem solve with texts. They will also need the skill of intertextuality, or the ability to relate one text to another. Digital texts often include links or hyperlinks to additional readings, so readers must be able to relate, compare, and contrast the information in one digital text with another. In addition, students will need to learn how to look for evidence that supports or refutes opinions or ideas between and among various forms of digital texts.
Contemporary Reading Skills and Abilities

Digital natives also need to develop and refine their reading skills and abilities due to the changing nature of contemporary texts. Twenty-first-century texts now incorporate symbols and abbreviations that students will need to learn to comprehend to be literate in contemporary society. For
example, individuals will need to interpret “cyberspeak,” or Internet slang, such as “BRB” for “be right back” or “LOL” for “laughing out loud,” language that often appears in text messages, instant messaging, and other forms of digital communication. A dictionary has been published (Ihnatko, 1996) that lists and defines these terms. In addition, digital texts often incorporate emoticons, symbols for emotions such as the smiley face. Emoticons often appear in instant messaging or texting and represent a new form of text that readers must interpret. A complete list of emoticons can be found at Cool Smileys at http://cool-smileys.com/text-emoticons.
Contemporary Writing Skills and Abilities

Developmental literacy students will also need to refocus their writing skills and abilities. They need to be able to code-switch between casual writing, such as writing for blogs, wikis, online journals, and social networking sites, and their formal academic writing. They will need to understand the appropriate conventions for the audiences for whom they are writing. The advent of cyberspeak and emoticons has resulted in confusion between the appropriate conventions of formal and informal writing, as these informal and new forms of text have crept into formal text production. Students need to be able to recognize the contextual differences between writing for an academic audience in a formal way and their casual or informal writing.

Developmental literacy students also need to refine their understandings of the role that the writing process can play in developing written texts (Klages & Clark, 2009). Since digital technologies allow for instant publishing, today’s students typically do not understand the role of the writing process with its cycle of revision, editing, self-assessment, peer feedback, and further revision for a final draft. Digital writing has become “process-less” (Klages & Clark, 2009). Today’s e-generation will need to learn to value the writing process and to write for an authentic audience of readers who are enabled by social media to become peer editors or commentators, a necessary means of becoming proficient writers.
Contemporary Ability to Identify as Authors

In addition to these value and skill needs, students may need to develop new views of themselves as readers and writers and overcome deficit views of themselves as literacy learners and practitioners. Many college students may come from underperforming high schools or arrive from other nations as refugees, having had little social or educational preparation for college. They may be uncomfortable with reading and writing and may have higher levels of anxiety about academic literacy, with little confidence in their writing, reading, and critical thinking abilities, characteristics typical of college students in basic literacy classes, particularly in urban areas (Klages & Clark, 2009). In addition, although these young people may be or become prolific writers of personal narratives on social media, they typically do not see these efforts as “real” writing. They have come to learn that their stories are not of value in academia. They may exclusively associate writing with the five-paragraph essay and high-stakes testing (Klages & Clark, 2009). Hence, they may not see their online writing as authorship and connection to the world of writing. These students will need to be directly informed of the value of developing their identities as readers and writers. They will need to be shown the inherent value of writing their lives and how, in doing so, they can provide the impetus to stimulate others’ writing and offer models for readers of their life stories to create their own personal narratives.
Strategies for Advancing Reading and Writing for the 21st Century

Digital Storytelling

To accomplish the objective of realizing the value of reading and writing life stories, students may be made aware of ideas originating from the Center for Digital Storytelling, a community center for media production founded by Joe Lambert in the 1990s and based on the belief that everyone has a story to tell (Davis & Foley, 2016; Lazorchak, 2012). Digital storytelling consists of composing personal narratives through multimedia by combining words with sounds and images, reinforcing the notion that life stories are worth reading, writing, and sharing. The process for creating a digital story follows the typical stages of the writing process, including brainstorming, planning, creating, revising, editing, publishing, and reflecting (Warfield, 2016). Multimedia elements, such as sound effects, background graphics, color, visuals, and digital photographs, are incorporated to supplement the words of the story, and the story may be read aloud by its writer. Including color, sound, and visuals with words can convey tone and mood and add depth and dimension to a personal narrative. Reading others’ digital stories demonstrates the ability to find voice through reading and writing and illustrates how multimodal forms of text can be combined to form an enhanced and coherent narrative.
E-Portfolios

To meet students’ needs for relevant literacy instruction, college instructors who have implemented social media in literacy instruction recommend incorporating e-portfolios to help students learn how to write for real audiences and authentic purposes (Klages & Clark, 2009). Electronic portfolios are digital versions of print portfolios in which students collect their written work during the semester, select representative pieces, and write reflections on them. E-portfolios serve as a vehicle for teaching developmental writing in which students can explore their emerging literacies across a range of digital media by writing essays, drafting reflections, and using a variety of digital platforms. Inside their portfolios, students may showcase their digital writing in forms such as blogs, wikis, and digital stories or iMovie videos that they create from their essays. They may incorporate multimodal texts such as YouTube videos or podcasts and write scripts for these products. These portfolio collections can become public artifacts that other students may read and comment on, leading to peer feedback, self-assessment, editing, revision, and affirmation. College writing instructors who have incorporated e-portfolios (e.g., Klages & Clark, 2009) describe the benefits of doing so. They primarily report increases in students’ ability to understand authorship and to move from private to public writing. Their students learned to value ideas as they combined digital imagery with prose. They learned to read multimodal and hybrid forms of text and to make intertextual ties, both 21st-century reading skills and abilities. Individuals learned to address issues of audience and voice and to understand the value of revision in the writing process. Sharing archived work allows readers to react to or extend a text, and peers can assist each other as readers and writers.
Weblogs

Weblogs (known as blogs or web logs) are websites consisting of a series of entries or posts in reverse chronological order, with the most recent appearing first, that enable collaborative meaning making through writing. Developmental literacy instructors have discovered that blogs enable peer review and revision and facilitate learning the process of writing (e.g., Gerda, 2012; Klages &
Social Media
Clark, 2009). For example, in an English as a Second Language course, college students used blogs as online portfolios where they shared their experiences and posted writing assignments based on their classroom discussions and lessons (Gerda, 2012). In another course in basic writing development at the college level, students’ blogs enabled them to write about topics that were important to them, contextualize their writing, realize the importance of revision, and shift from private to public writing by including links to their blogs in their e-portfolios (Klages & Clark, 2009). Blogging helped students become confident writers as they transformed their relationship to writing, gained a sense of authority as authors, and came to understand how and why writing could assist them in their academic journeys (Klages & Clark, 2009). Other researchers exploring the use of blogs in a college developmental reading course found additional positive outcomes (Hsu & Wang, 2011). Students who blogged had higher rates of retention from semester to semester than those who did not write blogs. Instructors reported that students in the blogging groups established a stronger sense of learning community through reading and writing and built better rapport with their peers than did students in the non-blogging groups. Students who wrote blogs used them to initiate topics on writing assignments and fostered supportive peer relationships. Students who did not feel comfortable expressing personal opinions in face-to-face discussions tended to discuss more controversial issues on blogs. Blogs were an equalizer that allowed students to express what they had comprehended and to extend discussion beyond the classroom.
Their instructors (who were simultaneously learning how to blog) were able to integrate multimedia into their students’ reading assignments, monitor students’ reading comprehension and organizational skills, and track students’ use of reading strategies through their blogs. Earlier research on blogs in developmental literacy classes was reviewed in the second edition of the Handbook of College Reading and Study Strategy Research (Caverly et al., 2009). These researchers found that although research on digital literacies in college classrooms was scarce, most of the extant research on college students’ classroom practices with digital literacies had been conducted on blogs. For example, Wang and Fang (2005) found that college students enrolled in a writing and rhetoric course considered cooperative blogging groups beneficial to their development as writers. Nicholson, Caverly, and Battle (2007) used blogging to develop developmental students’ critical thinking skills, noting that students perceived that blogs provided an audience beyond the professor, were an effective tool for communication with educational and learning benefits, and allowed them to develop and demonstrate critical thinking in the analysis and presentation of their ideas.
Social Networking Sites

Despite widespread use of social networking, little is known about the benefits of this type of social media in postsecondary education for learning, student engagement, academic performance, or educational outcomes (Davis, Deil-Amen, Rios-Aguilar, & Canche, 2012). Researchers have not yet developed a line of inquiry on how social networking sites can be used to promote college students’ success and persistence. Yet social networking sites are known for their ability to facilitate interaction, collaboration, sharing, and producing, providing new and motivating ways of interacting and problem-solving (Shirky, 2010). Scholars have speculated that social networking may offer opportunities for faculty and students to build community and better enable students to gain a sense of belonging to, identity with, and investment in college. Social networking also has the potential to enhance learning and extend classroom discussions (Davis et al., 2012). Of all the social networking sites, Facebook is considered one of the most powerful platforms (Davis et al., 2012) and remains the most popular social networking platform among
higher-education students (Lenhart, 2015). Yet students tend to be more accepting of the possibilities of using Facebook as an educational tool than are their college instructors (Perez, Araiza, & Doerfer, 2013). It has been posited that students may see faculty members’ use of Facebook as an attempt to foster working relationships with students, which could have positive effects on students’ academic outcomes and enhance faculty members’ credibility with students by signifying their understanding of contemporary youth culture (Mazer, Murphy, & Simonds, 2007). Since developmental literacy programs may be most effective when tied to students’ college courses (Holschuh & Paulson, 2013), social networking media may be useful in supporting content knowledge acquisition and application and in developing critical thinking and reading skills. To assist in this goal, professional organizations and providers in specific disciplines, such as science, maintain Facebook pages (e.g., the National Science Foundation, the National Oceanic and Atmospheric Administration, and the National Aeronautics and Space Administration) devoted to sharing content knowledge. These professional agencies’ social media present multimodal texts of photographs, visuals, and videos. Their Facebook pages invite visitors to interact by asking questions, posting comments, and reposting to visitors’ own personal pages. In doing so, students may stimulate discussion within their personal social-mediated public networks. Rather than passively reading a text, visitors to social networking sites can take action with their new-found knowledge. Readers develop 21st-century literacy skills by making intertextual ties between visuals and texts and by creating new texts of their own in response. In addition, dedicated Facebook pages (www.edusocial/info) are devoted to helping students use Facebook as a research tool.
Students may develop practical skills and abilities of critical analysis of resources, effective online communication, and filtering and deciphering information (Kabilian, Ahmad, & Abidin, 2010). College faculty and students who have used Facebook and other social networking platforms in their teaching have reported their educational benefits. Social networking embedded in instruction has been found to blend informal learning into formal learning, to break the limitations of course management systems, to enable innovative and collaborative interactions, to connect textbook knowledge to real-world problems, and to facilitate personalized learning (Chen & Bryer, 2012). Social media has motivated reluctant readers and writers to engage with digital texts and network with others. College students tend to prefer holding discussions on Facebook rather than through their university course management systems, such as Blackboard, which students perceive as not user-friendly (Barczyk & Duncan, 2013). Facebook has made it easier to combine teaching and learning with social interactions, has provided instant access to instructors, and has allowed students to engage with one another (Bosch, 2009). Female students have tended to find Facebook gives them more opportunities to engage, to make connections, and to share information (Goudreau, 2010).
Virtual Worlds

A virtual world (also known as a three-dimensional environment) is an online computer-simulated environment where multiple users interact in shared space in real time by creating motional characters called avatars (Guzzetti & Stokrocki, 2013). Virtual worlds have been used in college and university instruction for role-playing activities, virtual tours, research, collaboration, and communication (Muir-Cochrane, Basu, Jacobson, & Larson, 2013). Many colleges have used virtual worlds for students to author, discuss, and share materials with each other (Oh & Nussli, 2014). Although other virtual worlds have been used in instruction, like Active Worlds (www.
activeworlds.com), Second Life (www.secondlife.com) is one of the most popular virtual worlds for higher education (Oh & Nussli, 2014) with an active community of global educators creating and using sites within it for purposes ranging from historical simulations to second language learning. Virtual worlds like Second Life offer opportunities for immersive and experiential learning, which has been found to be an effective approach for developmental learners (Navarro, 2007). Virtual worlds foster increased motivation and engagement, improved contextualization of learning, and rich collaborative learning compared to 2-D alternatives (Dalgarno & Lee, 2010). Community college, university, and K-12 faculty enrolled in a course for teaching and learning in virtual worlds identified the advantages of virtual worlds for learning as promoting new insights and critical reflection, creating community, assuming identities of a discipline, practicing literacy and English language skills in a safe place, accommodating diverse learning styles, promoting collaborative learning, and offering opportunities for creating (Guzzetti & Stokrocki, 2013).
Implications for Practice: Adopting New Perspectives

In using social media, such as virtual worlds, social networking sites, weblogs, digital storytelling, and e-portfolios, for teaching and learning, developmental skills instructors may need to shift their perspectives: to change not only how they perceive social media texts but also how they view the teaching and learning process. Researchers have argued for instructing students to become critical users of digital media (Snyder & Bulfin, 2007). To advance this agenda, educators may develop critical literacy activities. These activities advance students’ ability to deconstruct social media in terms of access, privacy, and equity issues that appear in online discussions, as well as directly foster readers’ critical questioning of authority in text sources. Students will need to be taught to critically question social media by asking questions like “Whose ideas get promoted?” and “Who is being marginalized in this discussion?” They will need to question a text’s authority when reading it by asking questions such as “Who is the author of this text?” and “How reliable is the source?” Students will need to examine a text to determine whether it originates from a government agency, a university, or a personal or commercial author; to decide which source would be more credible, have biases, or be written from a position of authority; and to look for evidence supporting its positions. Individuals will need to learn to interrogate digital text for its accuracy and reliability. Faculty will also need to shift their perspectives on the teaching-learning process and their roles within it. Instructors will need to become accepting of students as cocreators of content and knowledge. By doing so, the classroom can become a community of learners where the instructor becomes a learner along with students.
The role of the instructor might then become as much that of a facilitator as of a subject-matter expert (Chen & Bryer, 2012). This will require moving away from the view of the instructor as the authority who dispenses knowledge and of students as mere receivers of that knowledge.
Implications for Future Research: Advancing a Social Media Agenda

Educators often still view social media in its many forms with skepticism, and its affordances for teaching and learning remain poorly understood. There is a paucity of research that formally explores social media with college students, particularly those enrolled in developmental literacy classes. Therefore, we join other researchers in calling for studies with objective measures of how social media impacts reading and writing outcomes and students’ performance. Such investigations would provide greater insight into the value of social media in advancing literacy learning
(Barczyk & Duncan, 2013). It is our hope that future scholars will undertake this research agenda and contribute to the knowledge base on the pedagogical value of participatory media for advancing students’ 21st-century literate skills and abilities.
References and Suggested Readings (*)

Barczyk, C. G., & Duncan, D. G. (2013). Facebook in higher education courses: An analysis of student attitudes, community of practice and classroom community. International Business and Management, 8(1), 1–11.
Bidwell, A. (2015, February). STEM workforce no more diverse than 14 years ago. U.S. News & World Report. Retrieved from www.usnews.com/news/stem-solutions/articles/2015/02/24/stem-workforce-no-more-diverse-than-14-years-ago
Bosch, T. E. (2009). Using online social networking for teaching and learning: Facebook use at the University of Cape Town. Communication, 35(2), 185–200.
Bruns, A. (2006). Towards produsage: Futures for user-led content production. In I. Sudweeks, H. Hrachovec, & C. Ess (Eds.), Proceedings: Cultural attitudes towards communication and technology (pp. 275–284). Tartu, Estonia. Retrieved from http://eprints.qut.edu.au/4863/1/4863_1.pdf
Bryer, T. A., & Zavattaro, S. (2011). Social media and public administration: Theoretical dimensions and introduction to the symposium. Administrative Theory & Praxis, 33(3), 325–340.
Camera, L. (2015, October 21). Women still underrepresented in STEM fields. U.S. News & World Report. Retrieved from www.usnews.com/news/articles/2015/10/21/women-still-underrepresented-in-stem-fields
*Caverly, D. C., Peterson, C. L., Delaney, C., & Starks-Martin, G. A. (2009). Technology integration. In R. F. Flippo & D. C. Caverly (Eds.), Handbook of college reading and study strategy research (2nd ed., pp. 314–350). Florence, NY: Routledge.
Chen, B., & Bryer, T. (2012). Investigating instructional strategies of using social media in formal and informal learning. International Review of Research in Open and Distributed Learning, 13(1), 87–104.
Ching, C., Basham, J., & Jang, E. (2005). The legacy of the digital divide. Urban Education, 40(4), 394–411.
*Clark, J. E. (2010). The digital imperative: Making the case for a 21st century pedagogy. Computers and Composition, 27, 27–35.
Dalgarno, B., & Lee, M. J. W. (2010). What are the learning affordances of 3-D virtual environments? British Journal of Educational Technology, 41(1), 10–12.
Davis, A., & Foley, L. (2016). Digital storytelling. In B. Guzzetti & M. Lesley (Eds.), Handbook of research on the societal impact of digital media (pp. 317–342). Hershey, PA: IGI Global.
*Davis, C. H. F., Deil-Amen, R., Rios-Aguilar, C. R., & Canche, M. S. G. (2012). Social media in higher education: A literature review and research directions. Tucson, AZ: The Center for the Study of Higher Education at the University of Arizona and Claremont University. Retrieved from http://works.bepress.com/hfdavis/2/
Faculty Survey of Student Engagement. (2010, July 25). Professors’ use of technology in teaching. The Chronicle of Higher Education. Retrieved from http://chronicle.com/articles/Professors-Useof/123682/?sid=wc&utm_source=we&utm_medium-en
Fox, S., & Livingston, G. (2007). Latinos online: Pew internet and American life project. Retrieved from http://pewinternet.org/pdfs/latinos_online_march_14_2007.pdf
Gee, J. P. (2003). What video games have to teach us about learning and literacy. New York, NY: Palgrave Macmillan.
Gerda, D. (2012). The dynamics of peer feedback in an ESL classroom. The Journal of Teaching English with Technology, 4, 16–30.
Goode, J. (2010). The digital identity divide: How technology knowledge impacts college students. New Media & Society, 12(3), 497–513.
Goodfellow, R. (2011). Literacy, literacies and the digital in higher education. Teaching in Higher Education, 16(1), 131–144.
Goudreau, J. (2010). What men and women are doing on Facebook. Retrieved from www.forbes.com/2010/04/26/popular-social-networking-sites-forbes-woman-time-facebook-twitter.html#48321e398161
Guzzetti, B., & Stokrocki, M. (2013). Teaching and learning in a virtual world. E-Learning and Digital Media, 18(3), 242–259.
Holschuh, J. P., & Paulson, E. J. (2013). The terrain of college developmental reading. The College Reading & Learning Association. Retrieved from http://crla.net/index.php/publications/crla-white-papers
Horrigan, J. D. (2006). Home broadband adoption 2006. Retrieved from www.pewinternet.org/pdfs/PIP_Broadband_trends2006.pdf
Hsu, H., & Wang, S. (2011). The impact of using blogs on college students’ reading comprehension and learning motivation. Literacy Research and Instruction, 50, 68–88.
Ihnatko, A. (1996). Cyberspeak: An online dictionary. New York, NY: Random House.
Jenkins, H. (2006). Convergence culture: Where old and new media collide. New York, NY: New York University Press.
Jones, B. R., & Flannigan, S. L. (2006). Connecting the digital dots: Literacy of the 21st century. Educause Review. Retrieved from http://er.educause.edu/articles/2006/1/connecting-the-digital-dots-literacy-of-the-21st-century
Kabilian, M. K., Ahmad, N., & Abidin, M. J. A. (2010). Facebook: An online environment for learning of English in institutions of higher education? Internet and Higher Education, 13, 179–187.
*Klages, M. A., & Clark, J. E. (2009). New worlds of errors and expectations: Basic writers and digital assumptions. Journal of Basic Writing, 28(1), 32–44.
Knobel, M., & Lankshear, C. (Eds.). (2007). A new literacies sampler. New York, NY: Peter Lang.
Kress, G. (2003). Literacy in the new media age. London, UK: Routledge.
Lankshear, C., & Knobel, M. (Eds.). (2008). Digital literacies: Concepts, policies and practices. New York, NY: Peter Lang.
Lankshear, C., & Knobel, M. (2011). New literacies: Everyday practices and literacy learning. New York, NY: Open University Press.
Lazorchak, B. (2012, May). Telling tales: Joe Lambert from the Center for Digital Storytelling. Retrieved from https://blogs.loc.gov/thesignal/2012/05/telling-tales-joe-lambert-from-the-center-for-digital-storytelling/
Lea, M. R., & Jones, S. (2011).
Digital literacies in higher education: Exploring textual and technological practice. Studies in Higher Education, 36(4), 377–393.
Lenhart, A. (2015). Teens, social media and technology overview, 2015. Washington, DC: Pew Internet Research Center.
Magnifico, A. (2005). Science, literacy and the Internet? Epistemic games building the future of education. Retrieved from http://epistemicgames.org/?p=463
Mazer, J. P., Murphy, R. E., & Simonds, C. S. (2007). “I’ll see you on Facebook”: The effects of computer-mediated teacher self-disclosure on student motivation, affective learning, and classroom climate. Communication Education, 56(1), 1–17.
Muir-Cochrane, E., Basu, A., Jacobson, M., & Larson, I. (2013). Virtual worlds in Australian and New Zealand higher education: Remembering the past, understanding the present and imagining the future. Retrieved from http://eprints.qut.edu.au/64096
National Council of Teachers of English. (1998–2017). The NCTE definition of 21st century literacies. Retrieved from www.ncte.org/positions/statements/21stcentdefinition
Navarro, D. J. (2007). Digital Bridge Academy: Program overview. Watsonville, CA: Cabrillo College.
Nicholson, S. A., Caverly, D. J., & Battle, A. (2007). Using blogs to foster critical thinking for underprepared college students. Paper presented at the annual meeting of the College Reading Association.
Oh, K., & Nussli, N. (2014). Teacher training in the use of a three-dimensional immersive virtual world: Building understanding through first-hand experience. Journal of Teaching and Learning with Technology, 3(1), 31–58.
Perez, T., Araiza, M., & Doerfer, C. (2013). Using Facebook for learning: A case study on the perception of students in higher education. Procedia - Social and Behavioral Sciences, 4th International Conference on New Horizons in Education, 106(10), 3259–3267.
Perrin, A. (2015). Social media usage: 2005–2015. Washington, DC: Pew Research Center.
Retrieved from www.pewinternet.org/2015/10/08/social-networking-usage-2005-2015/
*Pew Research Center. (2017). Internet/broadband fact sheet. Washington, DC: Pew Research Center. Retrieved from www.pewinternet.org/fact-sheet/internet-broadband/
Prensky, M. (2001). Digital natives, digital immigrants. On the Horizon, 9(5), 1–6.
Relles, S. R., & Tierney, W. G. (2016). Understanding the writing habits of today’s students: Technology and college readiness. The Journal of Higher Education, 84(4), 477–505.
Selfe, C. L., & Hawisher, G. E. (2004). Literate lives in the information age: Narratives of literacy from the United States. Mahwah, NJ: Erlbaum.
Shirky, C. (2010). Cognitive surplus. New York, NY: Penguin Press.
Smith, S., & Caruso, J. (2010). ECAR study of undergraduate students and information technology (Research Study, Vol. 6). Boulder, CO: EDUCAUSE Center for Applied Research. Retrieved from www.educause.edu/Resources/ECARStudyofUndergraduateStuden/217333
Snyder, I., & Bulfin, N. (2007). Digital literacy: What it means for arts education. In L. Bresler (Ed.), International handbook of research in arts education (pp. 1297–1310). Dordrecht, The Netherlands: Springer.
Van Someren, Q. (2008). The new media literacies. Retrieved from https://youtu.be/pEHcGAsnBZE
Vie, S. (2008). Digital Divide 2.0: Generation “M” and online social networking sites in the composition classroom. Computers and Composition, 25(1), 9–23.
Wang, J., & Fang, Y. (2005). Benefits of cooperative learning in weblog networks. ERIC Document Report No. ED490815.
Warfield, A. (2016, January 17). Six reasons you should be doing digital storytelling with your students. Getting Smart. Retrieved from www.gettingsmart.com/2016/01/6-reasons-you-should-be-doing-digitalstorytelling-with-your-students/
*Yancey, K. B. (2009). Writing in the 21st century. National Council of Teachers of English. Retrieved from www.ncte.org/library/NCTEFiles/Press/Yancey_final.pdf
Part II
Reading Strategies

Sonya L. Armstrong
Texas State University
This section of the Handbook of College Reading and Study Strategy Research explores current thinking about instructional strategies, or approaches, to college reading instruction that are both theoretically sound and evidence-based. In Chapter 6, Disciplinary Reading, authors Thomas W. Bean, Kristen Gregory, and Judith Dunkerly-Bean discuss how a disciplinary literacies perspective can not only impact college and developmental reading curricula but also nudge impactful collaborations with faculty across campus, which can then inform the work of reading instruction. Next, in Chapter 7, Michelle Andersen Francis and Michele L. Simpson discuss Vocabulary, an essential, yet often misunderstood, component within college reading instruction. Jodi Patrick Holschuh and Jodi Lampi delineate the key influences on Comprehension in Chapter 8 and argue for the inclusion of reading strategies that are generative in nature and that emphasize the underlying cognitive and metacognitive aspects rather than solely focusing on the procedural ones. Following that, in Chapter 9, Integrated Reading and Writing, by Sonya L. Armstrong, Jeanine L. Williams, and Norman A. Stahl, the authors trace the roots of the current reemergence of the Integrated Reading and Writing (IRW) movement and explore several popular models underpinning that pedagogical approach. Finally, in Chapter 10, Janna Jackson Kellinger challenges traditional understandings of “texts” in reading courses by discussing the possibilities for bridging the worlds of Gaming and College Reading.
6 Disciplinary Reading

Thomas W. Bean, Kristen Gregory, and Judith Dunkerly-Bean
Old Dominion University
College Reading and Disciplinary Literacy

In this chapter, we briefly touch on the history of the college reading curriculum and situate this history within the scholar academic ideology (Schiro, 2013). In many ways, our existing curriculum in the disciplines harks back to the late 1800s and what curriculum theorist Schiro (2013) terms “the scholar academic ideology.” The scholar academic ideology had its roots in the 1893 Committee of Ten report, which resulted in a required secondary curriculum consisting of four years of English, mathematics, and foreign language, and three years of science. At the college level, this translates to the general education courses taken by freshmen and sophomores before selecting a major. The scholar academic ideology is located in specific disciplines in colleges and universities, and encompasses fields like history, mathematics, and science, in which a hierarchy places professors at the top and students at the bottom (Schiro, 2013). Schiro notes,

As a community, each discipline has a tradition and a history; a heritage of literature and artifacts; a specialized language, grammar and logic for expression of ideas; a communications network; a valuation and affective stance; and territorial possession of a particular set of concepts, beliefs, and knowledge. (p. 27)

The academic ideology foreshadowed current debates and discussions that point to the natural tensions that arise around changing how college faculty and students teach and learn in the disciplines. Toward that end, our theoretical lens relates to change theory (Fullan, 2015) and the challenges of college general education faculty resistance to facilitating students’ content area reading and writing (Armstrong & Stahl, 2017; Stahl & King, 2018).
College Reading

Efforts to support college students’ reading and studying date to 1630 and the introduction of developmental education (Holschuh & Paulson, 2013; Stahl & King, 2018). In today’s contemporary context, “increasing numbers of first-year college students—particularly community-college students—are being placed into one or more developmental course prior to beginning their college-level courses” (Armstrong & Stahl, 2017, p. 100).
Some estimates suggest that as many as two-thirds of community-college students may be underprepared for college-level study (Xu, 2016). Students are advised into multitiered developmental reading and writing classes, sometimes taking as many as three courses before moving into credit-bearing, college-level English that counts toward graduation. Xu (2016) analyzed a data set of 46,000 community-college students across 23 campuses in the state of Virginia. Many of these students were African American and from rural, low-income households. They were placed into low-level developmental classes with significant negative effects, including discouragement and the likelihood of dropping out of college. Following on the heels of this study, developmental classes were realigned to combine reading and writing into a single, shorter sequence aligned to college-level English. Xu (2016) noted that similar curriculum models in other states show a positive impact on students’ retention and graduation rates. Some of the generic skills deemed to be helpful for college success in reading include the following (Springer, Wilson, & Dole, 2015):

• The ability to read complex texts independently
• The ability to read complex texts using strategic tools, such as note-taking
• The ability to engage in multiple readings of text passages to ensure comprehension
• The ability to synthesize information across multiple texts.
These generic foundational reading skills are important, as it is estimated that a large percentage of disciplinary reading in college relies on independent reading and repeated readings to grasp academic language and concepts (Springer et al., 2015). Disciplinary literacy success depends, to a great extent, on the degree to which high schools develop students’ requisite skills in the four elements cited by Springer et al. (2015), on teacher modeling of these practices, and on the metadiscursive properties of disciplinary practices (Moje, 2015). For example, reading in text-intensive fields like history rests on both independent reading skills and discipline-specific elements, including evaluating the truth-value of historical sources (Dunkerly-Bean & Bean, 2016).
Disciplinary Literacy

In the previous edition of the Handbook, the chapter on “Academic Literacies” defined disciplinary literacy as “the ability, at the onset of college-level work to operate within texts and genres of academic traditions” (Pawan & Honeyford, 2009). Foreshadowing much of the current research in disciplinary literacy (Armstrong, Stahl, & Kantner, 2015; Dunkerly-Bean & Bean, 2016; Moje, 2015), these researchers delineated the rhetorical differences across texts in the sciences and social sciences. For example, chemistry classes call for precision; memorization; and strict adherence to procedures, safety issues, and accurate measurement. Reading history texts requires attention to the source of the information and its likely accuracy in describing and commenting on historical events (Shanahan & Shanahan, 2014). The challenge for new college students reading introductory texts in the disciplines includes heavy compression of content and substantial concept load (Pawan & Honeyford, 2009). In addition, content experts and related texts largely ignore the possible contribution of students’ prior knowledge. Recent research in disciplinary literacy focuses on how expert readers, like college and university faculty, navigate history, science, literature, and other texts to tease out the particularities of reading and writing in each discipline (Dunkerly-Bean & Bean, 2016; Moje, 2015; Shanahan & Shanahan, 2014).
Disciplinary Reading
In contrast to this belief in the primacy of the disciplines, some college reading instruction continues to rely on outdated views centering developmental reading on word attack strategies and discrete skill building isolated from the disciplines in which those skills could be applied (Holschuh & Paulson, 2013). Developmental reading texts featuring generic study strategies persist, despite a growing interest in specific disciplinary literacy practices. These skill-drill approaches continue despite contemporary views of literacy as a wide-ranging social practice that should prepare students for complex text reading across multiple disciplines. College academic discourse varies in terms of technical vocabulary, rhetorical structures, symbol systems, and metadiscursive properties in science, mathematics, English, music, and so on. As Holschuh and Paulson (2013) note, “The goal of instruction is not to fill a deficit, but to teach new literacy strategies that can accommodate the increase in literacy demands in unfamiliar, specialized discourse materials” (p. 7). Thus, knowing how to tackle a history text versus a science text would be central in creating a transferable disciplinary literacy-based curriculum. Holschuh and Paulson (2013) therefore recommended using real-world, discipline-based texts rather than the short paragraphs that often typify content in developmental reading classes.

Indeed, a detailed comparative study of Developmental Reading (DR) and General Education (GE) course materials and expectations in a large community college illustrates the disparity between GE content and DR expectations (Armstrong et al., 2015). The study examined three areas in GE and DR: text practices, faculty perceptions, and student perspectives. Readability results showed that DR texts were well below the 12th-grade level, with a range of 6.5–9.5 grade equivalence, and that text passages were typically short practice exercises and excerpts from content area material.
In contrast, GE texts were typically traditional textbooks, scoring at or above the 12th-grade readability level. The researchers found the texts to be “vastly different in DR courses and GE courses” (Armstrong et al., 2015, p. 20). The majority of GE faculty relied on a single textbook and held high expectations that students would read the assigned texts independently. The researchers noted,

There did not appear to be any instruction on how to navigate texts or extrapolate text content occurring in the GE courses, yet the GE faculty called for DR faculty to do more of this action; therefore, it also seems that the expectation is for students to be fully competent for the specialized types of text practices (often discipline driven) upon entry into the GE courses. (Armstrong et al., 2015, p. 50)

Indeed, the GE faculty, while largely unaware of the specific details of DR classes, recommended that DR faculty engage students in lengthier readings and include scholarly texts in instruction. Yet neither DR nor GE faculty were communicating with or observing each other’s classes to gain a better understanding of how to align their curricula. Interestingly, the students’ perceptions echoed this disconnect and, in some cases, called for greater rigor in the DR context, with one respondent noting, “The book we’re reading right now feels like something I read in like fifth grade” (Armstrong et al., 2015, p. 47). As a first step in bridging this gap, the researchers recommended that DR and GE faculty meet and begin to address this alignment issue in both areas. Toward that end, these researchers suggested a number of creative approaches that might bridge this divide (Armstrong & Stahl, 2017). For example, DR faculty could observe GE courses, or DR classes could be paired with GE courses to help students understand and use disciplinary-based academic terminology by reading in those forms of text.
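The grade-equivalence scores discussed above come from readability formulas. As an illustrative sketch only (the report does not specify which formula Armstrong et al. used, and the function names here are our own), the widely used Flesch-Kincaid grade level combines average sentence length with average syllables per word:

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable count: runs of vowels, minus a common silent 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text: str) -> float:
    """FK grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words)) - 15.59)

# A DR-style short passage versus denser, GE-style academic prose:
simple = "The cat sat on the mat. It was a big cat."
dense = ("Disciplinary literacy practices require sustained engagement with "
         "specialized terminology and rhetorical conventions particular to each field.")
print(flesch_kincaid_grade(simple) < flesch_kincaid_grade(dense))  # → True
```

The formula makes visible why short, simple-sentence practice passages score many grade levels below a traditional textbook: both of its terms reward longer sentences and longer words.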
In addition, DR and GE faculty could collaborate to create YouTube videos that address learning strategies or specific GE courses. Of course, some form of incentive, particularly grant-funded reassigned time to facilitate curriculum alignment, would help this collaboration.
Thomas W. Bean, Kristen Gregory, and Judith Dunkerly-Bean
Approaches to Disciplinary Literacy in College

While disciplinary literacy has developed some traction at the K-12 level, “it has yet to catch fire in the content fields of postsecondary education or in developmental reading circles” (Armstrong & Stahl, 2017, p. 114). Moreover, the demarcation of one discipline from another increasingly blurs as interdisciplinary groups come together to tackle problems like global climate change, which suggests that even using discipline-based texts may not be adequate. It may be that we should be teaching college students how to read across multiple integrated disciplines, like mathematics and science, rather than adhering to rigid boundaries. Noddings (2013) has argued for moving away from hierarchical power structures that privilege one discipline (e.g., chemistry) over another (e.g., health). Rather, students need to learn to read to make intertextual connections across related disciplines. Ultimately, in a complex global society confronting climate change and a host of other daunting problems, being able to work creatively across disciplines in an interdisciplinary fashion will be increasingly important (Dunkerly-Bean & Bean, 2016).

For example, in the Massachusetts Institute of Technology (MIT) Glass Lab, three faculty members from disparate disciplines meet each week to make large, complex glass art sculptures (Crawford, 2015). The three-member team consists of the director of the lab, a computer science faculty member, and an artist in residence. They collaborate across these disciplines to apply Computer Assisted Design (CAD) elements and creative expertise in glass blowing to create large, intricate artworks. The team notes that

MIT students often want to reduce the process to a set of formulas describing heat transfer, viscosity, and the like. But the morphing of molten glass—its drooping, turning, and solidifying—is something you have to feel. Only by actually manipulating the glass does it convey its current state and likely trajectory. (Crawford, 2015, p. 132)

In essence, this interdisciplinary team functions much like a jazz band. The director, Peter Houk, argues that this process “reminds me of how Miles Davis worked with his bands—some structure to work with, but not too much information and then the rest improvisation within a fairly structured system” (p. 133). To see this process in action, including the CAD elements, search YouTube for “Peter Houk-Meet the Artist.” The point here is that by combining the features of various disciplines in collaborative problem-solving, college faculty and students can move beyond the limits of the individual self. Otherwise, in our current disciplinary silos, students quickly conclude that course content seems divorced from any real-world application. For example, one can learn more about mathematics in the process of designing a model of a bridge from scratch than by doing one more word problem isolated from any production or value.
Changing the Status Quo

As college faculty grapple with the particularities of reading and writing in their disciplines, dispositions and habits of mind are likely to produce the resistance to change well known to content area reading and writing advocates at the college and secondary levels (Armstrong et al., 2015; O’Brien & Stewart, 1992). Indeed, the well-worn phrase “every teacher a teacher of reading” has engendered a good deal of ire in the disciplines (Alvermann & Moje, 2013). Alvermann and Moje note that this platitude tends to ebb and flow depending on the level of perceived crisis in literacy instruction. For college faculty in history and other content areas, their professional identity
is directly linked to the metadiscursive ways of communicating in their chosen field. Thus, in history, discerning the accuracy and truth-value of sources (or “sourcing”) is critical to being able to trust an historical account. Moreover, this important process may be best learned within the discipline (Heller, 2010; Moje, 2015). Moje (2015) defines disciplines in a fashion that may be more appealing to college faculty in that she moves beyond narrow conceptions of reading and writing. She notes that “Disciplines are, in effect, domains or cultures in which certain kinds of texts are read and written for certain purposes and thus require certain kinds of literacy practice” (p. 255). In Moje’s view, the best way to introduce students to the nuances of the various disciplines is to use a four-item heuristic:

• Engage students in using the actual practices of the discipline (e.g., sourcing in history, field notes in science)
• Guide students’ entry into disciplinary practices for novices
• Engage students in attending to words and ways with words in a discipline (e.g., academic technical vocabulary)
• Center students’ learning and attention in inquiry and problem-solving.
Rather than being abandoned, generic reading and writing strategies (e.g., graphic organizers) can serve as helpful organizational approaches designed to aid students in extracting information from print and nonprint materials in the disciplines. Combined with specific disciplinary practices, these long-standing generic approaches become complementary practices (Dunkerly-Bean & Bean, 2016). In order to change the status quo of college disciplinary isolation, professional development incentives and the hard work of curriculum redesign need to be evaluated in light of change theory (Fullan, 2015; Hall & Hord, 2015).
Change Theory

Top-down efforts to enact change run counter to research on the change process, which offers a more complex, nuanced understanding of the factors that impede or facilitate change. For example, change theorists argue that the wrong way to go about changing practices in an organization is to demand punitive external accountability (e.g., high-stakes assessments) and reward competition (Fullan & Quinn, 2016). Indeed, if faculty feel like they are reeling under the weight of too many mandates and disparate projects and initiatives, they are likely to experience cognitive and emotional overload. Rather, strong change leaders participate with their faculty by leading from the middle versus issuing top-down mandates from above. These change leaders seek to create a climate that fosters dreaming, creativity, and collaboration with support from social networks and collegial cooperation. This process takes time, and incentives, like reassigned time from a full teaching load, can help. If the goal is to develop students’ deep thinking, then one-shot workshops and brief professional development sessions are unlikely to have an impact on faculty dispositions toward incorporating disciplinary literacy practices (Gregory, Bol, Bean, & Perez, in press). In both college developmental reading and general education courses, tackling real-world problems and big questions can go a long way toward instituting disciplinary literacy practices. For example, college students taking developmental reading can read to address issues of local and global interest, like climate change. There are a number of texts aimed at understanding and thinking about global climate change (e.g., Romm, 2016). To avoid natural faculty resistance to the well-worn trope “every teacher a teacher of reading,” centering disciplinary literacy on problem-based learning activities relevant to becoming an
“insider” in a field may be preferable to adding more onto already busy disciplinary faculty. As noted researchers in change theory argue, “For the first time in history the mark of an educated person is that of a doer—they are impatient with lack of action. Moreover, they are synergistic” (Fullan & Scott, 2014). Thus, cooperative membership in a community of scholars (i.e., college faculty) calls for finding some common ground (e.g., mentoring students in an apprenticeship aimed at becoming “insiders” in a discipline), thinking creatively, and getting outside oneself (Crawford, 2015). Clearly, change is difficult and, as we noted earlier in this chapter, much depends upon what dispositions and prior knowledge college students bring to developmental reading and general education classes in the disciplines. Exclusive reliance on the disciplines to codify what counts as knowledge may ignore the interests and attitudes of learners (Deng, 2013). Local funds of knowledge, aboriginal knowledge, folk knowledge, and craft knowledge are just a few of the cross-cultural dimensions diverse student bodies bring to the college campus (Aikenhead, 2014).
Implications for Practice

As discussed earlier, large numbers of community college and college students may be underprepared for the coursework they encounter. Students enroll in developmental reading and English courses and can take advantage of tutoring support in learning assistance centers on campus. However, both typically provide general literacy strategies rather than strategies specific to the disciplines. If discipline faculty teach in an institution where students have inconsistent literacy support from learning assistance centers, or if they have large numbers of students who are struggling with discipline-specific literacy skills, they may feel forced to provide literacy support for their students either within or outside of their class sessions. Given faculty’s limited perception of their role as literacy educators and their marginal self-efficacy with incorporating disciplinary literacy within their content courses (Gregory et al., in press), there is a risk that faculty will not be fully equipped to properly support their students. Community college and general education faculty typically teach large class loads and large numbers of students, which leaves limited time for professional development, especially in areas outside their specific content area. One solution to this dilemma is for faculty professional development programs to provide opportunities for discipline faculty to participate in professional learning communities where they can collaborate with other discipline faculty and literacy educators to develop their knowledge of disciplinary literacy and how it applies specifically to their courses. When general education faculty and developmental literacy faculty have space and time to collaborate, they may be able to change the status quo that separates these domains (Armstrong & Stahl, 2017).
This knowledge is necessary in order to redesign developmental reading and writing to better align with disciplinary discourse and, in turn, inform disciplinary faculty about specific and complementary approaches aimed at student success (Stahl & Armstrong, 2018). In their detailed look at college reading’s history and potential reformation, these researchers argue for change, noting,

This calls for a new mission for college reading experts that takes them away from the silo focused on traditional skills-oriented courses to a role of chief professional development specialists supporting the contextualization of reading and learning competencies in classes, the integration of the disciplinary literacy theories and practices in all classes across the institution, and the delivery of professional development (for both faculty and graduate teaching assistants) and literacy-oriented services across the academic community. (p. 58)
Throughout this collaboration, faculty in developmental reading and general education can begin to unpack their respective teaching philosophies and alter the status quo charted by Armstrong et al. (2015) in their detailed comparative study of developmental reading and general education curricula. Collaborative development between higher education disciplinary specialists, literacy educators, librarians, and student support center staff increases knowledge about the role literacy instruction can play within the content courses (Bergman, 2014; Jacobs, 2005). Jacobs (2005) stresses the importance of creating spaces for such collaboration so that all stakeholders can work together on neutral ground while sharing the leadership and responsibility for increasing student success. In a professional learning community, the various stakeholders can work together to determine how to embed discipline-specific literacy instruction into the curriculum and provide the faculty adequate training and resources in order to effectively make this curricular change.
Recommendations for Future Research

As proposed in this chapter, further investigation into faculty support for professional learning communities may offer a model of literacy as social practice in the disciplines (Gregory et al., in press; Moje, 2015). The well-established research in change theory shows that mandates to change without scaffolding and support lead to resistance and alienation (Fullan, 2015). By creating professional learning communities aimed at establishing a climate that respects disciplinary literacy elements across disciplines and a diverse student body, a more cosmopolitan approach to change may be possible. Otherwise, it is easy to remain mired in the past, and as philosopher David Hansen (2014) noted in a themed issue of Curriculum Inquiry on cosmopolitanism, “People necessarily speak from who or what they have become at the moment, their prejudices or presumptions take form through the course of socialization and life experience” (p. 9). By constructing professional learning communities, opportunities emerge to discuss, debate, and share ideas across diverse disciplinary areas. One of the ways in which this may be accomplished is through further research incorporating a cosmopolitan perspective that addresses local and global issues within discipline-specific or interdisciplinary approaches. Cosmopolitanism, as a disposition and philosophy, invites discussion, debate, and critique (Bean & Dunkerly-Bean, 2015). Cosmopolitanism refers to the condition of living at the intersection of the local and the global (Bean & Dunkerly-Bean, 2015). This theoretical stance challenges narrow views of literacy as situated within a single social, cultural, or national locale and as a stable, monolithic skill. The focus of mainstream literacy instruction on reading the word rather than the world, while important, narrowly defines what it means to be “literate” and ignores what it means to be a citizen in the world (Bean & Dunkerly-Bean, 2015; Freire, 2010).
Indeed, when literacy, especially in the disciplines, is viewed in this reified manner, the possibilities for interdisciplinary approaches fade as skills are privileged over context and cause for knowledge. Thus, in addition to the likely influence of change theory on a professional community of practice, attention to the diverse cosmopolitan condition of our college student body seems equally important. Curriculum and research based on cosmopolitan theory seek to examine the ways in which disciplinary literacies can contribute to the betterment of the individual and the larger community, particularly around big questions and issues (e.g., global warming, immigration). This position calls for an expanded perspective on what constitutes powerful sociocultural and ethical practices in the various disciplines. When local and global issues are important to students, this interest may well lead to innovative approaches to disciplinary literacies in the college classroom. Although not a mainstream framework in most colleges, a cosmopolitan approach needs further investigation to bring a sense of urgency and meaning to both individual instructors and the communities of practice in which they work.
Indeed, critical thinking, currently an integral part of the college curriculum in literacy, is one such area that could be investigated from a cosmopolitan theory perspective. By taking into account change theory and cosmopolitan theory, we may be able to move literacy out of the narrow, reified set of skill-based practices that have mired college reading in the past and move the field into an era that engages students both in reading the word and the world.
References and Suggested Readings (*)

Aikenhead, G. (2014). Enhancing school science with indigenous knowledge: What we know from teachers and research. Saskatoon, CA: Saskatoon Public Schools Division.
Alvermann, D. E., & Moje, E. B. (2013). Adolescent literacy instruction and the discourse of “Every Teacher a Teacher of Reading.” In R. B. Ruddell, N. Unrau, & D. Alvermann (Eds.), Theoretical models and processes of reading (6th ed., pp. 1072–1103). Newark, DE: International Reading Association.
Armstrong, S. L., & Stahl, N. A. (2017). Communication across the silos and borders: The culture of reading in a community college. Journal of College Reading and Learning, 47(2), 99–122.
Armstrong, S. L., Stahl, N. A., & Kantner, M. J. (2015, July). What constitutes ‘college ready’ for reading? An investigation of academic text readiness at one community college (Technical Report No. 1). DeKalb: Northern Illinois University Center for the Interdisciplinary Study of Language and Literacy.
Bean, T. W., & Dunkerly-Bean, J. M. (2015). Expanding conceptions of adolescent literacy research and practice: Cosmopolitan theory in educational contexts. Australian Journal of Language and Literacy, 38(1), 46–54.
Bergman, L. (2014). The research circle as a resource in challenging academics’ perceptions of how to support students’ literacy development in higher education. Canadian Journal of Action Research, 15(2), 3–20.
Crawford, M. B. (2015). The world beyond your head: On becoming an individual in an age of distraction. New York, NY: Farrar, Straus and Giroux.
Deng, Z. (2013). School subjects and academic disciplines. In A. Luke, A. Woods, & K. Weir (Eds.), Curriculum syllabus design and equity: A primer and model (pp. 40–73). New York, NY: Routledge.
*Dunkerly-Bean, J., & Bean, T. W. (2016). Missing the savoir for the connaissance: Disciplinary and content area literacy as regimes of truth. Journal of Literacy Research, 48(4), 448–475.
Freire, P. (2010). Pedagogy of the oppressed. New York, NY: Continuum.
Fullan, M. (2015). Freedom to change: Four strategies to put your inner drive into overdrive. San Francisco, CA: Jossey-Bass/Wiley.
Fullan, M., & Quinn, J. (2016). Coherence: The right drivers in action for schools, districts, and systems. Thousand Oaks, CA: Corwin/SAGE.
Fullan, M., & Scott, G. (2014, July). New pedagogies for deep learning: A global partnership. Seattle, WA: Collaborative Impact, SPC.
Gregory, K., Bol, L., Bean, T., & Perez, T. (in press). Community college discipline faculty’s attitudes and self-efficacy with literacy instruction in the disciplines. Journal of Behavioral and Social Sciences, 6.
Hall, G., & Hord, S. M. (2015). Implementing change: Patterns, principles, and potholes. Boston, MA: Pearson.
Hansen, D. (2014). Theme issue: Cosmopolitanism as cultural creativity: New modes of educational practice in globalizing times. Curriculum Inquiry, 44(1), 1–14.
Heller, R. (2010). In praise of amateurism: A friendly critique of Moje’s “Call for change” in secondary literacy. Journal of Adolescent & Adult Literacy, 54(4), 265–273.
*Holschuh, J. P., & Paulson, E. J. (2013, July). The terrain of college developmental reading. Executive summary and paper commissioned by the College Reading and Learning Association (CRLA). Retrieved from www.crla.net/images/whitepaper/TheTerrainofCollege91913.pdf
Jacobs, C. (2005). On being an insider on the outside: New spaces for integrating academic literacies. Teaching in Higher Education, 10(4), 475–487.
Moje, E. B. (2015). Doing and teaching disciplinary literacy with adolescent learners: A social and cultural enterprise. Harvard Educational Review, 85(2), 254–278.
Noddings, N. (2013). Education and democracy in the 21st century. New York, NY: Teachers College Press.
O’Brien, D. G., & Stewart, R. A. (1992). Resistance to content area reading instruction: Dimensions and solutions. In E. K. Dishner, T. W. Bean, J. E. Readence, & D. W. Moore (Eds.), Reading in the content areas: Improving classroom instruction (3rd ed., pp. 30–40). Dubuque, IA: Kendall/Hunt.
Pawan, F., & Honeyford, M. A. (2009). Academic literacy. In R. F. Flippo & D. C. Caverly (Eds.), Handbook of college reading and study strategy research (2nd ed., pp. 26–46). New York, NY: Routledge.
Romm, J. (2016). Climate change: What everyone needs to know. New York, NY: Oxford University Press.
Schiro, M. S. (2013). Curriculum theory: Conflicting visions and enduring concerns (2nd ed.). Thousand Oaks, CA: Sage.
Shanahan, T., & Shanahan, C. (2014). Teaching history and literacy. In K. A. Hinchman & H. K. Sheridan-Thomas (Eds.), Best practices in adolescent literacy instruction (2nd ed., pp. 232–248). New York, NY: The Guilford Press.
Springer, S. E., Wilson, T. J., & Dole, J. A. (2015). Ready or not: Recognizing and preparing college-ready students. Journal of Adolescent & Adult Literacy, 58(4), 299–307.
Stahl, N. A., & Armstrong, S. L. (2018). Re-claiming, re-inventing, and re-forming a field: The future of college reading. Journal of College Reading and Learning, 48(1), 47–66.
Stahl, N., & King, J. (2018). History. In R. F. Flippo & T. W. Bean (Eds.), Handbook of college reading and study strategy research (3rd ed., pp. 3–25). New York, NY: Routledge.
*Xu, D. (2016). Assistance or obstacle? The impact of different levels of English developmental education on underprepared students in community colleges. Educational Researcher, 45(9), 496–507.
7 Vocabulary

Michelle Andersen Francis, West Valley College
Michele L. Simpson, University of Georgia
Federal reports (e.g., Snow, 2002) have indicated that vocabulary knowledge is one of the five essential components of reading, and further research has indicated that understanding the components of words and using flexible strategies when reading can increase students’ success as readers (Cromley & Wills, 2016; Ebner & Ehri, 2016; Holschuh & Paulson, 2013). Therefore, at the college level, it should be our goal, as college reading professionals, to increase the breadth of our students’ vocabularies (i.e., the number of words for which they have a definition) as well as the depth and precision of their word knowledge. But the goal is much more than simply improving students’ word knowledge. Given that most college students are expected to read content textbooks packed with concepts and technical vocabulary that they need to understand fully if they are to learn, the relationship between vocabulary and comprehension becomes even more significant (Harmon, Hedrick, Wood, & Gress, 2005; Rupley, 2005; Schoerning, 2014; Shanahan & Shanahan, 2008; Willingham & Price, 2009). If too many general or technical words puzzle students, they will read in a halting manner, a behavior that compromises their reading fluency (Joshi, 2005). Moreover, when the processing demands for reading a textbook become elevated because of the vocabulary load, many students will have little, if any, cognitive energy left for thinking about key concepts or monitoring their understanding (Scott & Nagy, 2004). In sum, if college students are to succeed, they need an extensive vocabulary and a variety of strategies for understanding the words and language of an academic discipline. In order to assist their students, college reading professionals need to be aware of research-validated, effective approaches and strategies for vocabulary development. This chapter reviews the current research and theory related to vocabulary development and offers practical teaching and programmatic guidelines.
The first section examines the issues related to vocabulary development and instruction. The second section highlights research into the development of students’ academic vocabulary. In the third section, seven recommendations for effective vocabulary practices are outlined. The final section outlines future avenues for vocabulary research.
Vocabulary Development and Instruction

Prior to developing an approach for enhancing students’ word knowledge, college reading professionals should acknowledge the theoretical issues concerning vocabulary development. Possibly
the most important theoretical issue is what constitutes word knowledge. Closely related to this first issue is the troublesome methodological question of how to measure students’ word knowledge and vocabulary growth. The third theoretical issue addresses the role of students as they attempt to acquire vocabulary knowledge.
What Does It Mean to Know a Word?

The extant research on this question has been well documented and investigated over the last 50 years (e.g., Dale, 1965; Stahl, 1999), and it appears that such knowledge exists in degrees or on a continuum. Stahl (1999) expanded upon Dale’s (1965) idea of word knowledge levels by suggesting that students should have “full and flexible knowledge” of a given word, which he defines as knowledge that “involves an understanding of the core meaning of a word and how it changes in different contexts” (p. 25). Stahl’s (1999) definition lends support to the idea that the more exposure students have to an unknown word, the more opportunities they have to forge connections and interconnections between words. Therefore, when they acquire full and flexible knowledge of a partially known word, they are able to use it and identify it correctly in different contexts. Awareness of the different levels of word knowledge clarifies that the question is not whether a student knows a word but at what level the word is known. That is, a student can know many words but possess only a smattering of them at the full and flexible word knowledge level. If that is the case, the student may be at a disadvantage when required to use a variety of words in written or spoken forms. That is not to say that all encountered words must be known at the full and flexible level; many words can be known at the partial word knowledge level and still be useful for students. However, if instructors want to increase their students’ vocabulary knowledge so that it will be most beneficial to their comprehension and learning, it would be prudent to encourage full and flexible word knowledge levels (Baumann, Kame’enui, & Ash, 2003; Francis & Simpson, 2009; Rimbey, McKeown, Beck, & Sandora, 2016).
How Can Vocabulary Knowledge Be Measured?

The bulk of past research into vocabulary knowledge focused on vocabulary instruction, not on formats for evaluating and assessing vocabulary knowledge. College reading professionals, however, need to understand the options for measuring, in a valid and reliable manner, their students’ levels of vocabulary knowledge. Moreover, the type of assessment used to measure vocabulary knowledge should match the instructor’s philosophy regarding word knowledge (Baumann et al., 2003; Joshi, 2005). If this matching between philosophy, instruction, and test format does not occur, it is quite likely that students’ levels of vocabulary knowledge will be overestimated or masked in some way. For example, if students are taught vocabulary words using synonyms and contextual examples, they may perform poorly on a straight multiple-choice definition assessment of those words. A better measurement might be a test that asks students to create their own context for the targeted words. Formats that are more sensitive to the dimensions and levels of students’ vocabulary knowledge are needed. Although some attempts have been made in the past (e.g., Francis & Simpson, 2003; McKeown & Beck, 2004), these alternative formats have not been systematically researched or incorporated into everyday practice. In order to be effective, reading professionals must identify the word knowledge level they want students to acquire, use vocabulary strategies that will help students learn at that level, and then measure the level of learning with appropriate formats, such as sentence-generation tasks.
99
M. Andersen Francis and M. L. Simpson
Role of Students in Vocabulary Acquisition The third theoretical issue concerning vocabulary knowledge involves the students’ role in vocabulary acquisition. The extant research suggests that students who actively try to make sense of what they see and hear are those who learn more (Simpson, Stahl, & Francis, 2004; Winne & Jamieson-Noel, 2002; Zimmerman, 2002). The activity of the learner was described theoretically by Craik (1979) and Craik and Lockhart (1972), who proposed that deeper, more elaborate, and distinctive processing of stimuli results in better performance, all other things being equal. Reading strategy research with college learners supports this concept (e.g., Cromley & Wills, 2016; Holschuh, 2000; Nist & Simpson, 2000). For further elaboration of this topic, see the Francis and Simpson (2009) chapter from the Handbook. Stahl (1985) proposed a model that described the different and more elaborative processes involved when students learn new words, and Beck and McKeown (1983) created a comprehensive program of vocabulary research and development that involved students in a variety of generative processing activities. In one study from this program, students were asked to answer questions using the words they had been taught (e.g., Would a glutton tend to be emaciated?) rather than simply matching definitions to the words they were studying. Cromley and Wills (2016) recently found that students who learn the most from text are better able to verbalize vocabulary knowledge when reading a textbook. Instructors can then take this verbalized vocabulary knowledge and encourage students to make inferences about the text or generate a graphic organizer to aid reading comprehension. This connection between vocabulary knowledge and reading comprehension, while not new information, does reinforce the idea that deep processing is necessary for vocabulary acquisition.
Consequently, a strategy that actively engages the learner in deep processing, such as solving problems, answering questions, or producing applications in new situations, appears to be a more effective means of vocabulary instruction. The next section will delve deeper into what the research says about vocabulary development for college students.
Research on Academic Vocabulary Development This section examines the trends and conclusions of the existing literature on vocabulary development at the college level. The studies reviewed here are a mixture of college-level, middle school, and high school studies, which together outline three main approaches to improving student vocabulary knowledge: (a) traditional word knowledge approaches, (b) academic vocabulary approaches, and (c) student-centered approaches.
Studies on Traditional Word Knowledge Approaches The traditional word knowledge approaches appear to be based on Anderson and Freebody’s (1981) instrumentalist hypothesis, which maintains that word knowledge is a direct causal link affecting comprehension. Thus, the more individual word meanings students are taught, the better they comprehend any new or difficult expository material they read. Anderson and Freebody stressed that the most distinguishing characteristic of the instrumentalist hypothesis is the emphasis on direct and generic vocabulary-building exercises. This emphasis has focused traditional word knowledge approaches on morphemic analysis, dictionary definitions and synonyms, contextual analysis, and keyword studies.
Vocabulary
Morphemic Analysis A common practice in vocabulary instruction at the college level is training students in morphemic analysis as a means of helping them decipher the meanings of unknown words they might encounter in their reading. A morpheme is the smallest unit of language that still retains meaning. For example, triangle has two morphemes: tri and angle. Free morphemes (e.g., sad, boy, jump) are root words that can function independently or with bound morphemes. Bound morphemes (un, ing, ness), including prefixes and suffixes, have meaning but must be combined with free morphemes. Morphemic analysis requires knowledge of prefixes, suffixes, and their meanings; knowledge of associated spelling and pronunciation changes; and extensive knowledge of root words. In theory, students who know a multitude of prefixes and suffixes can generate new words by adding bound morphemes to newly acquired free morphemes or root words (Graves, 2004). Generating new words in this way can lead to increased vocabulary knowledge and, in turn, improved reading comprehension, as evidenced by Bowers and Kirby’s (2010) study of elementary school students. In their 20-session intervention, they found that students were able to develop their vocabulary, and then improve their reading comprehension, by using morphological families to infer the meaning of previously untaught words. Although this study was conducted with younger students, it has implications for college instructors. That is, teaching roots, suffixes, and prefixes does appear to help students decipher unknown words, but only when the unknown words are in the same morphological family. Pacheco and Goodwin (2013) investigated middle school students’ use of morphological strategies when encountering unknown words in text. They interviewed 20 seventh- and eighth-grade students as they attempted to problem-solve unknown words.
The authors paid particular attention to morphological strategies and determined that “Instruction should emphasize knowledge (i.e., knowledge of the meanings of roots and affixes) and awareness (i.e., the understanding of how to connect morphemes within the word to create meaning) to best support problem solving” (p. 548). In another study, Townsend, Bear, Templeton, and Burton (2016) researched the relationship between middle school students’ academic achievement and their academic word knowledge. They found that students who had a higher level of morphological awareness were more capable of reading and engaging with difficult academic texts, thereby improving their achievement. Although this study dealt with middle school students, the increase in achievement is important for college reading professionals to note because college students need vocabulary for success in postsecondary reading (Francis & Simpson, 2009). Therefore, teaching morphemic analysis might be key to increasing reading comprehension and thereby increasing college student achievement. Teaching morphemic analysis does appear to be a longstanding college tradition, at least in the vocabulary textbook realm. When Stahl, Brozo, and Simpson (1987) conducted a content analysis of 60 college vocabulary textbooks, they found that 80 percent of them emphasized morphemic analysis as an independent word learning technique. In support of this earlier research, Roth (2017) used a similar content analysis technique to determine that college vocabulary textbooks still teach morphemic analysis as an independent word acquisition strategy. However, the problem still lies in the use of short workbook exercises rather than longer authentic texts that force students to flexibly integrate the words into their own vocabulary knowledge.
Future researchers might investigate whether or not morphemic analysis can be paired with other instructional methods or whether students, especially striving readers, can be trained to transfer their knowledge of morphemic analysis to their independent reading, especially in the academic areas.
Dictionary Definitions and Synonyms When students ask an instructor for the definition of an unknown word, the most common response they receive might be “Look it up in the dictionary.” Although this sounds humorous, it does address the notion that teaching dictionary definitions and synonyms is one of the most prevalent forms of vocabulary instruction, especially in the secondary and postsecondary realm. Students are often given lists of unrelated words to learn by searching the dictionary or thesaurus. The early empirical studies from the 1960s and 1970s that compared a control group with an experimental group who learned synonyms and definitions found no significant difference between the two groups. That is, students who learned definitions and synonyms were not at an advantage when it came to improving vocabulary (Crump, 1966; Fairbanks, 1977; McNeal, 1973). The 1990s brought McKeown’s (1993) and Nist and Olejnik’s (1995) studies about the problems of dictionary usage as the main method of vocabulary instruction and acquisition. Specifically, it may be inadequate to send students directly to the dictionary to learn the meanings of words since dictionary definitions are often difficult to decipher and even harder to put into meaningful contexts. Research in the early 2000s (e.g., Baumann et al., 2003; McKeown & Beck, 2004) stressed that students must be taught to use the dictionary effectively if they are to benefit at all from the definition and synonym method of vocabulary instruction. Brozo and Simpson (2007), for example, recommended that students be taught the format and organization of a dictionary entry, how to interpret the abbreviations and symbols used in an entry, and how to select the most appropriate definition for the situation in which they encountered the word.
The recent emphasis on 21st-century literacies has demonstrated that students must be taught to utilize literacy skills online (International Reading Association, 2009), and online dictionary use is no exception. However, there is a dearth of research on using online dictionaries to enhance vocabulary learning. Ebner and Ehri (2016) taught college students a structured think-to-yourself protocol to help them regulate their vocabulary learning online. The results are encouraging, supporting the notion that teaching students to use metacognitive strategies when learning vocabulary online is an effective teaching endeavor. Future studies might extend Ebner and Ehri’s line of research, with a specific focus on online dictionary use, possibly building on Nist and Olejnik’s (1995) findings.
Contextual Analysis The use of context clues for vocabulary improvement has long been highly recommended because of its purported advantages over other strategies. The theory is that students need not be dependent on a dictionary or glossary; instead, when confronted with unknown words, students can independently use the information in the surrounding contexts to unlock the meanings. Proponents suggest that students can be trained to “scrutinize the semantic and syntactic cues in the preceding or following words, phrases, or sentences” (Baumann & Kame’enui, 1991, p. 620). Many secondary and postsecondary reading method textbooks instruct teachers to tell their students to use contextual clues when they come across a word they do not know, and most commercial vocabulary materials for college students emphasize the use of contextual analysis. Many factors influence a student’s ability to use context clues to discover the meaning of an unknown word. Both Baumann and Kame’enui (1991) and Sternberg (1987) outlined some of the textual variables that they believe aid or hinder the process of contextual analysis. The variables that seem most pertinent to college learning include the density of unknown words in the selection, the importance of the word’s meaning to the comprehension of the selection, the proximity
and specificity of the clues, and the overall richness of the context. Individual student characteristics, such as prior knowledge in the domain, general vocabulary knowledge, and the ability to make inferences, also impact a student’s capability to use contextual analysis. Sternberg’s (1987) theory of the three processes involved in contextual analysis underscores the importance of these individual variables and the complexity of the task of contextual analysis. He proposed that contextual analysis involves the selective encoding of only relevant information from the context, the combining of multiple cues into one definition, and the comparison of this new information with the reader’s prior knowledge. These are certainly complex cognitive tasks. McKeown (1985) found that simply instructing students to look around a target word’s sentence for context clues was not sufficient. Students, especially those with low verbal abilities, needed to be taught the more elusive skill of selecting constraints from context and using multiple contexts. Thus, it appears that students with high verbal abilities were better able to use contextual analysis for vocabulary improvement than struggling readers. In their book, Stahl and Nagy (2006) devoted an entire chapter to teaching students to learn from context, but they too cautioned that this approach is problematic and not overly effective. That is, contextual analysis is essentially a long-term process, as students are unable to integrate new word learning after encountering a word only once. Instead, it may take up to 10 exposures (Jenkins, Matlock, & Slocum, 1989) before students are able to fully acquire a word. Moreover, there is no definite evidence that students are able to transfer contextual analysis instruction to their actual authentic reading tasks.
This lack of transfer is striking because contextual analysis is often labeled as the “natural” method of vocabulary learning (Stahl & Nagy, 2006). McKeown and Beck (2004) have also noted that students may learn a few new history or physics words using contextual analysis, but this vocabulary knowledge develops slowly and thus is not particularly powerful for those who struggle with reading on a consistent basis.
Keyword Studies Mnemonic strategies, such as the keyword approach, have received considerable attention in the past research. The keyword method was originally developed as a method for students learning a foreign language (Raugh & Atkinson, 1975). In this method, students are taught to use associative learning to develop a mental image from a keyword or clue in the unknown target word in order to better remember that word. Another variation includes asking the students to place the keyword and definition into a meaningful sentence. For instance, if the target word is astronomical, a student might use the clue astro and then create a mental image of an astronaut who does exceedingly great things. The sentence the student might then create would be something like: The astronaut, who does exceedingly great things, is considered to be an astronomical person. Several researchers have claimed that although the keyword method is indeed helpful for definition-remembering, it is only the first step in students’ quest for deep processing of vocabulary (e.g., Hwang & Levin, 2002; McCarville, 1993; Scruggs & Mastropieri, 2000). In general, research studies have concluded that college students who use the keyword method perform significantly better than the control subjects on numerous vocabulary measures (e.g., McDaniel & Pressley, 1989; Pressley, Levin, & Miller, 1981, 1982). Although Roberts and Kelly (1985), in their study of at-risk college students, found only slight differences favoring the keyword method on an immediate recall test, they did find significantly greater differences on a measure of delayed recall. However, the limitation of the keyword method lies in its lack of applicability to the real college classroom. Many of the words college professionals teach their students, as well as the words those students are encountering in their academic textbooks, are not conducive to keyword associations. 
Researchers of the keyword method have often used concrete, three-syllable, low-frequency nouns with concise definitions (Pressley et al., 1981, 1982). In addition, there has not been
any research into whether or not students can and will transfer the method into their independent learning. However, the keyword method might be useful for students who are seeking to acquire academic vocabulary, given that many academic vocabulary words contain Latin roots that could be learned through the keyword method (Nagy & Townsend, 2012).
Studies on Academic Vocabulary Strategies Academic vocabulary has seen a surge of discussion over the past decade due to the shift to the Common Core State Standards (2010), which emphasize literacy across the disciplines, not just language arts literacy. Fisher and Frey (2014), in their discussion of vocabulary teaching in the upper elementary grades, remind teachers that “Vocabulary lies at the heart of content learning, as it serves as a proxy for students’ understanding of concepts” (p. 598). Academic vocabulary knowledge is especially important for college students who are delving deeply into different areas of study and are being asked to comprehend challenging text in those areas (Snow, 2010). Further research has determined that the more words students know, the more likely they are to comprehend a text (Schmitt, Jiang, & Grabe, 2011). This is not a new revelation. Past authors (e.g., Sartain et al., 1982; Stahl, Simpson, & Hayes, 1992) have indicated that students are challenged not only by the academic language, but also by the general language professors use to talk about their discipline. This fits especially well into the knowledge hypothesis of vocabulary learning (Anderson & Freebody, 1981). The knowledge hypothesis suggests that vocabulary should be taught within the context of learning new concepts so that new words can be related to one another and to prior knowledge. Thus, the source for words to be taught or studied is not teacher-made word lists but the difficult or unknown words that are critical for students’ comprehension of specific content area reading assignments. Stahl and Nagy (2006) made the distinction that the knowledge hypothesis, unlike the instrumentalist hypothesis, assumes that students know a word because of the concepts behind that word, not just because they learned the word itself.
Therefore, the more knowledge students have about a concept, the better able they are to comprehend the material and, as a consequence, the words surrounding the concept. Some of the strategies previously discussed—particularly those related to contextual or morphemic analysis—could be used by students to comprehend challenging content-specific words. However, the strategies examined in this section are different from the traditional word knowledge strategies in that their main goal is to increase students’ comprehension of discipline-specific information and concepts. One of the recent avenues of vocabulary research, especially in the lower grades, has been in the field of academic language (e.g., Kelley, Lesaux, Kieffer, & Faller, 2010). Academic language is defined as language that is separate from everyday conversational language and is specific to disciplines such as mathematics, history, and science (e.g., Nagy & Townsend, 2012; Snow, 2010). Within academic language there exist two types of academic vocabulary: general and domain-specific (Baumann & Graves, 2010; Hiebert & Lubliner, 2008). First, general academic vocabulary encompasses words that are broader and found in multiple disciplines (Nagy & Townsend, 2012). An example of a general academic vocabulary word is retain (Coxhead, 2000). It is a word with multiple dictionary definitions, but each discipline might use it in slightly different ways. This is extremely relevant given the proposal that academic literacy, or literacy for mathematics, sociology, and science, is necessary for success, especially for at-risk students (Neal, 2015). Domain-specific vocabulary, or content-specific vocabulary, as defined by Baumann and Graves (2010), comprises words that are specialized to a content field textbook or technical writing (e.g., mean, median, mode).
Shanahan and Shanahan (2008) extend this notion by offering a new perspective, disciplinary literacy, asserting that students in the upper grades need to learn the advanced literacy
skills required in mathematics, science, and history if they are to be successful in future schooling or careers. Part of this literacy is the vocabulary knowledge expected, both general and domain-specific. Townsend, Filippini, Collins, and Biancarosa (2012), in their study of linguistically and socioeconomically diverse middle schoolers, further determined that a link exists between students’ general academic vocabulary and their overall academic achievement, providing even more evidence that understanding academic vocabulary can boost student success. In a move toward identifying the vocabulary students most often encounter in academic texts, Coxhead (2000) created an Academic Word List (AWL) of 570 word families found in college-level texts. These word families fall solidly under the area of general academic vocabulary because they are found across multiple academic fields, not just science or only mathematics. Therefore, the words selected for instruction, either by the students or the teacher, should be of high utility and relevance to learning across the academic disciplines. Although the AWL is useful, Coxhead is quick to point out that simply having the list is not enough; it must be combined with effective teaching in order to be helpful to students. Gardner and Davies (2013) took the Coxhead (2000) list and modified it to create the Academic Vocabulary List (AVL). Although their list was primarily aimed at English Language Learners, it does have implications for college students. That is, all students encounter the same types of words and have the potential to struggle with reading comprehension as a result. Newman (2016) determined that the AVL has a slight advantage over Coxhead’s (2000) AWL because it better captures the salience of the words in academic texts.
Hsu (2014), while preparing a corpus of the demands of engineering texts for English Language Learners, found that students in engineering need to know between 3,500 and 8,500 different word families, depending on the type of engineering. Some of the recent areas of research have focused on teaching morphemic analysis as a means of improving students’ general academic vocabulary, but the focus has been on younger students (e.g., Flanigan, Templeton, & Hayes, 2012). However, Mountain (2015) taught recurrent prefixes, roots, and suffixes to teacher-preparation students to increase their awareness of discipline-specific vocabulary. Although Mountain’s research was focused on preservice teachers, the students increased their word awareness, and they will hopefully then bring this knowledge into their own classrooms. It is possible for college reading teachers to bring this awareness to their own classrooms, but more research could be done into how morphemic awareness can translate into general academic vocabulary learning. However, there are academic strategies, such as visual organizers, that have been shown to help students learn vocabulary and targeted concepts.
Visual Organizers There are many names for visual organizers—structured overviews, concept maps, semantic maps, and graphic organizers—but they are all intended to demonstrate the relationship between vocabulary terms and new or known concepts. Although some visual organizers highlight text organization and others help students understand main ideas, most can trace their origin to Ausubel’s (1963) theory of meaningful receptive learning. Ausubel suggested that students can learn content-area vocabulary more effectively if they can connect previously learned concepts with new concepts and that one strategy for strengthening students’ existing cognitive structures is the advance organizer. The benefit of visual organizers is that they can be teacher-directed or student-developed and can be completed before or after reading. In addition, since Moore and Readence’s (1980) time-honored meta-analysis, several studies have concluded that using visual organizers significantly improved students’ comprehension, especially if used as a post-reading strategy (e.g., Dalton & Grisham, 2011; Hoffman, 2003; Holschuh & Aultman, 2009).
Unfortunately, there are a limited number of studies pertaining to college students, basic visual organizers, and vocabulary development. In the studies we did find, the students were either given a completed visual organizer or were asked to finish a partially completed visual organizer after reading a text excerpt (e.g., Barron & Schwartz, 1984; Dunston & Ridgeway, 1990; Pyros, 1980). In an effort to improve visual organizers, some researchers have combined organizers with other types of vocabulary activities that include nodes, links or personal associations, and spatial displays. For example, Carr and Mazur-Stewart (1988) developed the Vocabulary Overview Guide (VOG) and constructed a study to examine its usefulness. The findings indicated that the VOG group performed significantly better on the immediate and delayed vocabulary tests than the control group who read a passage, underlined unknown words, and used context clues for word meanings. Diekhoff, Brown, and Dansereau (1982) took another approach, developing The Node Acquisition and Integration Technique (NAIT), a matrix based primarily on network models of long-term memory structure. The experimental group performed significantly better than the untrained control group on both measures, supporting the effectiveness of the NAIT approach. Other researchers have capitalized on Diekhoff et al.’s (1982) findings and recommendations (e.g., Chmielewski & Dansereau, 1998; O’Donnell, Dansereau, & Hall, 2002) and reinforced that students retained more information after graphic organizer training. These research studies suggest that visual organizers can positively affect students’ comprehension of expository text, especially for those students who have low verbal ability or low prior knowledge of the information. 
Some articles have affirmed that visual organizers are effective for secondary students (e.g., Greenwood, 2002; Harmon et al., 2005; Rupley & Nichols, 2005), but it also appears that visual organizers as an academic vocabulary strategy need to be studied again at the college level. Holschuh, Scanlon, Shetron, and Caverly (2014) describe mobile apps as a means of creating concept maps for students’ disciplinary learning in science. These concept maps are essentially graphic organizers that students can use to visually understand ideas in science, allowing students to move items around and make bigger connections between ideas. Researchers could investigate whether or not students choose to create organizers or matrices when learning academic vocabulary, especially after they have been trained in how to create them, and whether they use paper and pen or online tools to create the organizers.
Studies on Student-Centered Approaches Some researchers have examined instructional approaches that capitalize on students’ interests or beliefs in order to enhance their word knowledge. The studies in this section focus on the following three approaches: (a) providing college students concrete, direct experiences when learning new words, (b) allowing students to have input into which vocabulary words they will learn, and (c) using technology to enhance interest in and use of vocabulary.
Concrete, Direct Experiences Since the late 1960s (Petty, Herold, & Stoll, 1968), research has indicated that direct experience in using a word is extremely important to building students’ vocabulary (e.g., Blachowicz & Fisher, 2004; Rupley & Nichols, 2005), especially when vocabulary instruction draws on students’ prior knowledge and encourages them to practice the new words and make connections to other words (Goerss, Beck, & McKeown, 1999). The connection to prior knowledge has relevance to the theory of Systemic Functional Linguistics (SFL). SFL is a “theoretical framework and analytical toolkit for examining the relationship between language, text, and context” (Neal, 2015, p. 12). The idea is to teach students the
language of a specific discipline, such as history, and demonstrate how language features, such as nominalizations, are used within that discipline’s texts. It goes beyond simply learning words and delves into how one might communicate within that discipline. Intertextuality, as defined by Armstrong and Newman (2011), is another tool for building students’ disciplinary literacy in a developmental reading course. Using what students know, finding the gaps in their knowledge, and then using texts to build on already existing knowledge is key to intertextuality, which focuses on both the student and the text. This leads to another instructional approach: having students pick their own words to learn.
Students’ Input in Selecting Words Some research studies have suggested that students’ motivation to learn new words can be enhanced when they are the ones selecting the words to be studied (e.g., Francis, 2002; Haggard, 1980; Rupley & Nichols, 2005; Scott & Nagy, 2004). Ruddell and Shearer (2002) used the Vocabulary Self-Selection Strategy (VSS), a method that asks students to choose the words they think everyone in class should study, to investigate seventh- and eighth-grade students’ vocabulary growth. With the VSS method, student scores were significantly better on the tests of the VSS lists than on the tests of the vocabulary lists given by the instructor. The study’s findings supported previous research (Gnewuch, 1974; Haggard, 1980) in that students were more engaged and motivated by learning words they found meaningful and useful to future study. In the literature, the VSS has been suggested as a means of improving students’ academic knowledge and comprehension (Harmon et al., 2005; Wolsey, Smetana, & Grisham, 2015). To further the VSS strategy, Wolsey et al. (2015) added a technology component. They used digital tools, such as online dictionaries and ThingLink (www.thinglink.com), to help students show their vocabulary knowledge and learning of academic vocabulary. This connection between the VSS and academic language is important, and further research into how to integrate digital tools and academic vocabulary learning would be welcomed at the college level.
Vocabulary and Technology The plethora of technology tools available, both online and through mobile apps, can make learning vocabulary more accessible for students. With an eye to that notion, Abrams and Walsh (2014) investigated the use of gamification to support adolescents’ vocabulary acquisition. Students used a digital site (www.vocabulary.com) to enhance their vocabulary learning and acquisition. Students were provided direct feedback and rewards, and the results indicated that interest in words and student confidence both went up after the intervention. Interest and confidence are encouraging, but it is also important to increase actual word knowledge. Ebner and Ehri (2013) taught college students to use a structured think-aloud procedure to acquire words in an online article. Their results indicated gains in overall and specific vocabulary knowledge. They furthered this study in 2016 when they initially scaffolded students on the online think-aloud procedure and then asked the students to engage in the procedure independently. The results indicate that teaching online self-regulation tasks can increase students’ metacognition and thereby influence independent word acquisition. These studies of online tools draw attention to the usefulness of technology for vocabulary learning, especially when considering online textbooks that include direct links to definitions. The three broad categories of research—traditional word knowledge approaches, academic vocabulary strategies, and student-centered approaches—all indicate the need to improve both students’ general vocabulary and their academic vocabulary. This integration includes increasing the number of words students acquire as well as their ability to acquire words on their own (Armstrong
M. Andersen Francis and M. L. Simpson
& Stahl, 2017; Simpson et al., 2004). Therefore, the next section will provide recommendations for quality vocabulary instruction that is informed by the research.
Recommendations for Vocabulary Instruction

Most individuals would agree that the extant literature has not validated any one specific method, material, or strategy for enhancing college students’ vocabulary knowledge. However, it is possible to delineate the characteristics of effective vocabulary programs and practices. The following seven guidelines, gleaned from a variety of research studies, should be considered when planning vocabulary lessons: (a) provide students a balanced approach, (b) teach vocabulary from a context, (c) emphasize students’ active and informed role in the learning process, (d) stimulate students’ awareness of and interest in words, (e) reinforce word learning with intensive instruction, (f) build a language-rich environment to support word learning, and (g) encourage students to read widely and frequently.
Provide Students a Balanced Approach

Vocabulary development involves both the “what” and the “how.” The “what” focuses on the processes involved in knowing a word. The “how” is equally important because it involves students in learning strategies for unlocking word meanings on their own. Some individuals have referred to the former approach as “additive” and the latter approach as “generative.” One way to conceptualize this is as follows: If students are taught some words from a list, they will be able to recognize and add those particular words to their repertoire (i.e., the additive approach). However, if students are taught a variety of independent word-learning strategies, once they leave the college reading classroom, they will be able to expand their vocabulary and generate new understandings of text (i.e., the generative approach). Given the enormous number of words that struggling readers do not understand, the generative approach seems to make considerable sense, especially in college settings when students are seeking to learn academic literacy practices (Armstrong, Stahl, & Kantner, 2015). Another area of generative strategies involves the increased use of online texts in college classrooms. Online texts can be a boon to vocabulary learning given the ease of looking up words through digital dictionaries. However, Ebner and Ehri (2013, 2016) advocate the use of structured digital tools to promote students’ self-regulation of vocabulary knowledge online. Their results show that using digital tools is an effective overall generative vocabulary learning strategy, thereby increasing students’ general and academic word knowledge. We are not suggesting, however, that college reading professionals discontinue the direct instruction of important words and concepts.
Instead, we make the point that instructors should implement a balanced vocabulary program that emphasizes both the additive and generative approaches, carefully scrutinizing all classroom activities and asking themselves what is being taught and how.
Teach Vocabulary from a Context

Researchers who have reviewed the literature on vocabulary instruction have concluded that vocabulary is best taught in a unifying context (Fisher & Blachowicz, 2005; Francis & Simpson, 2009; Simpson et al., 2004). Words taught in the context of an academic discipline will be learned more effectively than words taught in isolation because context allows students to integrate words with previously acquired knowledge (Marzano, 2004). This is borne out in the recent research on academic vocabulary that indicates knowing how to learn the words in a discipline is imperative to success in the discipline (Townsend et al., 2016). Therefore, rather than relying on lists of words
Vocabulary
from textbooks, college reading professionals should target words from materials that students are reading, whether it be textbooks, magazines, or novels, encouraging students to select the targeted words on their own.
Emphasize Students’ Active and Informed Role in the Learning Process

The importance of students’ active participation and elaborative processing in learning new words is a consistent theme across the research literature (Ebner & Ehri, 2016; Fisher & Blachowicz, 2005; McKeown & Beck, 2004; Rupley, 2005). Elaborative or generative processing engages students in activities such as (a) sensing and inferring relationships between targeted vocabulary and their own background knowledge, (b) recognizing and applying vocabulary words to a variety of contexts, (c) recognizing examples and nonexamples, and (d) generating novel contexts for the targeted word. When students have an informed role in vocabulary development, they understand the declarative and procedural requirements of learning new words (Nagy & Scott, 2000). That is, they have the declarative knowledge that allows them to define a word and the procedural knowledge that allows them to do something with the words in other contexts. Unfortunately, it appears that most commercial materials do not actively engage students in their own learning, treating them instead as simple receptacles of massive word lists (Joshi, 2005; Roth, 2017). If commercial vocabulary materials must be used with students, they should be modified or supplemented in several ways. For example, instructors could ask students to explain their particular answers to questions and encourage them to relate the words to their own personal experiences. Another strategy might be to take students’ misconceptions about words and challenge them through discussion and further investigation, possibly using mobile apps to enhance discussion and solidify learning (Holschuh et al., 2014). College reading professionals can also facilitate students’ active and informed processing when they incorporate into their classroom routines a variety of creative formats for practice and evaluation.
Francis and Simpson (2003) described a variety of these formats, many of which are quite easy to design. For example, one such format, the paired word question (Beck & McKeown, 1983), pairs two targeted vocabulary words (e.g., Would melancholy make you doleful?). To answer these paired word questions, students must understand the underlying concepts or words and then determine if any relationships exist between them. The exclusion technique is another creative format for practice and evaluation (Francis & Simpson, 2003). With the exclusion format, students are given three or four words and are asked to determine the one word that does not fit and the general concept under which the other words are categorized. For instance, if students were given the words philanthropy, magnanimousness, and malevolence, they would have to know the definitions for all three words and recognize that malevolence does not fit because the others are terms describing generosity of spirit.
Stimulate Students’ Awareness of and Interest in Words

The importance of student interest as a means of improving attention, effort, persistence, thinking processes, and performance is well documented (Hidi & Harackiewicz, 2000; Lesaux, Harris, & Sloane, 2012). However, the relationship between interest and vocabulary knowledge is fostered less often than it should be. Most students find looking up the definitions for a list of words boring and irrelevant to their own areas of study. Instead, college reading professionals should be crafting strategies and situations that foster student interest in and transfer of new words. McKeown, Crosson, Artz, Sandora, and Beck (2013) found that when middle school students were able to find the targeted academic words outside of class (either on the Internet or in other print), they were more likely to be enthusiastic and motivated to learn the words.
Further, Dalton and Grisham (2011) advocate using what they term “eVoc” strategies to increase student interest in words. These strategies include having students connect words and images online. One such tool is ThingLink (www.thinglink.com), where instructors can post an image and students can generate sentences with vocabulary words to demonstrate their learning. Gamification, or attaching rewards and points to vocabulary learning, has also found success at the intermediate levels (Abrams & Walsh, 2014) but warrants further research at the college level. Another strategy aimed at increasing student interest in vocabulary is the VSS strategy outlined by Haggard (1986), further supported by Harmon et al. (2005) and Francis (2002), and extended when Wolsey et al. (2015) added a technology component. With this strategy, students are told to bring to class words they encountered in their lives (via television, peers, or reading). Through class discussion and instructor discretion, a class word list is generated. The benefit of this method is that students of all ability levels can self-select important words, allowing for more diverse word lists. Further, students are learning words that are directly useful to their reading and generalizable to other reading situations, and they thereby become more enthusiastic word learners. One final note about student interest is the necessity of teacher interest in words. As Manzo and Sherk (1971) aptly stated, “the single most significant factor in improving vocabulary is the excitement about words which teachers can generate” (p. 78). In other words, college reading professionals should be playful with words and exhibit enthusiasm for words. They can accomplish this by using new and interesting words during class discussions, in email correspondence, and when responding to students’ work. They can post class Padlets (www.padlet.com), where they ask students to add images and sentences using new words.
When students can see how exciting and intriguing word learning can be, they are more likely to gain back their own inherent excitement about learning. Moreover, when teachers encourage students to play with words and manipulate them, students are learning to take a “metalinguistic stand” on vocabulary, a stance that builds flexibility and confidence (Fisher & Blachowicz, 2005).
Reinforce Word Learning with Intensive Instruction

Students’ word knowledge takes time to develop and increases in small, incremental steps (Scott & Nagy, 2004). Although it is impossible to identify a specific time frame for all students, we do know from the research literature that word ownership is reinforced when students receive intensive instruction characterized by multiple exposures to targeted words in multiple contexts (Marzano, 2004; Rupley & Nichols, 2005). We need to remember, however, that duration is not the only critical characteristic of intensive vocabulary instruction. Mere repetition of a word and its definition over time is not beneficial unless students are actively involved in elaborative processing. Intensive instruction without active student involvement can be boring and counterproductive to the goals of an effective vocabulary development program. In order to increase motivation and engagement, the intensive instruction must provide rigor, support, and opportunities to perceive progress (Lesaux et al., 2012). Thus, it is imperative that our vocabulary instruction include a variety of discussions and expressive activities that encourage students to question and experiment with new words, reinforcement and practice activities that require students to think and write rather than circle answers, and cumulative review activities that provide students with repeated exposures over time (Simpson et al., 2004). Implicit within the intensive model of instruction is the reality that fewer words are taught, but they are taught in more depth. It is always a joy to watch students’ excitement and surprise when they encounter their newly acquired words in sociology lectures or psychology textbooks. That excitement is something to encourage and strive for as vocabulary programs continue to be developed.
Build a Language-Rich Environment to Support Word Learning

The findings from research studies suggest that students with strong expressive and receptive vocabularies are the ones who are immersed in environments characterized by “massive amounts of rich written and oral language” (Nagy & Scott, 2000, p. 280). Instructors can best promote vocabulary growth by working with students to create an environment where new words are learned, celebrated, and used in authentic communication tasks (Blachowicz & Fisher, 2004; Blachowicz, Fisher, Ogle, & Watts-Taffe, 2006; McKeown & Beck, 2004). Students should be provided opportunities to experiment with using words in low-risk situations. In classes, for example, students are sometimes asked to construct sentences using a targeted word as a way of gaining access to the classroom for that particular day. Such oral language activities allow students to learn not only how vocabulary words function but also how different sentences are constructed using multiple parts of speech. This word play is essential to students’ metalinguistic understanding of the words and increases their motivation to learn new words (Lesaux et al., 2012; McKeown et al., 2013). Another strategy that helps students acquire new words in a language-rich environment is to include discussions about word learning, especially within the academic disciplines and online (Blachowicz & Fisher, 2004; Ebner & Ehri, 2016; Francis & Simpson, 2003; Neal, 2015). Scaffolding the type of learning one might use to acquire words is important to students’ eventual independent word learning. For example, an instructor might model how a history teacher uses the internet to learn about a new concept, clicking through sources and mentally assessing the applicability of each source and idea. These discussions and dialogues help students understand the versatile and metacognitive nature of vocabulary learning, especially as they transition to more challenging academic texts.
This will allow students to integrate new knowledge into their existing academic language taxonomies (Neal, 2015).
Encourage Students to Read Widely and Frequently

As noted by a variety of researchers, students who choose to read widely and frequently have the breadth and depth of word knowledge necessary to understand their reading assignments (Harmon et al., 2005; Joshi, 2005). This ability to cope successfully with reading tasks across disciplines occurs because students who read widely are more likely to increase their awareness of new words, their depth of vocabulary knowledge, their background knowledge, and their reading fluency. Moreover, findings from comprehensive studies, such as the National Assessment of Educational Progress in Reading (Donahue, Voelkl, Campbell, & Mazzeo, 1999), have indicated that the students who reported that they read frequently and widely were the ones who had higher achievement test scores. The implication for college reading professionals is obvious: If the goal is for students to understand what they read in their courses and to become successful independent learners, college reading professionals must encourage them to read beyond what they are assigned to read for class (Graves, 2004; Ocal & Ehri, 2017). It is also important to keep in mind that what students read is not as important as the fact that they are reading. Forcing students to read the “important” works or classics will not instill a love of reading and may, in fact, cause negative reactions. Rather than focusing exclusively on the classics, many college reading professionals encourage their students to read on a daily basis, suggesting materials such as online newspapers, magazines, or popular novels. The aforementioned seven guidelines should assist college reading professionals in providing a systematic and comprehensive vocabulary program for students rather than relying solely on commercial materials or defined vocabulary lists to dictate their programs.
In the next section, we will discuss some of the future research avenues that college reading professionals could consider.
Future Avenues for Scholarship

After examining the extant literature on vocabulary improvement, we have determined that there are three major foci for future scholarship in the area of college-level vocabulary. These include the following: (a) analyzing, in an objective manner, extant vocabulary programs and practices; (b) providing ongoing feedback to vocabulary textbook publishers; and (c) conducting useful research, especially in emerging or often ignored areas.
Analyzing Extant Vocabulary Programs and Practices

For college reading professionals, an important focus should be on analysis and evaluation of their present programs and practices. Some possible questions that could be used for an objective evaluation include the following:

1. Does the present vocabulary program offer a balance between the additive and generative approaches to vocabulary development?
2. Does the program offer a variety of strategies appropriate for individual learning preferences?
3. Does the present vocabulary program help students develop an appreciation and sensitivity to words so they will continue to develop their personal vocabularies on a long-term basis?
4. Does the present program provide direct instruction that takes into consideration what it means to know a word fully and flexibly?
5. Does the present vocabulary program use a variety of oral and written activities and evaluation measures?
6. Does the present vocabulary program have specific goals that match the characteristics of the students?
7. Does the program reflect the academic literacy tasks that students will encounter during their college career?

The results of such an evaluation should be shared with others, along with the checklists or questions used during the evaluation. Vocabulary, as we mentioned earlier in this chapter, is one of the five essential components of reading; given the current emphasis on college and career readiness as students enter college, it certainly deserves our attention and objective critique.
Providing Ongoing Feedback to Publishers

The second challenge for college reading professionals is to provide ongoing feedback to the editors and writers of commercial materials concerning the relevance and quality of their products. College reading professionals must not accept without question what publishers disseminate. They need to examine materials in light of their own specific needs, keeping in mind what research has said about effective vocabulary instruction. As Stahl et al. (1987) and Roth (2017) concluded in their content analyses, the materials on the market tend to be based on tradition rather than on research-supported principles, and even if the material emphasizes flexible strategies, the instructor needs to flesh out and reinforce those strategies. Therefore, the critical link between researchers and publishers is the instructor. Consequently, we highly recommend that college reading professionals offer informed, objective, and constructive opinions on materials they receive from publishers and that they take the time to chat with publishers who attend professional conferences and set up displays of commercial materials. Further, we recommend that instructors use these conversations as reflective practice, thinking about their own instruction and how they use the text materials to teach vocabulary strategies.
Conducting Useful Research

Given the dearth of studies that have asked useful and relevant questions about vocabulary development at the college level, the final challenge for college reading professionals is to conduct research with their own students. The process has been started at the secondary level, with research on academic vocabulary (e.g., Fang, 2006; Shanahan & Shanahan, 2008), but needs to be furthered at the college level with questions such as how developmental reading programs can address the challenging academic language students face in areas like biology or chemistry. This is especially relevant given the research into academic literacy and vocabulary indicating that developmental reading instructors must begin to connect the knowledge students are learning in their courses to the knowledge students are acquiring in their academic courses. It is possible that college reading instructors might need to transition their instruction to a paired-course model, keeping in mind the systemic functional linguistics (SFL) of the academic disciplines. For example, a college reading instructor could offer a course aimed at teaching science majors how to acquire general purpose academic language in science, using students’ existing knowledge to build on the language of science and foster students’ independent word acquisition strategies. Another viable research avenue is the continuation of research on vocabulary learning by students who speak English as a second language. Interestingly, there have been a number of recent studies in this area (e.g., Hsu, 2014; Lei, Berger, Allen, Plummer, & Worka, 2010). Similar to what is already known about teaching vocabulary to students who speak English as their first language, the researchers recommended that teachers of English Language Learners draw attention to academic words, instruct students on methods for deciphering word meanings from context, and allow students to play with words in their own contexts.
These are notions that can benefit all learners of vocabulary. An important area of research to explore further for college students is directly related to metacognitive awareness of words and the flexible use of strategies. What was once research into students’ beliefs is in reality an examination of the metacognitive processes students engage in when they come across unfamiliar words. This is particularly important for college students, who do most of their learning alone and in the academic disciplines. It would seem that making students aware of what they are doing and how they are doing it is key to their future success (Ebner & Ehri, 2013, 2016). As to other possible research questions that should be addressed, our main suggestion is to avoid studies that seek to determine a superior strategy and instead to emphasize the flexible use of strategies and increases in students’ metacognition around vocabulary, especially given the connection between academic vocabulary and success. After suffering through countless studies comparing one strategy to another, we should acknowledge what theory and research have already told us—there is no magic answer to long-term and lasting vocabulary development.
Acknowledgment

The authors would like to acknowledge the contributions of Sally Randall on earlier versions of this chapter.
References and Suggested Readings (*)

Abrams, S. S., & Walsh, S. (2014). Gamified vocabulary: Online resources and enriched language learning. Journal of Adolescent & Adult Literacy, 58(1), 49–58. Anderson, R. C., & Freebody, P. (1981). Vocabulary knowledge. In J. T. Guthrie (Ed.), Comprehension and teaching: Research reviews (pp. 77–117). Newark, DE: International Reading Association. Armstrong, S. L., & Newman, M. (2011). Teaching textual conversations: Intertextuality in the college classroom. Journal of College Reading and Learning, 41(2), 6–21.
Armstrong, S. L., & Stahl, N. A. (2017). Communication across the silos and borders: The culture of reading in a community college. Journal of College Reading and Learning, 47(2), 99–122. Armstrong, S. L., Stahl, N. A., & Kantner, M. J. (2015). Investigating academic literacy expectations: A curriculum audit model. Journal of Developmental Education, 38(2), 2–13, 23. Ausubel, D. P. (1963). The psychology of meaningful verbal learning. New York, NY: Grune & Stratton. Barron, R. F., & Schwartz, R. N. (1984). Spatial learning strategies: Techniques, applications, and related issues. San Diego, CA: Academic Press. Baumann, J. F., & Graves, M. F. (2010). What is academic vocabulary? Journal of Adolescent & Adult Literacy, 54(1), 4–12. Baumann, J. F., & Kame’enui, E. J. (1991). Research on vocabulary instruction: Ode to Voltaire. In J. F. Flood, J. M. Jensen, D. Lapp, & J. R. Squire (Eds.), Handbook of research on teaching the English language arts (pp. 604–632). New York, NY: Macmillan. Baumann, J. F., Kame’enui, E. J., & Ash, G. E. (2003). Research on vocabulary instruction: Voltaire redux. In J. Flood, D. Lapp, J. R. Squire, & J. M. Jensen (Eds.), Handbook of research on teaching the English language arts (2nd ed., pp. 752–785). Mahwah, NJ: Lawrence Erlbaum Associates. Beck, I., & McKeown, M. (1983). Learning words well: A program to enhance vocabulary and comprehension. The Reading Teacher, 36(7), 622–625. Blachowicz, C. L. Z., & Fisher, P. (2004). Vocabulary lessons. Educational Leadership, 61(6), 66–70. Blachowicz, C. L., Fisher, P. J. L., Ogle, D., & Watts-Taffe, S. (2006). Vocabulary: Questions from the classroom. Reading Research Quarterly, 41(4), 524–539. Bowers, P. N., & Kirby, J. R. (2010). Effects of morphological instruction on vocabulary acquisition. Reading and Writing, 23(5), 515–537. Brozo, W. G., & Simpson, M. L. (2007). Content literacy for today’s adolescents: Honoring diversity and building competence. Upper Saddle River, NJ: Pearson. Carr, E.
M., & Mazur-Stewart, M. (1988). The effects of the vocabulary overview guide on vocabulary comprehension and retention. Journal of Reading Behavior, 20(1), 43–62. Chmielewski, T. L., & Dansereau, D. F. (1998). Enhancing the recall of text: Knowledge mapping training promotes implicit transfer. Journal of Educational Psychology, 90(3), 407–413. Common Core State Standards Initiative. (2010). Common core state standards for English language arts & literacy in history/social studies, science and technical subjects. Washington, DC: Council of Chief State School Officers and the National Governors Association Center for Best Practices. Coxhead, A. (2000). A new academic word list. TESOL Quarterly, 34(2), 213–238. Craik, F. I. M. (1979). Levels of processing: Overview and closing comments. In L. S. Cermak & F. I. M. Craik (Eds.), Levels of processing in human memory (pp. 447–461). Hillsdale, NJ: Lawrence Erlbaum Associates. Craik, F. I. M., & Lockhart, R. S. (1972). Levels of processing: A framework for memory research. Journal of Verbal Learning and Verbal Behavior, 11(6), 671–684. Cromley, J. G., & Wills, T. W. (2016). Flexible strategy use by students who learn much versus little from text: Transitions within think-aloud protocols. Journal of Research in Reading, 39(1), 50–71. Crump, B. M. (1966). Relative merits of teaching vocabulary by a direct and an incidental method. Dissertation Abstracts International, 26, 901A–902A. Dale, E. (1965). Vocabulary measurement: Techniques and major findings. Elementary English, 42(8), 895–901. Dalton, B., & Grisham, D. L. (2011). eVoc strategies: 10 ways to use technology to build vocabulary. The Reading Teacher, 64(5), 306–317. Diekhoff, G. M., Brown, P. J., & Dansereau, D. F. (1982). A prose learning strategy training program based on network and depth-of-processing models. Journal of Experimental Education, 50(4), 180–184. Donahue, P., Voelkl, K., Campbell, J., & Mazzeo, J. (1999). NAEP 1998 reading report card for the nation. 
Washington, DC: National Center for Education Statistics. Dunston, P. J., & Ridgeway, V. G. (1990). The effect of graphic organizers on learning and remembering information from connected discourse. Forum for Reading, 22(1), 15–23. Ebner, R. J., & Ehri, L. C. (2013). Vocabulary learning on the Internet: Using a structured think-aloud procedure. Journal of Adolescent & Adult Literacy, 56(6), 472–481. Ebner, R. J., & Ehri, L. C. (2016). Teaching students how to self-regulate their online vocabulary learning by using a structured think-to-yourself procedure. Journal of College Reading and Learning, 46(1), 62–73. Fairbanks, M. M. (1977, March). Vocabulary instruction at the college/adult level: A research review. (ERIC Document Reproduction No. ED134979).
Fang, Z. (2006). The language demands of science reading in middle school. International Journal of Science Education, 28(5), 491–520. Fisher, P., & Blachowicz, C. L. (2005). Vocabulary instruction in a remedial setting. Reading and Writing Quarterly, 21(3), 281–300. Fisher, D., & Frey, N. (2014). Content area vocabulary learning. The Reading Teacher, 67(8), 594–599. Flanigan, K., Templeton, S., & Hayes, L. (2012). What’s in a word? Using content vocabulary to generate growth in general academic knowledge. Journal of Adolescent & Adult Literacy, 56(2), 132–140. Francis, M. A. (2002). Vocabulary instruction: Using four research topics to enhance students’ vocabulary knowledge. The Journal of Teaching and Learning, 6, 1–5. Francis, M. A., & Simpson, M. L. (2003). Using theory, our intuitions, and a research study to enhance students’ vocabulary knowledge. Journal of Adolescent & Adult Literacy, 47(1), 66–78. Francis, M. A., & Simpson, M. L. (2009). Vocabulary development. In R. Flippo & D. Caverly (Eds.), The handbook of college reading and study strategy research (2nd ed., pp. 97–120). New York, NY: Routledge. Gardner, D., & Davies, M. (2013). A new academic vocabulary list. Applied Linguistics, 35(3), 305–327. Gnewuch, M. M. (1974). The effect of vocabulary training upon the development of vocabulary, comprehension, total reading, and rate of reading of college students. Dissertation Abstracts International, 34, 6254A. Goerss, B. L., Beck, I. L., & McKeown, M. G. (1999). Increasing remedial students’ ability to derive word meaning from context. Journal of Reading Psychology, 20(2), 151–175. Graves, M. F. (2004). Teaching prefixes: As good as it gets? In J. F. Baumann & E. J. Kame’enui (Eds.), Vocabulary instruction: Research to practice (pp. 81–99). New York, NY: Guilford Press. Greenwood, S. C. (2002). Making words matter: Vocabulary study in the content areas. The Clearing House, 75, 258–263. Haggard, M. R. (1980). 
Vocabulary acquisition during elementary and post-elementary years: A preliminary report. Reading Horizons, 21(1), 61–69. Haggard, M. R. (1986). The Vocabulary Self-Collection Strategy: Using student interest and word knowledge to enhance vocabulary growth. Journal of Reading, 29(7), 612–634. Harmon, J. M., Hedrick, W. B., Wood, K. D., & Gress, M. (2005). Vocabulary self-selection: A study of middle-school students’ word selections from expository texts. Reading Psychology, 26(3), 313–333. Hidi, S., & Harackiewicz, J. (2000). Motivating the academically unmotivated: A critical issue for the 21st century. Review of Educational Research, 70(2), 151–179. Hiebert, E. H., & Lubliner, S. (2008). The nature, learning, and instruction of general academic vocabulary. In A. E. Farstrup & S. J. Samuels (Eds.), What research has to say about vocabulary instruction (pp. 106–129). Newark, DE: International Reading Association. Hoffman, J. (2003). Student-created graphic organizers bring complex material to life. College Teaching, 51(3), 105. Holschuh, J. P. (2000). Do as I say, not as I do: High, average, and low performing students’ strategy use in biology. Journal of College Reading and Learning, 31(1), 94–108. Holschuh, J. P., & Aultman, L. P. (2009). Comprehension development. In R. Flippo & D. Caverly (Eds.), The handbook of college reading and study strategy research (2nd ed., pp. 97–120). New York, NY: Routledge. Holschuh, J. P., & Paulson, E. J. (2013, July). The terrain of college developmental reading. Executive summary and paper commissioned by the College Reading and Learning Association (CRLA). Retrieved from www.crla.net/images/whitepaper/TheTerrainofCollege91913.pdf Holschuh, J. P., Scanlon, E., Shetron, T. H., & Caverly, D. C. (2014). Techtalk: Mobile apps for disciplinary literacy in science. Journal of Developmental Education, 37(1), 32–33. Hsu, W. (2014). Measuring the vocabulary load of engineering textbooks for EFL undergraduates.
English for Specific Purposes, 33, 54–65. Hwang, Y., & Levin, J. R. (2002). Examination of middle school students’ independent use of a complex mnemonic system. Journal of Experimental Education, 71(1), 25–38. International Reading Association. (2009, May). New literacies and 21st century technologies. Position paper by the International Reading Association, retrieved from www.literacyworldwide.org/docs/default-source/ where-we-stand/new-literacies-21st-century-position-statement.pdf?sfvrsn=6 Jenkins, J. R., Matlock, B., & Slocum, T. A. (1989). Two approaches to vocabulary instruction: The teaching of individual word meanings and practice in deriving word meaning from context. Reading Research Quarterly, 24(2), 215–235. Joshi. R. M. (2005). Vocabulary: A critical component of comprehension. Reading and Writing Quarterly, 21(3), 209–219. Kelley, J. G., Lesaux, N. K., Kieffer, M. J., & Faller, S. E. (2010). Effective academic vocabulary instruction in the urban middle school. The Reading Teacher, 64(1), 5–14.
M. Andersen Francis and M. L. Simpson
Lei, S. A., Berger, A. M., Allen, B. M., Plummer, C. V., & Worka, R. (2010). Strategies for improving reading skills among ELL college students. Reading Improvement, 47(2), 92–104.
Lesaux, N. K., Harris, J. R., & Sloane, P. (2012). Adolescents’ motivation in the context of an academic vocabulary intervention in urban middle school classrooms. Journal of Adolescent & Adult Literacy, 56(3), 231–240.
Manzo, A. V., & Sherk, J. K. (1971). Some generalizations and strategies for guiding vocabulary learning. Journal of Reading Behavior, 4(1), 78–89.
Marzano, R. J. (2004). The developing vision of vocabulary instruction. In J. F. Baumann & E. J. Kame’enui (Eds.), Vocabulary instruction: Research to practice (pp. 100–117). New York, NY: Guilford Press.
McCarville, K. B. (1993). Keyword mnemonic and vocabulary acquisition for developmental college students. Journal of Developmental Education, 16(3), 2–6.
McDaniel, M. A., & Pressley, M. (1989). Keyword and context instruction of new vocabulary meanings: Effects on text comprehension and memory. Journal of Educational Psychology, 81(2), 204–213.
McKeown, M. G. (1985). The acquisition of word meanings from context by children of high and low ability. Reading Research Quarterly, 20(4), 482–496.
McKeown, M. G. (1993). Creating effective definitions for young word learners. Reading Research Quarterly, 28(1), 17–31.
McKeown, M. G., & Beck, I. L. (2004). Direct and rich vocabulary instruction. In J. F. Baumann & E. J. Kame’enui (Eds.), Vocabulary instruction: Research to practice (pp. 13–27). New York, NY: Guilford Press.
McKeown, M. G., Crosson, A. C., Artz, N. J., Sandora, C., & Beck, I. L. (2013). In the media: Expanding students’ experience with academic vocabulary. The Reading Teacher, 67(1), 45–53.
McNeal, L. D. (1973). Recall and recognition of vocabulary word learning in college students using mnemonic and repetitive methods. Dissertation Abstracts International, 33, 3394A.
Moore, D. W., & Readence, J. E. (1980). Meta-analysis of the effect of graphic organizers on learning from text. In M. L. Kamil & A. J. Moe (Eds.), Perspectives on reading research and instruction (pp. 213–218). Washington, DC: National Reading Conference.
Mountain, L. (2015). Recurrent prefixes, roots, and suffixes: A morphemic approach to disciplinary literacy. Journal of Adolescent & Adult Literacy, 58(7), 561–567.
Nagy, W. E., & Scott, J. (2000). Vocabulary processes. In M. Kamil, P. Mosenthal, P. D. Pearson, & R. Barr (Eds.), Handbook of reading research (Vol. III, pp. 269–284). Mahwah, NJ: Lawrence Erlbaum Associates.
Nagy, W., & Townsend, D. (2012). Words as tools: Learning academic vocabulary as language acquisition. Reading Research Quarterly, 47(1), 91–108.
Neal, H. N. (2015). Theory to practice: Cultivating academic language proficiency in developmental reading classrooms. Journal of Developmental Education, 39(1), 12–17, 33–34.
Newman, J. A. (2016, July). A corpus-based comparison of the Academic Word List and the Academic Vocabulary List. Unpublished dissertation, Brigham Young University.
Nist, S. L., & Olejnik, S. (1995). The role of context and dictionary definitions on varying levels of word knowledge. Reading Research Quarterly, 30(2), 172–193.
Nist, S. L., & Simpson, M. L. (2000). College studying. In M. L. Kamil, P. B. Mosenthal, P. D. Pearson, & R. Barr (Eds.), Handbook of reading research (Vol. III, pp. 645–666). Mahwah, NJ: Lawrence Erlbaum Associates.
Ocal, T., & Ehri, L. (2017). Spelling ability in college students predicted by decoding, print exposure and vocabulary. Journal of College Reading and Learning, 47(1), 58–74.
O’Donnell, A. M., Dansereau, D. F., & Hall, R. H. (2002). Knowledge maps as scaffolds for cognitive processing. Educational Psychology Review, 14(1), 71–86.
Pacheco, M. B., & Goodwin, A. P. (2013). Putting two and two together: Middle school students’ morphological problem-solving strategies for unknown words. Journal of Adolescent & Adult Literacy, 56(7), 541–553.
Petty, W. T., Herold, C. P., & Stoll, E. (1968). The state of knowledge about the teaching of vocabulary. Champaign, IL: National Council of Teachers of English.
Pressley, M., Levin, J. R., & Miller, G. E. (1981). How does the keyword method affect vocabulary, comprehension, and usage? Reading Research Quarterly, 16(2), 213–225.
Pressley, M., Levin, J. R., & Miller, G. E. (1982). The keyword method compared to alternative vocabulary-learning strategies. Contemporary Educational Psychology, 7(1), 50–60.
Pyros, S. W. (1980). Graphic advance organizers and the learning of vocabulary relationships. Dissertation Abstracts International, 41, 3509A.
Raugh, M. R., & Atkinson, R. C. (1975). A mnemonic method for learning a second-language vocabulary. Journal of Educational Psychology, 67(1), 1–16.
Rimbey, M., McKeown, M., Beck, I., & Sandora, C. (2016). Supporting teachers to implement contextualized and interactive practices in vocabulary instruction. Journal of Education, 196(2), 69–87.
Vocabulary
Roberts, J., & Kelly, N. (1985). The keyword method: An alternative strategy for developmental college readers. Reading World, 24(3), 34–39.
Roth, D. (2017). Morphemic analysis as imagined by developmental reading textbooks: A content analysis of a textbook corpus. Journal of College Reading and Learning, 47(1), 26–44.
Ruddell, M. R., & Shearer, B. A. (2002). “Extraordinary,” “tremendous,” “exhilarating,” “magnificent”: Middle school at-risk students become avid word learners with the Vocabulary Self-Collection Strategy (VSS). Journal of Adolescent & Adult Literacy, 45(5), 352–363.
Rupley, W. H. (2005). Vocabulary knowledge: Its contribution to reading growth and development. Reading & Writing Quarterly, 21(3), 203–207.
Rupley, W. H., & Nichols, W. D. (2005). Vocabulary instruction for the struggling reader. Reading & Writing Quarterly, 21(3), 239–260.
Sartain, H. W., Stahl, N. A., Ani, U. A., Bohn, S., Holly, B., Smolenski, C. S., & Stein, D. W. (1982). Teaching techniques for the languages of the disciplines. Pittsburgh, PA: University of Pittsburgh and the Fund for the Improvement of Postsecondary Education.
Schmitt, N., Jiang, X., & Grabe, W. (2011). The percentage of words known in a text and reading comprehension. The Modern Language Journal, 95(1), 26–43.
Schoerning, E. (2014). The effect of plain English vocabulary on students’ achievement and classroom culture in college science instruction. International Journal of Science and Mathematics Education, 12(2), 307–327.
Scott, J., & Nagy, W. (2004). Developing word consciousness. In J. F. Baumann & E. J. Kame’enui (Eds.), Vocabulary instruction: Research to practice (pp. 201–217). New York, NY: Guilford Press.
Scruggs, T. E., & Mastropieri, M. A. (2000). The effectiveness of mnemonic instruction for students with learning and behavior problems: An update and research synthesis. Journal of Behavioral Education, 10(2), 163–173.
Shanahan, T., & Shanahan, C. (2008). Teaching disciplinary literacy to adolescents: Rethinking content-area literacy. Harvard Educational Review, 78(1), 40–59.
Simpson, M. L., Stahl, N. A., & Francis, M. A. (2004). Reading and learning strategies: Recommendations for the 21st century. Journal of Developmental Education, 28(2), 2–15, 32.
Snow, C. E. (2002). Reading for understanding: Toward an R&D program in reading comprehension. Santa Monica, CA: RAND Corporation. Retrieved from www.rand.org/pubs/monograph_reports/MR1465.html
Snow, C. E. (2010). Academic language and the challenge of reading and learning about science. Science, 328(5977), 450–452.
Stahl, S. A. (1985). To teach a word well: A framework for vocabulary instruction. Reading World, 24(3), 16–27.
Stahl, S. A. (1999). Vocabulary development. Cambridge, MA: Brookline Books.
Stahl, N. A., Brozo, W. G., & Simpson, M. L. (1987). A content analysis of college vocabulary textbooks. Reading Research and Instruction, 26(4), 203–221.
Stahl, S. A., & Nagy, W. E. (2006). Teaching word meanings. Mahwah, NJ: Lawrence Erlbaum Associates.
Stahl, N. A., Simpson, M. L., & Hayes, C. G. (1992). Ten recommendations from research for teaching high-risk college students. Journal of Developmental Education, 16(1), 2–10.
Sternberg, R. J. (1987). Most vocabulary is learned from context. In M. G. McKeown & M. E. Curtis (Eds.), The nature of vocabulary acquisition (pp. 89–105). Hillsdale, NJ: Lawrence Erlbaum Associates.
Townsend, D., Bear, D., Templeton, S., & Burton, A. (2016). The implications of adolescents’ academic word knowledge for achievement and instruction. Reading Psychology, 36(8), 1119–1148.
Townsend, D., Filippini, A., Collins, P., & Biancarosa, G. (2012). Evidence for the importance of academic word knowledge for the academic achievement of diverse middle school students. Elementary School Journal, 112(3), 497–518.
Willingham, D., & Price, D. (2009). Theory to practice: Vocabulary instruction in community college developmental education reading classes: What the research tells us. Journal of College Reading and Learning, 40(1), 91–105.
Winne, P. H., & Jamieson-Noel, D. (2002). Exploring students’ calibration of self-reports about study tactics and achievement. Contemporary Educational Psychology, 27(4), 551–572.
Wolsey, T. D., Smetana, L., & Grisham, D. L. (2015). Vocabulary plus technology: An after-reading approach to develop deep word learning. The Reading Teacher, 68(6), 449–458.
Zimmerman, B. J. (2002). Becoming a self-regulated learner: An overview. Theory into Practice, 41(2), 64–70.
8 Comprehension
Jodi Patrick Holschuh, Texas State University
Jodi P. Lampi, Northern Illinois University
In the previous edition of the Handbook (Holschuh & Aultman, 2009), we discussed comprehension strategies in relation to three ideas: (a) strategies should have cognitive, metacognitive, and affective components; (b) strategies for comprehension are influenced by sociocultural and disciplinary elements; and (c) teacher-directed strategies should eventually lead to students’ use of generative strategies (Fiorella & Mayer, 2015; Simpson & Rush, 2003; Wittrock, 1986, 1990, 1992). Generative strategies involve attention, motivation, knowledge and preconceptions, and creation (Wittrock, 1986, 1990, 1992); thus, they are strategies that students can eventually create and employ on their own. We also concur with Alexander (2012) that reading is multidimensional, developmental, and goal-directed. As we reviewed the literature for this revised edition, we noticed that although many of the strategies have remained the same, the theoretical underpinnings for why these strategies are effective have been further developed, and that further empirical research is needed to examine the role new technologies play in comprehension and strategy use. In this chapter, we discuss these advancements, including the role of disciplinary knowledge in learning, the importance of sociocultural influences, and the function of disciplinary conventions in strategy selection.
Theoretical Rationale
Comprehension strategies that lead to the use of generative strategies are multidimensional (Alexander, 2012; Kucer, 2014), and in this chapter, we will explore some of the major elements influencing comprehension: sociocultural, metacognitive, cognitive, disciplinary, and affective influences. Each of these theoretical bases is discussed in the following sections.
Sociocultural Influences
Comprehension is more than a cognitive act; it is socially and culturally constructed (Freebody & Luke, 1990). Sociocultural theory is largely concerned with how an individual’s mental functioning is connected to cultural, institutional, and historical context during comprehension. That is, when students read, they draw knowledge about the meaning of the text and activate schema from different sources or “cue systems” (Winch, Johnston, March, Ljungdahl, & Holliday, 2010). Snow (2002) articulates this theory of comprehension as containing three elements
influenced by the sociocultural context: the reader, the text, and the activity of reading itself. In other words, readers bring their own unique set of skills, understandings, and prior experiences to the task of reading, resulting in multiple meanings, regardless of the comprehension strategy approaches used. This sociocultural framework for comprehension is shaped by the belief that literacy is a social practice (Gee, 2001, 2015). As a result, scholars have come to understand that comprehension necessitates an understanding of concepts, knowledge, phenomena, and language/terminology unique to the sociocultural context under examination (Diller & Oats, 2002; Kapp & Bangeni, 2009; North, 2005). Comprehension is thus an activity with social and cultural origins, meaning that it has to be learned and developed as a cognitive tool (Vygotsky, 1978). Hence, for students to succeed in applying appropriate strategies to support comprehension, they must become aware of the social and cultural practices of each context. Additionally, students’ social and cultural backgrounds influence learning (Graves, 2004), so instruction, ideally, should reflect lived experience. Comprehension, then, happens within the contextualized learning environment a student inhabits (Lea & Street, 1998) and as a socioculturally situated practice (Gee, 2001, 2015). To become effective readers, students need to understand how sociocultural influences impact comprehension, which requires an understanding of the role of metacognitive knowledge and regulation.
Metacognitive Influences
Although basic notions about metacognition date back over a century (e.g., Dewey, 1910; James, 1890; Thorndike, 1917), the term was not directly related to reading comprehension until the late 1970s. At that time, Flavell (1978) defined metacognition as “knowledge that takes as its subject or regulates any aspect of any cognitive endeavor” (p. 8). More recently, research on metacognition has appeared in literature spanning cognitive, developmental, and educational psychology (Hacker, 1998; Hacker, Dunlosky, & Graesser, 2009), and has focused on self-regulated learning, cognitive development, and executive processing (Wolters & Hussain, 2015). Some scholars, such as Muis (2007), describe metacognitive processes as being subordinate to self-regulation, with metacognition as a means of moderating performance. Dinsmore, Alexander, and Loughlin (2008), however, view metacognition as a larger construct, with self-regulated learning nested within it. Although these research lines have led to varying definitions and distinctions of the processes and components of metacognition, we concur with Hacker’s (1998) suggestion that any definition of metacognition includes “knowledge of one’s knowledge, processes, and cognitive and affective states; and the ability to consciously and deliberately monitor and regulate one’s knowledge, processes, and cognitive and affective states” (p. 11). A majority of researchers define metacognition as consisting of two theoretically distinct components: knowledge about cognition and regulation of cognition (Baker & Brown, 1984; Martinez, 2006; Pintrich, 2002; Wolters & Hussain, 2015). The first key aspect of metacognition, knowledge about cognition, concerns what readers know about their cognitive resources and abilities as well as the regulation of these resources (Paris, Lipson, & Wixson, 1983; Park & Sperling, 2012).
Regulation includes the ability to detect errors or contradictions in text, knowledge of different strategies to use with different kinds of texts, and the ability to separate important information from unimportant information. Knowledge about cognition is stable in that learners understand their own cognitive resources (Baker & Brown, 1984), including information about themselves as thinkers. It is also statable in that readers can reflect on their cognitive processes and explain those processes to others. Moreover, knowledge about cognition is domain-specific and can differ depending on the type of material with which students are interacting (Alexander, 2005; Pintrich, 2002; Pressley, Van Etten, Yokoi, Freebern, & Van Meter, 1998). However, an individual’s knowledge of cognition may also be fallible
knowledge that is acquired through experiences with the learning process (Jing, 2006; Ransdell, Barbier, & Niit, 2006; Yang, Potts, & Shanks, 2017). The second key aspect of metacognition is readers’ ability to control or self-regulate their actions during reading. Self-regulation includes planning and monitoring, testing, revising, and evaluating the strategies employed when reading and learning from text (Sperling, Howard, Staley, & DuBois, 2004; Winne, 2014). Metacognition involves the regulation and control of learning or, more specific to this chapter, the regulation and control of the comprehension process while reading and the strategies employed during this process (Schraw & Gutierrez, 2015). Because of its importance, metacognition has become an integral part of models of reading, studying, and learning (see McCombs, 1996; Paris et al., 1983; Pintrich, 2004; Thomas & Rohwer, 1986). In fact, we view metacognition as the foundation of understanding text. Students must be able to judge (i.e., metacomprehension; see Wiley, Griffin, Jaeger, & Jarosz, 2016) whether they understand the information presented in a written text, by the instructor during lecture, or through some other vehicle, as well as the manner in which it was presented. Seminal studies on metacognitive knowledge about memory (Kreutzer, Leonard, Flavell, & Hagen, 1975) and comprehension monitoring (Markman, 1977) initiated several decades of a focus on metacognitive studies with children. However, current research on metacognition has branched out considerably from this predominant focus on children.
Studies with college students include exploration of the differences between students who have a learning disability and students who do not (e.g., Trainin & Swanson, 2005); differences between monolingual and bilingual students (Commander, Ashong, & Zhao, 2016; Ransdell et al., 2006); and measurement issues, using samples of college students enrolled in academic strategies and developmental courses versus other types of courses (Sperling et al., 2004; Taraban, Rynearson, & Kerr, 2000, 2004). Research indicates that there are major differences between the metacognitive abilities of good and poor readers (Baker, 1985; Ozgungor & Guthrie, 2004; Ransby & Swanson, 2003; Schommer & Surber, 1986; Simpson & Nist, 1997). Nowhere is this discrepancy more clearly seen than in college students, who, by the time they enter college, are expected to possess metacognitive skills. Professors have little sympathy for students who say they did poorly because they thought they understood the material but did not, studied the wrong information, or felt ready for a test when they really were not. Moreover, in an environment where researchers estimate that 85 percent of learning comes from independent reading (Nist & Simpson, 2000) and texts are central to learning (Alfassi, 2004), college students who are not metacognitively aware will probably experience academic problems (Baker & Brown, 1984; Kiewra, 2002; Maitland, 2000). In order to help students develop metacognitive awareness, scholars are examining ways to embed and test metacognitive learning and comprehension strategies in various learning settings, such as in the sciences (Connell, Donovan, & Chambers, 2016; Zhao, Wardeska, McGuire, & Cook, 2014), in mathematics (Ghazal, Cokely, & Garcia-Retamero, 2014; Stewart & Hadley, 2014), in social studies (Gish, 2016), and, in general, for learning transfer (Scharff et al., 2017).
Effective use of reading and learning strategies for comprehension implies metacognitive awareness, especially in students’ ability to monitor their own learning (Bransford, Brown, & Cocking, 2000; Gettinger & Seibert, 2002; Pintrich, 2004), which will enable them to achieve more effective outcomes while exhibiting more adaptive behaviors as they perform academic tasks (Kiewra, 2002; Pintrich, 2002; Taylor, Pearson, Garcia, Stahl, & Bauer, 2006; Wolters, 2003).
Cognitive Influences
In addition to having sociocultural influences and a metacognitive component, generative strategies for comprehension also have a cognitive component. In this section, we address the issue
of knowledge and the degree to which one’s knowledge influences comprehension development and strategic learning. Current views of the cognitive component of comprehension focus on the multidimensional, interactive nature of knowledge, taking into consideration factors such as interest, strategies, disciplinary conventions, and task. For example, studies have examined the interactions of knowledge and task (Holschuh, 2014; Simpson & Nist, 1997), knowledge and beliefs (Dahl, Bals, & Turi, 2005; Mason, Scirica, & Salvi, 2006), knowledge and new literacies (Coiro, 2011), and knowledge and strategies (Hynd-Shanahan, Holschuh, & Hubbard, 2004). Of particular importance in this chapter is the interaction between domain and strategy knowledge, which can help researchers better address complex problems, such as how transfer can be achieved (Alexander, 1992), and can help students engage in discipline-specific literacy tasks (Shanahan & Shanahan, 2008).
Model of Domain Learning
Alexander’s (e.g., Alexander, 1997, 2003, 2005, 2012) Model of Domain Learning (MDL) focuses on the developmental nature of comprehension and knowledge over a life span and comprises three stages: acclimation, competence, and proficiency. Alexander (2005) suggests that knowledge, strategy use, and interest are intertwined and interdependently determine the level of expertise of a learner. Thus, an individual’s level of competence is not necessarily age- or grade-dependent (Alexander, Dinsmore, Parkinson, & Winters, 2011). The MDL makes a distinction between topic knowledge and domain knowledge (Alexander, 2003, 2012). Topic knowledge is the amount one knows about a specific topic within a particular domain (e.g., understanding cellular reproduction or photosynthesis); domain knowledge is a broader understanding of a particular field (e.g., how much one knows about biology). For one’s knowledge to progress from acclimation to proficiency, one must develop both topic and domain knowledge. Acclimated learners are at the initial stage in learning about a domain. They exhibit fragmented knowledge about a subject matter, use inappropriate surface-level strategies, and show low levels of intrinsic interest (Alexander, 2005). Acclimated learners may rely mainly on situational interest as motivation for learning. Though they may have topic knowledge about particular areas of the domain, they often have difficulty discriminating between important information and supporting details (Alexander, 2003, 2012). If they become “hooked” by an interesting topic, learn better strategies, or gain knowledge, however, they may become competent (Murphy & Alexander, 2002). When students reach the competence level, they begin to categorize information and to acquire enough domain-specific knowledge to understand that knowledge is interrelated (Alexander, 2005, 2012). Competent readers have a more sophisticated understanding of text.
For example, they can recognize how an author’s word choice can make an argument more cogent (Alexander, 2012). They are less likely to focus on insignificant information than acclimated learners, and their knowledge becomes more cohesive (Alexander, 2003). Their strategies include a combination of deep and surface approaches, and they exhibit a moderate degree of intrinsic motivation (Alexander, 2003). As knowledge, strategy use, and motivation develop, learners may become proficient or expert. At proficient levels of expertise, deep-processing strategies become automatic (Alexander & Jetton, 2000). Learners develop a knowledge base that has both breadth and depth (Alexander, 2003). Because strategy use is effective and efficient, learners can devote more energy to posing questions and investigating problems (Alexander, 2005). Learners exhibit a high degree of intrinsic motivation, and they may even contribute to knowledge production within a particular domain (Alexander, 2003, 2012).
Disciplinary Influences
Alexander’s (2003) MDL is based on the notion that knowledge is domain-specific. That is, knowledge is seen as situational and is studied within a particular context (Alexander, 1996). Because the structures of domains differ, strategies to understand information differ as well (Holschuh, 2014; Shanahan & Shanahan, 2008; Simpson & Nist, 1997; Wineburg & Reisman, 2015). For many decades, there has been an assumption that “reading comprehension is primarily a consequence of the deployment of generic reading strategies, and that when students learn to master such strategies they will be ready for reading in the content areas” (Lee & Spratley, 2010, p. 3). However, such approaches leave students unprepared to face the sophisticated, specialized progression of literacy skills that they are expected to possess; thus, there has been a push to move toward a disciplinary literacy approach (Fang & Coatoam, 2013; Shanahan & Shanahan, 2008). Disciplinary literacy emphasizes the knowledge, abilities, and strategies that are used within a particular discipline (Shanahan & Shanahan, 2012; Wineburg & Reisman, 2015) and assumes that literacy tasks differ based upon the demands, goals, and epistemology of each discipline (Shanahan & Shanahan, 2008). For example, historians read different types of texts than scientists, and they read them in a different manner (Shanahan & Shanahan, 2017). Thus, a major goal of instruction is helping students successfully negotiate literacy demands across disciplines (Shanahan & Shanahan, 2012) and allowing them to experience rigor along with scaffolding and support for learning (McConachie & Apodaca, 2009). Using challenging text may be vital for developing literacy skills, as using easier texts with struggling learners can actually impede comprehension development (Shanahan, Fisher, & Frey, 2012).
Lee (2004) emphasizes using a cultural-modeling framework and real-world language and experiences to build connections in ways that support struggling learners. What makes a text challenging is also discipline-specific (Shanahan et al., 2012). A text in science may be complex due to specialized vocabulary or a need for background scientific knowledge; a text in history may be difficult because of political framing, intricate language, or rhetorical patterns. Helping students understand these differences is one way to scaffold learning (Wineburg & Reisman, 2015). Reading comprehension, then, is a complex cognitive process of meaning making from text and is no longer viewed as operating independently of affective or motivational components (Alexander, 2012; Kintsch, 1998). Readers must be able to connect ideas from text to personal knowledge and experiences, read within and across texts (Alexander, 2014), and “make critical, reasoned judgments after engaging with text” (Li et al., 2016, p. 101). Comprehension involves the interplay of learning approaches, domain and disciplinary knowledge, and strategy use. To become effective learners, students need to understand the role of affective behaviors alongside the cognitive aspects.
Affective Influences
In addition to having metacognitive and cognitive components, generative strategies also have an affective component. In fact, some researchers argue that it is difficult to separate the constructs of cognition and affect because they occur in concert (Fiske & Taylor, 2013). Affective influences have been described as related to self-schemas, or generalized cognitive and affective characterizations individuals ascribe to themselves that are derived from past experiences (Fiske & Taylor, 2013; Ng, 2005; Pintrich & Garcia, 1994). Self-schemas act as a guide to processing self-related information (Johnson, Taasoobshirazi, Clark, Howell, & Breen, 2016; Petersen, Stahlberg, & Dauenheimer, 2000), and generally, an individual strives to achieve positive self-schemas. Academic self-schemas are specifically related to an individual’s thoughts and emotions based on prior academic experiences. Therefore, self-schemas are domain-specific, situation-specific, and context-specific
(Alexander, 1997) in that individuals have varying reactions to different domain areas based on past experiences (Linnenbrink & Pintrich, 2003; Ng, 2005). For example, a student who has experienced high achievement in mathematics courses and low achievement in history may have a more positive self-schema and higher self-efficacy about mathematics. In this sense, affective influences can provide the motivation for self-regulated learning and strategy use “by providing critical feedback to the self about the self’s thoughts, intentions, and behavior” (Tangney, 2003, p. 384). Affective influences are based in part on prior experiences, but they can be influenced by targeted interventions (e.g., Hu & Driscoll, 2013). Although there are many dimensions of the affective component, we will address three major influences on comprehension development that are influenced by instruction: motivation, beliefs about text, and epistemological beliefs.
Motivation
Motivation is “an internal state that arouses, directs, and sustains human behavior” (Glynn, Aultman, & Owens, 2005, p. 150). Motivation and self-perception are associated with persistence and academic achievement in college (Fong et al., 2016). However, there is no clear definition of motivation, largely because of the myriad constructs that have been used to describe it (Schiefele, Schaffner, Möller, & Wigfield, 2012). Paris and Turner (1994) coined the term “situated motivation,” in which motivation is dependent on specific situations and thus is socially constructed (Järvelä, Volet, & Järvenoja, 2010). Situated motivation is based on the framework of self-regulated learning because it involves evaluating, monitoring, and directing one’s learning. Motivation is situated based on personal beliefs, instructors, materials, and tasks. According to this definition, motivation, like metacognition, is unstable and domain-specific because an individual’s goals are not the same in all settings and may vary as a consequence of the learner’s assessment of social influence, expectations, values, goals, and rewards in a particular setting (Järvelä et al., 2010). Thus, it is an appropriate model for college learning, where tasks, expectations, rewards, and goals vary greatly (Turner, 2010). There are four characteristics that influence situated motivation (Paris & Turner, 1994; Turner, 2010). First, choice or intrinsic value plays a role. This is consistent with Ryan and Deci’s (2016) work on self-determination theory, which suggests that situational and individual interests result in increased intrinsic motivation, more focused attention, higher cognitive functioning, and increased persistence. For example, students reported higher intrinsic motivation when given choices for a course assignment (Koh, 2014).
Second, challenge is important because students are not motivated when they experience success at tasks that do not require effort (Glynn et al., 2005; Turner & Meyer, 2004). However, students may experience anxiety when the challenge is too high (Brophy, 2013). A person is more apt to be motivated when challenge is at an optimal level (Nakamura & Csikszentmihalyi, 2009). A third important characteristic is control (Botvinick & Braver, 2015). A majority of the tasks involved in college learning are not under students’ control, nor can teachers grant total freedom or control to their students, but students do have volitional control over the strategies they choose to learn material as well as strategies to regulate their motivation (Wolters, 2003). Finally, collaboration or social interaction with peers affects motivation (Järvelä et al., 2010; Paris & Paris, 2001). Social interaction is motivational because talking to peers can enhance a student’s interests. Also, feedback provided by peers is often more meaningful than the feedback provided by instructors (Wentzel & Ramani, 2016). It is important to note that, despite the emphasis in the literature on the importance of social influences on learning, the vast majority of reading and studying in college is still completed in isolation (Winne, 1995). In response to research on collaborative and sociocultural theories of learning, more emphasis and energy has been aimed toward the establishment of learning communities on college campuses that encourage student motivation, co-regulation, and learning (Glynn et al., 2005; Nieto, 2015).
Jodi Patrick Holschuh and Jodi P. Lampi
College instructors often feel frustrated by their apparent inability to “motivate” students to learn (Brophy, 2013; Hofer, 2002; Svinicki, 1994), particularly when teaching required courses where students are only enrolled to meet general education requirements (Glynn et al., 2005). Examining the relationships among motivation, cognition, strategy use, and self-regulated learning suggests some common conclusions about enhancing students’ motivation. First, students learn best in classrooms that encourage a combination of mastery and performance approaches to learning, an approach that is competency based and utilizes explicit instruction to model learning outcomes (Fong, Acee, & Weinstein, 2016; Linnenbrink & Pintrich, 2002; Rupley, Blair, & Nichols, 2009). Mastery may be facilitated by more frequent, informative, and specific feedback (Hofer, 2002), and can be helpful for tasks such as reading before class (Fong et al., 2016). Performance goals, on the other hand, can be beneficial for tasks such as preparing for exams (Fong et al., 2016). Second, motivation can affect the use of effective learning strategies (Koh, 2014; Turner & Meyer, 2004; Wibrowski, Matthews, & Kitsantas, 2016). Students need to feel that the task is challenging enough to warrant strategy use; furthermore, they will use deeper processing strategies if they have a mastery approach to learning. Third, motivation is unstable and will vary depending on content and context (Linnenbrink & Pintrich, 2002; Murphy & Alexander, 2000). Explicitly discussing the relevance of course content to students’ lives helps them understand the value of course topics, making the content more meaningful and worthwhile (Brophy, 2013; Hofer, 2002). Acee and Weinstein (2010) found that value-reappraisal interventions can help students develop deeper understandings of task value, which has implications for improving motivation.
Finally, although research has indicated that motivation is domain-specific, studies also indicate that the same motivational constructs may be useful in describing, understanding, and influencing motivation in general. Student motivation for learning seems to be based on the factors of goal orientation, use of effective strategies, and self-regulated learning (Fong et al., 2016; McCombs, 1996; Pintrich, 2000). Acknowledging that motivation is multidimensional and is influenced by characteristics of the learner, the instructor, the course, and the task allows us to recognize that there are many pathways to increasing student motivation.
Beliefs about Text

The idea that students bring to a learning situation an array of beliefs about specific concepts or even complete domains is not particularly new. We know that students’ prior knowledge, of which beliefs are a part, influences comprehension at all levels. Some students believe that everything they read in text is truth, and even if they know better, it is somehow difficult not to be drawn into the printed page (Murphy, Holleran, Long, & Zeruth, 2005). How such beliefs influence students’ interactions with text is a topic of interest to researchers and practitioners alike. Several generalizations can be drawn from research on text beliefs. First, epistemological beliefs seem to influence beliefs about text (Hynd-Shanahan et al., 2004; Schommer, 1994a; Wineburg & Reisman, 2015). Second, mature learners approach texts from different disciplines in different ways (Carson, Chase, & Gibson, 1992; Shanahan & Shanahan, 2012). That is, effective learners believe that science text is approached differently than, say, history text (Nist & Holschuh, 2005). Third, even when text is persuasive, it is very difficult to change one’s beliefs (Murphy et al., 2005). Fourth, many students believe that they are passive recipients of information when they read and do not see themselves as active participants in the process of comprehending text (Armstrong & Newman, 2011). Finally, experts and novices have beliefs about text that cause them to respond to and interpret text in different ways (Hynd-Shanahan et al., 2004; Shanahan & Shanahan, 2008; Wineburg & Reisman, 2015).
Comprehension
Wineburg’s (1991) research concerning students’ beliefs about history text suggests that subtexts, or underlying texts, supplement the more explicit meaning of the text. Wineburg asked college history professors and bright college-bound high school seniors to think aloud as they read seven different historical texts, asking both groups to verbalize their thoughts about the content (not the processes). Students rarely saw the subtexts in what they were reading. Wineburg suggests that this inability to understand a writer’s point of view is based on what he calls “an epistemology of text” (p. 510). That is, in order to be able to detect subtexts, students must believe that they actually exist. Hynd-Shanahan et al. (2004) found that students were able to change the way they read as a result of the types of reading assigned and the nature of engaging multiple texts. They attribute this change to a transformation in the purpose for reading history—from fact gathering to making decisions on what to believe about a historical event. Thus, reading multiple texts required students to make sense of the subtexts both within and across texts. Beliefs about text impact text understanding and the approaches that students use to comprehend text information. Moreover, such beliefs seem to spill over into the strategies that students select to learn text information as well as the more general beliefs that students possess about what constitutes knowledge and learning (Hynd-Shanahan et al., 2004).
Epistemological Beliefs

Beliefs about knowledge also play a role in the affective component. These epistemological beliefs are an individual’s set of beliefs about the nature of knowledge (Hofer & Pintrich, 2002) and the process of knowing (Schommer, 1994a, 1994b). Because there is a growing body of research suggesting their influence on comprehension, readers’ goals, and reasoning (Bråten, Strømsø, & Ferguson, 2015; Hofer & Pintrich, 2002; Schommer, 1994b), epistemological beliefs are of current interest to educators. Historically, epistemological beliefs were thought of as a system of complex unidimensional beliefs. Perry (1970) believed that students progressed through fixed stages of development. The college student begins in a naïve position and moves through a series of nine fixed positions on the way to a mature cognitive understanding. In the initial position, called basic dualism, the student views the world in polar terms: right or wrong, good or bad. Right answers exist for every question, and a teacher’s role is to fill students’ minds with those answers. The student then moves through a series of middle positions to a position of multiplicity, in which a student begins to understand that answers may be more opinion than fact and that not all answers can be handed down by authority. From this position, a student may move to a position of relativism. In this position, a student understands that truth is relative and that it depends on the context and the learner. A student who has moved to the position of relativism believes that knowledge is constructed. Schommer builds on Perry’s theory by examining a system of more or less independent, multidimensional epistemological beliefs that may influence students’ performance (Schommer, 1994b; Schommer-Aikins, 2002; Schommer-Aikins & Duell, 2013).
Schommer and others have defined epistemological beliefs about learning as an individual’s beliefs about the certainty of knowledge, the organization of knowledge, and the control of knowledge acquisition (Schoenfeld, 1988; Schommer-Aikins, 2002). Moreover, these beliefs are thought to develop over time and can change depending on content, experience, and task (Schommer-Aikins & Duell, 2013). The way instructors teach also has an impact on student beliefs. Hofer (2004) found that students who held a belief in the simplicity of knowledge struggled when the way an instructor taught implied that knowledge was simple, but the exams indicated that knowledge was complex. There is evidence that epistemological beliefs may also affect the depth to which students learn and the learning strategies they select (Muis et al., 2015; Schommer, 1990; Schreiber & Shinn, 2003; Sinatra, Kienhues, & Hofer, 2014). Students who hold strong beliefs in certain or simple
knowledge tend to use more surface-level strategies, while those holding beliefs in the uncertainty and complexity of knowledge tend to use deep-level strategies for learning (Holschuh, 2000; Schreiber & Shinn, 2003). Research has indicated that students’ epistemological beliefs are most obvious in higher-order thinking because students need to take on multiple perspectives and process information deeply rather than memorize information (Hynd-Shanahan et al., 2004; Schommer & Hutter, 1995). Of current interest to researchers is the issue of domain specificity of epistemological beliefs. Some researchers have found differences in beliefs depending upon domain (Muis, 2008; Muis et al., 2015; Schommer-Aikins, Duell, & Barker, 2013), while others (Buehl, Alexander, & Murphy, 2002; Schommer-Aikins & Duell, 2013) found some evidence of both domain specificity and generality in student epistemological beliefs. However, despite these conflicting results, it appears that academic discipline and domain do impact students’ beliefs about knowledge. Current research suggests that there is a relationship between students’ beliefs and their comprehension of text. With guidance, it appears that students begin to change their own beliefs when they have professors who communicate more sophisticated ways of knowing (Hofer, 2004; Nist & Holschuh, 2005). Afflerbach, Cho, Kim, Crassas, and Doyle (2013) argue that conceptualizations of successful reading should include epistemological beliefs because these beliefs “influence students’ approaches to learning in the classroom, the cognitive skills and strategies that students use, and the stances readers take toward text” (p. 444). Thus, based on our current understanding of both epistemological beliefs and strategy use, one way both can be enhanced is through explicit instruction.
Explicit Instruction

Explicit instruction focuses on teaching in ways that make learning goals and outcomes clear or transparent to learners, and it has its roots in direct instruction. The relevance of direct instruction emerged from the teacher effectiveness research that received attention in the late 1970s and early 1980s (Berliner, 1981; Rosenshine, 1979). Direct instruction is an instructional approach, based on B. F. Skinner’s classical behaviorist models, in which curriculum materials are packaged into programmed models that provide educators with step-by-step, lesson-by-lesson plans (Luke, 2013). This direct instruction approach, often considered a specific type of explicit instruction, follows a scripted, incremental, and highly paced pattern that is meant to build skill acquisition predetermined for students placed in specific achievement groups (Luke, 2013). In fact, the implementation of direct instruction is often controlled and standardized to minimize student misinterpretations and maximize instructional effects (Barbash, 2011; Engelmann & Carnine, 1991). However, Durkin (1978–1979) suggested that this direct and heavily packaged practice did not enable students to determine what specific comprehension skills they actually needed, when to apply them, and how to use them. Thus, the emergence in the early 1970s of cognitive psychology, which emphasized the reading process rather than the product, also contributed to the recognition of the important role explicit instruction plays in the comprehension process. Instead of direct instruction, the term explicit instruction became more prevalent to indicate the shift in instruction toward cognitive goals and outcomes. As a result, educators have realized that when students get 5 out of 10 items correct, it does not necessarily mean that they know only 50 percent of the information.
It means that instruction should consider the kinds of items students are missing and why they are missing them. These ideas continue to penetrate college reading programs and general college classrooms today. Alfassi (2004) suggests, “As students advance in their studies, they need to be able to rely on their ability to independently understand and use information gleaned from text. Text becomes
the major, if not the primary, source of knowledge” (p. 171). Hence, students need to be explicitly taught a repertoire of strategies and receive instruction on how, when, and why they should be employed (Paris, Byrnes, & Paris, 2001; Pintrich, 2002; Pressley, 2000; Simpson & Nist, 2000). This includes modeling and instruction of comprehension strategies that acknowledge new definitions of literacy, including both print and digital text (Schmar-Dobler, 2003). Most students, however, do not receive direct training in the application of comprehension strategies (Cornford, 2002; Langer, 2001; Pressley, Wharton-McDonald, Mistretta-Hampston, & Echevarria, 1998). Pintrich (2002) stated, “In our work with college students we are continually surprised at the number of students who come to college having very little metacognitive knowledge; knowledge about different strategies, different cognitive tasks, and particularly, accurate knowledge about themselves” (p. 223). It is ironic, therefore, that multiple studies indicate that students who receive explicit strategy instruction perform better than students who do not, revealing a disconnect between research and practice (Alfassi, 2004). For example, Falk-Ross (2002) found that students who received explicit instruction in prereading, note-taking, annotating, and summarizing exhibited improved critical thinking, increased comprehension, and more effective contributions to classroom discourse. In addition, Friend (2001) found that students taught to write summaries using explicit instruction with explanation, modeling, and guided practice were more successful in learning to write summaries than students who did not receive explicit instruction. It appears, then, that explicit instruction can do more than just improve recall of information; it can show students ways to enhance their own knowledge.
Shanahan and Shanahan (2008) argued that one reason students might find it difficult to transfer knowledge from one task to another is that most students require explicit instruction on advanced genres, specialized discourse, and/or disciplinary or knowledge-building processes, particularly strategies that assist in comprehension of these activities. Some scholars have even suggested that instructors might not provide explicit instruction because they often learned to read and write in their respective fields through slow observation and apprenticeship, not through explicit instruction (Carter, 2007; Russell, 1991). Therefore, several researchers suggest that explicit strategy training should include three components (Paris & Paris, 2001). First, students should become familiar with a definition or description of the strategy (Duffy & Roehler, 1982). The researchers believe that it is important to give a concrete and complete explanation of the strategy at the onset of training because students will be more likely to use the strategies effectively if they understand what the strategies are and why they work (Paris & Paris, 2001; Simpson & Nist, 2000). Second, an explanation of why the strategy should be learned must be addressed because providing this explanation is important for facilitating students’ self-control of the strategy (Boekaerts & Corno, 2005; Paris & Paris, 2001). For instance, when asked to construct an argument across texts, students were more successful when given an explanation of the expectations (Linderholm, Therriault, & Kwon, 2014). Moreover, students will apply the strategy more effectively if they understand why it is important (Paris & Paris, 2001; Simpson & Nist, 2000).
Third, providing instruction on how to use a strategy, including teacher modeling, explicit instruction, or guided practice, as well as observational and participatory learning with peers, will help facilitate learning (Boekaerts & Corno, 2005; Paris & Paris, 2001; Pearson & Gallagher, 1983; Simpson & Nist, 2000). One model of explicit instruction includes the following interrelated steps:

1 Modeling the Process. The instructor must show the “how” of learning. Instructors think aloud, showing students how a mature learner thinks through an idea or solves a problem. Modeling the strategy should be done through concrete examples and explicit verbal elaboration. Teacher modeling of strategy and self-regulated use of the strategy are what constitutes good
instruction (Lapp, Fisher, & Grant, 2008; Pearson & Gallagher, 1983; Pintrich, 2002; Pressley, Graham, & Harris, 2006; Taraban et al., 2004).

2 Providing Examples. During this phase, the instructor shows students examples of how the strategy has been used in a variety of contexts. Providing examples of the strategy helps students understand how the strategy works (Alfassi, 2004).

3 Practicing Strategy Use. Strategy practice should be guided at first, where students repeat the instructor’s strategy using new situations or problems. Instructors should be available to help students and to provide feedback. Eventually, students should practice independently outside the classroom (Alfassi, 2004; Pressley et al., 2006).

4 Evaluating Strategy Use. Evaluation that includes both teacher-provided feedback and self-monitoring techniques will help students become independent learners (Alfassi, 2004; Paris & Paris, 2001). In addition, students need to become familiar with the appropriate circumstances for strategy use.

A second model of explicit instruction is the “cognitive apprenticeship” method (Boekaerts & Corno, 2005; Brown, Collins, & Duguid, 1989; Clark & Graves, 2005). In this model, the instructor (a) models the strategy in an authentic activity, (b) supports the students doing the task through scaffolding, (c) allows the students to articulate their knowledge and monitor the effectiveness of the strategy on their learning, and (d) gradually fades or withdraws support as students become proficient. With each of these models for explicit instruction, the responsibility for learning shifts from the instructor to the student. It is once students become responsible for their own learning that transfer of strategic knowledge occurs.
In one study, Simpson and Rush (2003) found that after being taught various strategies, including problem-solving, note-taking, test preparation, planning and goal setting, and reviewing and rehearsal strategies, students tended to transfer strategies related to planning and distributing study time. Every college reading instructor strives to get students to the point of transfer, but this is a difficult goal to accomplish. Research on strategy instruction offers substantial evidence that students, especially learners in developmental education courses, need direct instruction on strategy selection and use. The next section discusses comprehension strategies that can be used to help students on the road to becoming self-regulated learners and to be able to transfer information to new learning situations.
Comprehension Strategies

In this section, we discuss teacher-directed comprehension strategies that lead to generative use. Because reading is a multidimensional, developmental process (Alexander, 2012), students need strategies that do more than develop one skill at a time, and they need strategies that are taught beyond their procedural aspects. This means readers do more than identify a main idea or detect an inference when they read. Therefore, the comprehension strategies students learn need to be more advanced as well. The strategies we present have metacognitive, cognitive, and affective components. All the strategies require purposeful effort, and students generate meaning by building relations between the text and what they already know. Thus, the mind is not passive while reading; rather, it is intentionally organizing, isolating, and elaborating on key information (Hadwin & Winne, 1996; Wittrock, 1990). We also focus on strategies that are flexible. Strategies should be flexible in order to be utilized in a variety of contexts and must eventually be self-selected by the learner to attain a specific goal (Simpson & Nist, 2000; Weinstein, 1994). Effective comprehension strategies should allow students to actively interact with, elaborate on, and rehearse the text information in order to retain it for
later use (Nist & Simpson, 1994). In addition, strategy selection necessitates a deliberate decision and effort by the learner (Hadwin & Winne, 1996; Paris, Lipson, & Wixson, 1983; Winne, 2013). However, before many students are able to self-select appropriate learning strategies, they need a good deal of direct instruction and scaffolding, and they need to practice the strategies with different disciplines both within and across genres of text. The ultimate goal is for students to use the strategy or modifications of a strategy without guidance from the instructor. We need to make one final comment about strategy use. The results of research examining the efficacy of strategy use have not been consistent (Donker, de Boer, Kostons, Dignath-van Ewijk, & van der Werf, 2014; Hadwin & Winne, 1996; Winne, 2013). Many studies did not allow students to self-select strategies; instead, the studies focused on comparing one strategy with another in a “horse race”—the best strategy wins in the end. Another reason for inconsistent results is that the studies often did not reflect normal reading/studying conditions, imposing time constraints or employing extremely short, easy passages (Wade & Trathen, 1989). In addition, some comprehension strategies are difficult to observe and thus may not be easily captured in research studies (Donker et al., 2014; Winne, 2013). Finally, many strategies taught in college reading classes do not have support in research. Instead, these strategies are found in content-reading texts or other “methods” resources (Winne, 2013). For these reasons, we concentrate on the underlying processes of strategy use, and we offer a suggestion of an established, research-based strategy that embodies those processes. The underlying processes we discuss are effective across domains; however, the way specific strategies are used depends on disciplinary demands. Where possible, we cite research that has been conducted with high school and college students.
We narrowed our focus to strategies that met four basic criteria. First, the strategies had to possess metacognitive, cognitive, and affective components. Second, they had to be strategies that can be scaffolded through instruction. Third, all strategies must be able to be adapted across disciplines. Fourth, they must permit students to self-test on the information, whether individually or cooperatively. Too often, the first indication of gaps in comprehension is a low test score. Self-testing allows students to determine whether or not they are comprehending information so that they can modify their strategies if necessary before formal assessment (Weinstein, 1994). In other words, they must be strategies that students can eventually generate themselves and strategies that allow students to check their knowledge and comprehension. We discuss a research-based strategy within each of the processes of organizing information, isolating key ideas, and elaborating on information, as examples of the ways these underlying processes can be used in strategy instruction.
Organizing Strategies

The purpose of organizing strategies is to build and activate students’ background knowledge, cue awareness of the quality and quantity of that knowledge, and focus attention before reading. Many types of organizing activities are presented in content area or developmental reading texts, but only some of these strategies are generative in nature and have cognitive, metacognitive, and affective elements. For example, early work in organizing strategies focused on advance organizers (Ausubel, 1963, 1968), which would not be considered generative. However, teaching students how to create graphic organizers and concept maps and how to preview texts would be generative because students would be able to eventually use these strategies on their own.
Concept Mapping

One popular organizing strategy is concept mapping. Concept maps allow students to create a visual representation of information (Hay, 2007). Much of the current research focuses on the use of
technology to create concept maps (e.g., Cheung, 2006; Perry & Winne, 2006) or how to embed concept maps into online learning (Hwang, Yang, & Wang, 2013). For example, Perry and Winne (2006) include concept mapping in their gStudy software as a means to promote self-regulated learning from text. Maps can look like flow charts, depicting a hierarchy or linear relationship, or they can be created in such a way as to represent complex interrelationships among ideas. Mapping helps students link concepts together and also enhances their metacognitive awareness of their comprehension of text information (Nesbit & Adesope, 2006). Mapping has been shown to facilitate learning in many content areas because this strategy helps students organize information, relate it to their prior knowledge, and elaborate on the relationships between ideas by providing personal examples (Lipson, 1995). Lipson (1995) describes mapping in the following manner: first, students identify key concepts; next, they identify supporting concepts; finally, they identify relationships between the key and supporting concepts. One of the benefits of concept mapping is that it helps students identify relationships among ideas (Lipson, 1995). In addition, concept mapping can help students process information at deeper levels. It can also serve as an elaborative strategy when students use maps for retrieval of information (Blunt & Karpicke, 2014). However, students must have fairly well-honed metacognitive skills in order to organize the relationships between and among ideas. Research has indicated that students with low content knowledge may feel insecure about concept mapping (Hadwin & Winne, 1996). Therefore, in order for graphic organizers to be generative, instructors need to provide students with a great deal of direct instruction, practice, and feedback initially.
Then, instruction will need to be scaffolded as students grapple with lengthier texts, become familiar with various organizational patterns, and detect key concepts and their interrelationships.
Isolating Key Information

In addition to organizing, students must also be able to isolate key information. The purpose of isolating key information is to reduce the amount of information that a student must remember. Thus, teaching students to isolate is both crucial and difficult because the inability to identify important information can lead to academic frustration and failure. Research has indicated that many students encounter difficulty in isolating important material (Anderson & Armbruster, 1984; Nist & Simpson, 2000) and that the type of information that counts as important differs by discipline (Shanahan & Shanahan, 2008). Some of the most widely used strategies for isolating key ideas are text-marking strategies. As students read and mark, they isolate and concentrate on the information at the time of reading, thereby engaging in deeper processing of the information (Nist & Hogrebe, 1987). However, many students come to college without appropriate text-marking strategies and will not be able to effectively use these strategies without explicit training (Nist & Simpson, 2000). Underlining and highlighting are popular methods of isolating information, but they do not meet Wittrock’s (1990) definition of generative learning because they do not require students to organize, transform, or elaborate on the material. Instead, learners would be better off using more generative strategies, such as annotation.
Annotation

Annotation is an effective, generative text-marking strategy. Annotating text includes the following components: (a) writing brief summaries in the text margins in the students’ own words, (b) enumerating multiple ideas (e.g., cause-and-effect relations, characteristics), (c) noting examples in the margins, (d) putting information on graphs and charts if appropriate, (e) marking possible test questions, (f) noting confusing ideas with a question mark in the margins, and (g) selectively underlining key words or phrases (Simpson & Nist, 1990). Students are not only responsible for
pulling out the main points of the text but also the other key information (e.g., examples and details) that they will need to rehearse for exams. In this way, annotation goes beyond the process of isolation. Students are actually transforming information by changing or personalizing it in some way. Much of the current research focuses on creating or using software programs for text annotation or annotating in online environments (e.g., Chen & Yen, 2013; Erçetin, 2011; Perry & Winne, 2006; Wentling, Park, & Peiper, 2007). Wentling et al. (2007) found that students who used annotation software scored higher than students who did not on each of three exams. Chen and Yen (2013) found that hypertext annotations aided reading comprehension. This line of research offers a promising glimpse into the future of annotation as more course materials are offered online. The benefits of annotation are numerous. First, students are actively reading and monitoring their understanding. When students encounter information that they cannot put into their own words, they know that they do not comprehend the information. Second, students using annotation are actively constructing ideas and making connections to what they know (Simpson & Nist, 1990). In this way, the strategy is flexible and should facilitate deeper processing (Anderson & Armbruster, 1984) and metacognitive awareness. Third, annotation can be motivating for students because they are approaching the text with a purpose (Nist-Olejnik & Holschuh, 2013). Fourth, annotating helps students organize the information so that they can see links between the main points and supporting details. But annotation does have drawbacks. One possible drawback is that its usefulness depends on the depth of processing. If students are simply copying the text verbatim, then there will not be much benefit (Anderson & Armbruster, 1984; Liu, 2006). 
For deeper processing and comprehension, students must annotate in their own words (Simpson & Nist, 1990; Strode, 1991). Another drawback, especially from students’ perspectives, is that it takes longer to read and interact with texts. This may be especially troublesome for learners in developmental education courses who may already read laboriously. Finally, as previously mentioned, annotation instruction also takes a good deal of time. Research has indicated that mastering this strategy may necessitate more than one semester of instruction and practice (Holschuh, 1995; Mealey & Frazier, 1992).
Elaborating
Although organizing and isolating key information are important elements in being academically successful, students also need to know and use elaborative strategies. Of the three strategic processes, elaboration is the final step. In other words, students cannot elaborate on information without first organizing and isolating key information in some way. College students are often in learning situations where they are required to synthesize and analyze information, situations where rote memorization strategies will not suffice (Karpicke, 2009; Nist & Simpson, 2000; Pressley, Ghatala, Woloshyn, & Pirie, 1990). Moreover, tasks that require elaboration of information across texts, including electronic sources and websites, are frequently assigned in college courses (Dornisch & Sperling, 2006; Hagen, Braasch, & Bråten, 2014) and often cause frustration (Simpson, 1994; Simpson, Stahl, & Francis, 2004). Elaborative strategies allow students to relate new information to what they already know (Donker et al., 2014; Ozgungor & Guthrie, 2004; Wittrock, 1986). When students elaborate, they add information that is not explicit in the text they are studying (Hamilton, 1997; Ozgungor & Guthrie, 2004; Simpson, Olejnik, Tam, & Supattathum, 1994). The use of elaborative strategies often distinguishes successful learners from unsuccessful learners (Willoughby, Wood, & Kraftcheck, 2003). Some research has found that students taking online courses are more apt to use elaborative strategies (Broadbent, 2017) and that these strategies may be more effective than
Jodi Patrick Holschuh and Jodi P. Lampi
concept mapping alone (Karpicke & Blunt, 2011). There are many different strategies students can use to go about the difficult task of elaboration. These comprehension strategies include elaborative interrogation and elaborative verbal rehearsal.
Elaborative Verbal Rehearsals
Elaborative Verbal Rehearsal (EVR), or a talk-through, is a strategy that provides an important means of monitoring understanding of text (Nist & Diehl, 1998). This strategy has been shown to have an impact on students’ exam performance (Simpson et al., 1994). When students use this strategy, they are rehearsing aloud the important information as if they were teaching it to an audience (Nist & Simpson, 2000; Simpson, 1994). A good talk-through consists of the following processes: (a) relating ideas across text and to prior knowledge, (b) incorporating personal reactions or opinions about the ideas, (c) summarizing key ideas in students’ own words, and (d) including appropriate text examples (Simpson, 1994). Research has indicated that the quality of the talk-through plays a major role in its effectiveness, so students will need explicit instruction on how to conduct an effective EVR (Simpson et al., 1994). This instruction should include modeling a good example, explaining the rationale for strategy use, and providing feedback on students’ use of the strategy (Simpson et al., 1994). EVRs are metacognitive because they help students distinguish what they know from what they do not know. EVR can also be a good way to incorporate self-testing. Self-testing has many purposes as a way of elaborating on information, from prompting the retrieval of prior knowledge and focusing attention to checking comprehension of information and predicting possible test items. Self-testing has been shown to improve comprehension and performance on exams (Gettinger & Seibert, 2002; King, 1992; Taraban et al., 2004; Tierney & Cunningham, 1984). The ability to respond to questions that tap into higher-level thinking is beneficial to deeper comprehension (Graesser & Olde, 2003) as opposed to factual questions that encourage only shallow processing. One of the drawbacks to EVR is that students must have a good understanding of the information before it can be used.
Simpson (1994) suggests that this strategy be used only by students who can decode and comprehend the text material. One of the benefits of this strategy is that it facilitates students’ active and elaborative comprehension of text information. Although the comprehension strategies described here are flexible, enabling the learner to utilize them in various contexts, they should also be taught in a manner that emphasizes the underlying processes of each strategy and the conventions of multiple disciplines. This requires explicit instruction and extensive strategy practice before students will be able to self-select strategies to use within different disciplines and across texts.
Conclusions
Because we know that studying is usually an isolated activity (Thomas & Rohwer, 1986), we need to teach college students, particularly students in developmental education, generative strategies that they can use independently. Unless college students can move beyond teacher dependence and apply strategies on their own, they will have a difficult time being academically successful in college. One of the most important elements of developing comprehension strategies is to make students aware of how to select task-appropriate strategies. This can be accomplished by modeling the strategy in a variety of contexts and through discussions with peer groups. Students should be encouraged to modify strategies in such a way that they have “ownership” in the strategy. They should also be taught using authentic texts from multiple disciplines to help them understand the disciplinary nature of strategy use. In addition to providing scaffolding on strategy use so that
students can eventually use the strategy on their own, instructors should be sure that the strategy possesses metacognitive, cognitive, and affective elements. Research on comprehension strategies and their use has progressed dramatically in the past few decades from a focus on specific strategies to more of a focus on the processes involved in strategy use. Thus, it is perhaps safe to conclude that it is not so much the strategy itself that makes the difference but the processes that underlie that strategy, and that often, disciplinary conventions will guide those processes.
Implications
In this chapter, we have taken the stance that every strategy presented to students should have the potential to become generative in nature. For example, when teaching organizing strategies, instructors may begin by introducing the idea of graphic overviews or teacher-directed prereading activities. But instruction does not stop here. Instructors need to scaffold instruction to the point where students can create concept maps or engage in a variety of useful prereading activities independently as a way of generatively processing information and creating meaning by building relationships between parts of the text (Wittrock, 1990). Because the field seems to be moving in the direction of a better understanding of the processes that underlie effective strategy use, it is important for instructors to explain these processes explicitly to students so that they can make decisions about which strategies might meet their needs. Rather than solely teaching procedural aspects of strategies, teaching students about the processes that underlie strategy use seems to be more worthwhile. This means that students need to understand the declarative and conditional knowledge about a strategy in addition to the procedural knowledge. Students need to know a wide repertoire of strategies, understand why those strategies work, and have an understanding of when to select the strategies for a given discipline. We believe that students need to understand how tasks may be discipline-specific because of the level of thinking required by a particular domain. This means that creating an argument may not be the same task in biology as it is in history, nor may it require the same strategies. Additionally, the strategies students select will be based on both the domain and the task. We have also suggested that the ability to transfer strategies to new situations takes time.
This is partially because many students are at the point of acclimation each time they encounter a new domain. The specific comprehension strategies discussed in this chapter rely on initial teacher instruction and direction but then allow students to modify the strategies for any number of situations. We have found that it is helpful for students to have ample opportunity to try out each of the strategies and time to discuss the modifications they made with other students. In addition, we believe that knowledge about affect is important in strategy use. Students must possess not only the skill but the will to use a variety of strategies (Weinstein, 1997). Successful strategy use depends on students’ understanding of how their affective behaviors impact learning. Because they need to have the will to make deliberate choices about which strategies to use and follow through with their use as they learn, study, and prepare for exams, we believe it is important that students learn about their own affective stances as they learn comprehension strategies. Finally, in order to stay on top of the tasks students encounter in their courses, we believe that instructors need to take note of the increasing role of technology in current college courses in order to help students learn strategies to organize, isolate, and elaborate this information as well.
Recommendations for Future Research
Our review of the theory and research related to comprehension strategies at the college level points to several future directions. First, because of the factors that impact learning, research needs to
further examine the roles of discipline, context, and task on strategy selection and usage. The field is also in need of research examining the move toward integrating developmental reading and writing instruction. For example, how can educators incorporate comprehension strategies that support both? Research conducted across disciplines and across various groups of learners will add to the literature on comprehension development and strategy use. Second, more long-term studies need to be conducted that focus on a variety of questions. Just how much scaffolding is necessary in order for students to be able to successfully use strategies on their own? How do disciplinary considerations impact a student’s ability to generate strategies? How long and what kind of instruction leads to transfer of strategic knowledge? How can we get students to “buy into” strategy use? Because much of the research on specific strategies, especially those that have traditionally been termed “teacher-directed,” has been short in duration, there are many questions left to answer. Third, examining the role of technology and the Internet in college courses can help researchers gain greater insight into students’ learning tasks. Accessing appropriate information, evaluating the information, and synthesizing the information from various print and digital texts can be daunting for students. Several studies with an eye toward developing new methods for strategy use are pioneering new and fertile grounds for research. Additionally, there has been some interesting initial work on comprehension of online versus traditional text that could lead to some meaningful implications for teaching comprehension strategies. More research is needed in these areas, especially with students enrolled in developmental literacy courses. Finally, the dynamic and complex nature of the development and use of comprehension strategies calls for more research that ties together the cognitive and affective components. 
Research that investigates how the affective component can be used to engage students in strategy selection and use is needed. As previously mentioned, students must not only have the skill but also the will to engage in strategy use. The more college reading and learning professionals understand about the underlying processes and factors that impact learning and strategy use, the greater the opportunity students have to generate and use appropriate strategies as independent learners.
References and Suggested Readings(*) Acee, T. W., & Weinstein, C. E. (2010). Effects of a value-reappraisal intervention on statistics students’ motivation and performance. The Journal of Experimental Education, 78(4), 487–512. Afflerbach, P., Cho, B. Y., Kim, J. Y., Crassas, M. E., & Doyle, B. (2013). Reading: What else matters besides strategies and skills? The Reading Teacher, 66(6), 440–448. *Alexander, P. A. (1992). Domain knowledge: Evolving themes and emerging concerns. Educational Psychologist, 27(1), 33–51. *Alexander, P. A. (1996). The past, present, and future of knowledge research: A reexamination of the role of knowledge in learning and instruction. Educational Psychologist, 31(2), 89–92. Alexander, P. A. (1997). Knowledge-seeking and self-schema: A case for the motivational dimensions of exposition. Educational Psychologist, 32(2), 83–94. *Alexander, P. A. (2003). The development of expertise: The journey from acclimation to proficiency. Educational Researcher, 32(8), 10–14. *Alexander, P. A. (2005). The path to competence: A lifespan developmental perspective on reading. Journal of Literacy Research, 37(4), 413–436. *Alexander, P. A. (2012). Reading into the future: Competence for the 21st century. Educational Psychologist, 47(4), 259–280. Alexander, P. A. (2014). Thinking critically and analytically about critical-analytic thinking: An introduction. Educational Psychology Review, 26(4), 469–476. Alexander, P. A., Dinsmore, D. L., Parkinson, M. M., & Winters, F. I. (2011). Self-regulated learning in academic domains. In B. J. Zimmerman & D. H. Schunk (Eds.), Handbook of self-regulation of learning and performance (pp. 393–407). New York, NY: Routledge.
Alexander, P. A., & Jetton, T. L. (2000). Learning from text: A multidimensional and developmental perspective. In M. L. Kamil, P. B. Mosenthal, P. D. Pearson, & R. Barr (Eds.), Handbook of reading research III (pp. 285–310). Mahwah, NJ: Lawrence Erlbaum. Alfassi, M. (2004). Reading to learn: Effects of combined strategy instruction on high school students. The Journal of Educational Research, 97(4), 171–184. *Anderson, T. H., & Armbruster, B. B. (1984). Studying. In P. D. Pearson (Ed.), Handbook of reading research (pp. 657–679). New York, NY: Longman. Armstrong, S. L., & Newman, M. (2011). Teaching textual conversations: Intertextuality in the college reading classroom. Journal of College Reading and Learning, 41(2), 6–21. Ausubel, D. P. (1963). The psychology of meaningful verbal learning. New York, NY: Grune & Stratton. Ausubel, D. P. (1968). Educational psychology: A cognitive view. New York, NY: Holt, Rinehart, & Winston. *Baker, L. (1985). Differences in the standards used by college students to evaluate their comprehension of expository prose. Reading Research Quarterly, 20(3), 297–313. *Baker, L., & Brown, A. L. (1984). Metacognitive skills and reading. In P. D. Pearson (Ed.), Handbook of reading research (pp. 353–394). New York, NY: Longman. Barbash, S. (2011). Clear teaching: With direct instruction, Siegfried Engelmann discovered a better way of teaching. Arlington, VA: Education Consumers Foundation. Berliner, D. C. (1981). Academic learning time and reading achievement. In J. T. Guthrie (Ed.), Comprehension and teaching: Research reviews (pp. 203–226). Newark, DE: International Reading Association. Blunt, J. R., & Karpicke, J. D. (2014). Learning with retrieval-based concept mapping. Journal of Educational Psychology, 106(3), 849. Boekaerts, M., & Corno, L. (2005). Self-regulation in the classroom: A perspective on assessment and intervention. Applied Psychology: An International Review, 54(2), 199–231. Botvinick, M., & Braver, T. (2015).
Motivation and cognitive control: From behavior to neural mechanism. Annual Review of Psychology, 66, 83–113. Bransford, J. D., Brown, A. L., & Cocking, R. R. (2000). How people learn: Brain, mind, experience, and school. Washington, DC: National Academy Press. Bråten, I., Strømsø, H. I., & Ferguson, L. E. (2015). The role of epistemic beliefs in the comprehension of single and multiple texts. In B. K. Hofer & P. R. Pintrich (Eds.), Handbook of individual differences in reading: Reader, text, and context (pp. 261–275). Mahwah, NJ: Erlbaum. Broadbent, J. (2017). Comparing online and blended learner’s self-regulated learning strategies and academic performance. The Internet and Higher Education, 33, 24–32. Brophy, J. E. (2013). Motivating students to learn. New York, NY: Routledge. *Brown, J. S., Collins, A., & Duguid, P. (1989). Situated cognition and the culture of learning. Educational Researcher, 18(1), 32–42. Buehl, M. M., Alexander, P. A., & Murphy, P. K. (2002). Beliefs about schooled knowledge: Domain specific or domain general? Contemporary Educational Psychology, 27(3), 415–449. Carson, J. G., Chase, N. D., & Gibson, S. U. (1992). Literacy analyses of high school and university courses: Summary descriptions of selected courses. Atlanta, GA: Center for the Study of Adult Literacy, Georgia State University. Carter, M. (2007). Ways of knowing, doing, and writing in the disciplines. College Composition and Communication, 58(3), 385–418. Chen, I. J., & Yen, J. C. (2013). Hypertext annotation: Effects of presentation formats and learner proficiency on reading comprehension and vocabulary learning in foreign languages. Computers & Education, 63, 416–423. Cheung, L. S. (2006). A constructivist approach to designing computer supported concept-mapping environment. International Journal of Instructional Media, 33(2), 153–173. Clark, K. F., & Graves, M. F. (2005). Scaffolding students’ comprehension of text. Reading Teacher, 58(6), 570–580. Coiro, J. (2011).
Predicting reading comprehension on the Internet: Contributions of offline reading skills, online reading skills, and prior knowledge. Journal of Literacy Research, 43(4), 352–392. Commander, N. E., Ashong, C., & Zhao, Y. (2016). Metacognitive awareness of reading strategies by undergraduate U.S. and Chinese students. Journal of College Literacy and Learning, 42, 40–54. Connell, G. L., Donovan, D. A., & Chambers, T. G. (2016). Increasing the use of student-centered pedagogies from moderate to high improves student learning and attitudes about biology. CBE – Life Sciences Education, 15(1), 1–15. Cornford, I. R. (2002). Learning to learn strategies as a basis for effective lifelong learning. International Journal of Lifelong Learning, 21(4), 357–368.
Dahl, T., Bals, M., & Turi, A. L. (2005). Are students’ beliefs about knowledge and learning associated with their reported use of learning strategies? British Journal of Educational Psychology, 75(2), 257–273. Dewey, J. (1910). How we think. Lexington, MA: D. C. Heath. Diller, C., & Oates, S. (2002). Infusing disciplinary rhetoric into liberal education: A cautionary tale. Rhetoric Review, 21(1), 53–61. Dinsmore, D. L., Alexander, P. A., & Loughlin, S. M. (2008). Focusing the conceptual lens on metacognition, self-regulation, and self-regulated learning. Educational Psychology Review, 20(4), 391–409. Donker, A. S., de Boer, H., Kostons, D., Dignath-van Ewijk, C. C., & van der Werf, M. P. C. (2014). Effectiveness of learning strategy instruction on academic performance: A meta-analysis. Educational Research Review, 11, 1–26. *Dornisch, M. M., & Sperling, R. A. (2006). Facilitating learning from technology-enhanced text: Effects of prompted elaborative interrogation. The Journal of Educational Research, 99(3), 156–165. Duffy, G., & Roehler, L. (1982). Instruction as sense-making: Implications for teacher education. Action in Teacher Education, 4(1), 1–7. Durkin, D. (1978–1979). What classroom observations reveal about reading comprehension instruction. Reading Research Quarterly, 14(4), 481–533. Engelmann, S., & Carnine, D. (1991). Theory of instruction: Principles and applications. Eugene, OR: ADI Press. Erçetin, G. (2010). Effects of topic interest and prior knowledge on text recall and annotation use in reading a hypermedia text in the L2. ReCALL, 22(2), 228–246. Falk-Ross, F. C. (2002). Toward a new literacy: Changes in college students’ reading comprehension strategies following reading/writing projects. Journal of Adolescent & Adult Literacy, 45(4), 278–288. Fang, Z., & Coatoam, S. (2013). Disciplinary literacy: What you want to know about it. Journal of Adolescent & Adult Literacy, 56(8), 627–632. Fiorella, L., & Mayer, R. E. (2015).
Eight ways to promote generative learning. Educational Psychology Review, 28(4), 717–741. Fiske, S. T., & Taylor, S. E. (2013). Social cognition: From brains to culture. Los Angeles, CA: Sage. Flavell, J. H. (1978). Metacognitive development. In J. M. Scandura & C. J. Brainerd (Eds.), Structural/process theories of complex human behavior (pp. 213–245). Alphen aan den Rijn, The Netherlands: Sijthoff and Noordhoff. Fong, C. J., Acee, T. W., & Weinstein, C. E. (2016). A person-centered investigation of achievement motivation goals and correlates of community college student achievement and persistence. Journal of College Student Retention: Research, Theory & Practice, 18, 257–264. Fong, C. J., Davis, C. W., Kim, Y., Kim, Y. W., Marriott, L., & Kim, S. (2016). Psychosocial factors and community college success: A meta-analytic investigation. Review of Educational Research, 87(2), 388–424. doi:10.3102/0034654316653479. Freebody, P., & Luke, A. (1990). ‘Literacies’ programs: Debates and demands. Prospect: Australian Journal of TESOL, 5(7), 7–16. Friend, R. (2001). Teaching summarization as a content area reading strategy. Journal of Adolescent & Adult Literacy, 44(4), 320–329. Gee, J. P. (2001). Reading as situated language: A sociocognitive perspective. Journal of Adolescent & Adult Literacy, 44(8), 714–725. Gee, J. P. (2015). Social linguistics and literacies: Ideology in discourse. New York, NY: Routledge. Gettinger, M., & Seibert, J. K. (2002). Contributions of study skills to academic competence. School Psychology Review, 31(3), 350–365. Ghazal, S., Cokely, E. T., & Garcia-Retamero, R. G. (2014). Predicting biases in very highly educated samples: Numeracy and metacognition. Judgment and Decision Making, 9(1), 15–34. Gish, H. (2016). Beyond the lecture: Using texts to engage students in the secondary social studies classroom. Academic Excellence Showcase Schedule, 274. Retrieved from www.digitalcommons.wou.edu/aes_event/2016/all/274 Glynn, S. M., Aultman, L.
P., & Owens, A. M. (2005). Motivation to learn in general education programs. The Journal of General Education, 54(2), 150–170. Graesser, A. C., & Olde, B. A. (2003). How does one know whether a person understands a device? The quality of the questions the person asks when the device breaks down. Journal of Educational Psychology, 95(3), 524–536. Graves, M. F. (2004). Theories and constructs that have made a significant difference in adolescent literacy, but have the potential to produce still more positive benefits. In T. L. Jetton & J. A. Dole (Eds.), Adolescent literacy research and practice (pp. 433–452). New York, NY: Guilford.
Hacker, D. J. (1998). Definitions and empirical foundations. In D. J. Hacker, J. Dunlosky, & A. C. Graesser (Eds.), Metacognition in educational theory and practice (pp. 1–23). Mahwah, NJ: Erlbaum. Hacker, D. J., Dunlosky, J., & Graesser, A. C. (2009). A growing sense of “agency.” In D. J. Hacker, J. Dunlosky, & A. C. Graesser (Eds.), Handbook of metacognition in education (pp. 1–4). New York, NY: Routledge. *Hadwin, A. F., & Winne, P. H. (1996). Study strategies have meager support. Journal of Higher Education, 67(6), 692–715. Hagen, Å. M., Braasch, J. L., & Bråten, I. (2014). Relationships between spontaneous note-taking, self-reported strategies and comprehension when reading multiple texts in different task conditions. Journal of Research in Reading, 37(S1), 141–157. Hamilton, R. J. (1997). Effects of three types of elaboration on learning concepts from text. Contemporary Educational Psychology, 22(3), 229–318. Hay, D. B. (2007). Using concept maps to measure deep, surface, and non-learning outcomes. Studies in Higher Education, 32(1), 39–57. Hofer, B. (2002). Motivation in the college classroom. In W. J. McKeachie (Ed.), McKeachie’s teaching tips: Strategies, research and theory for college and university teachers (pp. 118–127). Belmont, CA: Wadsworth. *Hofer, B. K. (2004). Exploring the dimensions of personal epistemology in differing classroom contexts: Student interpretations during the first year of college. Contemporary Educational Psychology, 29(2), 129–163. *Hofer, B. K., & Pintrich, P. R. (2002). Personal epistemology: The psychology of beliefs about knowledge and knowing. Mahwah, NJ: Erlbaum. Holschuh, J. P. (1995, November). The effect of feedback on annotation quality and test performance. Paper presented at the annual meeting of the College Reading Association, Clearwater, FL. Holschuh, J. P. (2000). Do as I say, not as I do: High, average, and low performing students’ strategy use in biology. Journal of College Reading and Learning, 31(1), 94–107.
Holschuh, J. P. (2014). The Common Core goes to college: The potential for disciplinary literacy approaches in developmental literacy classes. Journal of College Reading and Learning, 45(1), 85–95. Holschuh, J. P., & Aultman, L. (2009) Comprehension development. In R. F. Flippo & D. C. Caverly (Eds.), Handbook of college reading and study strategy research (2nd ed., pp. 97–120). New York, NY: Routledge. Hu, H., & Driscoll, M. P. (2013). Self-regulation in e-learning environments: a remedy for community college? Educational Technology & Society, 16(4), 171–184. Hwang, G. J., Yang, L. H., & Wang, S. Y. (2013). A concept map-embedded educational computer game for improving students’ learning performance in natural science courses. Computers & Education, 69, 121–130. Hynd-Shanahan, C. R., Holschuh, J. P., & Hubbard, B. P. (2004). Thinking like a historian: College students’ reading of multiple historical documents. Journal of Literacy Research, 36(2), 141–176. James, W. (1890). The principles of psychology. New York, NY: Holt. Järvelä, S., Volet, S., & Järvenoja, H. (2010). Research on motivation in collaborative learning: Moving beyond the cognitive–situative divide and combining individual and social processes. Educational Psychologist, 45(1), 15–27. Jing, H. (2006). Learner resistance in metacognitive training? An exploration of mismatches between learner and teacher agendas. Language Teaching Research, 10(1), 95–117. Johnson, M. L., Taasoobshirazi, G., Clark, L., Howell, L., & Breen, M. (2016). Motivations of traditional and nontraditional college students: From self-determination and attributions, to expectancy and values. The Journal of Continuing Higher Education, 64(1), 3–15. Kapp, R., & Bangeni, B. (2009). Positioning (in) the discipline: Undergraduate students’ negotiations of disciplinary discourses. Teaching in Higher Education, 14(6), 587–596. Karpicke, J. D. (2009). 
Metacognitive control and strategy selection: Deciding to practice retrieval during learning. Journal of Experimental Psychology: General, 138(4), 469–485. Karpicke, J. D., & Blunt, J. R. (2011). Retrieval practice produces more learning than elaborative studying with concept mapping. Science, 331(6018), 772–775. Kiewra, K. A. (2002). How classroom teachers can help students learn and teach them how to learn. Theory into Practice, 41(2), 71–80. *King, A. (1992). Facilitating elaborative learning through guided student-generated questioning. Educational Psychologist, 27(1), 111–126. Kintsch, W. (1998). Comprehension: A paradigm for cognition. New York, NY: Cambridge University Press. Koh, J. (2014). The more, the better? Examining choice and self-regulated learning strategies. International Journal of Learning: Annual Review, 21, 13–32. Kreutzer, M. A., Leonard, C., Flavell, J. H., & Hagen, J. W. (1975). An interview study of children’s knowledge about memory. Monographs of the Society for Research in Child Development, 40(1), 1–60.
Kucer, S. B. (2014). Dimensions of literacy: A conceptual base for teaching reading and writing in school settings. New York, NY: Routledge. Langer, J. A. (2001). Beating the odds: Teaching middle and high school students to read and write well. American Educational Research Journal, 38(4), 837–880. Lapp, D., Fisher, D., & Grant, M. (2008). “You can read this text – I’ll show you how.” Interactive comprehension instruction. Journal of Adolescent & Adult Literacy, 51(5), 373–383. Lea, M. R., & Street, B. V. (1998). Student writing in higher education: An academic literacies approach. Studies in Higher Education, 23(2), 157–172. Lee, C. D. (2004). Literacy in the academic disciplines and the needs of adolescent struggling readers. Voices in Urban Education, 3, 14–25. Lee, C. D., & Spratley, A. (2010). Reading in the disciplines: The challenges of adolescent literacy. New York, NY: Carnegie Corporation of New York. Li, M., Murphy, P. K., Wang, J., Mason, L. H., Firetto, C. M., Wei, L., & Chung, K. S. (2016). Promoting reading comprehension and critical–analytic thinking: A comparison of three approaches with fourth and fifth graders. Contemporary Educational Psychology, 46, 101–115. Linderholm, T., Therriault, D. J., & Kwon, H. (2014). Multiple science text processing: Building comprehension skills for college student readers. Reading Psychology, 35(4), 332–356. *Linnenbrink, E. A., & Pintrich, P. R. (2002). Motivation as an enabler for academic success. School Psychology Review, 31(3), 313–327. Linnenbrink, E. A., & Pintrich, P. R. (2003). The role of self-efficacy beliefs in student engagement and learning in the classroom. Reading & Writing Quarterly, 19(2), 119–137. *Lipson, M. (1995). The effect of semantic mapping instruction on prose comprehension of below-level college readers. Reading Research and Instruction, 34(4), 367–378. Liu, K. (2006). Annotation as an index to critical writing. Urban Education, 41(2), 192–207. Luke, A. (2013). Back to the future. 
The Australian Educator, Summer(80), 14–15. Maitland, L. E. (2000). Ideas in practice: Self-regulation and metacognition in the reading lab. Journal of Developmental Education, 24(2), 26–31. Markman, E. M. (1977). Realizing that you don’t understand: A preliminary investigation. Child Development, 48(3), 986–992. Martinez, M. E. (2006). What is metacognition? Phi Delta Kappan, 87(9), 696–699. Mason, L., Scirica, F., & Salvi, L. (2006). Effects of beliefs about meaning construction and task instructions on interpretation of narrative text. Contemporary Educational Psychology, 31(4), 411–437. *McCombs, B. L. (1996). Alternative perspectives for motivation. In L. Baker, P. Afflerbach, & D. Reinking (Eds.), Developing engaged readers in school and home communities (pp. 67–87). Mahwah, NJ: Erlbaum. McConachie, S. M., & Apodaca, R. E. (2009). Embedding disciplinary literacy. In S. M. McConachie & A. R. Petrosky (Eds.), Content matters: A disciplinary literacy approach to improving student learning (pp. 163–196). San Francisco, CA: Jossey-Bass. Mealey, D. L., & Frazier, D. W. (1992). Directed and spontaneous transfer of textmarking: A case study. In N. D. Padak, T. Rasinski, & J. Logan (Eds.), Literacy research and practice: Foundations for the year 2000 (pp. 153–164). Pittsburg, KS: CRA Yearbook. Muis, K. R. (2007). The role of epistemic beliefs in self-regulated learning. Educational Psychologist, 42(3), 173–190. Muis, K. R. (2008). Epistemic profiles and self-regulated learning: Examining the relations in the context of mathematics problem solving. Contemporary Educational Psychology, 33(2), 177–208. Muis, K. R., Pekrun, R., Sinatra, G. M., Azevedo, R., Trevors, G., Meier, E., & Heddy, B. C. (2015). The curious case of climate change: Testing a theoretical model of epistemic beliefs, epistemic emotions, and complex learning. Learning and Instruction, 39, 168–183. Murphy, P. K., & Alexander, P. A. (2000). A motivated exploration of motivation terminology.
Contemporary Educational Psychology, 25(1), 3–53. *Murphy, P. K., & Alexander, P. A. (2002). What counts? The predictive powers of subject-matter knowledge, strategic processing, and interest in domain-specific performance. Journal of Experimental Education, 70(3), 197–214. Murphy, P. K., Holleran, T. A., Long, J. F., & Zeruth, J. A. (2005). Examining the complex roles of motivation and text medium in the persuasion process. Contemporary Educational Psychology, 30, 418–438. Nakamura, J., & Csikszentmihalyi, M. (2009). Flow theory and research. In S. J. Lopez & C. R. Snyder (Eds.), Handbook of positive psychology (pp. 195–206). New York, NY: Oxford University Press. Nesbit, J. C., & Adesope, O. O. (2006). Learning with concept and knowledge maps: A meta-analysis. Review of Educational Research, 76(3), 413–448.
Comprehension
Ng, C. H. (2005). Academic self-schemas and their self-congruent learning patterns: Findings verified with culturally different samples. Social Psychology of Education, 8(3), 303–328. Nieto, S. (2015). The light in their eyes: Creating multicultural learning communities. New York, NY: Teachers College Press. Nist, S. L., & Diehl, W. (1998). Developing textbook thinking (4th ed.). Boston, MA: Houghton Mifflin. Nist, S. L., & Hogrebe, M. C. (1987). The role of underlining and annotating in remembering textual information. Reading Research and Instruction, 27(1), 12–25. Nist, S. L., & Holschuh, J. P. (2005). Practical applications of the research on epistemological beliefs. Journal of College Reading and Learning, 35(2), 84–92. Nist, S. L., & Simpson, M. L. (1994). Why strategies fail: Students’ and researchers’ perceptions. In C. K. Kinzer & D. Leu (Eds.), Multidimensional aspects of literacy research, theory, and practice. Forty-Third Yearbook of the National Reading Conference (pp. 287–295). Charleston, SC: NRC. *Nist, S. L., & Simpson, M. L. (2000). College studying. In M. Kamil, P. Mosenthal, & P. D. Pearson (Eds.), Handbook of reading research (pp. 645–666). Mahwah, NJ: Erlbaum. Nist-Olejnik, S. L., & Holschuh, J. P. (2013). College success strategies (4th ed.). New York, NY: Longman. North, S. (2005). Different values, different skills? A comparison of essay writing by students from arts and science backgrounds. Studies in Higher Education, 30(5), 517–533. Ozgungor, S., & Guthrie, J. T. (2004). Interactions among elaborative interrogation, knowledge, and interest in the process of constructing knowledge from text. Journal of Educational Psychology 96(3), 437–443. Paris, S. G., Byrnes, J. P., & Paris, A. H. (2001). Constructing theories, identities, and actions of self-regulated learners. In B. J. Zimmerman & D. H. Schunk (Eds.) Self-regulated learning and academic achievement (2nd ed., pp. 253–288). Mahwah, NJ: Erlbaum. *Paris, S. G., Lipson, M. 
Y., & Wixson, K. K. (1983). Becoming a strategic reader. Contemporary Educational Psychology, 8(3), 293–316. Paris, S. G., & Paris, A. H. (2001). Classroom applications of research on self-regulated learning. Educational Psychologist, 36(2), 89–101. *Paris, S. G., & Turner, J. C. (1994). Situated motivation. In P. R. Pintrich, D. R. Brown, & C. E. Weinstein (Eds.), Student motivation, cognition, and learning: Essays in honor of Wilbert J. McKeachie (pp. 213–238). Hillsdale, NJ: Erlbaum. Park, S. W., & Sperling, R. A. (2012). Academic procrastinators and their self-regulation. Psychology, 3(1), 12–23. Pearson, P. D., & Gallagher, M. C. (1983). The instruction of reading comprehension. Contemporary Educational Psychology, 8, 317–344. *Perry, W. G., Jr. (1970). Forms of intellectual and ethical development in the college years: A scheme. New York, NY: Holt, Rinehart, & Winston. Perry, N. E., & Winne, P. H. (2006). Learning from learning kits: Study traces of students’ self-regulated engagements with computerized content. Educational Psychology Review, 18(3), 211–228. Petersen, L. E., Stahlberg, D., & Dauenheimer, D. (2000). Effects of self-schema elaboration on affective and cognitive reactions to self-relevant information. Genetic, Social, and General Psychology Monographs, 126(1), 25–42. Pintrich, P. R. (2000). The role of goal orientation in self-regulated learning. In M. Boekaerts, P. R. Pintrich, & M. Zeidner (Eds.) Handbook of self-regulation (pp. 451–502). San Diego, CA: Academic Press. *Pintrich, P. R. (2002). The role of metacognitive knowledge in learning, teaching, and assessing. Theory into Practice, 41(4), 219–225. *Pintrich, P. R. (2004). A conceptual framework for assessing motivation and self-regulated learning in college students. Educational Psychology Review, 16(4), 385–407. *Pintrich, P. R., & Garcia, T. (1994). Self-regulated learning in college students: Knowledge, strategies, and motivation. In P. R. Pintrich, D. R. Brown, & C. E. 
Weinstein (Eds.), Student motivation, cognition, and learning: Essays in honor of Wilbert J. McKeachie (pp. 113–134). Hillsdale, NJ: Erlbaum. *Pressley, M. (2000). What should comprehension instruction be the instruction of? In M. L. Kamil, P. B. Mosenthal, P. D. Pearson, & R. Barr (Eds.) Handbook of reading research (Vol. 3, pp. 545–563). Mahwah, NJ: Erlbaum. *Pressley, M., Ghatala, E. S., Woloshyn, V., & Pirie, J. (1990). Sometimes adults miss the main ideas in text and do not realize it: Confidence in responses to short-answer and multiple-choice comprehension questions. Reading Research Quarterly, 25(3), 232–249. Pressley, M., Graham, S., & Harris, K. (2006). The state of educational intervention research as viewed through the lens of literacy intervention. British Journal of Educational Psychology, 76(1), 1–19.
Jodi Patrick Holschuh and Jodi P. Lampi
Pressley, M., Van Etten, S., Yokoi, L., Freebern, G., & Van Meter, P. (1998). The metacognition of college studentship: A grounded theory approach. In D. J. Hacker, J. Dunlosky, & A. C. Graesser (Eds.), Metacognition in educational theory and practice (pp. 347–381). Mahwah, NJ: Erlbaum. Pressley, M., Wharton-McDonald, R., Mistretta-Hampston, J., & Echevarria, M. (1998). Literacy instruction in 10 fourth- and fifth-grade classrooms in upstate New York. Scientific Studies of Reading, 2(2), 159–194. Ransby, M. J., & Swanson, H. L. (2003). Reading comprehension skills of young adults with childhood diagnosis of dyslexia. Journal of Learning Disabilities, 36(6), 538–555. Ransdell, S., Barbier, M., & Niit, T. (2006). Metacognitions about language skill and working memory among monolingual and bilingual college students: When does multilingualism matter? The International Journal of Bilingual Education and Bilingualism, 9(6), 728–741. Rosenshine, B. V. (1979). Content, time, and direct instruction. In P. L. Peterson & H. J. Walberg (Eds.), Research on teaching: Concepts, findings, and implications. Berkeley, CA: McCutchan. Rupley, W. H., Blair, T. R., & Nichols, W. D. (2009). Effective reading instruction for struggling readers: The role of direct/explicit teaching. Reading & Writing Quarterly, 25(2–3), 125–138. Russell, D. (1991). Writing in the academic disciplines, 1870–1990: A curricular history. Carbondale, IL: Southern Illinois University Press. Ryan, R. M., & Deci, E. L. (2016). Facilitating and hindering motivation, learning and well-being in schools: Research and observations from self-determination theory. In K. R. Wentzel & D. B. Miele (Eds.), Handbook of motivation at school (2nd ed.). New York, NY: Routledge. Scharff, L., Draeger, J., Verpoorten, D., Devlin, M., Dvorakova, L. S., Lodge, J. M., & Smith, S. (2017). Exploring metacognition as support for learning transfer. Teaching & Learning Inquiry, 5(1), 1–14. 
Schiefele, U., Schaffner, E., Möller, J., & Wigfield, A. (2012). Dimensions of reading motivation and their relation to reading behavior and competence. Reading Research Quarterly, 47(4), 427–463. Schmar-Dobler, E. (2003). Reading on the internet: The link between literacy and technology. Reading Online, 47(1), 17–23. Schoenfeld, A. H. (1988). When good teaching leads to bad results: The disasters of well-taught mathematics courses. Educational Psychologist, 23(2), 145–166. *Schommer, M. (1990). Effects of beliefs about the nature of knowledge on comprehension. Journal of Educational Psychology, 82(3), 498–504. *Schommer, M. (1994a). An emerging conceptualization of epistemological beliefs and their role in learning. In R. Garner & P. A. Alexander (Eds.), Beliefs about text and instruction with text (pp. 25–40). Hillsdale, NJ: Erlbaum. *Schommer, M. (1994b). Synthesizing epistemological belief research: Tentative understandings and provocative confusions. Educational Psychology Review, 6(4), 293–319. Schommer, M., & Hutter, R. (1995, April). Epistemological beliefs and thinking about everyday controversial issues. Paper presented at the meeting of the American Educational Research Association, San Francisco, CA. *Schommer, M., & Surber, J. R. (1986). Comprehension-monitoring failure in skilled adult readers. Journal of Educational Psychology, 78(5), 353–357. Schommer-Aikins, M. (2002). Epistemological belief system. In B. K. Hofer & P. R. Pintrich (Eds.), Personal epistemology: The psychology of beliefs about knowledge and knowing (pp. 105–118). New York, NY: Routledge. Schommer-Aikins, M., & Duell, O. K. (2013). Domain specific and general epistemological beliefs: Their effects on mathematics. RIE: Revista de Investigación Educativa, 31(2), 317–330. Schommer-Aikins, M., Duell, O. K., & Barker, S. (2003). Epistemological beliefs across domains using Biglan’s classification of academic disciplines. Research in Higher Education, 44(3), 347–366. 
Schraw, G., & Gutierrez, A. P. (2015). Metacognitive strategy instruction that highlights the role of monitoring and control processes. In A. Pena-Ayala (Ed.), Metacognition: Fundaments, applications, and trends (pp. 3–16). Switzerland: Springer International Publishing. Schreiber, J. B., & Shinn, D. (2003). Epistemological beliefs of community college students and their learning processes. Community College Journal of Research and Practice, 27(8), 699–710. Shanahan, T., Fisher, D., & Frey, N. (2012). The challenge of challenging text. Educational Leadership, 69(6), 58–63. Shanahan, T., & Shanahan, C. (2008). Teaching disciplinary literacy to adolescents: Rethinking content-area literacy. Harvard Educational Review, 78(1), 40–59. Shanahan, T., & Shanahan, C. (2012). What is disciplinary literacy and why does it matter? Topics in Language Disorders, 32(1), 7–18. Shanahan, T., & Shanahan, C. (2017). Disciplinary literacy: Just the FAQs. Educational Leadership, 74(5), 18–22.
*Simpson, M. L. (1994). Talk throughs: A strategy for encouraging active learning across the content areas. Journal of Reading, 38(4), 296–304. *Simpson, M. L., & Nist, S. L. (1990). Textbook annotation: An effective and efficient study strategy for college students. Journal of Reading, 34(2), 122–129. *Simpson, M. L., & Nist, S. L. (1997). Perspectives on learning history: A case study. Journal of Literacy Research, 29(3), 363–395. *Simpson, M. L., & Nist, S. L. (2000). An update on strategic learning: It’s more than textbook reading strategies. Journal of Adolescent & Adult Literacy, 43(6), 528–541. *Simpson, M. L., Olejnik, S., Tam, A. Y., & Supattathum, S. (1994). Elaborative verbal rehearsals and college students’ cognitive performance. Journal of Educational Psychology, 86(2), 267–278. Simpson, M. L., & Rush, L. (2003). College students’ beliefs, strategy employment, transfer, and academic performance: An examination across three academic disciplines. Journal of College Reading and Learning, 33(2), 146–156. Simpson, M. L., Stahl, N. A., & Francis, M. A. (2004). Reading and learning strategies: Recommendations for the 21st century. Journal of Developmental Education, 28(2), 2–32. Sinatra, G. M., Kienhues, D., & Hofer, B. K. (2014). Addressing challenges to public understanding of science: Epistemic cognition, motivated reasoning, and conceptual change. Educational Psychologist, 49(2), 123–138. Snow, C. (2002). Reading for understanding: Toward an R&D program in reading comprehension. Santa Monica, CA: RAND Corporation. Retrieved from www.rand.org/pubs/monograph_reports/MR1465. Sperling, R. A., Howard, B. C., Staley, R., & DuBois, N. (2004). Metacognition and self-regulated learning constructs. Educational Research and Evaluation, 10(2), 117–139. Stewart, P. W., & Hadley, K. (2014). Investigating the relationship between visual imagery, metacognition, and mathematics pedagogical content knowledge. 
Journal of the International Society for Teacher Education, 18(1), 26–35. Strode, S. L. (1991). Teaching annotation writing to college students. Forum for Reading, 23(1–2), 33–44. Svinicki, M. D. (1994). Research on college student learning and motivation: Will it affect college instruction? In P. R. Pintrich, D. R. Brown, & C. E. Weinstein (Eds.), Student motivation, cognition, and learning: Essays in honor of Wilbert J. McKeachie (pp. 331–342). Hillsdale, NJ: Erlbaum. Tangney, J. P. (2003). Self-relevant emotions. In M. R. Leary & J. P. Tangney (Eds.), Handbook of self and identity (pp. 384–400). New York, NY: Guilford Press. Taraban, R., Rynearson, K., & Kerr, M. S. (2000). Metacognition and freshman academic performance. Journal of Developmental Education, 24(1), 12–18. *Taraban, R., Rynearson, K., & Kerr, M. S. (2004). Analytic and pragmatic factors in college students’ metacognitive reading strategies. Reading Psychology, 25(2), 67–81. Taylor, B. B., Pearson, P. D., Garcia, G. E., Stahl, K. E., & Bauer, E. B. (2006). Improving students’ reading comprehension. In K. A. D. Stahl & M. C. McKenna (Eds.), Reading research at work: Foundations of effective practice (pp. 303–315). New York, NY: Guilford Press. *Thomas, J. W., & Rohwer, W. D. (1986). Academic studying: The role of learning strategies. Educational Psychologist, 21(1–2), 19–41. Thorndike, E. L. (1917). Reading as reasoning: A study of mistakes in paragraph reading. Journal of Educational Psychology, 8(6), 323–332. Tierney, R. J., & Cunningham, J. W. (1984). Research on teaching reading comprehension. In P. D. Pearson (Ed.), Handbook of reading research. New York, NY: Longman. Trainin, G., & Swanson, H. L. (2005). Cognition, metacognition, and achievement of college students with learning disabilities. Learning Disabilities Quarterly, 28(4), 261–272. Turner, J. C. (2010). Unfinished business: Putting motivation theory to the “classroom test.” In T. Urdan & S. A. 
Karabenick (Eds.), Advances in motivation and achievement: The decade ahead: Applications and contexts of motivation and achievement (Vol. 16B, pp. 109–138). Bingley, UK: Emerald Group. Turner, J. C., & Meyer, D. K. (2004). A classroom perspective on the principle of moderate challenge in mathematics. The Journal of Educational Research, 97(6), 311–318. Vygotsky, L. (1978). Mind in society: The development of higher psychological processes (Ed. and trans. M. Cole, V. John-Steiner, S. Scribner, & E. Souberman). Cambridge, MA: Harvard University Press. *Wade, S. E., & Trathen, W. (1989). Effect of self-selected study methods on learning. Journal of Educational Psychology, 81(1), 40–47. Weinstein, C. E. (1994). A look to the future: What we might learn from research on beliefs. In R. Garner & P. A. Alexander (Eds.), Beliefs about text and instruction with text (pp. 294–302). Hillsdale, NJ: Erlbaum.
Weinstein, C. E. (1997, March). A course in strategic learning: A description and research data. Paper presented at the annual meeting of the American Educational Research Association, Chicago, IL. Wentling, T. L., Park, J., & Peiper, C. (2007). Learning gains associated with annotation and communication software designed for large undergraduate classes. Journal of Computer Assisted Learning, 23(1), 36–46. Wentzel, K. R., & Ramani, G. B. (Eds.). (2016). Handbook of social influences in school contexts: Social-emotional, motivation, and cognitive outcomes. New York, NY: Routledge. Wibrowski, C. R., Matthews, W. K., & Kitsantas, A. (2016). The role of a skills learning support program on first-generation college students’ self-regulation, motivation, and academic achievement: A longitudinal study. Journal of College Student Retention: Research, Theory & Practice, 18(3), 1–16. Wiley, J., Griffin, T. D., Jaeger, A. J., & Jarosz, A. F. (2016). Improving metacomprehension accuracy in an undergraduate course context. Journal of Experimental Psychology: Applied, 22(4), 393–405. Willoughby, T., Wood, E., & Kraftcheck, E. R. (2003). When can a lack of structure facilitate strategic processing of information? British Journal of Educational Psychology, 73(1), 59–69. Winch, G., Johnston, R., March, P., Ljungdahl, L., & Holliday, M. (2010). Literacy: Reading, writing and children’s literature (4th ed.). South Melbourne, Australia: Oxford University Press. *Wineburg, S. S. (1991). On the reading of historical texts: Notes on the breach between school and academy. American Educational Research Journal, 28(3), 495–519. Wineburg, S., & Reisman, A. (2015). Disciplinary literacy in history. Journal of Adolescent & Adult Literacy, 58(8), 636–639. *Winne, P. H. (1995). Inherent details in self-regulated learning. Educational Psychologist, 30(4), 173–188. Winne, P. H. (2013). Learning strategies, study skills, and self-regulated learning in postsecondary education. In M. B. 
Paulsen (Ed.), Higher education: Handbook of theory and research (pp. 377–403). Netherlands: Springer. Winne, P. H. (2014). Issues in researching self-regulated learning as patterns of events. Metacognition and Learning, 9(2), 229–237. *Wittrock, M. C. (1986). Students’ thought processes. In M. C. Wittrock (Ed.), Handbook of research on teaching (pp. 297–314). New York, NY: Macmillan. *Wittrock, M. C. (1990). Generative processes of comprehension. Educational Psychologist, 24(4), 345–376. *Wittrock, M. C. (1992). Generative learning processes of the brain. Educational Psychologist, 27(4), 531–541. *Wolters, C. A. (2003). Regulation of motivation: Evaluating an underemphasized aspect of self-regulated learning. Educational Psychologist, 38(4), 189–205. *Wolters, C. A., & Hussain, M. (2015). Investigating grit and its relations with college students’ self-regulated learning and academic achievement. Metacognition Learning, 10(3), 293–311. Yang, C., Potts, R., & Shanks, D. R. (2017). Metacognitive unawareness of the errorful generation benefit and its effects on self-regulated learning. Journal of Experimental Psychology: Learning, Memory, and Cognition. [Epub ahead of print]. Zhao, N., Wardeska, J. F., McGuire, S. Y., & Cook, E. (2014). Metacognition: An effective tool to promote success in college science learning. Journal of College Science Teaching, 43(4), 48–54.
9 Reading and Writing

Sonya L. Armstrong, Texas State University
Jeanine L. Williams, University of Maryland
Norman A. Stahl, Northern Illinois University
The reading-writing connection has been well established across multiple disciplines (for detailed discussions, see Jackson, 2009, and Valeri-Gold & Deming, 2000, in earlier editions of the Handbook). Presently, the most important conversation related to reading and writing at the college level concerns the integration of the two areas. Integrated Reading and Writing (IRW), also known as Basic Reading and Writing (BRW), among other names, has reemerged as an instructional focus of late, largely in response to major policy shifts and resulting state and institutional mandates (e.g., Pierce, 2017), particularly regarding the acceleration of students through developmental education. Unfortunately, however, most IRW courses and programs currently being developed to respond to such mandates are not rooted in any identifiable empirical or theoretical evidence base. Such an approach to curriculum design enables a fairly narrow focus on structural aspects rather than on evidence-based content, pedagogical, or disciplinary/epistemological aspects. Likely as a result of the long-time separation of reading and writing (see Harl, 2013; Jackson, 2009; Yood, 2003, for a comprehensive exploration of the divide), the key knowledge bases needed to inform the development of an IRW course or program have not been at the forefront of most conversations surrounding this topic in the postsecondary literature. Instead, the vast majority of the literature within the field of college literacy has focused on practical issues. By contrast, beyond the college-level literature, much attention has been given to scholarship on the theoretical connections between reading and writing (e.g., Applebee, 1977; Kucer, 2012; Tierney & Pearson, 1983), especially the overlap of reading and writing as communication processes (e.g., Rosenblatt, 2013; Shanahan, 1990). 
Our purpose in this chapter is to bridge these practical and theoretical areas of the literature; our argument is that a purposeful approach to the integration of reading and writing at the college level must be informed by multiple critical knowledge areas. With this goal in mind, it is important to note that this chapter does not provide a how-to approach for designing an IRW course or program, nor does it offer a cafeteria-style approach to best-practice recommendations. In short, this chapter is designed as a response to the frequent exclusive focus on what to do in an IRW course and instead emphasizes the why and
how to conceptualize reading and writing together for purposes of informed course/program development and implementation. Thus, we present knowledge bases that form the foundation of an IRW curriculum and its inherent instructional approaches: theoretical knowledge, content and disciplinary knowledge, and curriculum and instruction knowledge. In our view, it is critical that those involved with curriculum development, design, implementation, assessment, and the evaluation of IRW courses and programs (henceforth, IRW professionals) have expertise in these knowledge bases. We begin this exploration with a discussion of the rich history associated with IRW.
Historical Background of IRW

The scholarly community has long been interested in the theory and research on the relationship between reading and writing. As such, there are a number of literature reviews of both breadth and depth that should be read by individuals at the initial stages of designing an IRW program so that any program that is implemented has a firm foundation in impactful scholarship (Berninger, Abbott, Abbott, Graham, & Richards, 2002; Berninger, Cartwright, Yates, Swanson, & Abbott, 1994; Fitzgerald & Shanahan, 2000; Nelson & Calfee, 1998; Shanahan, 2006, 2016; Shanahan & Tierney, 1990; Stotsky, 1983; Tierney & Shanahan, 1991). Besides these texts, which cross pedagogical and developmental boundaries, two detailed literature reviews have been released previously in this Handbook series (Jackson, 2009; Valeri-Gold & Deming, 2000), in addition to another in its earlier predecessor (Pugh & Pawan, 1991). Understanding the history underpinning the current IRW movement requires a willingness to cross boundaries in both the pedagogical and the historical sense. We believe that there are two pivotal sources that provide a necessary foundation for IRW professionals. First, Clifford’s (1988) “A Sisyphean Task: Historical Perspectives” examines theory, research, and praxis across past centuries to postulate that writing had long held a subordinate status to reading and the other language arts, and that language skills had been separated from one another, with reading isolated from writing. Clifford’s analysis interrogated what she called five forces (the democratization of schooling, the professionalization of educators, technological change, the functionalist or pragmatic character of American culture, and liberationist ideologies) with the goal of delineating how each force led to both the separation and the integration of the teaching of reading and writing. 
A second, but equally important, foundational work appeared in the 1998 National Society for the Study of Education Yearbook, titled “The Reading-Writing Connection.” The two editors of this seminal text, Nelson and Calfee, also wrote a chapter titled “The Reading-Writing Connection Viewed Historically,” which presents the points of convergence and divergence between reading and writing in educational philosophy and history. The authors review the topic historically (as suggested by the title) but also discuss trends in rhetoric and literary studies as part of a shared intellectual history. The authors look at historical events and developments that promote the convergence of the fields as centripetal forces and other events and developments that lead to divergence as centrifugal forces. Moving across the last three decades of the 20th century, the authors continue to employ the theoretical constructs to investigate five integrative movements: comprehension as construction, reader response, “process” writing, whole language, and discourse communities. The authors step back in closing to theorize on the impact of the political, historical, and theoretical forces that work to keep these language constructs in separate camps. Of course, additional sources cover reading (e.g., Kaestle, 1991; Robinson, 1977; Smith, 2002) and writing (e.g., Berlin, 1984; Donahue & Moon, 2007; Murphy, 2001; Russell, 1991) from a historical perspective, as found under the respective big tent. Furthermore, we understand that
even with a large literature base, the scholarship of literacy history is at best an outlier component of the field. Still, the two sources summarized here are inherently approachable and, even more so, fundamentally important for leading both researchers and practitioners toward a basic foundation for understanding the synergistic power that can be harnessed from an informed integration of reading and writing. In striving to understand the forces leading to the modern IRW movement, we present three historical cases that must be examined. (Admittedly, other writers might select other innovative programs from across the historical eras to review, e.g., Bojar, 1983; Dickson, 1995; Morrison, 1990.) The development of these exemplar programs sheds light on how policy, whether internal or external to the institution, together with the extant theoretical and research base, played an important role in leading to each unit’s innovative position in the field.
The Pittsburgh Model

If there is a touchstone for the modern IRW movement, it would be that moment in 1977 when Robert Marshall, the Dean of the College of Liberal Arts and Sciences at the University of Pittsburgh, requested that David Bartholomae (of the College of Liberal Arts and Sciences) and Anthony Petrosky (of the School of Education) develop a program to teach reading and writing to at-risk first-year students. This cross-college partnership, along with the initial contributions of James Peebles, led to the development of a synergistic curricular model that was revolutionary compared to the programs for college reading instruction or college composition generally found across the country. The strength of the eventual praxis for what was then referred to as the BRW model was built upon theories and texts that crossed the traditional academic borders of composition, reading, educational psychology, curriculum studies, etc. A sampling of these texts, which still have great currency some four decades later, includes Jerome Bruner’s The Process of Education (1960), Paulo Freire’s Pedagogy of the Oppressed (1968), Daniel Fader and E. B. McNeil’s Hooked on Books: Program and Proof (1969), Mina Shaughnessy’s Errors and Expectations: A Guide for the Teacher of Basic Writing (1977), James Moffett’s Teaching the Universe of Discourse (1968), Kenneth Burke’s The Philosophy of Literary Form: Studies in Symbolic Action (1973), and I. A. Richards’s How to Read a Page (1942). The discussions in which the Pittsburgh team engaged—on the differing philosophical perspectives of Roland Barthes (structural theory) and Jacques Derrida (deconstruction and post-structuralism)—also influenced their view of the role of reading and writing in the academy. 
What is important to understand here is that the BRW program, with its innovative curriculum and instruction, was built upon a firm theoretical foundation that integrated constructs such as the spiral curriculum, thematic-oriented content, naturalistic texts that crossed disciplinary boundaries, innovative practices in composition, and portfolio assessment, among other constructs. Furthermore, in going well beyond the status quo in basic composition and developmental reading instruction, the program had an underlying philosophy that students were not enrolled in a traditional remedial course but rather in an advanced seminar that expected them to approach learning as fully participating members of the postsecondary academic community (Bartholomae & Petrosky, 1986, Preface). Almost a decade after the development of the BRW course at the University of Pittsburgh, Bartholomae and Petrosky released Facts, Artifacts and Counterfacts (1986), which presented the philosophical and theoretical constructs as well as the design parameters for the BRW course. It would, of course, become a seminal work in the field, as would the solo-authored articles titled “Inventing the University” (Bartholomae, 1985) and “From Story to Essay” (Petrosky, 1982), which presented strong position statements about learners’ experiences.
The San Francisco State Model

Although the BRW program at the University of Pittsburgh was born of an internal need to serve a local population of students, a second IRW historical case evolved two decades later at San Francisco State University (SFSU) as a response to a system-wide policy initiative by the California State University (CSU) Board of Trustees, requiring the number of incoming students in “remedial” courses to be no greater than 10 percent within a decade. Furthermore, the policy limited remedial instruction for any student to a single year, with a requirement that a student be disenrolled from the university if the remediation was not successful. The work represented in Goen and Gillotte-Tropp (2003) and Goen-Salter (2008) had been influenced by a set of works now considered seminal (e.g., Ackerman, 1991; Nelson & Calfee, 1998; Salvatori, 1996; Spivey & King, 1989), which, in their totality, made a convincing case for the strong relationship between reading and writing processes. Conversely, the SFSU team acknowledged McCormick’s (1994) position that the relational values of teaching reading and writing in an integrated manner are lost when the two processes are taught separately. In addition, they had come to the conclusion that students’ performance on the reading section of the institution’s placement test had a major impact on placement in remedial writing. These observations led them to propose that “students’ difficulty constructing meaning from texts may be a significant source of their difficulty constructing meaning in texts” (Goen & Gillotte-Tropp, 2003, p. 91). Hence, from their reading of theory and research, as well as their analysis of two decades of local placement data, the team concluded that an integrated approach to teaching reading and writing could lead the program’s diverse clientele to move into the “academic mainstream” within the time frame set by the trustees. 
The design of the curricular model and instructional procedures, as undertaken over 1999 and 2000, was influenced by the “Stretch Program” at Arizona State University (Glau, 1996). Furthermore, the SFSU team adopted the stance that the instruction must lead students to encounter reading and writing in a truly integrated manner, which included the exploration of a range of authentic texts, as encountered in a college environment, coupled with equally authentic purposes for writing. The program objectives were built upon six key principles, as presented by Goen and Gillotte-Tropp (2003):

• Principle 1. Integration, which calls for the integration of reading and writing from both the perspective of the curriculum model and the instructional methods;
• Principle 2. Time, which calls for having enough time (two semesters) for learning to emerge and solidify, along with time for a community of learners to develop;
• Principle 3. Development, which requires students to develop strategies for reading, writing, and thinking through multiple passes of more and more complex contexts, supported by appropriate academic assistance;
• Principle 4. Academic Membership, which promotes programming that moves students through the remedial sequence as quickly as possible and lessens the stigma of remediation;
• Principle 5. Sophistication, which proposes that an IRW program should include scaffolded coursework that is virtually indistinguishable from tasks to be encountered in credit-bearing coursework, including book-length assignments, original research, and collaborative endeavors; and
• Principle 6. Purposeful Communication, which requires learners to interact with texts in meaningful contexts.
Of importance from the historical perspective is that in this case, innovation was driven by a body external to the institution, leading this particular team to design a program that would eventually
Reading and Writing
influence curricular design, both at the university level and even more so at the community college level, with the coming of the developmental education reform movement. Of equal importance is that the extant theory and research from the period drove the foundational work of model design.
The Chabot Model

During the decade in which Goen-Salter and Gillotte-Tropp were developing a new IRW program for the university level, just across the San Francisco Bay, the faculty in the English Department at Chabot College were adopting a philosophical stance, articulated in a document called the "Throughline," which was to drive all of the school's writing courses, including basic writing classes and transfer-level composition courses. Unlike the previous two models, in which the reasons for innovation were driven by a primary directive, Fitzgerald (1999, 2003) makes the case that at Chabot College, the impetus was due to a cumulative effect, including the arrival of a new generation of faculty trained in reading and writing (both as new hires and through graduate-level professional development), a shift to the semester system, institutional research suggesting that the separate-but-equal reading and writing lab approach was not working, grant money via Title III, and the rearticulation of courses with an emphasis on critical thinking. This led a faculty team under the leadership of Cindy Hicks and Dennis Chowenhill to engage in a discussion of programmatic and pedagogical philosophy rather than a simple redesign of instruction (also see Grubb with Gabriner, 2013, pp. 80–82). Fitzgerald (1999, 2003) presented the seven propositions of the Throughline philosophy as they were formulated over two years and then implemented in 1993 by faculty members, both full time and part time. The seven propositions for all Chabot College English classes, which were innovative for the era and timeless for IRW programs today, are summarized in the following (for the revised English Department Throughline, refer to www.chabotcollege.edu/languagearts/english/aathroughline.asp):

• Integrate reading, writing, critical thinking, speaking, and listening;
• Address directly students' reading practices from the perspectives of breadth and depth;
• Teach writing by having students compose prose of varying length and complexity rather than moving through a hierarchical skills model;
• Emphasize critical thinking competencies for the creation of meaning;
• Include speaking, listening, and responding activities that promote community and link to critical reading and writing;
• Include full-length works that are integrated into a course thematically and with other readings, with the expectations that students will respond to readings both personally and analytically;
• Increase learners' understanding of the academic culture as well as of themselves as learners in the academy.
An additional set of language arts-centric "Articulated Assumptions" serving as the philosophical foundation for the basic writing courses evolved logically from the Throughline. The "Articulated Assumptions" were interwoven in a curricular redesign in which students might enroll in a two-course IRW sequence or an "accelerated" IRW course where, in either case, successful completion led to enrollment in a college transfer course in composition (for the current version of the "Articulated Assumptions" refer to www.chabotcollege.edu/languagearts/english/articassump.asp). Although the history of acceleration in the postsecondary literacy field can be traced back to World War II, with the accelerated study skills program for U.S. Army personnel at the Ohio
Armstrong, Williams, and Stahl
State University (Robinson, 1943), the modern acceleration model for postsecondary reading and writing instruction can be tied, in at least an implicit manner, to both the Pitt BRW model, as it provided an intense, integrated model to students who might have required a precollege-level course, and the SFSU IRW sequence, as it was designed to maximize students' movement out of remedial coursework into the greater university environment. Still, it is the IRW course design, now in its third decade, at Chabot College that provided the foundation and philosophy for the current wave of accelerated IRW courses. Katie Hern of the Chabot College English Department, while participating in a statewide project that brought together developmental education faculty, observed that the Chabot IRW model with its accelerated option was quite unlike other programs in the state (Stahl, 2017). Then, following conversations on the power of acceleration with another attendee, Myra Snell, Hern compared data from across multiple cohorts of Chabot students enrolled in the accelerated IRW course to students from the two-term IRW course sequence (Hern & Snell, 2010). She discovered that those who enrolled in the accelerated course were more likely to complete transfer-English coursework than those in the traditional model (Stahl, 2017). These two individuals then partnered to create the California Acceleration Project (CAP; accelerationproject.org), and with a number of grants in support of CAP's work in both developmental math reform and IRW programming, CAP has championed reform of placement policies, implementation of corequisite models, and development of two-course pathways (Hern & Snell, 2010). Of particular note here is that research has been conducted with the Chabot program (Edgecombe, Jaggars, Xu, & Barragan, 2014) and across the state (Hayward & Willett, 2014) demonstrating the fidelity and benefits to students of the accelerated IRW approach.
With CAP as the vehicle, the Chabot accelerated IRW model has become the prototypic model for the state, and it is influencing the possibilities for intersections of IRW curriculum development and acceleration, as a model of pedagogical mutualism, across the country. Given this broader history—and present context—for IRW at the college level, we now turn to the three broad knowledge bases that we argue should inform the development and implementation of IRW courses and programs: theoretical knowledge, content and disciplinary knowledge, and curriculum and instruction knowledge. We start with theoretical knowledge.
Theoretical Knowledge

In order to explore theoretical models of the reading and writing connection, we draw from the seminal work of Timothy Shanahan and Robert Tierney (e.g., Shanahan, 2006; Shanahan & Tierney, 1990; Tierney & Shanahan, 1991), who for over three decades have both conducted impactful research and authored numerous integrative texts and chapters on the reading-writing relationship. Shanahan (2016) posits that there are three theoretical models that guide research on the reading-writing relationship. The first of the models, the shared-cognition model, investigates the similarities and the differences of cognitive functions in reading and writing as well as the knowledge bases (i.e., domain knowledge, metaknowledge, text-attribute knowledge, procedural knowledge) and competencies upon which both rely. The second theoretical model, the sociocognitive model, is concerned with the transactional relationship of communicative acts as literate behaviors that transpire between reader and writer. The third model, the combined-use model, views the two acts as separate processes, but reading and writing, in serving as tools, can be employed together in a cumulative manner to accomplish a task, promote learning, or solve a problem. Increasingly sophisticated research, as driven by each of these three theoretical constructs, has been undertaken over the years with students at varied developmental levels. Furthermore, although questions have been answered, each investigation has opened the door to new questions and research.
Both the practitioner and the researcher interested in IRW might ask: What is the value of such a theoretical scheme as Shanahan proposes? In answering that question, we propose that IRW professionals can find guidance in the often-quoted and wise proposition that the father of social psychology, Kurt Lewin, once attributed to "a business man": "There is nothing so practical as a good theory" (1943, p. 118). The very foundation for any curricular movement and its concomitant instructional plan is a theory, whether presented explicitly or found in the shadows in an implicit manner. Shanahan's (2016) scheme makes both logical and pedagogical sense for the field. Might another theoretical construct be adopted? The answer is yes, and every program must have a theoretical foundation, as we have observed with the historical cases previously covered. Indeed, the first step of any formative IRW design must be the explicit understanding and adoption of a theoretical base for the curriculum and the instruction, which must then be intertwined throughout the design, implementation, and delivery stages. Furthermore, the theory underlying the program must be incorporated in any midstream or summative evaluation or research. A solid and sophisticated theoretical framework is essential to the development and implementation of an IRW course or program. Equally important, though, is a rich understanding of the content of IRW, informed by an appreciation of the associated disciplinary traditions and conversations.
Content Knowledge

As Shulman (1986, 1987) and others (Grossman, Wilson, & Shulman, 1989; Wilson, Shulman, & Richert, 1987) have defined it, content knowledge "includes knowledge of the subject and its organizing structures" (Ball, Thames, & Phelps, 2008, p. 391). However, this amounts to more than simply knowing subject-based topics and concepts appropriate for instruction (basically, what to teach) (Cunningham, Perry, Stanovich, & Stanovich, 2004; Shulman, 1986). Indeed, a comprehensive knowledge of one's content also includes relevant disciplinary background and knowledge. For Shulman and others working in the area of the scholarship of teaching, this is a basic guiding principle for all disciplines. However, IRW presents a unique case in that it is not composed of a single academic discipline but rather a combination of two (or more, depending on the perspective and assumptions embedded in the integration). On the simplest and most obvious level, this amounts to reading and writing. Yet the disciplinary context of IRW is far more complex, as it spans two broad discipline areas, literacy and English, and also includes the subareas of composition-rhetoric, college writing, and developmental/basic writing, as well as college reading and developmental reading. It is by no means obvious, then (either to the newcomer or the veteran in the field), what the content of IRW might entail, nor by what means an IRW professional might seek such content knowledge, especially since these various fields have themselves endured crises of content (Donahue, 2005). Even so, we argue that IRW curriculum and instruction that is informed by a comprehensive content knowledge (both subject-matter and disciplinary knowledge) is needed. In the sections that follow, each of these areas will be explained in more depth.
Subject-Matter Knowledge

The simplest explanation of subject-matter knowledge related to reading and writing at the college level is a sophisticated understanding of what reading is and what writing is. Beyond this, however, it is important to know how reading and writing build upon each other in order to understand the nature, purpose, and goals of integration. The bulk of the scholarship relevant to the interconnectedness of reading and writing has focused on PK-12 learners and contexts. Much of the earliest work on this topic focused on establishing the connection by measuring achievement in one area and measuring skills in the other (e.g., Applebee, 1977; Loban, 1963; Shanahan,
1980; see Stotsky, 1983, for a thorough review of this literature), or by exploring the effects of either reading or writing on another area (e.g., Atwell, 1987; Crowhurst, 1990, 1991). This earlier research has played a significant role in shaping present theories of reading-writing relationships but has also been limiting:

    In the past, what seems to have limited our appreciation of reading-writing relationships has been our perspective. In particular, a sentiment that there exists a general single correlational answer to the question of how reading and writing are related has pervaded much of our thinking.
    (Tierney & Leys, 1986, pp. 23–24)

In addition, some research has focused on the impact of instruction in one area on the other (e.g., Abartis & Collins, 1980; Graham & Hebert, 2010; Lewis, 1987; Murray & Scott, 1993; Palmer, 1984; Tierney, Soter, O'Flahavan, & McGinley, 1989). Over the years, there has been no shortage of theoretical models and justifications of the relationship between reading and writing (e.g., Langer & Flihan, 2000; Malinowski, 1986; Parodi, 2013; Rosenblatt, 2013; Shanahan, 1997, 2006). In fact, this body of scholarship is expansive enough that it is now commonly accepted that reading and writing are parallel processes. As Tierney and Leys (1986) have noted, "Having to justify the integration of reading and writing is tantamount to having to validate the nature and role of literacy in society" (p. 24). Yet, in practice at the postsecondary level, the two are not only taught separately but are also usually disconnected geographically across programs, colleges, and departments, and staffed by instructors with completely separate professional and academic identities.
Thus, despite the long history of research on reading-writing relationships, and the extensive body of scholarship that demonstrates the interconnectedness of the two areas, it cannot be ignored that IRW at the postsecondary level is indeed unique and that, at present, no "field" or "discipline" called IRW exists. This, unfortunately, serves to perpetuate the separation. Indeed, each of the fields (broadly, college reading and college writing) has an extensive professional and scholarly literature base—what is absent, however, is a substantial body of literature specific to IRW at the postsecondary level. Over the years, discussions of subject matter have mostly taken the form of practical recommendations for what to teach, and have largely been handled separately in the reading and writing literature at the postsecondary level. A considerable portion of the literature in each area, however, explores subject-matter recommendations for including the other. First, a body of scholarship focused on college reading instruction has discussed the integration of writing activities (Bladow & Johnson, 2009–2010; Chamblee, 1998; Frailey, Buck-Rodriguez, & Anders, 2009; Hamer, 1996–97; Hayes, 1990; Kennedy, 1980; Morrison, 1990; Stern, 1995; Stotsky, 1982). The body of scholarship focused on college writing instruction that has addressed the integration of reading is less clear-cut, however, largely because of the tendency to conflate "reading" and "literature." Some of this work addresses the composition-literature divide directly (Horner, 1983; Lindemann, 1993; Pezzulich, 2003; Tate, 2002; Von Bergen, 2001).
However, some authors move beyond this divide to address the teaching of reading within the context of a writing course (Bladow & Johnson, 2009–2010; Elbow, 2002; Hayes, 1981–82; Kimmel, 1993; Mathews, Larsen, & Butler, 1945; Salvatori, 1996; Smith, 1988), with a smaller subset focusing exclusively on the use of specific texts (Bernstein, 2008; McCrary, 2009; Sirc, 1994; Spigelman, 1998) or the use of literary texts in writing (Bereiter & Scardamalia, 1984). Much of the literature that explicitly focuses on bridging reading and writing at the college level deals with specific instructional materials and strategies. For instance, some practical literature has put forward recommendations for structuring IRW courses (DuBrowa, 2011; Morris &
Zinn, 1995; Murray & Scott, 1993) and instructional strategies relevant to IRW (Auten, 1983; Hayes, Stahl, & Simpson, 1991; Kerstiens & Tyo, 1983; McCarthy, 1975; Perin & Hare, 2010; Saxon, Martirosyan, & Vick, 2016a, 2016b). A much smaller strand has examined professional tips for managing an IRW course (Dwyer & Lofton, 1995; Pike & Menegas, 1986). As presented previously in this chapter in the discussion of several exemplar institutional models, some literature has explored existing IRW course/program models (Bartholomae & Petrosky, 1986; De Fina, Anstendig, & De Lawter, 1991; Edgecombe et al., 2014; Engstrom, 2005; Goen & Gillotte-Tropp, 2003; Hjelmervik & Merriman, 1983). Other work has detailed the process of developing an IRW course (Davis & Kimmel, 1993; DuBrowa, 2011; Lampi, Dimino, & Salsburg Taylor, 2015; McKusick, Holmberg, Marello, & Little, 1997; Phillips, 2000; Robertson, 1985; Salsburg Taylor, Dimino, Lampi, & Caverly, 2016; Stern, 1995; Sutherland & Sutherland, 1982). Even with these pockets within the literature on postsecondary literacy, very little empirical or theoretical work exists that defines content for IRW at the postsecondary level. Instead, the subject-matter knowledge for IRW professionals must be pieced together through relevant linkages that provide a deeper understanding of academic literacies. Thus, one such linkage area that we deem necessary for IRW professionals to include in their subject-matter knowledge bank is disciplinary literacies. Especially with the recent emphasis on disciplinary literacies (Moje, 2008; Shanahan & Shanahan, 2008; see also Chapter 6), subject-matter knowledge should entail an awareness of the literacy practices and demands that students will encounter in subsequent coursework and beyond. 
Certainly, there has been prior published work on the topic of literacy demands and expectations at the college level (e.g., Burrell, Tao, Simpson, & Mendez-Berrueta, 1997; Carson, Chase, Gibson, & Hargrove, 1992; Chase, Gibson, & Carson, 1994; National Center on Education and the Economy, 2013; Orlando, Caverly, Swetnam, & Flippo, 1989; Wambach, 1998) as well as in the workforce (e.g., Long, 1987; Mikulecky, 2010; Sticht, 1975, 1977). On a related note, a small body of scholarship has focused on the integration of reading and writing instruction with a content-area course (Austin, 1986; Cervi & Schaefer, 1986; Compton & Pavlik, 1978; French et al., 1989; Greene, 1993; Herlin, 1978; Powell, Pierre, & Ramos, 1993; Sherman, 1976), and contextualized reading and writing instruction within career technical education courses in programs such as I-BEST (Wachen, Jenkins, & Van Noy, 2010). These two related areas—on preparing students for the actual reading and writing expectations they will face, and on integrating or contextualizing that instruction within a content course—require IRW professionals not only to understand the underpinnings of disciplinary literacy as a theory but also to have at least a solid familiarity with the actual literacy differences and similarities across the disciplines or professional/technical fields. Clearly, an expectation of proficiency across discipline/field areas is unrealistic; however, an understanding of how to become familiar with a particular area's literacy practices should be part of the IRW professional's subject-matter knowledge. To that end, Simpson (1993, 1996) has argued that college literacy instructors need to make informed decisions about what to teach, via a method she termed "reality checks," based on what students are being asked to do in their content-area courses.
Thus, context matters for what gets taught in an IRW course, as effective instruction depends a great deal on a current, comprehensive knowledge base about the typical literacy practices at the local level. Otherwise, knowledge of reading and writing strategies, skills, and techniques is decontextualized and represents a shot-in-the-dark approach (Armstrong & Stahl, 2017).
Disciplinary Knowledge

Knowledge of the rich disciplinary traditions of the fields associated with composition/rhetoric and literacy/reading, particularly at the postsecondary level, is essential for IRW professionals (see Downs
& Wardle, 2007, for a critique of the practice of hiring nonspecialists in writing, for example). Some of this complex knowledge that professionals have about reading and writing—as disciplines—tends to get overlooked in planning IRW. Once again, however, IRW at the postsecondary level presents an interesting problem: Because there is no discipline of IRW, professionals are trained in and therefore rooted in one or the other (reading or writing) academic and professional traditions. In any case, defining integration first requires IRW professionals not only to define reading and define writing but also to interrogate traditional understandings of these, such as writing as production and reading as consumption (e.g., Barthes, 1979). True integration requires a conceptualization of each as parallel, meaning-making processes (i.e., Kucer, 1985; Parodi, 2013; Rosenblatt, 2013; Shanahan, 1990; Tierney & Pearson, 1983; Tierney & Shanahan, 1991). It is through this careful consideration of the common characteristics of reading and writing that a true understanding of integration begins to emerge. If separate and distinct understandings of reading and writing drive the process of course development, full integration simply will not be possible. It is the intersection that needs to be explored. It should be noted that although our emphasis has been exclusively on reading and writing, there is a body of scholarship that has long called for integration across the language arts. As Emig (1982) notes,

    For learning and teaching, writing and the other language arts cannot sensibly be regarded discretely and in isolation from one another. Reading impinges on writing, which in turn is transformed by listening and talking. Sponsorship of wholly autonomous research inquiries and curricular ventures into any one of the four language processes is now theoretically and empirically suspect.
    (p. 2031)

Although some work has described pilots or experiments to find creative linkages across all of the language arts (Brown, Watson, & Bowden, 1970; Engstrom, 2005; Hjelmervik & Merriman, 1983; Lockhart & Soliday, 2016; McCarthy, 1975; McKusick et al., 1997; Stern, 1995), in general, the language arts at the postsecondary level continue to maintain separate disciplinary—and curricular—spaces. In sum, when designing an IRW course or program, it is essential for IRW professionals to have subject-matter knowledge and disciplinary expertise in reading and writing. Indeed, a firm grasp of the actual subject matter, along with an understanding of the theories and principles that undergird academic literacy, is essential. However, having this disciplinary expertise does not automatically position an IRW professional to effectively prepare students for the demands of postsecondary literacy. In addition to content knowledge, curriculum and instruction knowledge must also be the basis of an IRW course or program. In light of this, we shift our focus in the next section to pedagogical content knowledge.
Curriculum and Instruction Knowledge

Pedagogical content knowledge (Shulman, 1986) merges both content and pedagogy in a way that makes the subject matter teachable. This transformation of the subject matter for instruction requires more than knowledge of the content and a basic understanding of teaching strategies. Instead, Shulman (1986) asserts that such a transformation only occurs when a teacher interprets the subject matter, finding different ways to represent it and make it accessible to learners. In this section, we discuss key considerations for course development, classroom practice, and assessment and evaluation. Together, these areas form the curriculum and instruction knowledge base for postsecondary literacy coursework that integrates reading and writing.
Course Development

When thinking about course development, we urge IRW professionals to consider what is known about the actual structure and packaging of IRW courses (Armstrong, 2015a, 2015b; Stahl, 2015). Much of the recent literature in IRW focuses more on structure than on curriculum (Edgecombe et al., 2014). Furthermore, much of the current work around IRW is done with a primary goal of acceleration (Holschuh & Paulson, 2013; Xu, 2016). For example, many developmental writing programs have embraced a corequisite/mainstreaming model where students take a college composition course, while also enrolled in a developmental writing course for additional support. Another example is seen when institutions simply combine developmental reading and developmental writing instruction into a single course and reduce the number of semester hours. In either a corequisite/mainstreaming model or a combined-instruction/reduced-credit model, the actual integration of reading and writing instruction can vary from very little to none at all. When developing an IRW course, it is important to note that acceleration is not the same as integration. That is not to say that IRW and acceleration are mutually exclusive. The point is that a structural course redesign, as seen in the aforementioned models, does not necessarily include a true integration of reading and writing instruction. In general, IRW tends to be linked directly with the acceleration movement (Bickerstaff & Raufman, 2017; Edgecombe et al., 2014; Goen & Gillotte-Tropp, 2003; Goen-Salter, 2008; Hern, 2011, 2013). There are two basic types of acceleration: acceleration that aims to eradicate or drastically reduce remediation, like initiatives in Florida and Connecticut; and acceleration that aims to fit within a developmental education tradition. Often, IRW courses are—at least in intention and spirit—aimed toward the latter.
However, if acceleration is the primary purpose for the curricular change, it is unlikely to be driven by expert theory and evidence, as IRW is simply the vehicle, and acceleration is the guiding principle. Although an IRW course can certainly accelerate students' academic literacy development, the acceleration has more to do with the curricular and pedagogical design of the course than the structure of the course. An IRW course based on the theoretical and disciplinary knowledge outlined in the previous sections of this chapter is seen in the work of Bartholomae and Petrosky (1986), who developed the most comprehensive published articulation of a model for an IRW course. Bartholomae and Petrosky approached postsecondary literacy as an ideological social practice (Lea & Street, 2006; Street, 1995, 2000). In this view, reading and writing are interconnected processes of meaning-making. Reading and writing are social practices that are situated in the context of academia and students' lives—not a collection of decontextualized subskills (Street, 1994). As such, IRW instruction should emphasize a strategic, process-based approach that takes into account the technical as well as the ideological aspects of reading and writing. In other words, IRW courses should convey what reading and writing practices are as well as where, when, why, and how to apply specific academic literacy practices in college and beyond. When designing an integrated course, Shanahan (1997) posited that integration works best when there are clearly specified outcomes through which instructors can plan, teach, and assess in powerful ways. In his view, integration is not an end in itself. Instead, it is a vehicle for accomplishing learning goals that "take advantage of the best and most rigorous thinking of the disciplinary fields" (Shanahan, 1997, p. 18) and that can only be accomplished through integration.
Additionally, successful integration makes students conscious of the connections between reading and writing, along with “the cultural differences that exist across disciplines and how to translate across these boundaries” (Shanahan, 1997, p. 18). Finally, for Shanahan, integration is most effective when students are given adequate instruction, guidance, and practice within a meaningful context. In other words, explicit instruction and guided practice are necessary, even within integrated instruction, for students to develop academic literacy (see Pawan & Honeyford, 2009 for
a complete discussion on academic literacy development). Along these lines, many IRW course models, such as those introduced previously in this chapter, have reported success in providing holistic, process-based literacy instruction. Each of these course models emphasizes a seamless integration of reading and writing through authentic college-level academic literacy tasks. In terms of integration, equal attention must be given to both curriculum and instruction. This means moving beyond an understanding of the conceptual links between reading and writing to gain an understanding of how the reading-writing connection informs literacy instruction. Advocating for intensive one-semester and/or multiple-semester IRW courses, Goen and Gillotte-Tropp (2003) explain that “learning and improvement in reading and writing develop gradually and are directly related to the notion of writing and reading as situated within communities of discourse” (p. 96). Thus, time is a necessary consideration for IRW course success, as it is through time that learning develops and communities form. In addition to time, academic literacy development takes practice; therefore, IRW courses should move at a pace “more conducive to learning, as opposed to teaching” (Goen & Gillotte-Tropp, 2003, p. 97). Specifically related to developmental or basic writing courses, Goen and Gillotte-Tropp (2003) promote the idea of mainstreaming, where students are extended academic membership through access to college-level, credit-bearing writing coursework along with additional supports. This support and scaffolding should “help students become adept at sophisticated literate activities” and provide “opportunities for students to interact with language actively for authentic communicative purposes” (Goen & Gillotte-Tropp, 2003, p. 98). 
Similar principles guide the course design in IRW programs across the nation (Edgecombe et al., 2014; Hayes & Williams, 2016; Hern, 2011, 2012, 2013; Lockhart & Soliday, 2016; Marsh, 2015; Salsburg Taylor et al., 2016; Soliday & Gleason, 1997). The value and necessity of IRW instruction goes well beyond developmental or basic writing courses and should be practiced in first-year writing courses and upper-level writing courses alike. As several researchers point out, there should be no true distinction in the IRW activities seen in developmental, first-year, and upper-level writing courses (Bartholomae & Petrosky, 1986; Goen & Gillotte-Tropp, 2003; Goen-Salter, 2008; Hayes & Williams, 2016; Hern, 2011, 2012; Lockhart & Soliday, 2016; Marsh, 2015). Although the standard for postsecondary academic literacy remains the same at all levels, the intensity of the support and scaffolding provided through the course instruction will vary based on where students are in their academic literacy development. Marsh (2015) presents three key concepts that should guide IRW instruction at both the developmental and college-credit levels. First, reading is transactional in that students dialogue with a text using annotation to make connections between and among texts to facilitate understanding and analysis. Second, interactive reading leads to “intellectual transcendence that moves students beyond superficial understandings of both texts and the issues raised in texts” (Marsh, 2015, p. 64). Third, effective writing often results from effective reading, and these academic literacy practices can transfer to other subjects, disciplines, and the community at large as students learn to “think for themselves, advocate for themselves, and process the complexity of the world around them” (Marsh, 2015, p. 65).
IRW Course Technologies

As IRW continues to grow in popularity, online platforms that claim to support IRW instruction continue to emerge. Several companies have devised programs that are presented as complete IRW courses that can be delivered through various learning management systems. These programs, such as MindTap, Connect, MyLab, EdReady, NROC, and Lumen Waymaker, can be alluring in that they provide adaptive instruction where modules, activities, and assignments are tailored to the individual needs of the students based on their performance on an initial diagnostic exercise. These programs also provide automated grading and student feedback, which
Reading and Writing
can significantly reduce the burden on instructors to complete such tasks. Automated grading and feedback also make it possible for these “courses” to be offered with little to no support from an actual instructor—thus, many institutions see these programs as a major cost saver. Another attractive element of these programs is the ability to collect data on student progress that can then be analyzed and used by program instructors and administrators. Although there are several attractive features of these online programs, there are at least as many unattractive features. A major problem with these online programs is that they are often heavily focused on grammar and sentence-level writing instruction. In some cases, these programs do not require students to write full-length essays, if they are required to complete any writing at all. When writing is required, it is too often focused on personal narratives and reflections as opposed to the kinds of source-based expository and argumentative writing that are required in college-level coursework. In terms of reading instruction, these programs are most likely to provide instruction on very basic reading skills, such as finding the main idea, and basic reading strategies, such as annotating a text (see NROC, 2015, for a sample table of contents). These programs also fail to offer students authentic reading experiences with the complex academic texts that they are more likely to encounter in college coursework. Unfortunately, in most cases, these technologies simply combine BRW in one course while still promoting a decontextualized, skills-based approach. There is little to no focus on critical thinking, and reading and writing are not presented as interconnected, meaning-making processes.
When considering the use of online programs or textbooks and other instructional materials, IRW professionals should consider how well these materials align with the principles, goals, and objectives of IRW instruction as outlined in, for example, the work of Bartholomae and Petrosky (1986), Goen and Gillotte-Tropp (2003), or others as cited in this chapter.
Classroom Practice

With an understanding of how to structure an IRW course, it is vital to consider the actual execution of an IRW course. In other words, IRW professionals need to understand how, specifically, they carry out the content of the course. In terms of classroom practice, using a thematic approach in IRW courses is strongly endorsed by the research literature (Bartholomae & Petrosky, 1986; College Academic Support Programs in Texas [CASP], 2014; Hayes & Williams, 2016; Hern, 2011, 2012; Shanahan, 1997). One of the ways in which this seamless integration can take place is through a course that is structured around themes or critical issues. When taking a thematic/critical issues approach, students focus on larger issues and ideas as opposed to reading and writing subskills. At the crux of a thematic/critical issues approach is critical thinking; reading and writing are used as vehicles for this thinking. An IRW course can be organized around a single theme or around two or more themes within a semester. When using a single theme, instructors start the course with a challenging, high-interest reading on the topic. As the semester progresses, students engage readings on the same topic that may present diverse perspectives and genres. The benefit of a single-theme course is that students develop schemata and topic-specific vocabulary over the course of the semester—allowing them to “more easily bridge between reading and transfer reading and writing skills to the next selection, project, or assignment” (CASP, 2014, p. 2). On the other hand, a multi-themed course accommodates a wide variety of student interests while still offering schema building and the transfer of reading and writing skills from theme to theme.
While supporting the use of thematic units, Shanahan (1997) cautions that taking a thematic approach does not necessarily lead to improved reading and writing skills, a better grasp of the ideas that are studied, increased application of knowledge to real problems, or greater motivation among students. Effective IRW classroom practice is more than the arrangement of content and includes the specific ways in which the instructor provides guidance and practice with critical
Armstrong, Williams, and Stahl
reading and writing skills. For example, grammar should be taught but not in an isolated, subskills fashion. As seen in Chabot College’s IRW course pedagogy, grammar should be taught using a whole-language or holistic approach. Specifically, “to the extent possible, instruction in sentence structure, punctuation, and other grammar topics are embedded in writing and reading assignments” (Edgecombe et al., 2014, p. 6; see also Hern, 2011, 2012, 2013). Along these lines, Hayes and Williams (2016) emphasize IRW classroom practice centered on skill-embedded curriculum and thinking-focused pedagogy, along with a thematic approach, to promote deeper learning and motivation among students. Aside from organizing the course around themes, other suggestions for IRW classroom practice include recursive reading-writing activities, continuous reading-writing activities at many levels, taking the reader and writer perspective, peer review, summary writing, and metacognition (CASP, 2014; El-Hindi, 1997). According to CASP (2014),

Recursive reading-writing activities allow students to view ideas through multiple levels by reading and writing about the topic in several ways, evaluating both an author’s and their own ideas about the topic, and responding to peers’ ideas about the topic. (p. 5)

Continuous reading and writing activities facilitate learning from texts at a variety of levels, such as grammar, style, ideas, and vocabulary. Taking the perspective of the reader and the writer allows students to use text to learn content, evaluate ideas, present ideas, and consider audience. Peer review puts the focus on student-produced texts and allows students to respond to their classmates through the lens of both a reader and a writer. With summary writing, students practice engaging a text with the goal of locating central themes and the support for these themes and then writing a text for an uninformed audience.
Finally, metacognition encourages students to consider how, why, and when to employ various reading and writing strategies and processes (CASP, 2014). Goen and Gillotte-Tropp (2003) suggest five core objectives for ensuring that an IRW course “provides students with opportunities for active interaction with texts in meaningful contexts” (p. 98). To help students understand reading and writing practices for and beyond academic purposes and across a range of tasks, required readings should include a wide range of materials written from various points of view. Building students’ awareness and knowledge of their own mental processes includes providing them with a variety of idea-generating tools, such as freewriting, clustering, previewing, prereading, and questioning. Another important objective is developing students’ understanding of the rhetorical purposes of reading and writing through tasks such as recall and interpretation of texts, employing efficient study skills, and essay planning, drafting, and revision. IRW classroom practice should facilitate students’ experience of literacy as problem-solving, reasoning, and reflecting. This is accomplished by using reading and writing activities to participate in current conversations about important social issues and through community-building activities that build students’ investment in and motivation for academic literacy. Lastly, a key objective is to help students develop enjoyment, satisfaction, and confidence in reading and writing through student self-assessment and reflection (Goen & Gillotte-Tropp, 2003). Emphasizing the importance of integrating reading and writing at both the developmental and college level, Marsh (2015) identifies general “contact zones” for classroom practice at any instructional level. First, students should have contact with language by interacting with their own language, the instructor’s language, the language of other students, and the language used in the various course readings. 
Second, students should have contact with quote integration, moving beyond simple quoting to valuing and evaluating ideas. Third, students should have contact with texts that facilitate “absorbing, branching, and committing” (Marsh, 2015, p. 68) to an ethical stance in relation to the text and its meaning. Finally, students should have contact that results in the changing
of preexisting ideas and that results in civic and academic engagement. These contact zones reflect what should be the goals of reading and writing instruction as well as a set of interventions that clarify the practice of IRW instruction. Lockhart and Soliday (2016) shed light on the crucial role that IRW, at all levels of writing instruction, plays in students’ success in their academic major coursework. Their research found that students who experienced literacy instruction that tied reading and writing together as interrelated meaning-making practices continued to use these academic literacy practices as they encountered more difficult courses and more complex reading/writing situations. Specifically, students reported that it was their critical “reading practices which allowed them to enter the discourse of their major, to write successfully, and to find a place within the academy” (Lockhart & Soliday, 2016, p. 24). Lockhart and Soliday conclude that curricular and instructional practices that place reading at the center of the course alongside writing are crucial in promoting positive learning experiences and academic literacy skills transfer to more advanced coursework.
Assessment and Evaluation

A very important, yet often overlooked, component of IRW courses is assessment and evaluation. This includes assessment and evaluation of the course itself as well as the evaluation and assessment of student success beyond IRW courses. When assessing student success, and the success of any course or program, the use of multiple data points is strongly supported in the research literature (Maxwell, 1997; Morante, 2012; Perin, Raufman, & Kalamkarian, 2015; Simpson, 2002; Simpson & Nist, 1992; Wood, 1988; see also Chapter 20 for additional discussion of multiple measures). Student success in IRW courses is determined based on a range of data points including course grades, scores on standardized reading and writing tests, portfolios of student writing, and student self-assessments. Hern (2011, 2012) outlines a comprehensive, growth-centered approach to assessing student success throughout the course. In this approach, assessment of students’ work is holistic and progressive—meaning that there is a tolerance for less-than-perfect work early in the course, but as the course progresses, the standards for student work increase. This progressive approach to student assessment is also seen in the work of Hayes and Williams (2016) and others. In addition, several IRW courses emphasize the use of students’ writing portfolios along with a self-assessment of their work in order to provide valuable insight into how successful students have been in meeting the course objectives (Edgecombe et al., 2014; Hayes & Williams, 2016; Hern, 2011, 2012; Lockhart & Soliday, 2016; Marsh, 2015; Salsburg Taylor et al., 2016; Soliday & Gleason, 1997).
Although not exclusive to the postsecondary level, the Joint Task Force on Assessment of the International Reading Association (now the International Literacy Association) and the National Council of Teachers of English has published “a set of standards to guide decisions about assessing the teaching and learning of literacy” (2010, p. 1) that may also be a useful reference. Soliday and Gleason (1997) point to the importance of embedded assessments. They assert that along with a programmatic emphasis, “a thorough documentation of the alternative program’s success has to be incorporated into any plan from its inception” (Soliday & Gleason, 1997, p. 76). Embedded in their program was ongoing formative and summative evaluation. Statistical analyses of student progress and achievement, along with an assessment of student writing and learning, were key components of their evaluation. They go on to explain that such evaluations are an invaluable resource for program development, accountability, and effective representation of the interests of IRW programs. Goen and Gillotte-Tropp (2003) outline a robust plan for program assessment that includes multiple measures, both quantitative and qualitative. Specifically, their program assessment focused on end-of-year grade comparisons, comparative gains on standardized reading tests, comparisons of holistically scored portfolios of student writing, self-assessments of students completing
the IRW program, and pass rates in the second-year written composition course compared to those of students who took the traditional, multicourse sequence. The combination of these various data sources allowed for a thorough assessment of whether the IRW course was meeting the intended objectives. Other important data points for course and program assessment are credit accumulation (especially for students in developmental courses); overall grade point average (GPA) in college-level coursework; and persistence outcomes, including retention, transfer, and graduation rates (Edgecombe et al., 2014).
Conclusion

In conclusion, we call upon Schultz and Hull’s (2002, p. 19) summation of Scribner and Cole’s (1981) foundational work in literacy, that “literacy is not literacy is not literacy,” to acknowledge that reading and writing practices are not universal across modes, mediums, and contexts. Indeed, literacy practices are diverse, particularly in academic contexts intended to prepare learners for multiple disciplinary specializations. Thus, our contention in this chapter is that professionals associated with the integration of these literacy practices must possess expertise with knowledge bases that go well beyond the scope of the traditional college literacy generalist. In this chapter, we have presented three broad knowledge bases that we believe should drive the development and implementation (including assessment and evaluation) of IRW courses and programs at the postsecondary level: theoretical knowledge, content and disciplinary knowledge, and curriculum and instruction knowledge. We have argued that the work of IRW must be informed by scholarship that both bridges and transcends reading and writing as discrete instructional areas. In the section that follows, we explore some tangential areas that remain moving targets for the field and thus have implications for ongoing exploration.
Implications for Practice

The primary implication for practice that stems from this overview of knowledge bases needed for IRW professionals is meaningful, intentional, and targeted professional development. This may, of course, become part of a much larger conversation regarding credentialing that has emerged of late (Stahl & Armstrong, 2018). For instance, the point that Downs and Wardle (2007) make about writing can easily be applied to IRW:

Our field’s current labor practices reinforce cultural misconceptions that anyone can teach writing because there is nothing special to know about it. By employing nonspecialists to teach a specialized body of knowledge, we undermine our own claims as to that specialization and make our detractors’ argument in favor of general writing skills for them. (pp. 575–576)

There is, therefore, a need for additional work on areas related to professional development specific to IRW. To this end, Caverly, Salsburg Taylor, Dimino, and Lampi (2016) have explored the use of the “generational model” (Caverly, Peterson, & Mandeville, 1997) for purposes of evaluating—both formatively and summatively—the professional development and subsequent instruction of IRW professionals. Here, we put forward a call to action not just for individual faculty and staff to seek out professional development activities but also for institutions to provide opportunities for meaningful, well-informed faculty/staff enrichment specific to IRW. In addition to, and perhaps as part of, such professional development, another implication of this overview of the literature is that those involved in the development—or redesign—of IRW
courses and programs undertake a strategic and thoughtful curriculum-design approach. Such an approach begins with an understanding of reading, of writing, and of integration (Armstrong, 2015a, 2015b; Stahl, 2015) that is both shaped by and vetted by all stakeholders. This is particularly important when faculty and staff across multiple programs or departments are involved in the design. Such conversations should further be guided by cross-disciplinary and big-tent scholarship as discussed in the “Theoretical Knowledge” section of this chapter to get to the why of IRW. With a solid theoretical footing, IRW professionals can then embark on discussions related to the what of IRW, as discussed in the “Content Knowledge” section of this chapter. Although we have already noted that a substantial literature base specific to postsecondary IRW content has not yet been developed, we have provided areas of the literature where intersections of reading and writing instruction can inform course content. What is especially important here is frequent checking of the what against the why. Next, as discussed in the “Curriculum and Instruction Knowledge” section of this chapter, the how of IRW requires professionals to have a broad understanding of principles of pedagogy, curriculum design, assessment, and evaluation. We see this call for professional development and credentialing opportunities, as well as the related call for well-informed and well-planned course development, as the most important implications to follow from our explanation of needed knowledge bases. One reason for this parallels our call, throughout the chapter, for immersion in a broader literature base (both with respect to publication date and affiliated discipline). Namely, we acknowledge that there is no unified discipline or field called IRW; thus, we, as IRW professionals, carry the onus of self-guided inquiry if we strive for informed, meaningful, and beneficial learning for our students.
Recommendations for Future Research

Perhaps the most important conclusion reached from this review of the literature on and surrounding IRW is that there is a need for research in the area, particularly at the postsecondary level. Although we are of the opinion that much research is needed across a number of areas, populations, and contexts, we focus here on two major areas that stemmed from our review of the literature: acceleration and practice.
Informed Acceleration

Especially considering the current focus on college and career readiness, and informed by the construct of disciplinary literacies, research might explore the reading and writing practices required across and within specific disciplines at the college level. Such research could inform a backward design model (Hern, 2013; Wiggins & McTighe, 2005) that identifies the literacy practices students require in their next-level courses and back-maps those practices to instruction in IRW courses. Math, and particularly developmental math, at the college level provides a valuable model in the two pathways, Quantway and Statway, commissioned by the Carnegie Foundation (Cullinane & Treisman, 2010). These pathways provide informed acceleration of introductory coursework that is guided by the actual needs of learners rather than traditional assumptions of what is important. Similarly, research that explores discipline/major-specific IRW courses (in the same vein as traditional paired courses, for instance) could be useful for the field.
Impactful Practice

There is a need for further research on the effectiveness of IRW classroom practices; that is, we urge research on impactful practices. Although this chapter presents several classroom practices
garnered from studies of IRW courses and programs, there is limited research on how these specific practices impact students’ learning and success in IRW courses and beyond. There is a need for such research, and one model would be formative and design experiments (Bradley & Reinking, 2011; Reinking & Bradley, 2008). Directly related to this, there is also a need for more research on IRW course and program assessment (see Chapter 20 for additional discussion of this need). Most significantly and perhaps most urgently, there currently exists no validated or nationally normed IRW-specific assessment instrument for use in placement, exit, or program evaluation protocols. A third subarea of impactful practice would focus on hybrid or online IRW instruction. Although the literature on IRW instruction focuses exclusively on traditional, face-to-face classroom formats, institutions across the nation are faced with the challenge of providing effective IRW instruction online. Practical guidance and studies on IRW in online platforms would be of great benefit to distance education professionals. We end this chapter with a simple caution. IRW as a comprehensive model, when built on a firm foundation of theory, pedagogically oriented research (although limited), and sound praxis, has great promise in providing students at all developmental and academic levels with opportunities to develop greater and more complex knowledge bases, competencies, and positive dispositions pertaining to literacy. On the other hand, there is clearly a responsibility for all instructors and administrators to regularly undertake both formative and summative evaluation so as to revise programming to promote ever more positive curricular design and effective instruction.
References and Suggested Readings (*)

Abartis, C., & Collins, C. (1980). The effect of writing instruction and reading methodology upon college students’ reading skills. Journal of Reading, 23(5), 408–413.
Ackerman, J. M. (1991). Reading, writing, and knowing: The role of disciplinary knowledge in comprehension and composing. Research in the Teaching of English, 25(2), 133–178.
Applebee, A. N. (1977). Writing across the curriculum: The London projects. English Journal, 66(9), 81–85.
Armstrong, S. L. (2015a, February). Tracing the roots of IRW: The theory and history of reading and writing at the college level. Invited session presented at the NADE/CRLA Co-Sponsored Integrated Reading and Writing Summit at the National Association for Developmental Education annual conference, Greenville, SC.
Armstrong, S. L. (2015b, November). Something old, something new: Five considerations for integrating. Invited session presented at the CRLA/NADE Co-Sponsored Integrated Reading and Writing Summit at the College Reading and Learning Association annual conference, Portland, OR.
Armstrong, S. L., & Stahl, N. A. (2017). Communication across the silos and borders: The culture of reading in a community college. Journal of College Reading and Learning, 47(2), 99–122.
Atwell, N. (1987). In the middle: Writing, reading, and learning with adolescents. Westport, CT: Heinemann-Boynton/Cook.
Austin, R. (1986). Developmental writing courses: A vehicle for teaching writing across the disciplines. Journal of College Reading and Learning, 19(1), 87–90.
Auten, A. (1983). Reading and writing: A mutual support system. Journal of Reading, 26(4), 366–369.
Ball, D. L., Thames, M. H., & Phelps, G. (2008). Content knowledge for teaching: What makes it special? Journal of Teacher Education, 59(5), 389–407.
Barthes, R. (1979). The Eiffel Tower, and other mythologies. New York, NY: Hill and Wang.
*Bartholomae, D. (1985). Inventing the university. In M. Rose (Ed.), When a writer can’t write: Studies in writer’s block and other composing process problems (pp. 134–175). New York, NY: Guilford.
*Bartholomae, D., & Petrosky, A. (Eds.). (1986). Facts, artifacts and counterfacts: Theory and method for a reading and writing course. Portsmouth, NH: Boynton/Cook.
Bereiter, C., & Scardamalia, M. (1984). Learning about writing from reading. Written Communication, 1(2), 163–188.
Berlin, J. A. (1984). Writing instruction in nineteenth-century American colleges. Carbondale, IL: Southern Illinois University Press.
Berninger, V. W., Abbott, R. D., Abbott, S. P., Graham, S., & Richards, T. (2002). Writing and reading: Connections between language by hand and by eye. Journal of Learning Disabilities, 35, 39–56.
Berninger, V. W., Cartwright, A. C., Yates, C. M., Swanson, H. L., & Abbott, R. D. (1994). Developmental skills related to writing and reading acquisition in the intermediate grades. Reading and Writing: An Interdisciplinary Journal, 6, 161–196.
Bernstein, S. N. (2008). Material realities in the basic writing classroom: Intersections of discovery for young women reading Persepolis 2. Journal of Basic Writing, 27(1), 80–104.
Bickerstaff, S., & Raufman, J. (2017). From “additive” to “integrative”: Experiences of faculty teaching developmental integrated reading and writing courses. CCRC Working Paper. New York, NY: Teachers College, Columbia University.
Bladow, K., & Johnson, S. (2009–2010). Infusing reading strategies in the composition classroom. Innovative Learning Strategies, 20, 1–12.
Bojar, K. (1983). Beyond the basics: The humanities in the developmental curriculum. Journal of Developmental & Remedial Education, 6(2), 18–21.
Bradley, B. A., & Reinking, D. (2011). Revisiting the connection between research and practice using formative and design experiments. In N. K. Duke & M. H. Mallette (Eds.), Literacy research methodologies (2nd ed., pp. 188–212). New York, NY: Guilford.
Brown, J. W., Watson, M., & Bowden, R. (1970). Building basic skills at the community college level: A new approach. Journal of the Reading Specialist, 9, 144–150, 158.
*Bruner, J. S. (1960). The process of education. Cambridge, MA: Harvard University Press.
*Burke, K. (1973). The philosophy of literary form: Studies in symbolic action (3rd ed.). Berkeley, CA: University of California Press.
Burrell, K. I., Tao, L., Simpson, M. L., & Mendez-Berrueta, H. (1997). How do we know what we are preparing our students for? A reality check of one university’s academic literacy demands. Research and Teaching in Developmental Education, 13(2), 55–70.
Carson, J. G., Chase, N. D., Gibson, S. U., & Hargrove, M. F. (1992). Literacy demands of the undergraduate curriculum. Reading Research and Instruction, 31(4), 25–50.
Caverly, D. C., Peterson, C. L., & Mandeville, T. F. (1997). A generational model for professional development. Educational Leadership, 55(3), 56–59.
Caverly, D. C., Salsburg Taylor, J., Dimino, R. K., & Lampi, J. P. (2016). Connecting practice to research: Integrated reading and writing instruction assessment. Journal of Developmental Education, 39(3), 30–31.
Cervi, P., & Schaefer, S. (1986). Reading, writing, and reasoning for health science majors. Journal of College Reading and Learning, 19(1), 59–63.
Chamblee, C. M. (1998). Bringing life to reading and writing for at-risk college students. Journal of Adolescent & Adult Literacy, 41(7), 532–537.
Chase, N. D., Gibson, S. U., & Carson, J. G. (1994). An examination of reading demands across four college courses. Journal of Developmental Education, 18(1), 10–12, 14, 16.
*Clifford, G. J. (1988). A Sisyphean task: Historical perspectives on the relationship between writing and reading instruction. Urbana-Champaign, IL: University of Illinois at Urbana-Champaign, Center for the Study of Reading.
College Academic Support Programs in Texas. (2014). Texas toolbox: IRW. Retrieved from www.casp-texas.com/wp-content/uploads/2013/08/Texas-Toolbox_IRW-FINAL.pdf
Compton, M., & Pavlik, R. A. (1978). The UNC reading-writing in the content fields program bridges the gap between language and content. Proceedings of the Eleventh Annual Conference of the Western College Reading Association, 11, 124–128.
Crowhurst, M. (1990). Reading/writing relationships: An intervention study. Canadian Journal of Education, 15, 348–359.
Crowhurst, M. (1991). Interrelationships between reading and writing persuasive discourse. Research in the Teaching of English, 25(3), 314–338.
Cullinane, J., & Treisman, P. U. (2010). Improving developmental mathematics education in community colleges: A prospectus and early progress report on the Statway initiative. An NCPR Working Paper. New York, NY: National Center for Postsecondary Research.
Cunningham, A. E., Perry, K. E., Stanovich, K. E., & Stanovich, P. J. (2004). Disciplinary knowledge of K-3 teachers and their knowledge calibration in the domain of early literacy. Annals of Dyslexia, 54, 139–167.
Davis, J. R., & Kimmel, I. (1993). Reading whole books is basic. Research and Teaching in Developmental Education, 9(2), 65–71.
De Fina, A. A., Anstendig, L. L., & De Lawter, K. (1991). Alternative integrated reading/writing assessment and curriculum design. Journal of Reading, 35(5), 354–359.
Dickson, M. (1995). It’s not like that here: Teaching academic writing and reading to novice writers. Portsmouth, NH: Boynton/Cook.
Donahue, P. A. (2005). Content (and discontent) in composition studies. Writing on the Edge, 15(2), 30–36.
Donahue, P. A., & Moon, G. F. (Eds.). (2007). Local histories: Reading the archives of composition. Pittsburgh, PA: University of Pittsburgh Press.
Downs, D., & Wardle, E. (2007). Teaching about writing, righting misconceptions: (Re)Envisioning ‘first-year’ composition as ‘introduction to writing studies.’ College Composition and Communication, 58(4), 552–584.
DuBrowa, M. (2011). Integrating reading and writing: One professor’s story. Research & Teaching in Developmental Education, 28(1), 30–33.
Dwyer, E. J., & Lofton, G. (1995). Organization for instruction in the college reading/writing classroom. Research and Teaching in Developmental Education, 12(1), 79–82.
Edgecombe, N., Jaggars, S. S., Xu, D., & Barragan, M. (2014). Accelerating the integrated instruction of developmental reading and writing: An analysis of Chabot College’s developmental English pathway. New York, NY: Columbia University, Teachers College, Community College Research Center.
El-Hindi, A. E. (1997). Connecting reading and writing: College learners’ metacognitive awareness. Journal of Developmental Education, 21(2), 10–12, 14, 16, 18.
Elbow, P. (2002). The cultures of literature and composition: What could each learn from the other? College English, 64(5), 533–546.
Emig, J. (1982). Writing, composition, and rhetoric. In H. E. Mitzell (Ed.), Encyclopedia of educational research (5th ed., pp. 2021–2036). New York, NY: Free Press.
*Emig, J. (1983). The web of meaning: Essays on writing, teaching, learning, and thinking. Portsmouth, NH: Boynton/Cook.
Engstrom, E. U. (2005). Reading, writing, and assistive technology: An integrated developmental curriculum for college students. Journal of Adolescent & Adult Literacy, 49(1), 30–39.
*Fader, D., & McNeil, E. B. (1969). Hooked on books: Program and proof. New York, NY: Berkeley Publishing.
Fitzgerald, S. (1999). Basic writing in one California community college. Basic Writing e-Journal, 1(2). Retrieved from bwe.ccny.cuny.edu/Issue%201.2.html#sally
Fitzgerald, S. H. (2003). Serving basic writers: One community college’s mission statements. Journal of Basic Writing, 22(1), 5–12.
Fitzgerald, J., & Shanahan, T. (2000). Reading and writing relations and their development. Educational Psychologist, 35, 39–50.
*Flood, J., & Lapp, D. (1987). Reading and writing relations: Assumptions and directions. In J. Squires (Ed.), The dynamics of language learning (pp. 9–26). Urbana, IL: NCTE.
Frailey, M., Buck-Rodriguez, G., & Anders, P. L. (2009). Literary letters: Developmental readers’ responses to popular fiction. Journal of Developmental Education, 33(1), 2–12.
*Freire, P. (1968). Pedagogy of the oppressed. New York, NY: Seabury.
French, M. P., Danielson, K. E., Conn, M., Gale, W., Lueck, C., & Manley, M. (1989). Reading and writing in content areas. The Reading Teacher, 43(3), 266.
Glau, G. R. (1996). The “Stretch Program”: Arizona State University’s new model of university-level basic writing instruction. Writing Program Administration, 20(1–2), 79–91.
*Goen, S., & Gillotte-Tropp, H. (2003). Integrated reading and writing: A response to the basic writing ‘crisis.’ Journal of Basic Writing, 22(2), 90–113.
*Goen-Salter, S. (2008). Critiquing the need to eliminate remediation: Lessons from San Francisco State University. Journal of Basic Writing, 27(2), 81–105.
Graham, S., & Hebert, M. (2010). Writing to read: Evidence for how writing can improve reading. A Carnegie Corporation Time to Act Report. Washington, DC: Alliance for Excellent Education.
Greene, S. (1993). The role of task in the development of academic thinking through reading and writing in a college history course. Research in the Teaching of English, 27(1), 46–75.
Grossman, P. L., Wilson, S. M., & Shulman, L. S. (1989). Teachers of substance: Subject matter knowledge for teaching. In M. C. Reynolds (Ed.), Knowledge base for the beginning teacher. New York, NY: Pergamon Press.
Grubb, W. N., & Gabriner, B. (2013). Basic skills education in community colleges: Inside and outside of classrooms. New York, NY: Routledge.
Hamer, A. B. (1996–97). Adding in-class writing to the college reading curriculum: Problems and pluses. Forum for Reading, 27, 25–33.
*Harl, A. L. (2013). A historical and theoretical review of the literature: Reading and writing connections. In A. S. Horning & E. W. Kraemer (Eds.), Reconnecting reading and writing. Anderson, SC: Parlor Press.
Hayes, C. G. (1990, May). Using writing to promote reading to learn in college. Paper presented at the Annual Meeting of the International Reading Association, Atlanta, GA.
162
Reading and Writing
Hayes, C. G., Stahl, N. A., & Simpson, M. L. (1991). Language, meaning, and knowledge: Empowering developmental students to participate in the academy. Reading Research and Instruction, 30(3), 89–100. Hayes, M. F. (1981–82). Reading theory and practice: What writing teachers need to know. Innovative Learning Strategies, 5, 44–49. Hayes, S. M., & Williams, J. L. (2016). ACLT 052: Academic literacy—An integrated, accelerated model for developmental reading and writing. NADE Digest, 9(1), 13–22. Hayward, C., & Willett, T. (2014). Curricular redesign and gatekeeper completion: A multi-college evaluation of the California Acceleration Project. Retrieved from cap.3csn.org/files/2014/04/CAPReportFinal3.0.pdf Herlin, W. R. (1978). An attempt to teach reading and writing skills in conjunction with a beginning biology course for majors. Proceedings of the Eleventh Annual Conference of the Western College Reading Association, 11, 134–136. Hern, K. (2011). Accelerated English at Chabot College: A synthesis of key findings. Hayward, CA: California Acceleration Project. Hern, K. (2012). Acceleration across California: Shorter pathways in developmental English and math. Change: The Magazine of Higher Learning, 44(3), 60–68. *Hern, K. (with Snell, M.). (2013). Toward a vision of accelerated curriculum and pedagogy: High challenge, high support classrooms for underprepared students. Oakland, CA: LearningWorks. Hern, K., & Snell, M. (2010). Exponential attrition and the promise of acceleration in developmental English and math. Hayward, CA: Chabot College. Retrieved from www.careerladdersproject.org/docs/Exponential%20 Attrition.pdf. Hjelmervik, K. M., & Merriman, S. B. (1983). The basic reading and writing phenomenon: Writing as evidence of thinking. In J. N. Hayes (Ed.), The writer’s mind: Writing as a mode of thinking (pp. 103–112). Urbana, IL: National Council of Teachers of English. Holschuh, J. P., & Paulson, E. J. (2013, July). The terrain of college developmental reading. 
Executive summary and paper commissioned by the College Reading and Learning Association (CRLA). Retrieved from www.crla.net/images/whitepaper/TheTerrainofCollege91913.pdf Horner, W. B. (1983). Composition and literature: Bridging the gap. Chicago, IL: University of Chicago Press. *Jackson, J. M. (2009). Reading/writing connection. In R. F. Flippo & D. C. Caverly (Eds.), Handbook of college reading and study strategy research (2nd ed., pp. 145–173). New York, NY: Routledge. Joint Task Force on Assessment of the International Reading Association and the National Council of Teachers of English (2010). Standards for the assessment of reading and writing. Newark, DE: International Reading Association and the National Council of Teachers of English. Kaestle, C. F. (Ed.). (1991). Literacy in the United States: Readers and reading since 1880. New Haven, CT: Yale University Press. Kennedy, M. L. (1980). Reading and writing; Interrelated skills of literacy on the college level. Reading World, 20(2), 131–141. Kerstiens, G., & Tyo, J. (1983). Write to read: A tale of two techniques. Journal of College Reading and Learning, 16(1), 26–33. Kimmel, I. W. (1993). Instructor response: Yet another reading-writing connection. Research and Teaching in Developmental Education, 10(1), 117–122. Kucer, S. L. (1985). The making of meaning: Reading and writing as parallel processes. Written Communication, 2(3), 317–336. Kucer, S. B. (2012). Dimensions of literacy: A conceptual base for teaching reading and writing in school settings (2nd ed.). Mahwah, NJ: Lawrence Erlbaum. Lampi, J. P., Dimino, R. K., & Salsburg Taylor, J. (2015). Connecting practice to research: A shared growth professional development model. Journal of Developmental Education, 39(1), 32–33. *Langer, J. A., & Flihan, S. (2000). Writing and reading relationships: Constructive tasks. In R. Indrisano & J. R. Squire, (Eds.), Writing: Research/theory/practice (pp. 112–139). Newark, DE: International Reading Association. Lea, M. 
R., & Street, B. V. (2006). The “academic literacies” model: Theory and applications. Theory into Practice, 45(4), 368–377. Lewin, K. (1943). Psychology and the process of group living. Journal of Social Psychology, 17, 113–131. Lewis, H. D. (1987). Interactive influences in the composing and comprehending processes of academically underprepared college students. Research and Teaching in Developmental Education, 3(1), 50–65. Lindemann, E. (1993). Freshman composition: No place for literature.” College English, 55(3), 311–316. Loban, W. (1963). The language of elementary school children. Urbana, IL: National Council of Teachers of English.
163
Armstrong, Williams, and Stahl
Lockhart, T., & Soliday, M. (2016). The critical place of reading in writing transfer (and beyond): A report of student experiences. Pedagogy, 16(1), 23–37. Long, S. J. (1987). Designing a reading course for industrial workers: what materials to include and what factors to consider. Journal of College Reading and Learning, 20(1), 63–70. Malinowski, P. A. (1986). The reading-writing connection: An overview and annotated bibliography. Retrieved from ERIC database (ED285138). Marsh, B. (2015). Reading-writing integration in developmental and first-year composition. Teaching English in the Two-Year College, 43(1), 58–70. Mathews, E. G., Larsen, R. P., & Butler, G. (1945). Experimental investigation of the relation between reading training and achievement in college composition classes. Journal of Educational Research, 38(7), 499–505. Maxwell, M. (1997). Improving student learning skills: A new edition. Clearwater, FL: H&H Publishing. McCarthy, D. N. (1975). A confluent reading/writing fundamentals curriculum. Proceedings of the Eighth Annual Conference of the Western College Reading Association, 8, 139–144. McCormick, K. (1994). The culture of reading & the teaching of English. Manchester, UK: Manchester University Press. McCrary, D. (2009). [Not] losing my religion: Using The Color Purple to promote critical thinking in the writing classroom. Journal of Basic Writing, 28(1), 5–31. McKusick, D., Holmberg, B., Marello, C., & Little, E. (1997). Integrating reading and writing: Theory to research to practice. NADE Selected Conference Papers, 3, 30–32. Mikulecky, L. (2010). An examination of workplace literacy research from New Literacies and social perspectives. In E. A. Baker (Ed.). The New Literacies: Multiple Perspectives on Research and Practice (pp. 217–241). New York, NY: Guilford Press. *Moffett, J. (1968). Teaching the universe of discourse. Boston, MA: Houghton Mifflin. Moje, E. B. (2008). 
Foregrounding the disciplines in secondary literacy teaching and learning: A call for change. Journal of Adolescent & Adult Literacy, 52(2), 96–107. Morante, E. A. (2012). Editorial: What do placement tests measure? Journal of Developmental Education, 35(3), 28. Morris, L., & Zinn, A. (1995). Ideas in practice: A workshop format for developmental reading classes. Journal of Developmental Education, 18(3), 26–28, 30, 32. Morrison, C. (1990). A literary, whole language college reading program. Journal of Developmental Education, 14(2), 8–10, 12, 18. Murphy, J. J. (Ed.). (2001). A short history of writing instruction: From ancient Greece to modern America. Mahwah, NJ: Erlbaum. Murray, B., & Scott, D. (1993). Text-interactive instruction as a component of the reading-writing connection. Research and Teaching in Developmental Education, 9(2), 25–36. National Center on Education and the Economy. (2013). What does it really mean to be college and work ready? A study of the English literacy and mathematics required for success in the first year of community college. Washington, DC: Author. *Nelson, N., & Calfee, R. C. (Eds.). (1998). The reading-writing connection. Ninety-seventh Yearbook of the National Society for the Study of Education. Chicago, IL: National Society for the Study of Education. NROC (2015). NROC developmental English: An integrated program. Retrieved from www.nroc.org/wpcontent/uploads/sites/30/2015/11/NROC-English.pdf Orlando, V. P., Caverly, D. C., Swetnam, L. A., & Flippo, R. F. (1989). Text demands in college classes: An investigation. Forum for Reading, 21(1), 43–49. Palmer, J. C. (1984). Do college courses improve basic reading and writing skills? Community College Review, 12(2), 20–28. Parodi, G. (2013). Reading-writing connections: Discourse-oriented research. In D. E. Alvermann, N. J. Unrau, & R. B. Ruddell (Eds.), Theoretical models and processes of reading (6th ed., pp. 957–977). Newark, DE: International Reading Association. 
Pawan, F., & Honeyford, M. A. (2009). Academic literacy. In R. F. Flippo & D. C. Caverly (Eds.), Handbook of college reading and study strategies research (2nd ed., pp. 26–46). New York, NY: Taylor and Francis. Perin, D., & Hare, R. (2010). A contextualized reading-writing intervention for community college students. A CCRC Brief. New York, NY: Columbia University, Teachers College, Community College Research Center. Perin, D., Raufman, J., & Kalamkarian, H. S. (2015). Developmental reading and English assessment in researcherpractitioner partnership. New York, NY: Columbia University, Teachers College, Community College Research Center. *Petrosky, A. R. (1982). From story to essay: Reading and writing. College Composition and Communication, 33(1), 19–36.
164
Reading and Writing
Pezzulich, E. (2003). Shifting paradigms: The reemergence of literary texts in composition classrooms. In M. M. Tokarczyk & I. Papoulis (Eds.), Teaching composition/teaching literature: Crossing Great Divides (pp. 26–40). New York, NY: Peter Lang. Phillips, N. (2000). The seamless seven: Integrating writing, reading, and speaking. NADE Selected Conference Papers, 6, 47–50. Pierce, C. A. (2017). Research-based integrated reading and writing course development. Journal of Developmental Education, 40(2), 23–25, 34. Pike, D., & Menegas, I. (1986). Ideas in practice: On the merits of modeling. Journal of Developmental Education, 10(1), 26–27. Powell, A. B., Pierre, E., Ramos, C. (1993). Researching, reading, and writing about writing to learn mathematics: Pedagogy and product. Research and Teaching in Developmental Education, 10(1), 95–109. *Pugh, S. L., & Pawan, F. (1991). Reading, writing, and academic literacy. In R. F. Flippo & D. C. Caverly (Eds.), College reading & study strategy programs (pp. 1–27). Newark, DE: International Reading Association. Reinking, D., & Bradley, B. A. (2008). On formative and design experiments: Approaches to language and literacy research. New York, NY; Teachers College Press. *Richards, I. A. (1942). How to read a page. New York, NY: Norton. Robertson, E. (1985). Entering the world of academic reading. In J. Cooper, R. Evans, & E. Robertson (Eds.) Teaching college students to read analytically: An individualized approach. Urbana, IL: National Council of Teachers of English. Robinson, F. P. (1943). Study skills of soldiers in ASTP. School and Society, 58, 398–399. Robinson, H. A. (Ed.). (1977). Reading and writing instruction in the United States: Historical trends. Urbana, IL: ERIC Clearinghouse on Reading and Communication Skills. *Rosenblatt, L. M. (2013). The transactional model of reading and writing. In D. E. Alvermann, N. J. Unrau, & R. B. Ruddell (Eds.), Theoretical models and processes of reading (6th ed., pp. 923–956). 
Newark, DE: International Reading Association. Russell, D. R. (1991). Writing in the academic disciplines, 1870–1990: A curricular history. Carbondale, IL: Southern Illinois Press. Salsburg Taylor, J., Dimino, R. K., Lampi, J. P., & Caverly, D. C. (2016). Connecting practice to research: Making informed pedagogical decisions. Journal of Developmental Education, 39(2), 30–31. Salvatori, M. (1996). Conversations with texts: Reading in the teaching of composition. College English, 58(4), 440–454. Saxon, D. P., Martirosyan, N. M., & Vick, N. T. (2016a). Best practices and challenges in integrated reading and writing: A survey of field professionals, part 1. Journal of Developmental Education, 39(2), 32–34. Saxon, D. P., Martirosyan, N. M., & Vick, N. T. (2016b). Best practices and challenges in integrated reading and writing: A survey of field professionals, part 2. Journal of Developmental Education, 39(3), 34–35. Schultz, K., & Hull, G. (2002). Locating literacy theory in out-of-school contexts. Retrieved from www.repository. upenn.edu/ gse_pubs/170 Scribner, S., & Cole, M. (1981). The psychology of literacy. Cambridge, MA: Harvard University Press. Shanahan, T. (1980). The impact of writing instruction on learning to read. Reading World, 19, 357–368. Shanahan, T. (1990). Reading and writing together: What does it really mean? In T. Shanahan (Ed.), Reading and writing together: New perspectives for the classroom (pp. 1–21). Norwood, MA: Christopher Gordon Publishers, Inc. Shanahan, T. (1997). Reading-writing relationships, thematic units, inquiry learning…in pursuit of effective integrated literacy instruction. The Reading Teacher, 51(1), 12–19. Shanahan, T. (2006). Relations among oral language, reading, and writing development. In C. A. MacArthur, S. Graham, & J. Fitzgerald (Eds.), Handbook of writing research (pp. 171–183). New York, NY: Guilford. Shanahan, T. (2016). Relationships between reading and writing development. In C. A. MacArthur, S. Graham, & J. 
Fitzgerald (Eds.), Handbook of writing research (pp. 194–207). New York, NY: Guilford. Shanahan, T., & Shanahan, C. (2008). Teaching disciplinary literacy to adolescents: Rethinking c ontentarea literacy. Harvard Educational Review, 78(1), 40–59. *Shanahan, T., & Tierney, R. J. (1990). Reading-writing connections: The relations among three perspectives. In J. Zutell & S. McCormick (Eds.), Literacy theory and research: Analyses from multiple paradigms. Thirty-ninth Yearbook of the National Reading Conference (pp. 13–34). Chicago, IL: National Reading Conference. *Shaughnessy, M. (1977). Errors and expectations: A guide for the teacher of basic writing. New York, NY: Oxford University Press. Sherman, D. C. (1976, October). An innovative community college program integrating the fundamentals of reading and writing with a college level introductory psychology course. Paper presented at the Annual Meeting of the College Reading Association. Miami, FL.
165
Armstrong, Williams, and Stahl
Shulman, L. S. (1986). Those who understand: Knowledge growth in teaching. Educational Researcher, 2, 4–14. *Shulman, L. S. (1987). Knowledge and teaching: Foundations of the new reform. Harvard Educational Review, 57(1), 1–22. Simpson, M. L. (1993). Cutting edge: Reality checks as a means of defining ourselves. Journal of Developmental Education, 17(1), 36–37. Simpson, M. L. (1996). Conducting reality checks to improve students’ strategic learning. Journal of Adolescent & Adult Literacy, 41(2), 102–109. Simpson, M. L. (2002). Program evaluation studies: Strategic learning delivery model suggestions. Journal of Developmental Education, 26(2), 2–4, 6, 8, 10, 39. Simpson, M. L., & Nist, S. L. (1992). Toward defining a comprehensive assessment model for college reading. Journal of Reading, 35(6), 452–458. Sirc, G. (1994). The Autobiography of Malcolm X as a basic writing text. Journal of Basic Writing, 13(1) 50–77. Smith, C. B. (1988). Does it help to write about your reading? Journal of Reading, 32(3), 276–277. Smith, N. B. (2002). American reading instruction (Special edition). Newark, DE: IRA. Soliday, M., & Gleason, B. (1997). From remediation to enrichment: Evaluating a mainstreaming project. Journal of Basic Writing, 16(1), 64–78. Spigelman, C. (1998). Taboo topics and the rhetoric of silence: Discussing Lives on the Boundary in a basic writing class. Journal of Basic Writing, 17(1), 42–55. Spivey, N. N., & King, J. (1989). Readers as writers composing from sources. Reading Research Quarterly, 24(2), 7–26. Stahl, N. A. (2015, November). It’s been a long time coming, but we got to carry on. Invited keynote presented at the CRLA/NADE Co-Sponsored Integrated Reading and Writing Summit at the College Reading and Learning Association annual conference, Portland, OR. Stahl, N. A. (2017). Integrating reading and writing instruction in an accelerated curriculum: An interview with Katie Hern. Journal of Developmental Education, 40(3), 24–28. Stahl, N. A., & Armstrong, S. 
L. (2018). Re-claiming, re-inventing, and re-reforming a field: The future of college reading. Journal of College Reading and Learning, 48(1), 47–66. Stern, C. (1995). Integration of basic composition and reading. NADE Selected Conference Papers, 1, 29–31. Sticht, T. G. (1975). Reading for working: A functional literacy anthology. Alexandria, VA: Human Resources Research Organization. Sticht, T. G. (1977). Comprehending reading at work. In M. Just & P. Carpenter (Eds.), Cognitive processes in comprehension (pp. 221–246). Hillsdale, NJ: Lawrence Erlbaum Associates. Stotsky, S. (1982). The role of writing in developmental reading. Journal of Reading, 25(4), 330–340. Stotsky, S. (1983). Research of reading/writing relationships: A synthesis and suggested directions. Language Arts, 60, 568–580. Street, B. (1994). Literacy in theory and practice. Cambridge, UK: CUP. Street, B. (1995). Social literacies: Critical approaches to literacy in development, ethnography and education. London, UK: Longman. Street, B. (2000). Introduction. In B. Street (Ed.), Literacy and development: Ethnographic perspectives. London, UK: Routledge. Sutherland, B. J., & Sutherland, D. (1982). Read writers: A sensible approach to instruction. Journal of Developmental & Remedial Education, 6(1), 2–5. Tate, G. (2002). A place for literature in freshman composition. In C. Russell McDonald & R. L. McDonald (Eds.), Teaching writing: Landmarks and horizons (pp. 146–151). Carbondale, IL: Southern Illinois UP. *Tierney, R. J., & Leys, M. (1986). What is the value of connecting reading and writing? In B. T. Peterson (Ed.), Convergences: Transactions in reading and writing (pp. 15–29). Urbana, IL: NCTE. *Tierney, R. J., & Pearson, P. (1983). Toward a composing model of reading. Language Arts, 60(5), 568–80. *Tierney, R. J., & Shanahan, T. (1991). Research on the reading-writing relationship: Interactions, transactions, and outcomes. In R. Barr, M. L. Kamil, P. Mosenthal, & P. D. 
Pearson (Eds.), Handbook of reading research (vol. 2, pp. 246–280). Mahwah, NJ: Erlbaum. Tierney, R. J., Soter, A., O’Flahavan, J. F., & McGinley, W. (1989). The effects of reading and writing upon thinking critically. Reading Research Quarterly, 24(2), 134–173. *Valeri-Gold, & Deming, M. P. (2000). Reading, writing, and the college developmental student. In R. F. Flippo, & D. C. Caverly (Eds.), Handbook of college reading and study strategy research (pp. 149–173). Mahwah, NJ: Erlbaum. Von Bergen, L. (2001). Shaping the point with poetry. Journal of Basic Writing, 20(1), 77–88.
166
Reading and Writing
Wachen, J., Jenkins, D., & Van Noy, M. (2010). How I-BEST works: Findings from a field study of Washington State’s integrated basic education and skills training program. New York, NY: Community College Research Center, Columbia University. Wambach, C. A. (1998). Reading and writing expectations at a research university. Journal of Developmental Education, 22(2), 22–24, 26. Wiggins, G., & McTighe, J. (2005). Understanding by design (Expanded 2nd ed.). Upper Saddle River, NJ: Pearson. Wilson, S. M., Shulman, L. S., & Richert, A. E. (1987). 150 different ways of knowing: Representations of knowledge in teaching. In J. Calderhead (Ed.), Exploring teachers’ thinking. Sussex, UK: Holt, Rinehart, & Winston. Wood, N. V. (1988). Standardized reading tests and the postsecondary reading curriculum. Journal of Reading, 32(3), 224–230. Xu, D. (2016). Assistance or obstacle? The impact of different levels of English developmental education on underprepared students in community colleges. Educational Researcher, 45(9), 496–507. Yood, J. (2003). Writing the discipline: A generic history of English studies. College English, 65(5), 526–540. *Zamel, V. (1992). Writing one’s way into reading. TESOL Quarterly, 26(3), 463–485.
167
10 Gaming and College Reading

Janna Jackson Kellinger
University of Massachusetts Boston
What does gaming have to do with college reading? Although there are several ways to approach that question, three will be explored in this chapter:

• Using games to teach college reading
• "Reading" games as texts
• Using "game-based teaching" to teach college reading.
Although the focus in this chapter is on the what and the how pertaining to using gaming to support struggling college readers, addressing the why should foreground this discussion. Why use games to teach college reading? Why read games as texts? Why use game-based teaching to teach college reading? An examination of the who—current and future college students—will help answer the why question.

Prensky (2001) argued that not only has the world in which 21st-century learners immerse themselves changed, but the thinking of 21st-century learners has changed as well. Building on evidence from neurology that "neurons that fire together, wire together" (Doidge, 2007, p. 63)—in other words, experiences shape the brain—Prensky (2001) identified several differences between the cognition of what he calls "digital immigrants," or people not raised in the digital age, and those he calls "digital natives," or people "bathed" (p. 38) in technology since they were born.1 Some differences between digital natives and digital immigrants that he notes include "twitch speed versus conventional speed," "parallel processing versus linear processing," "graphics first versus text first," "random access versus step-by-step," "connected versus standalone," "active versus passive," and "play versus work" (p. 52). Prensky's (2001) notion that members of today's generation are no longer linear thinkers but rather think in more web-like ways has huge implications for how we teach today's and tomorrow's generations of students.

In addition, today's college students are gamers, with 65 percent playing video games regularly (Jones, 2003). Even those who do not consider themselves serious gamers engage in gaming, often casual games—games on their phones usually played to pass time while waiting at the bus stop, for friends to show up for lunch, for class to begin, etc.
In the rare instance that a college student does not play any video games whatsoever, that student has almost certainly played board games or participated in sports, whether as a player or a spectator. In fact, a large part of college culture is built around rooting for the college team. Gaming pervades the lives of college students.
Indeed, they often have grown up on games: A Pew Research Center survey reported that 97 percent of 12- to 17-year-olds play video games, with 50 percent stating at the time of the study that they had played "yesterday" (Lenhart et al., 2008, p. 8). Sometimes, the newfound freedom of college can lead to unlimited gaming:

One online game popular among college students, EverQuest, is jokingly known as 'EverCrack' because of the amount of time its 'addicted' players spend using it….One college student recently confided to [a researcher] that he had skipped an exam because he was so close to 'beating' a video game.
(Prensky, 2002, p. 6)

While some view gaming as a "waste of time," instead of fighting against the tide, college reading and learning educators can tap into the positives of gaming. Although some may argue that emulating video game players is the last thing instructors should want their students to do, according to Squire (2008), playing video games fosters positive qualities:

Surveys of gamers show that they have an increased appetite for risk, a greater comfort with failure, a stronger desire for social affiliations, a preference for challenges, a capacity for independent problem solving, and a desire to be involved in meaningful work when compared with nongamers.
(p. 658)

Gee (2003) described video game players as people who take on new identities and perspectives, see themselves as active problem solvers, view mistakes as "opportunities for reflection and learning," can undo a previous way of solving a problem in order to learn new ways, and take risks. In addition, gamers "regularly exhibit persistence [and]... attention to detail" (Klopfer, Osterweil, & Salen, 2009). All of these are qualities Friedman (2007) argued are necessary for the changing needs of a global marketplace that relies less and less on vertical hierarchies and more and more on ad hoc horizontal groups working together to solve problems.
Using Games to Teach Reading

Although the previous descriptions were specifically about video games, teaching reading through games is not limited to the digital realm. In fact, Reed's (2014) study examined the language development of English Language Learners using a popular board game, The Settlers of Catan. He found that the game "situated the language as something to be used for the performance of goal-oriented tasks and problem-solving in negotiating and enacting social identities" (p. 71). This task-based approach to learning language mirrors real-world language use: "language is not an isolated skill but rather a skill that people employ to accomplish tasks in the world" (Barrett & Johnson, 2010, p. 285). In this way, the game provided both the motivation and the context for learning a language.

Across earlier educational levels, video games are used both formally and informally to teach children to read. Games with text-based dialogue and narratives, such as The Legend of Zelda, help teach reading by providing visual and action-based context while motivating reluctant readers to read. However, too often, what school districts across the country tend to adopt to teach reading to elementary and adolescent readers has the trappings of games (i.e., levels, graphics, badges) but fails to embody authentic gaming experiences. As Gillespie explains, these software programs that merely dress up as games are missing "unexpected things... hard, difficult choices... telling a story that sucks people in, the continuous feedback, giving kids a challenge at just the right level, where it's tough, but not frustrating" (quoted by Schaffhauser, 2013, p. 33). Instead, they are just technological "gold star[s] on the chart" in which students perform only to get a reward; when that reward is removed, their motivation to read decreases (Schaffhauser, 2013, p. 33). These types of digital reading programs tend to be "edutainment": entertainment that poses as educational or education that poses as entertainment. Often, they are "little more than interactive quizzes" (Klopfer et al., 2009) that test isolated facts instead of teaching reading.

There are some simple tests to tell whether a game that proposes to teach reading actually teaches or merely tests reading skills, and whether that teaching is done via a game or via edutainment. One test is to ask if there is a story line. A game story is essential for hooking the reader and providing a context for learning. Obviously, the game story should be of high interest to college readers. It is important to remember that even when helping students learn to read informational texts,

Narrative is at the heart of all knowledge, not just as a learning strategy, but also as a way of knowing. For example, what is science but the story of how the universe and we came to be, backed up by observation and experimental data?
(Van Eck, 2007, p. 295)

Because of this, "'video games have the potential to narratize the curriculum' (Barab, Pettyjohn, Gresalfi, & Solomou, 2010), reconnecting the content into those situations in which it has meaning so that learners can appreciate why the content has value" (Barab et al., 2012, p. 321). In addition, characters in stories allow readers to reach what Wolf (2007) deemed to be the highest level of reading: empathy.

The next test is to determine if the learning is "endogenous" (Squire, 2006) to the game—if the game derives from the content.
To make sure the game is not just an "interactive quiz" (Klopfer et al., 2009), the "content-swapping" test can be applied (Kellinger, 2017, p. 93). In other words, instructors can determine whether it is possible to swap different content in and out of the game easily. If it is, then the game is not endogenous and is likely to fall into the category of edutainment.

Lastly, instructors should test to see if the game has validity. In other words, does it teach what it purports to teach (in the present case, reading)? As De Castell and Jenson (2003) stated in their guidelines for distinguishing between educational games and edutainment, the learning should be a by-product of a player's actions. In the case of reading, strategies should be integral to the game. However, for a game to teach reading, it needs to include more than just opportunities for players to read; the player must have to demonstrate reading strategies in order to progress in the game, and there must be some natural way to learn about the reading strategies that are embedded in it. Scholars in this field describe this as "pulling" the learning instead of having it "pushed" onto students (Rabone, 2013, p. 2).

One additional element is needed for games that teach reading to have validity, and that is tailored feedback, or scaffolding. The game should use "stealth assessment" (Shute, 2011) to detect reading errors and provide prompts or a graduated series of hints that begin with the least intrusive and end with direct teaching, if necessary. This flips the typical way in which digital reading programs are designed, which is instruction followed by practice. Instead, reading games should employ "performance before competence" (Cazden, 1981): opportunities for practice, then instruction, and then chances for success. This scaffolding should be given within the context of the story in order to give it plausibility.
This could be through a nonplaying character (NPC) perhaps acting as a mentor or a guide, or through the reaction of an intelligent object within the game, such as a computer, or any other number of means. The following are descriptions taken from the websites of a number of popular digital reading programs designed for adolescents and young adults that would be appropriate for college
170
Gaming and College Reading
students. College reading and learning professionals can practice discrimination skills using the tests outlined earlier by determining if each of the following descriptions qualifies as a game or “edutainment”:

• “[The reading program uses] engaging online instruction (Word Training).... The student-directed technology focuses on foundational skills in an online platform that motivates students with personal avatars, rewards, and a social media platform for peer-to-peer engagement… The program features high-interest topics that students care about. It also incorporates peer-to-peer learning in an online environment with peer tutors who introduce reading, language, and spelling concepts in fun, engaging videos.”2
• “[The reading program] uses adaptive technology and software engineered to accelerate reading proficiency with six zones of instruction. Students can track their growth in real time, building motivation to continue progressing and navigating confidently through their learning process.”3
• “[The reading program] makes the best use of computer-based and teacher-led instruction to accelerate student progress:
  • Multiple teaching and learning modalities motivate and engage students, to make learning fun
  • Comprehensive instruction supports skill development, fluency, comprehension strategies, and writing-centered projects
  • The research-proven, gradual release model of instruction accelerates student learning throughout the Experiences.”4
In fact, none of these qualified as an educational game using the tests outlined earlier. Although there are some elements (“avatar,” “rewards,” and feedback), none of them contains an overarching game story. Instead, they focus on teaching discrete skills through instruction, then performance. Despite the vast majority of digital reading programs being edutainment, they can still teach, just not in the immersive, context-rich way that games can. For example, for new readers of English, having the repeated practice of identifying first-letter sounds can be a gateway to learning to read English. For struggling college readers who are native English speakers, having reading skills explicitly taught might resolve reading difficulties that plagued them throughout their educational careers. However, endogenous games provide context, tasks, and social interactions that more closely mimic language use in the real world.
“Reading” Games as Texts
Although many scholars do not include video games in research on reading, viewing video games as texts is not much of a stretch. Video games have recognized genres (first-person shooter, adventure games, casual games, etc.). Video games have the basic elements of fictional texts: characters, setting, plot, mood, etc. And video games have story arcs or episodic story lines. Even casual games, which may lack characters, contain objects that move and interact and so can be personified, and have story arcs where the gameplay has rising action as the difficulty increases, leading to a boss challenge, or climax. Jenkins (2006) discussed how video games not only tell stories but, like fiction, create whole worlds similar to worlds created by the best fantasy authors. Video games even have their own supplemental texts, including game manuals, walk-throughs, and even fan fiction. It is the story that gives a video game meaning. Just as standard texts, like books, can be read at many different levels, so can video games. For example, a player could view an abstract game
Janna Jackson Kellinger
like Tetris as trying to fit blocks together or could read it as trying to pack differently shaped boxes in as tightly as possible to save on shipping: “the bare mechanics of the game do not determine its semantic freight” (Koster, 2005, p. 168). One of the attributes Malone and Lepper (1987) found motivating in video games is fantasy. As Squire (2011) explained, fantasy “renders the skills learned by players meaningful” (p. 21). In many ways, video games can be read as traditional texts. However, Gee (2007b) made the case that video games are their own “semiotic domain” complete with “situated meanings” as signs convey meaning. There are shared meanings across video games as common conventions, such as HUDs (heads-up displays), leaderboards, power-ups, NPCs, cut scenes, and so forth, create a specialized language in gaming. This shared language helps create what Gee (2007a) called “affinity spaces” and Jenkins (2009) called a “participatory culture” in which gamers create “meta-texts,” or texts about video games where they share tips, encouragement, and opinions. In fact, some gamers use specialized jargon called “leetspeak” (for “elite speech”) to set themselves apart from and above others (Squire, 2011, p. 152). On the other hand, some gamers created a whole “university” dedicated to teaching “newbies” various aspects of gameplay of a particular video game. Although video games can be viewed in the same ways as traditional written texts, video games have some unique qualities that make them more than linear texts, more than hypertext, more than print text, more than graphic texts, more than media texts, and more than just about any other text imaginable.
Because the reader is the protagonist and thus controls the way the protagonist navigates the narrative, and in some cases, the narrative itself, games, in particular video games, are immersive experiences where the reader has ownership over the narrative: Gamers have grown up with a medium built on assumptions unlike those in print cultures (e.g., a game engine can be tinkered with, a text is not necessarily print based or defined by book covers); game players are coauthors along with game designers, co-constructing the game-as-text through their own action (cf. Robison, 2005). Gamers have grown up in simulated worlds, worlds where anything is possible, and where learning through trial and error is expected, information is a resource for action, and expertise is enacted through both independent and collaborative problem solving in self-directed tasks. (Squire, 2008, p. 658) In these ways, games are “ludic narrans, ‘playful stories’” (Davidson & LeMarchand, 2012, p. 104) that provide readers with the means to play with story elements, even making mistakes on purpose to see what happens. The worlds that designers create in video games are not static but rather “evocative spaces” (Jenkins, 2006, p. 677) where players can interact with the elements of that world or sometimes even change the world itself. Construction and Management Simulation games, games that provide virtual environments where users can design and build objects, use resources, and/or control avatars, such as SimCity, do not tell stories but rather provide narrative spaces where players can create their own “emergent narratives” (Jenkins, 2006, p. 684). In these ways and more, video games bring together reading and writing in one text as players read the game while producing their own avatar’s story in the game. Just as fiction invites critical readings, so do video games: Videogames are an expressive medium. They represent how real and imagined systems work.
They invite players to interact with those systems and form judgments about them. As part of the ongoing process of understanding this medium and pushing it further as players, developers, and critics, we must strive to understand how to construct and critique the representations of our world in video game form. (Bogost, 2007, p. vii)
Bogost (2007) laid out a robust argument that “videogames open a new domain for persuasion” (p. ix), what he called “procedural rhetoric.” In other words, the rules governing gameplay convey not only meaning but also values. For example, in one video game, eating junk food decreased the player’s health points. These messages have the potential to skew players’ world views. Squire (2008) interviewed a group of black youth who were concerned that “white kids might develop false impressions about economic mobility for African Americans in the United States” (p. 177) from playing a certain video game. It is important to remember, though, that those values are interpreted by players who bring their own cultural contexts to the game: “games can encourage pro-social or anti-social behaviors (Athnes, 2009), although De Castell and Jenson (2003) point out that the impact of all games, including antisocial ones, must be examined within the larger culture of the players” (Kellinger, 2017, p. 6). Just as all texts operate within cultural contexts, so do video games. Not only can video games themselves be read as texts, but the code used to program them is a language as well. In fact, there are many different programming languages, each with its own vocabulary, semantics, and syntax. Learning to code is actually learning the grammar, or the rules, of a language. Before a program can be run, it is passed through a compiler—a program that translates the programming language into machine language and, in doing so, checks that the syntax of the programming language is correct. Currently, in the state of Texas, taking a coding class can satisfy the public schools’ foreign language requirement (Hatter, 2016). Thus, there are multiple ways to read video games as texts.
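The syntax-checking step described above can be made concrete. The sketch below uses Python as the illustrative language (an assumption on my part; any language with an accessible compiler would serve) and its built-in compile() function to show a compiler accepting source that follows the language's grammar and rejecting source that breaks it. The example source strings are invented for illustration.

```python
# A compiler checks that source code follows the language's grammar before
# the program can run. Python's built-in compile() makes this step visible.

valid_source = "greeting = 'hello'\nprint(greeting)"
invalid_source = "greeting = 'hello'\nprint(greeting"  # missing closing parenthesis

# Grammatical source compiles to an executable code object.
code_object = compile(valid_source, "<example>", "exec")

# Ungrammatical source is rejected with a SyntaxError, much as a compiler
# for C or Java rejects a program that violates that language's syntax rules.
try:
    compile(invalid_source, "<example>", "exec")
    syntax_ok = True
except SyntaxError:
    syntax_ok = False

print(type(code_object).__name__)  # code
print(syntax_ok)  # False
```

In this sense, "learning the grammar of a language" is literal: the compiler enforces the grammar the way a strict editor enforces the rules of written English.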
Using “Game-Based Teaching” to Teach Reading
Although educational games can be used to teach reading, as demonstrated earlier, most digital reading programs, while they still teach, do not qualify as games although they may be disguised as them. However, when Commercial Off-the-Shelf (COTS) software fails to meet an educator’s needs, more and more teachers are designing their own curricular games. Instructors are doing so by creating no-tech games, such as immersive role-playing games; by using technology they are familiar with, such as creating branched narratives with the internal linking feature of PowerPoint and other presentation software; and by using the increasing number of free or low-cost software programs with a “low floor and high ceiling” that allow people to design their own games. These include software for creating text-based branched narratives, such as Twine; plug-and-play programming environments with pre-written code segments that fit together like jigsaw puzzle pieces, such as Scratch; and animation software, such as Tellagami, that allows users to create animated videos they can embed in learning management programs. All these platforms allow instructors to design their own curricular games with the same criteria outlined previously in the section on using games to teach reading: a high-interest immersive game story, endogenous content, and feedback in the form of graduated personalized scaffolding. It is this graduated personalized scaffolding that takes readers from where they are and moves them toward the way expert readers approach texts: Learning theorists suggest that it is therefore not enough for a learner to know only what an expert knows, but to also know it in the same way that an expert knows it. In other words, we want the student’s mental model of the domain to eventually approximate the mental model of an expert. (Van Eck, 2007, p.
291) By thinking about how expert readers approach texts (previewing, using metacognition, predicting, rereading, making connections, asking questions, making inferences, figuring out vocabulary
using context, using reading strategies, and, in particular, corrective strategies), games can be designed to make a reader’s thinking visible, and to use feedback to model and teach these strategies in the context of striving to reach game goals. Teaching reading skills does not occur in isolation. When people read, they read about something. Gee (2007a) has argued that certain texts are better at facilitating “situated understandings” instead of superficial “verbal understandings.” In other words, it is one thing to be able to parrot back a fact read in a textbook (“verbal understanding”), but it is quite another to understand the concept underlying that fact (“situated understanding”). Gee (2007a) argued that “researchers in several different areas have raised the possibility that what we might call ‘game-like’ learning... can facilitate situated understandings in the context of activity and experience grounded in perception” (p. 114). To that list, I would add Shaffer’s (2006) groundbreaking work investigating how students develop not only conceptual learning by placing what Cazden (1981) calls “performance before competence,” or doing before learning, but also “epistemic frames” (Shaffer, 2006), or specialized ways of making meaning. Barrett and Johnson (2010) enacted this in their research that found that “task-based language learning” emulated real-world language use and thus facilitated transfer.
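The branched narratives mentioned earlier—the structure that tools like Twine or internally linked presentation slides produce—rest on a simple underlying form: each passage holds some text plus named choices pointing to other passages. The sketch below, in Python, illustrates that structure; the passage names and story text are invented for illustration, not drawn from any program discussed in this chapter.

```python
# A minimal branched narrative ("choose-your-own-adventure") of the kind
# authoring tools such as Twine generate: each passage has text and named
# choices that link to other passages. All story content here is invented.

story = {
    "start": {
        "text": "You find a locked library door with a note taped to it.",
        "choices": {"read the note": "note", "walk away": "end"},
    },
    "note": {
        "text": "The note reads: 'The key is under the mat.'",
        "choices": {"check the mat": "end"},
    },
    "end": {"text": "The story ends here.", "choices": {}},
}

def play(path):
    """Follow a sequence of choices from 'start'; return the passages visited."""
    node, visited = "start", []
    for choice in path:
        visited.append(story[node]["text"])
        node = story[node]["choices"][choice]
    visited.append(story[node]["text"])
    return visited

# Two readers making different choices see different texts: this branching is
# what lets a curricular game respond to what a reader actually does.
long_route = play(["read the note", "check the mat"])
short_route = play(["walk away"])
print(len(long_route), len(short_route))  # 3 2
```

An instructor's reading game could attach a comprehension check or a graduated hint to each branch point, so that the path a student takes reveals, and responds to, the reading strategies the student is using.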
Implications for Practice
Each of these approaches has its own implications for practice. Using games to teach reading emulates real-world reading in that players use reading to pull information in order to achieve tasks, whether that task be reading for pleasure or for information, while providing graduated personalized feedback to teach the player reading strategies. By viewing games as texts, an instructor can tap into the reading skills already used by gamers to “read” games and to “read” meta-gaming material, such as gaming manuals, walk-throughs, gaming forums, and even fan fiction based on games; make those reading skills explicit; and then show how those reading skills transfer to more traditional texts. This follows Lee’s (2001) notion of cultural modeling, where educators tap into skills that students employ in nonacademic contexts and model how these skills are used in academic contexts. By designing their own curricular games, instructors of reading can tailor a game to their readers while providing the immersive experience a game provides. All of these approaches can meet the qualities of games identified as increasing learning: Malone and Lepper (1987) define four characteristics of games that contribute to increases in motivation and eagerness for learning. These are challenge, fantasy, curiosity, and control. Challenges in a game … keep [students] engaged with the activity by means of adjusted levels of difficulty. Fantasy in a game increases enthusiasm by providing an appealing imaginary context, whereas curiosity offers interesting, surprising, and novel contexts that stimulate students’ needs to explore the unknown. Finally, the control characteristic gives learners the feeling of self-determination. (Akilli, 2007, p. 6) In these ways, games can be used to enhance reading instructors’ teaching.
Up until this point, I have treated these three approaches as distinct, and, on their own, they can be used to strengthen any reading program by providing motivation, context, and ownership while providing a pathway to achieve Wolf’s (2007) highest level of reading, empathy, by having the reader play the protagonist. However, they can also be used together. This could be done linearly—having students play a game that teaches reading skills, then having them analyze that game as a text, and finally having the instructor design her or his own curricular reading game. There is one more step that instructors can take: having students design their own reading games.
Teaching helps learners organize and own their learning. Learning and teaching by designing a game forces the designer to uncover the system of the content, in this case language, and can result in deep learning. Thus, these different approaches do not have to be treated separately. They can be integrated together to truly amplify a reading program. A reading instructor could design a curricular game that has students “read” a game designed to teach reading and then have students design their own game to teach reading. All in all, whether used as stand-alone approaches, taught linearly, or integrated, these various approaches to teaching reading can greatly enhance a college reading program.
Recommendations for Further Research
While all this sounds promising theoretically, more empirical research about the actual impact of using these approaches needs to be conducted. In addition, further experimentation is needed with innovative approaches, such as designing games using “teachable agents” (Okita & Schwartz, 2013), in which students teach digital students, whose responses are powered by Artificial Intelligence, to read and “level up” based on the progress of their virtual tutees. Similarly, games with reciprocal reading agents5, where students engage in reciprocal reading with a virtual agent and “level up” based on their virtual partner’s increase in comprehension, could be tested. In this research, the impact on reading, motivation, and feelings all needs to be measured against human-based forms of instruction. For example, of my own twins, the one who reads more hardcopy books is the better reader even though the other one is twice as far along in the “game-like” reading program assigned by their teacher. Researchers should ask questions like the following:

• Is a computerized “teachable agent” demonstrably more effective than having a more advanced reader teach a less advanced one?
• Is reciprocal reading better done with a real person, or does a “reciprocal reading agent” give a struggling reader who is embarrassed by her or his skills opportunities to learn via this instructional method?
• Is social interaction one of the key aspects that make a game specifically designed to teach reading effective? If so, is it better done in person?
Before educators spend exorbitant amounts of money on reading games, research needs to find out if they improve reading and what it is about those games that improves reading, and carefully consider whether or not that can be replicated in the classroom without technology with the same degree of impact.
Conclusion
As educators, we all know how effective high-interest texts can be (McDaniel, Waddill, Finstad, & Bourg, 2000). That “high interest” should derive not from being a game but rather from the content itself, which of course lends itself to many types of media. When I taught high school, I remember a student who was a gang member coming up to me sheepishly after class one day, checking over his shoulder to make sure no one else was seeing him, and admitting that the novel he had chosen for a unit where students got to choose their own books was the first book he had ever read. Although he chose it because it was the shortest book on the list, his eyes got wide as he tried to express how moved he was by the book. The book was Of Mice and Men. If our ultimate goal is to improve the reading skills of our students and we recognize that reading occurs with a wide variety of texts, then the more our students read and discuss a wide variety of texts for pleasure,
the more we will accomplish: we will not only improve our students’ reading skills but also create lifelong readers. Gaming is one way to do this but not the only way. Although I certainly advocate for the use of games in teaching reading, gaming should not replace reading itself.
Notes
1 Although I do have an issue with using the terms “native” and “immigrant” as these are not two mutually exclusive groups of people, nor do the denotations or connotations of the terms match what Prensky (2001) described, this does seem to be an analogy that allows people to understand general differences, so they will be used in this chapter.
2 Language! Live: www.voyagersopris.com/docs/librariesprovider7/literacy-solutions/language-live/how-to-choose-the-most-effective-adolescent-reading-program.pdf?sfvrsn=8&utm_source=website&utm_medium=download-library.
3 READ 180: www.hmhco.com/products/read-180/about-us.php#tab-first.
4 FLEX Literacy: www.flexliteracy.com/texassampler/sites/default/files/FLEX.pdf.
5 An “agent” is a digital person or creature who is programmed to respond to inputs, versus an “avatar,” which is a digital person or creature controlled by a real human being.
References and Suggested Readings(*)
About READ180. (2015). Boston, MA: Houghton Mifflin Harcourt. Retrieved from www.hmhco.com/products/read-180/about-us.php#tab-first
Akilli, G. (2007). Games and simulations: A new approach in education? In D. Gibson, C. Aldrich, & M. Prensky (Eds.), Games and simulations in online learning: Research and development frameworks (pp. 1–20). Hershey, PA: Information Science Publishing.
Barab, S., Pettyjohn, P., Gresalfi, M., & Solomou, M. (2012). Game-based curricula, personal engagement, and the Modern Prometheus design project. In C. Steinkuehler, K. Squire, & S. Barab (Eds.), Games, learning, and society: Learning and meaning in the digital age (pp. 306–326). New York, NY: Cambridge University Press.
Barrett, K., & Johnson, W. (2010). Developing serious games for learning language-in-culture. In R. Van Eck (Ed.), Gaming and cognition: Theories and practice from the learning sciences (pp. 281–311). Hershey, PA: Information Science Reference.
*Bogost, I. (2007). Persuasive games: The expressive power of videogames. Cambridge, MA: MIT Press.
A buyer’s guide: How to choose the most effective adolescent reading program. (2016). New Bay Media. Retrieved from www.voyagersopris.com/docs/librariesprovider7/literacy-solutions/language-live/how-to-choose-the-most-effective-adolescent-reading-program.pdf?sfvrsn=8&utm_source=website&utm_medium=download-library
Cazden, C. (1981). Performance before competence: Assistance to child discourse in the zone of proximal development. Quarterly Newsletter of the Laboratory of Comparative Human Cognition, 3, 5–8.
Davidson, D., & LeMarchand, R. (2012). Uncharted 2: Among thieves—How to become a hero. In C. Steinkuehler, K. Squire, & S. Barab (Eds.), Games, learning, and society: Learning and meaning in the digital age (pp. 75–107). New York, NY: Cambridge University Press.
De Castell, S., & Jenson, J. (2003). Serious play. Journal of Curriculum Studies, 35(6), 649–665.
Doidge, N. (2007).
The brain that changes itself: Stories of personal triumph from the frontiers of brain science. New York, NY: Penguin Books.
Flex literacy: A powerful flexible data-driven intervention system. Columbus, OH: McGraw Hill Education. Retrieved from www.flexliteracy.com/texassampler/sites/default/files/FLEX.pdf
Friedman, T. (2007). The world is flat: A brief history of the twenty-first century. New York, NY: Picador.
Gee, J. P. (2003). Opportunity to learn: A language-based perspective on assessment. Assessment in Education, 10(1), 27–46.
Gee, J. P. (2007a). Good videogames + good learning: Collected essays on videogames, learning, and literacy. New York, NY: Peter Lang.
*Gee, J. P. (2007b). What videogames have to teach us about learning and literacy. New York, NY: Palgrave Macmillan.
Hatter, L. (2016, March 1). French, Spanish, German... Java? Making coding count as a foreign language. On All Things Considered. NPR. Retrieved from www.npr.org/sections/ed/2016/03/01/468695376/french-spanish-german-java-making-coding-count-as-a-foreign-language
*Jackson, J. (2011). Game changer: How principles of videogames can transform teaching. In M. S. Khine (Ed.), Learning to play: Exploring the future of education with video games (pp. 107–128). New York, NY: Peter Lang.
Jenkins, H. (2006). Game design as narrative architecture. In K. Salen & E. Zimmerman (Eds.), The game designer reader: A rules of play anthology (pp. 670–689). Cambridge, MA: MIT Press.
*Jenkins, H. (2009). Confronting the challenges of a participatory culture: Media education for the 21st century. Cambridge, MA: MIT Press.
Jones, S. (2003). Let the games begin: Gaming technology and entertainment among college students. Washington, DC: Pew Internet and American Life Project.
*Kellinger, J. (2017). A guide to designing curricular games: How to “game” the system. Cham, Switzerland: Springer.
Klopfer, E., Osterweil, S., & Salen, K. (2009). Moving learning games forward: Obstacles, opportunities, and openness. Cambridge, MA: The Education Arcade at MIT.
Koster, R. (2005). A theory of fun for game design. Scottsdale, AZ: Paraglyph Press.
Lee, C. (2001). Is October Brown Chinese? A cultural modeling activity system for underachieving students. American Educational Research Journal, 38(1), 97–141.
Lenhart, A., Kahne, J., Middaugh, E., Magill, A. R., Evans, C., & Vitak, J. (2008). Teens, videogames, and civics. Washington, DC: Pew Internet and American Life Project. Retrieved from www.pewinternet.org/pdfs/PIP_Teens_Games_and_Civics_Report_FINAL.pdf
Malone, T., & Lepper, M. (1987). Making learning fun: A taxonomy of intrinsic motivations for learning. In R. Snow & M. Farr (Eds.), Aptitude, learning, and cognition (pp. 223–253). Hillsdale, NJ: Lawrence Erlbaum Associates.
McDaniel, M., Waddill, P., Finstad, K., & Bourg, T. (2000). The effects of text-based interest on attention and recall. Journal of Educational Psychology, 92(3), 492–502.
Okita, S. Y., & Schwartz, D. L. (2013).
Learning by teaching human pupils and teachable agents: The importance of recursive feedback. Journal of the Learning Sciences, 22(3), 375–412.
Prensky, M. (2001). Digital natives, digital immigrants. On the Horizon, 9(5), 1–6.
Prensky, M. (2002). The motivation of game play, or the REAL 21st century learning revolution. On the Horizon, 10(1), 5–11.
Rabone, D. (2013). How “game mechanics” can revitalize education. eSchool News. Retrieved from www.eschoolnews.com/2013/02/12/how-game-mechanics-can-revitalize-education/3/
Reed, C. (2014). What’s in a game? Identity negotiations and pedagogical implications of gameplay discourse. Master’s thesis, Applied Linguistics, UMass Boston.
Schaffhauser, D. (2013). Can gaming improve teaching and learning? T.H.E. Journal, 40(8), 26–33.
Shaffer, D. W. (2006). How computer games help children learn. New York, NY: Palgrave Macmillan.
Shute, V. (2011). Stealth assessment in computer-based games to support learning. In S. Tobias & J. D. Fletcher (Eds.), Computer games and instruction (pp. 503–524). Charlotte, NC: Information Age Publishers.
Squire, K. (2006). From content to context: Videogames as designed experience. Educational Researcher, 35(8), 19–29.
Squire, K. (2008). Open-ended video games. In K. Salen (Ed.), The ecology of games: Connecting youth, games, and learning (pp. 167–198). Cambridge, MA: MIT Press.
Squire, K. (2011). Video games and learning: Teaching and participatory culture in the digital age. New York, NY: Teachers College Press.
Van Eck, R. (2007). Building artificially intelligent learning games. In D. Gibson, C. Aldrich, & M. Prensky (Eds.), Games and simulations in online learning: Research and development frameworks (pp. 271–307). Hershey, PA: Information Science Publishing.
*Wolf, M. (2007). Proust and the squid: The story and science of the reading brain. New York, NY: HarperCollins.
Part III
Study Skills and Strategies
Dolores Perin and Kristen Gregory
Teachers College, Columbia University
In this section of the handbook, a variety of topics centering on study skills and strategies in various contexts are explored and discussed. All of the chapters in this section offer recommendations for practitioners as well as directions for future research. Chapter 11, Academic Preparedness by Dolores Perin, reviews concepts of academic readiness, with a focus on reading and writing skills. The concept of academic preparedness is defined, low literacy skills are considered as a risk factor for academic failure in college, and research on assessment and intervention to promote the success of academically underprepared students is discussed. Next, in Chapter 12, Strategic Study-Reading by Patricia I. Mulcahy-Ernt and David C. Caverly, a new framework for understanding college students’ academic learning strategies from a sociocultural perspective is the focus. The authors discuss topics including how students learn from studying, the role of metacognitive strategies, self-regulation, cognitive processing, and educational technology in learning. Chapter 13, Linguistically Diverse Students by Christa de Kleine and Rachele Lawton, reviews research on the pressing issue of working with linguistically diverse students in college, a large and growing population. The chapter addresses the fact that non-native and nonstandard English speakers are a highly heterogeneous group, requiring carefully selected instructional approaches to support academic performance. The authors present findings on both “world English speakers” and “speakers of U.S.-based dialects,” and point to strategies to support each group. Chapter 14, Study and Learning Strategies by Claire Ellen Weinstein and Taylor Acee, discusses research on strategic and self-regulated learning to promote the success of college students. 
A model of strategic learning is offered that proposes how the critically important factors of skill, will, self-regulation, and the academic environment interact in student learning. Then, this section concludes with Chapter 15, Test Preparation and Test Taking by Rona F. Flippo, Victoria Appatova, and David Wark. These authors provide a historic as well as current review of research on test preparation and test performance, test-wiseness and test-taking skills, coaching to prepare college students for tests, and the pervasive problem of test anxiety and approaches for dealing with it.
11 Academic Preparedness
Dolores Perin
Teachers College, Columbia University
Introduction
Several factors come together to create concern about academic preparedness for college in the United States. First, attainment of a college degree, or at least successful completion of some college courses, is essential for career entry and advancement (Kroeger, Cooke, & Gould, 2016). Second, national statistics show that only 59.4 percent of four-year college entrants obtain a degree, and only 27.9 percent of community (two-year) college students complete a degree or certificate program (Snyder, de Brey, & Dillow, 2016, Tables 326.10 and 326.20). Third, only 38 percent of students in the last year of secondary education are proficient in reading (Institute of Education Sciences, 2014; National Center for Education Statistics, 2012), meaning that 62 percent of U.S. 12th graders demonstrate difficulties with the reading skills that are needed in college. Although many of these students go on to enroll in college, their academic progress and chances of graduating are placed at risk by low skills in combination with several social and cultural factors (Lindsay, Davis, Stephan, & Proger, 2017). Risk factors cover a wide range of social, cultural, economic, and noncognitive variables (Attewell, Lavin, Domina, & Levey, 2006; Cohen, Brawer, & Kisker, 2013; Le, Mariano, & Faxon-Mills, 2016; Lopez & Jones, 2017; Nagaoka et al., 2013; Sandoval-Lucero, 2014; Schademan & Thompson, 2016; Smith, 2016; Tierney & Sablan, 2014). Important sociocultural variables that predict low college outcomes include being of African American or Latino/a background, growing up in a low-income family, and being in the first generation of one’s family to attend college. Poverty is closely related to low academic achievement and disengagement throughout the elementary and secondary education years, creating barriers for students who, upon entering college, may be unfamiliar with supports such as counseling and tutoring.
In combination with sociocultural risk, low literacy skills pose a particularly strong threat to college achievement and completion (Camara, 2014; Kodama, Han, Moss, Myers, & Farruggia, 2016). Fourth, although many colleges have developmental education programs that aim to help at-risk students improve their academic skills, progress and outcome data suggest that these programs may not be effective, especially for students who are very low-skilled and/or spend long periods of time in such programs (Xu, 2017).

The problem of academic preparedness may be summarized as follows. Attending college is important, but many students enter with low academic skills, and many do not earn a degree. Although there is a wide range of barriers to college achievement, low literacy skills seem especially problematic. Finding ways to improve students’ levels of academic preparation for college learning requirements is an urgent matter because low skills persist, placing students at risk for failure and dropout.

This chapter discusses academic readiness for college, with a focus on literacy skills. The following questions are considered: How is academic preparedness defined? What is known about low literacy skills as a risk factor for academic preparedness? How can academic preparedness be assessed? How can college students with low literacy skills be helped to increase their academic preparedness?
Defining Academic Preparedness

Along with many other concepts in education, the term “academic preparedness” is difficult to define (Porter & Polikoff, 2012), especially because institutions vary considerably in their standards when assessing students for entry. There is a “dizzying array” (Yancey, 2009, p. 257) of institutions offering postsecondary curricula, ranging from leading research universities to comprehensive universities to open-access community (two-year) colleges, each of which may have different expectations for incoming students. For example, students with low academic skills may be admitted to a community college but denied entry to a four-year college (Long & Boatman, 2013). Over 50 percent of higher education students begin at a community college (Bailey, Jaggars, & Jenkins, 2016). These institutions serve a highly heterogeneous population but include many students, both recent high school graduates and older, nontraditional students, who are academically underprepared but nevertheless wish to transfer to a four-year college and earn a baccalaureate degree (Rosenberg, 2016).

A complicating factor in defining academic preparedness is student goals. Although obtaining a baccalaureate degree is the most common goal, some students would like to obtain an associate’s degree or a career-related certificate. The amount of academic preparation needed to proceed through the courses and experiences necessary to achieve these differing goals will vary (Conley, 2014; Yancey, 2009).

Despite the difficulty of defining academic preparedness in a way that generalizes across institutional requirements and student goals, instructors appear to have strong perceptions and beliefs about the level of academic preparedness of students in their classrooms (Brockman, Taylor, Kreth, & Crawford, 2011; Perin, Lauterbach, Raufman, & Santikian Kalamkarian, 2017). For example, Perin, Lauterbach, et al. (2017) administered a teacher judgment scale focusing on college reading and writing to a group of community college developmental reading and writing instructors. The scale was tied to a self-efficacy scale that had been administered to the instructors’ students. Instructors were asked to rate, on a 100-point scale, their level of confidence in individual students’ ability to complete college reading and writing tasks. The students were about to complete developmental education courses, and, overall, instructors reported a mean of 72 percent confidence in the students’ ability to read and write at the introductory college level. Further, the teacher judgment measure was a better predictor of students’ actual performance on a reading and writing task than were the students’ self-efficacy ratings.

To understand academic preparation, it is important to view it within a larger framework of college readiness. The most comprehensive approach to this concept is found in the work of David Conley (2007, 2012, 2014, 2015), who has proposed a four-part model of college readiness. In its most recent form (Conley, 2015), the model covers both college and career readiness and comprises four “keys” to success: Think, Know, Act, and Go (www.epiconline.org/what-we-do/the-four-keys/). The Think element involves cognitive and linguistic processing, and refers to students’ ability to use and apply information actively in order to formulate problems, conduct research, interpret material, communicate solutions, and use language and conventions with precision and accuracy.
The Know component involves knowledge of content; students need to organize information, know the main ideas, understand the role of effort in learning, and recognize the importance of learning content. This element includes both cognitive and affective dimensions of learning.

The third part of the model, Act, includes cognitive, affective, and dispositional aspects of learning. The model proposes that if students are to own their learning, they must be able to set goals, persist, be aware of themselves as learners, develop motivation, seek help as needed, monitor their progress, and develop confidence in their abilities (self-efficacy). Further, in this component, students need to apply techniques of time management, study and test-taking skills, memorization, strategic reading, collaborative learning, and technology use.

Finally, the Go component describes behaviors related to navigating college systems. This element refers to students’ need to select college programs aligned with their interests and goals; move through the steps of college admissions procedures; manage finances; exhibit awareness of the culture of college, including the need for maturity and independence in the learning process; and advocate for themselves in interactions with college faculty and administrators.

Readiness for college may go beyond the efforts and abilities of an individual student and involve the support of instructors, peers, academic advisors, and parents (Le et al., 2016). Further, beyond academic skills, it also involves more general knowledge, such as how to obtain and manage financial aid and how to utilize student support services, such as tutoring (Conley, 2015; Le et al., 2016). Against this backdrop, academic preparedness can be defined as possession of the basic skills (reading, writing, mathematics) in conjunction with the wide range of contextual knowledge, dispositions, attitudes, and behaviors described in Conley’s (2015) model.
Low Literacy Skills as a Risk Factor

Reading Difficulties

A major risk factor in academic preparedness is low reading ability. The ability to comprehend, interpret, and apply written information fits with the “Think” and “Know” dimensions of Conley’s (2015) college readiness model. A comprehensive examination of reading requirements in a single community college (Armstrong, Stahl, & Kantner, 2015a, 2015b, 2016) found that most of the reading in introductory general education courses, such as psychology, sociology, biology, and economics, consisted of textbooks written at the 12th-grade readability level or higher. Faculty interviewed and surveyed for the study stated that, ideally, they expected their students to enter their courses able to understand and draw conclusions from assigned readings independently. However, they reported that, in fact, they explained most of the concepts in the reading during lectures, suggesting that students had difficulty understanding the text.

Students who enter college with a history of reading difficulties may never have been evaluated for a learning disability¹ but nevertheless show reading patterns consistent with the diagnosis (Bergey, Deacon, & Parrila, 2017). Bergey et al. (2017) collected data with a sample of Canadian university students that pointed to specific risk factors. Reading and study strategies were measured using a researcher-designed metacognitive measure (Taraban, Kerr, & Rynearson, 2004) and the Learning and Study Strategies Inventory (Weinstein & Palmer, 2002; also see Chapter 14 of this volume), respectively. Students with a history of reading difficulty showed lower scores on both strategy measures, as well as lower academic achievement, compared to a group reporting no such history, although, interestingly, the strategy measures did not in themselves predict the level of academic achievement in the reading difficulty group.
This finding suggests that university students with reading difficulties may be finding ways to bypass and accommodate their problem, but, at the same time, the lower strategy scores point to areas in which they need help.
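The “12th-grade readability level” reported in the curriculum audit above is typically produced by formulas such as Flesch-Kincaid grade level. The sketch below is a minimal illustration, not the cited study’s actual method (the chapter does not say which formula was used), and the syllable counter is a deliberately crude vowel-group heuristic:

```python
import re

def count_syllables(word: str) -> int:
    """Very rough heuristic: count groups of consecutive vowels (y included)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade_level(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)

sample = ("Many students enter college with reading skills "
          "below the level that their textbooks assume.")
print(round(fk_grade_level(sample), 1))  # roughly grade 11 with this crude counter
```

Production readability tools use dictionary-based syllabification; the vowel-group shortcut here overcounts words with silent e, so results should be treated as approximate.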
Learning Disabilities among College Students

Over the years, there has been an increase in the number of college students with documented learning disabilities (Joshi & Bouck, 2017; Kane et al., 2011; Richards, 2015; Sparks & Lovett, 2014), many of whom show patterns of performance indicating a strong need for academic support and accommodations (Weis et al., 2012). For example, Weis et al. (2012, Table 2) reported a mean standard score of 84.95 on a test of basic reading skills in a group of community college students with learning disabilities; this score, one standard deviation below the test mean, suggested that these students were academically underprepared for college reading. Low basic reading skills have also been identified in four-year college students who have documented reading disabilities (Birch, 2014). Specifically, in the sample tested, students with specific reading disabilities displayed difficulties in applying sound-symbol relationships to individual words, i.e., sounding out the words, and in reading words that were not spelled according to regular orthographic patterns, i.e., irregular words. However, it was found that, although they obtained lower scores on standardized reading and spelling tests than a comparison group with no reading disability, the reading disabled students showed reading ability within the normal range, suggesting that they were able to compensate for their word-reading difficulties.
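For readers less familiar with standard scores, the 84.95 cited above can be converted to a percentile rank. The sketch assumes the common mean-100, SD-15 scale with normally distributed norms (an assumption here; the chapter does not report the test’s actual norms):

```python
from math import erf, sqrt

def standard_score_percentile(score: float,
                              mean: float = 100.0,
                              sd: float = 15.0) -> float:
    """Percentile rank of a standard score under a normal distribution.
    The mean-100, SD-15 scale is an assumption; consult the test manual."""
    z = (score - mean) / sd               # z-score: distance from mean in SDs
    return 100 * 0.5 * (1 + erf(z / sqrt(2)))  # normal CDF via the error function

print(round(standard_score_percentile(84.95), 1))  # 15.8 — roughly the 16th percentile
```

In other words, on this assumed scale only about one student in six in the norming population would score at or below the group mean Weis et al. reported.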
Literacy and College Career Preparation

Low literacy skills are also a risk factor from the perspective of labor market needs and personal career goals. For example, a large number of college entrants aspire to careers in nursing, an area of workforce shortage (see the Bureau of Labor Statistics’ career outlook for nurses at www.bls.gov/ooh/healthcare/registered-nurses.htm), but are hindered by problems with basic literacy skills. Problems with reading and writing skills have been reported both for nursing aspirants who have not yet been admitted to nursing programs (Perin, 2006) and for advanced-level nursing degree students (Miller, Russell, Cheng, & Skarbek, 2015). Perin (2006) found that only 30 percent of a cohort of students who wished to enter a nursing degree program in a single community college passed the college’s placement test in reading, and only 21 percent passed the writing test, indicating that more than two-thirds were at risk for academic failure. At the other end of the spectrum, Miller et al. (2015) reported that a group of advanced nursing students who had relatively competent writing skills nevertheless experienced “fear and dread” (Miller et al., 2015, p. 176) of writing tasks, and a general lack of confidence in their ability to meet professional standards in work-related writing.
Assessing Academic Preparedness

The level of students’ academic preparedness is often measured using scores on college placement tests, but because these measures are not good predictors of college performance, the assessment of the skills of incoming students may draw on additional measures, such as high school grade point average, students’ self-reported assessment of high school performance, high school courses taken, and rank in the high school graduating class. However, no one measure or combination of measures has yet been demonstrated to be highly predictive of college achievement or outcomes (Conley, 2014; Hughes & Scott-Clayton, 2011; Porter & Polikoff, 2012).

An alternative to current metrics is qualitative assessment of academic preparation for college, such as the college readiness assessment called the Diagnostic Assessment and Achievement of College Skills, which focuses on strengths and weaknesses in academic skills and self-regulation. Described by its developers as “an alternative, cost effective approach to traditional placement exams and remedial education” (http://daacs.net/research.html), the assessment is an open-source, freely available resource funded by the U.S. Department of Education. The assessment focuses on students’ knowledge
and beliefs about college, domain knowledge, knowledge of learning strategies, motivational beliefs, goal setting, “grit” (Wolters & Hussain, 2015), use of support services, and persistence in college. Whatever assessment approach is used, deeper understanding of students’ performance and perceptions can be gained by interviewing students after they have finished a reading or writing assignment, through retrospective reporting (Perin, Grant, Raufman, & Santikian Kalamkarian, 2017).

College instructors sometimes wonder whether students in their classrooms who are showing difficulty with reading or writing might have a learning disability or attentional problem. An instrument called the Learning Difficulties Assessment compiles self-reported information from individual students on basic academic skills, listening ability, concentration, memory, ability to organize material, locus of control, and level of anxiety related to academic tasks (Kane et al., 2011). The authors of the instrument provide reliability and validity data suggesting that it may be a useful screening method for identifying students who are at risk for learning disabilities or attention deficit hyperactivity disorder and warrant further testing.

The topics of college assessment and testing are addressed more fully in Chapters 19 and 20 of this volume. In the remainder of the current section, patterns of performance on authentic classroom tasks are discussed as examples of the assessment of specific reading and writing skills needed to be academically prepared for college. In determining academic preparation, it is important to understand the actual skills students possess; understanding specific patterns of reading and writing permits the design of interventions based directly on students’ observed academic needs.
Written summarization and persuasive writing are two important tasks in college classrooms (Bridgeman & Carlson, 1984; Hale et al., 1996; Wolfe, 2011); several studies have provided researcher-designed measures to assess these skills (MacArthur, Philippakos, & Ianetta, 2015). In a study of academically underprepared students attending developmental education classes at two universities, MacArthur et al. (2015) used two measures of students’ ability to write a persuasive essay based on their prior knowledge and experience. The first was a seven-point holistic writing quality rubric designed to capture an essay’s content, organization, word choice and usage, sentence fluency, and grammatical errors. The second measure focused on the grammar, mechanics, and word usage within individual sentences, using T-units (Hunt, 1965). The quality rubric and T-unit measure were used along with a standardized writing test and a motivation questionnaire to assess students’ progress after an experimental writing intervention.

While MacArthur et al. (2015) used an experiential writing task, Perin and colleagues (Perin, Bork, Peverly, & Mason, 2013; Perin, Keselman, & Monopoli, 2003; Perin, Lauterbach, et al., 2017) have used text-based writing to examine the academic preparedness of students in community college developmental education courses. In these studies, students were asked to read either textbook passages or newspaper articles, and then write a summary or persuasive essay.
These products were then analyzed using a variety of measures, including (1) the use of source text, measured as the proportion of sentences referring directly to information in the source text; (2) reproduction and paraphrasing, measures of direct, word-for-word copying from the source text; (3) accuracy of the information in the essay; (4) the proportion of main ideas in the source text included in a summary; (5) the proportion of idea units in a persuasive essay that function to persuade a reader; (6) English language conventions, defined as grammar, punctuation, and spelling; and (7) the use of academic vocabulary, based on word frequency in authentic text (Coxhead, 2000) and measured using the open-source software “VocabProfile” (Cobb, n.d.), available at www.lextutor.ca/vp/eng/ (Lesaux, Kieffer, Kelley, & Harris, 2014; Olinghouse & Wilson, 2013; Perin et al., 2016).
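Two of these measures lend themselves to simple illustration. The sketch below is hypothetical: the cited studies relied on hand-coding and the VocabProfile tool, not this code, and `AWL_SAMPLE` is an invented five-word stand-in for Coxhead’s full Academic Word List. It shows an n-gram-overlap proxy for word-for-word copying (measure 2) and a token-proportion proxy for academic vocabulary use (measure 7):

```python
import re

# AWL_SAMPLE is a tiny, hypothetical subset of Coxhead's (2000) list,
# used here only so the example is self-contained.
AWL_SAMPLE = {"analyze", "concept", "data", "research", "theory"}

def tokens(text: str) -> list:
    """Lowercase word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def ngrams(words: list, n: int) -> set:
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def copy_rate(source: str, essay: str, n: int = 5) -> float:
    """Share of the essay's word n-grams found verbatim in the source text --
    a rough proxy for the word-for-word copying measure."""
    src, ess = ngrams(tokens(source), n), ngrams(tokens(essay), n)
    return len(ess & src) / len(ess) if ess else 0.0

def academic_word_rate(essay: str) -> float:
    """Proportion of essay tokens appearing on the academic word list."""
    toks = tokens(essay)
    return sum(t in AWL_SAMPLE for t in toks) / len(toks) if toks else 0.0

source = ("The research team used survey data to test the theory "
          "that study habits predict grades.")
copied = "The research team used survey data to test the theory, which I agree with."
paraphrase = "Researchers examined whether how students study is linked to their grades."

print(copy_rate(source, copied), copy_rate(source, paraphrase))  # 0.6 0.0
```

A verbatim-heavy essay scores high on `copy_rate` while a genuine paraphrase scores near zero, which is the contrast the reproduction measure is designed to capture.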
Helping Students Increase Their Academic Preparedness

Two major approaches to improving academic preparedness are requiring students to take “college success” courses on entry to college and implementing evidence-based instructional
techniques. Success courses are broad-ranging in their goals and aim to provide orientation to college for new students, including those identified as having low academic skills. This approach fits with the “Go” component of Conley’s (2015) college readiness model and is consistent with approaches that emphasize student motivation (MacArthur, Philippakos, & Graham, 2016). These courses focus on study skills; college resources, such as the library, financial aid office, counseling services, and learning centers; time management; interpersonal skills; self-regulation; motivation (Kitsantas & Zimmerman, 2009; Liu, Bridgeman, & Adler, 2012); and the decisions and responsibilities of college students. Although success courses are in widespread use, the few rigorous studies of this approach that have been conducted have found a lack of effect on academic achievement, although the courses do appear to be beneficial for psychosocial development and self-regulation, as well as persistence from semester to semester (Cho & Karp, 2012; Rutschow, Cullinan, & Welbeck, 2012).

Instructional approaches that have been found to boost students’ academic preparedness for college reading and writing, based on at least minimal evidence from college students, include the following.
Basic Reading Skills

• To build reading fluency, students work in 25-minute sessions in which they read one 400-word passage four times in a row and answer 10 comprehension questions (Ari, 2011).

Vocabulary Development

• The student works independently or with a coach to build vocabulary knowledge using a given online text. Students think aloud and monitor and reflect on the pronunciation, semantic, syntactic, and contextual knowledge of each target word (Ebner & Ehri, 2016).

Reading Comprehension

• Reciprocal Teaching. Students learn to work in groups to predict upcoming information, clarify concepts, develop questions, and summarize ideas in assigned text. All students in the group take turns role-playing the teacher in order to deepen their comprehension skills (Gruenbaum, 2012).

• Explicit instruction in strategies for utilizing prior knowledge, learning the meanings of unfamiliar words, clarifying the meaning of sentences, asking questions about reading passages, drawing inferences, summarizing information, and focusing on important concepts through annotating text (Nash-Ditzel, 2010).

• After a period of instruction, students keep “metacognitive reading blogs” (Pacello, 2014, p. 127) to record the reading strategies they are using and reflections on their reading process.

Writing Skills

• Writing intensive courses, also known as writing-across-the-curriculum courses, are taught in the disciplines. Here, students receive a greater number of writing assignments than usual (Brownell, Price, & Steinman, 2013; Fallahi, 2012; Mynlieff, Manogaran, St. Maurice, & Eddinger, 2014). These courses aim to deepen subject-area knowledge but may also improve writing skills through implicit learning and the opportunity to practice. In rare cases, writing skills are taught explicitly in a writing intensive course (Miller et al., 2015).

• Self-Regulated Strategy Development (SRSD). Teachers provide explicit, structured instruction using modeling and guided practice in order to help students improve their persuasive writing skills (MacArthur et al., 2015; note that this technique has very strong evidence).

• Students write descriptions of given objects in order to build audience awareness and communicate information more effectively (Mongillo & Wilder, 2012).

• Students practice a set of reading and writing skills (vocabulary building, reading comprehension, written summarization, and persuasive writing) independently, at their own pace, using 10 different disciplinary reading passages over one college semester (Perin et al., 2013).

• Teachers apply the results of a structured assessment to develop a writing intervention customized to individual needs, using a “learning triangle” (Richards, 2015, p. 337) comprising curriculum and instruction, instructional methods and materials, and the student’s pattern of strengths and weaknesses. This approach may be effective for college students with learning disabilities (Richards, 2015).

• Other approaches to improving writing skills include peer review, in which groups of students work collaboratively on reviewing each other’s writing, guided by instructor feedback (Fallahi, 2012).
In addition, classroom approaches to teaching reading and writing to academically underprepared college students, as described by teachers, may be found in the Postsecondary Completion discussion group within the Literacy Information and Communication System (LINCS) Community of Practice (https://community.lincs.ed.gov/), an adult education resource funded by the U.S. Department of Education.
Conclusions

Although it is difficult to define “academic preparedness” with precision, the low reading and writing skills of many incoming college students are of concern. A comprehensive model of college readiness provides a framework for understanding academic preparedness, and there is a robust body of literature on the literacy skills of academically underprepared students and related task-based assessment methods. Further, although the amount of research pointing to effective instructional techniques is slim compared to the number of K-12 literacy intervention studies, there are documented approaches that can be employed to help students boost their academic preparedness.
Note

1 Findings from an analysis of a large dataset (the National Longitudinal Transition Study-2) indicate that students who have been diagnosed with learning disabilities during their K-12 education are more likely to enter community colleges than four-year colleges (Joshi & Bouck, 2017), although four-year college students may also report this history and require assistance (Kane, Walker, & Schmidt, 2011; Weis, Sykes, & Unadkat, 2012).
References and Suggested Readings Ari, O. (2011). Reading fluency interventions for developmental readers: Repeated readings and wide reading. Research and Teaching in Developmental Education, 28(1), 5–15. *Armstrong, S. L., Stahl, N. A., & Kantner, M. J. (2015a). Investigating academic literacy expectations: A curriculum audit model. Journal of Developmental Education, 38(2), 2–23. *Armstrong, S. L., Stahl, N. A., & Kantner, M. J. (2015b). What constitutes ‘college-ready’ for reading? An investigation of academic text readiness at one community college (Technical Report Number 1). DeKalb, IL: Center for the Interdisciplinary Study of Literacy and Language, Northern Illinois University. Retrieved from www.niu.edu/cisll/_pdf/reports/TechnicalReport1.pdf.
187
Dolores Perin
*Armstrong, S. L., Stahl, N. A., & Kantner, M. J. (2016). Building better bridges: Understanding academic text readiness at one community college. Community College Journal of Research and Practice, 40(11), 885–908. doi:10.1080/10668926.2015.1132644 Attewell, P., Lavin, D., Domina, T., & Levey, T. (2006). New evidence on college remediation. Journal of Higher Education, 77(5), 886–924. Bailey, T. R., Jaggars, S. S., & Jenkins, D. (2016). Redesigning America’s community colleges: A clearer path to student success. Cambridge, MA: Harvard University Press. Bergey, B. W., Deacon, S. H., & Parrila, R. K. (2017). Metacognitive reading and study strategies and academic achievement of university students with and without a history of reading difficulties. Journal of Learning Disabilities, 50(1), 81–94. doi:10.1177/0022219415597020 Birch, S. L. (2014). Prevalence and profile of phonological and surface subgroups in college students with a history of reading disability. Journal of Learning Disabilities (online first). doi:10.1177/0022219414554007 Bridgeman, B., & Carlson, S. B. (1984). Survey of academic writing tasks. Written Communication, 1(2), 247–280. doi:10.1177/0741088384001002004 Brockman, E., Taylor, M., Kreth, M., & Crawford, M. K. (2011). What do professors really say about college writing? English Journal, 100(3), 75–81. Brownell, S. E., Price, J. V., & Steinman, L. (2013). A writing-intensive course improves biology undergraduates’ perception and confidence of their abilities to read scientific literature and communicate science. Advances in Physiology Education, 37(1), 70–79. doi:10.1152/advan.00138.2012 Camara, W. (2014). Defining and measuring college and career readiness: A validation framework. Educational Measurement: Issues and Practice, 32(4), 16–27. doi:10.1111/emip.12016 Cho, S.-W., & Karp, M. M. (2012). Student success courses and educational outcomes at Virginia Community Colleges (CCRC Working Paper No. 40). 
New York, NY: Community College Research Center, Teachers College, Columbia University. Cobb, T. (n.d.). The Compleat Lexical Tutor Website. Retrieved November 13, 2015 from http://www. lextutor.ca Cohen, A. M., Brawer, F. B., & Kisker, C. B. (2013). The American community college (6th ed.). Boston, MA: Wiley. Conley, D. (2007). Redefining college readiness. Eugene, OR: Education Policy Improvement Center. Retrieved from www.epiconline.org/. Conley, D. (2012). A complete definition of college and career readiness. Eugene, OR: Educational Policy Improvement Center. Retrieved from www.epiconline.org/. Conley, D. (2014). New conceptions of college and career ready: A profile approach to admissions. Journal of College Admission, (Spring 2014), 13–23. *Conley, D. (2015). A new era for educational assessment. Education Policy Analysis Archives, 23(8), 1–40. doi:10.14507/epaa.v23.1983 Coxhead, A. (2000). A new academic word list. TESOL Quarterly, 34(2), 213–238. Ebner, R., & Ehri, L. C. (2016). Teaching students how to self-regulate their online vocabulary learning by using a structured think-to-yourself procedure. Journal of College Reading and Learning, 46(1), 62–73. Fallahi, C. R. (2012). Improving the writing skills of college students. In E. L. Grigorenko, E. Mambrino & D. D. Preiss (Eds.), Writing: A mosaic of new perspectives (pp. 209–219). New York, NY: Psychology Press. Gruenbaum, E. A. (2012). Common literacy struggles with college students: Using the Reciprocal Teaching Technique. Journal of College Reading and Learning, 42(2), 110–116. doi:10.1080/10790195.2012.10850357 Hale, G., Taylor, C., Bridgeman, B., Carson, J., Kroll, B., & Kantor, R. (1996). A study of writing tasks assigned in academic degree programs (RR-95-44, TOEFL-RR-54). Princeton, NJ: Educational Testing Service. Hughes, K. L., & Scott-Clayton, J. (2011). Assessing developmental assessment in community colleges. Community College Review, 39(4), 327–351. doi:10.1177/0091552111426898 Hunt, K. 
(1965). Grammatical structures written at three grade levels (NCTE Research Report No. 3). Education Resources Information Center (ERIC) Report No. ED113735. Champaign, IL: National Council of Teachers of English. Institute of Education Sciences. (2014). The nation’s report card: Reading (NCES 2014-087). Washington, DC: Author, U.S. Department of Education. Retrieved from http://nationsreportcard.gov/reading_ math_g12_2013/#/. Joshi, G. S., & Bouck, E. C. (2017). Examining postsecondary education predictors and participation for students with learning disabilities. Journal of Learning Disabilities, 50(1), 3–13. doi:10.1177/0022219415572894 Kane, S. T., Walker, J. H., & Schmidt, G. R. (2011). Assessing college-level learning difficulties and “at riskness” for learning disabilities and ADHD: Development and validation of the Learning Difficulties Assessment. Journal of Learning Disabilities, 44(6), 533–542. doi:10.1177/0022219410392045
188
Academic Preparedness
Kitsantas, A., & Zimmerman, B. J. (2009). College students’ homework and academic achievement: The mediating role of self-regulatory beliefs. Metacognition and Learning, 4(2), 97–110.
*Kodama, C. M., Han, C.-W., Moss, T., Myers, B., & Farruggia, S. P. (2016). Getting college students back on track: A summer bridge writing program. Journal of College Student Retention: Research, Theory & Practice (online first). doi:10.1177/1521025116670208
Kroeger, T., Cooke, T., & Gould, E. (2016). The class of 2016. Washington, DC: Economic Policy Institute. Retrieved from www.epi.org/publication/class-of-2016/.
*Le, V.-N., Mariano, L. T., & Faxon-Mills, S. (2016). Can college outreach programs improve college readiness? The case of the College Bound, St. Louis Program. Research in Higher Education, 57(3), 261–287. doi:10.1007/s11162-015-9385-8
Lesaux, N. K., Kieffer, M. J., Kelley, J. G., & Harris, J. R. (2014). Effects of academic vocabulary instruction for linguistically diverse adolescents: Evidence from a randomized field trial. American Educational Research Journal, 51(6), 1159–1194. doi:10.3102/0002831214532165
Lindsay, J., Davis, E., Stephan, J., & Proger, A. (2017). Impacts of Ramp-Up to Readiness™ after one year of implementation (REL 2017–241). Retrieved from https://eric.ed.gov/?id=ED572863
Liu, O. L., Bridgeman, B., & Adler, R. M. (2012). Measuring learning outcomes in higher education: Motivation matters. Educational Researcher, 41(9), 352–362. doi:10.3102/0013189X12459679
Long, B. T., & Boatman, A. (2013). The role of remedial and developmental courses in access and persistence. In A. Jones & L. W. Perna (Eds.), The state of college access and completion: Improving college success for students from underrepresented groups. New York, NY: Routledge.
Lopez, C., & Jones, S. J. (2017). Examination of factors that predict academic adjustment and success of community college transfer students in STEM at 4-year institutions. Community College Journal of Research and Practice, 41(3), 168–182. doi:10.1080/10668926.2016.1168328
MacArthur, C. A., Philippakos, Z. A., & Graham, S. (2016). A multicomponent measure of writing motivation with basic college writers. Learning Disability Quarterly, 39(1), 31–43. doi:10.1177/0731948715583115
*MacArthur, C. A., Philippakos, Z. A., & Ianetta, M. (2015). Self-regulated strategy instruction in college developmental writing. Journal of Educational Psychology, 107(3), 855–867. doi:10.1037/edu0000011
Miller, L. C., Russell, C. L., Cheng, A.-L., & Skarbek, A. J. (2015). Evaluating undergraduate nursing students’ self-efficacy and competence in writing: Effects of a writing intensive intervention. Nurse Education in Practice, 15(3), 174–180. doi:10.1016/j.nepr.2014.12.002
Mongillo, G., & Wilder, H. (2012). An examination of at-risk college freshmen’s expository literacy skills using interactive online writing activities. Journal of College Reading and Learning, 42(2), 27–50.
Mynlieff, M., Manogaran, A. L., St. Maurice, M., & Eddinger, T. J. (2014). Writing assignments with a metacognitive component enhance learning in a large introductory biology course. Life Sciences Education, 13(2), 311–321. doi:10.1187/cbe.13-05-0097
Nagaoka, J., Farrington, C. A., Roderick, M., Allensworth, E., Keyes, T. S., Johnson, D. W., & Beechum, N. O. (2013). Readiness for college: The role of noncognitive factors and context. Vue, (Fall 2013), 45–52. Retrieved from https://ccsr.uchicago.edu/sites/default/files/publications/VUE%20Noncognitive%20Factors.pdf.
*Nash-Ditzel, S. (2010). Metacognitive reading strategies can improve self-regulation. Journal of College Reading and Learning, 40(2), 45–63. doi:10.1080/10790195.2010.10850330
National Center for Education Statistics. (2012). The nation’s report card: Writing 2011 (NCES 2012–470). Washington, DC: Institute of Education Sciences, U.S. Department of Education. Retrieved from http://nces.ed.gov/nationsreportcard/pdf/main2011/2012470.pdf.
Olinghouse, N. G., & Wilson, J. (2013). The relationship between vocabulary and writing quality in three genres. Reading and Writing: An Interdisciplinary Journal, 26(1), 45–65. doi:10.1007/s11145-012-9392-5
*Pacello, J. (2014). Integrating metacognition into a developmental reading and writing course to promote skill transfer: An examination of student perceptions and experiences. Journal of College Reading and Learning, 44(2), 119–140. doi:10.1080/10790195.2014.906240
Perin, D. (2006). Academic progress of community college nursing aspirants: An institutional research profile. Community College Journal of Research and Practice, 30(8), 657–670. doi:10.1080/10668920600746094
Perin, D., Bork, R. H., Peverly, S. T., & Mason, L. H. (2013). A contextualized curricular supplement for developmental reading and writing. Journal of College Reading and Learning, 43(2), 8–38.
Perin, D., Grant, G., Raufman, J., & Santikian Kalamkarian, H. (2017). Learning from student retrospective reports: Implications for the college developmental classroom. Journal of College Reading and Learning, 47(2), 77–98. doi:10.1080/10790195.2017.1286956
Perin, D., Keselman, A., & Monopoli, M. (2003). The academic writing of community college remedial students: Text and learner variables. Higher Education, 45(1), 19–42.
Dolores Perin
Perin, D., Lauterbach, M., Raufman, J., & Santikian Kalamkarian, H. (2017). Text-based writing of low-skilled postsecondary students: Relation to comprehension, self-efficacy and teacher judgments. Reading and Writing: An Interdisciplinary Journal, 30(4), 887–915. doi:10.1007/s11145-016-9706-0
Porter, A. C., & Polikoff, M. S. (2012). Measuring academic readiness for college. Educational Policy, 26(3), 394–417. doi:10.1177/0895904811400410
Richards, S. A. (2015). Characteristics, assessment, and treatment of writing difficulties in college students with language disorders and/or learning disabilities. Topics in Language Disorders, 35(4), 329–344. doi:10.1097/TLD.0000000000000069
Rosenberg, M. J. (2016). Understanding the adult transfer student—Support, concerns, and transfer student capital. Community College Journal of Research and Practice, 40(12), 1058–1073. doi:10.1080/10668926.2016.1216907
Rutschow, E. Z., Cullinan, D., & Welbeck, R. (2012). Keeping students on course: An impact study of a student success course at Guilford Technical Community College. New York, NY: MDRC.
Sandoval-Lucero, E. (2014). Serving the developmental and learning needs of the 21st century diverse college student population: A review of literature. Journal of Educational and Developmental Psychology, 4(2), 47–64. doi:10.5539/jedp.v4n2p47
Schademan, A. R., & Thompson, M. R. (2016). Are college faculty and first-generation, low-income students ready for each other? Journal of College Student Retention: Research, Theory & Practice, 18(2), 194–216. doi:10.1177/1521025115584748
Smith, D. J. (2016). Operating in the middle: The experiences of African American female transfer students in STEM degree programs at HBCUs. Community College Journal of Research and Practice, 40(12), 1025–1039. doi:10.1080/10668926.2016.1206841
Snyder, T. D., de Brey, C., & Dillow, S. A. (2016). Digest of education statistics 2015 (NCES 2016-014). Washington, DC: National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education. Retrieved from http://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2016014.
Sparks, R. L., & Lovett, B. J. (2014). Learning disability documentation in higher education: What are students submitting? Learning Disability Quarterly, 37(1), 54–62. doi:10.1177/0731948713486888
Taraban, R., Kerr, M., & Rynearson, K. (2004). Analytic and pragmatic factors in college students’ metacognitive reading strategies. Reading Psychology, 25(2), 67–81. doi:10.1080/02702710490435547
*Tierney, W. G., & Sablan, J. R. (2014). Examining college readiness. American Behavioral Scientist, 58(8), 943–946. doi:10.1177/0002764213515228
Weinstein, C. E., & Palmer, D. (2002). Learning and Study Strategies Inventory (LASSI): User’s manual (2nd ed.). Clearwater, FL: H&H.
Weis, R., Sykes, D., & Unadkat, D. (2012). Qualitative differences in learning disabilities across postsecondary institutions. Journal of Learning Disabilities, 45(6), 491–502. doi:10.1177/0022219411400747
Wolfe, C. R. (2011). Argumentation across the curriculum. Written Communication, 28(1), 193–219. doi:10.1177/0741088311399236
Wolters, C. A., & Hussain, M. (2015). Investigating grit and its relations with college students’ self-regulated learning and academic achievement. Metacognition and Learning, 10(3), 293–311. doi:10.1007/s11409-014-9128-9
Xu, D. (2017). Assistance or obstacle? The impact of different levels of English developmental education on underprepared students in community colleges. Educational Researcher, 45(9), 496–507. doi:10.3102/0013189X16683401
Yancey, K. B. (2009). The literacy demands of entering the university. In L. Christenbury, R. Bomer & P. Smagorinsky (Eds.), Handbook of adolescent literacy research (pp. 256–270). New York, NY: Guilford Press.
12 Strategic Study-Reading

Patricia I. Mulcahy-Ernt, University of Bridgeport
David C. Caverly, Texas State University
How do college students, when reading academic texts, preparing for their course assignments and tests, and participating in their academic literacy communities, make meaning of what they read, and what learning and study strategies help them do so effectively? These are the central questions of this chapter, and our focus includes both the print and digital texts required of today’s college students. Our intent is to expand the definition of textbook study-reading strategies to that of academic literacy learning strategies, to account for multimodal print and digital texts in discipline-specific college courses and the strategies students need while learning from these texts. Thus, this chapter discusses what have traditionally been considered study-reading strategies through a new framework of academic literacy learning strategies, considers these strategies from the perspective of the sociocultural contexts of college discipline-based literacy communities, and reviews research that offers insights about feasible approaches for learning from and studying with multimodal texts. Our framework of academic literacy learning strategies takes a sociocultural perspective that accounts for meaning-making literacy events in which the learner, as a participant with others in an academic, discipline-based learning community, strategically uses learning practices to accomplish metacognitive goals when learning from multimodal texts.
Purposeful Studying in Literacy Communities

Purposeful Studying

The perspective posited in this chapter is that studying, to be purposeful and effective, should be contextualized in the meaning-making literacy activities of academic communities. In other words, purposeful studying is tied to the literacy tasks as well as the social and cultural Discourses (discourses with a capital “D,” as noted by Gee (2012)) that are required of a specific discipline. From a sociocultural perspective (Lea & Street, 2006), academic literacy—not just reading—is developed as students develop academic literacy-specific learning strategies; contextualize these strategies within each discipline; and address the social and cultural practices of that discipline, including the identities, authority, and ideologies specific to the discipline. Lea and Street (2006) proposed that there are three models for communicating in an academic literacy setting: a study skills model, a socialization model, and an academic literacy model, all building upon one another. Students matriculate into higher education with many of the initial
skills and strategies required to participate in academic communities. This allows them to make meaning, to build understanding, and to recall what is read; this is the study skills model. Applying the skills strategically within each discipline domain as students are socialized within these discipline cultures guides them toward deeper understanding of the contextualization of texts and their learning; this is the socialization model. Adapting the strategies to the unique social and cultural Discourses and contexts of their academic major’s discipline begins to build agency, power, ideologies, and identity within these students as they interpret the texts and join their respective academic communities; this is the academic literacy model. The successful student strategically approaches the task and the material in consideration of these academic communities, which may vary by discipline and the sociocultural contexts of each discipline. For instance, the freshman college student studying a biology textbook for a midterm examination uses different academic literacy strategies from those he or she would use for a college psychology exam. Similarly, reading and learning from a psychology textbook often requires different learning and study strategies than reading and learning from a marketing textbook.
Subsequent meaning making (see Figure 12.1) in both examples can be understood through six lenses situated within a sociocultural context: (a) specific literacy events, such as course and lab assignments, as well as discussions with professors and peers inside and outside the classroom; (b) specific participants in a learning community, such as professors, students, and those using technology applications; (c) specific literacy demands created by the sociocultural rules and norms required to communicate effectively within the academic disciplines, which vary according to a student’s major field of study, minor field of study, and core curriculum courses; (d) specific multimodal modes of communication within the academic context, such as print, oral, visual, audio, tactile, gesture, spatial, and/or emotional “texts” (Cope & Kalantzis, 2013), used to complete the literacy event by the literacy community participant; (e) specific self-regulated learning tasks in which the learner sets goals for studying, plans what to do, selects appropriate learning strategies, monitors performance, and reflects on the outcomes of studying and learning; and (f) specific literacy
[Figure 12.1 depicts strategic study-reading contexts as interlocking components situated within a sociocultural context: literacy events (course assignments, lab activities, class discussions, online discussions); learning community participants (professors, students, supervisors, peer reviewers, digital users); academic disciplines (English, history, science, foreign language, mathematics, arts, humanities, business); multimodal modes of communication (print, oral, audio, visual, and digital texts; digital apps); self-regulated learning tasks (goal setting, planning, selecting strategy, monitoring, reflecting); literacy practices (annotating, summarizing, mapping, questioning, evaluating credibility, synthesizing, collaborating); and the learner (prior knowledge, epistemology, interest, motivation, attitude, culture).]
Figure 12.1 Strategic study-reading contexts for academic literacy learning and representative examples from a sociocultural perspective.
practices needed to make meaning of these texts, such as annotating, summarizing, mapping, questioning, evaluating the credibility of the source for digital texts, synthesizing ideas across texts, and collaborating with others when working with texts. The contextualization of academic literacy strategies within specific disciplines indicates that the effective college student selects the strategies that most closely align with the types of thinking displayed by experts in specific disciplines.
A Disciplinary Literacy Perspective

Using a different lens for viewing the various disciplines studied in college reveals keen distinctions among historical, scientific, mathematical, and literary analyses, as noted in studies from a disciplinary literacy perspective. For instance, Shanahan, Shanahan, and Misischia (2011), in their description of the reading approaches of experts in the fields of history, chemistry, and mathematics, cite distinctions in critically responding to text, in using text structures, and in close reading, all of which have implications for strategies for studying. Shanahan et al. do not endorse a generic approach to selecting and using study strategies, but they consider an approach that values the unique disciplinary knowledge and abilities of those in the field. In other words, thinking like a historian versus thinking like a scientist requires different discipline-based literacies to successfully communicate in the history community, such as considering the text’s authorial source or how concepts are organized. The perspective held by Shanahan et al. is that traditionally taught content area study strategies are decontextualized. Therefore, learning in a discipline requires thinking like an expert in the discipline. The implication for college students is that they need to develop those habits of mind integral for learning in their specific disciplines.
The Role of Prior Knowledge in Metacognitive Processing

The influence of the academic discipline on a proficient college student’s strategy selection points to the importance of prior knowledge when learning and studying. For decades, reading research has documented the role of a student’s prior knowledge when comprehending texts (Butcher & Kintsch, 2013; Holschuh & Paulson, 2013), particularly discipline-specific knowledge (see Chapter 8 in this volume). Prior procedural knowledge about specific cognitive processes during reading, such as selecting, organizing, synthesizing, and elaborating (Holschuh & Paulson, 2013), and metacognitive knowledge about processes when studying afford the student the ability to select the best strategy for the task. Taraban, Rynearson, and Kerr (2000) point to the need for college developmental readers to recognize when to use metacognitive strategies and to build on the prior knowledge and comprehension skills learned in earlier grades. Cantrell et al. (2013) studied patterns of self-efficacy among college students in developmental reading classes and found that even though students reported that their confidence was positively influenced by knowledge of reading strategies, developmental college students overall had lower levels of self-efficacy. Nash-Ditzel (2010) as well as Paulson and Bauer (2011) noted the importance of metacognitive strategies for college students in their development of self-regulated reading performance. In sum, prior knowledge (i.e., much domain and strategy knowledge versus little) and reading ability (i.e., good versus poor readers) influence not only strategy selection but also the success of that strategy when studying, and can serve as constraints on effective strategic reading.
Self-Regulated Learning

Zimmerman’s (2002) research on self-regulated learning processes noted how students create purposeful learning through self-awareness, self-motivation, and the skill to implement knowledge (see Chapter 14 in this volume). Zimmerman (1989) had previously argued that these elements of self-regulation are situated within a purposeful act, naming the process strategic learning. Pintrich
and Garcia’s (1994) study of self-regulated learning noted that college students can become strategic learners when academic assistance reflects actual learning tasks required by professors in core content classes. Central themes for college reading courses, as noted by Paulson and Bauer (2011), include the criticality of the student’s own goal setting during reading tasks, along with self-reflection about how learning goals are achieved. Thus, teaching college readers to be metacognitively aware of their reading and study goals, their strategy selections, and the worth of those strategies is paramount for college reading instruction (Holschuh & Paulson, 2013; see Chapter 2 in this volume). Students’ ownership of self-regulated learning strategies is an essential factor in any academic literacy strategy. The purposeful, self-regulated student sets clear learning goals; adopts the necessary strategies to reach those goals; monitors performance; evaluates progress and attainment of those goals; and adjusts strategies as needed, depending on the task at hand. Students’ self-motivation depends on their perceived self-efficacy, their beliefs in their own ability to learn, and the intrinsic value they place on learning, all of which influence academic performance (Young & Ley, 2002; Zimmerman & Schunk, 2011). Both the student’s skill and will contribute to strategic purposeful learning (see Chapter 14 in this volume). Corno and Snow (2001) noted that not only are metacognitive elements important, but so are noncognitive elements, like conation, or the choice to access prior knowledge. Thus, a constraint on the effectiveness of the academic literacy study strategies noted in this chapter is precisely this: the student’s own self-efficacy and self-regulation fuel the motivation to use the strategies (Pintrich & De Groot, 1990). Without a doubt, many learning and study strategies require considerable effort.
Continued research on self-regulated learning has supported its benefits. As noted in prior expert-novice studies, Cleary and Zimmerman (2000) found that experts react differently from novices in their application of knowledge at crucial times in the learning process. The underlying premise for study skills instruction noted in prior editions of this chapter (Caverly & Orlando, 1991; Caverly, Orlando, & Mullen, 2000; Mulcahy-Ernt & Caverly, 2009) was that if the developmental college student learned, practiced, and acquired the academic literacy strategies of successful students, and adapted these strategies to the task demands of academic disciplines, then the developmental student would experience similar success in college. Through metacognitive strategy instruction, the successful college reader would be able to use self-regulated learning strategies for monitoring learning and evaluating the effectiveness of those strategies. The assumptions of these premises were that developmental learners were novice students, successful learners were the experts, and study strategies would provide the means for the novice student to become the expert student.
Cognitive Processing Models

Hartley’s (1986) review of study strategies in Britain and the United States provided an overview that is still useful today. This overview described three different groups of academic literacy strategies: support strategies for organizing one’s study environment; information processing strategies for organizing one’s thinking, integrating what one already knows with what is to be learned, and elaborating information; and metacognitive processing strategies, such as reviewing, paraphrasing, self-questioning, imaging, and predicting, for monitoring comprehension. Hartley’s work echoes that of Weinstein and Mayer (1986), who delineated eight categories of cognitive processing from basic to complex, with distinctions ranging from rehearsal to elaboration to organization (see Table 12.1 and Chapter 14 in this volume). Weinstein, Ridley, Dahl, and Weber (1988) were instrumental in defining learning strategies as “behaviors or thoughts that facilitate learning” (p. 17). Historically, Weinstein and Mayer’s (1986) work pointed to the importance of metacognitive strategy instruction and the efficacy of learning strategies for improving academic performance. The Weinstein and Mayer framework provided a classification scheme that is still used today to note distinctions in academic literacy learning
strategies and in understanding the depth of processing required for long-term retention when studying. Their classification is useful not only for noting distinctions in the utility and effectiveness of different academic literacy learning strategies but also for capturing the range of strategies. A more recent perspective linking effortful learning with the successful outcomes of using self-regulated study strategies comes from contemporary research on “grit.” Here, grit is defined as a person’s perseverance and passion for long-term goals (Duckworth, Peterson, Matthews, & Kelly, 2007). Its relation to college students’ self-regulated learning and academic achievement has been a recent topic of interest to researchers. Wolters and Hussain (2015) found that perseverance of effort predicted achievement as well as engagement in self-regulated learning, particularly when managing time, choosing an appropriate study environment, and avoiding unnecessary delays when completing tasks; these behaviors form a critical pathway between personal dispositions and academic achievement. Another burgeoning area of interest that points to the importance of self-regulated learning strategies focuses on mindfulness as essential for the cultivation of the dispositions necessary for learning (Nielson, 2017). Necessary “habits of mind” include engagement, risk-taking, motivation, and curiosity (Fletcher, Najarro, & Yelland, 2015); these influence the student’s persistence in academic tasks. Developmental college students in particular benefit from instructor modeling and coaching for the cultivation of these habits of mind and for the development of students’ self-efficacy with academic learning strategies (Najarro, 2015).
A Technological Lens for Academic Literacy Tasks

Academic Literacy Tasks Redefined for Digital Texts

Learning through technological literacies emerged at least 500,000 years ago, with a zigzag etching made by a Homo erectus using a shark tooth on a clam shell. Joordens et al. (2014) found this clam shell at a Trinil archaeological site in Java and concluded from their archaeological epistemic stance that this action was the first artistic activity. As literacy professors, we might conclude from another stance that this could have been the first evidence of writing; the shark tooth was a technological stylus, and the clam shell was the paper. Socioculturally, we might also argue that this was the first evidence of an intentional message. In previous editions of this chapter, we framed study-reading strategies within Weinstein and Mayer’s (1986) model, based upon a cognitive processing, construction-integration model of text comprehension (Kintsch, 1998) for understanding and remembering single texts. We have now expanded this model (see Figure 12.1) to account for academic literacy learning strategies interacting with multimodal texts, student characteristics, and task demands for different literacy events. For example, a recent study described how students in a botany course used their smartphones to photograph plants in a forest, captured dissecting and compound microscope images of them, labeled these photos, shared the labeled photographs with other students and family via social media, and used them to master plant identification. Through these mobile learning academic literacies, students had a strong engagement with the class and developed emerging identities as scientists (Harper, Burrows, Moroni, & Quinnell, 2015).
In another developmental education classroom, students used multimodal academic literacies to communicate with each other within a virtual world created through Second Life (Burgess, Price, & Caverly, 2012) and made greater reading achievement gains than did a control group. Another group of developmental education students in a makerspace environment consumed and produced multimodal texts, such as mangas, infographics, and labeled photos, as they told the stories of their lives (Hughes, 2017). In all three of these scenarios, students made meaning through academic literacy learning strategies with texts that provided little traditional print. Through digital academic literacy learning
strategies, they were able to demonstrate how, within complex learning environments, they were prepared to engage in academic Discourses in the 21st century (New London Group, 1996). Prior to the Information Age, making meaning when reading printed texts required recalling, reasoning, and creativity, using learning strategies such as rehearsing, elaborating, and organizing knowledge (Weinstein & Mayer, 1986). However, over the last five decades, with a proliferation of multimodal and hyperlink-based texts, new literacy strategies have emerged for reading both single digital texts and multiple digital documents (Rouet, Britt, Mason, & Perfetti, 1996). When reading multiple texts found on the Internet, students use macro-strategies, such as identifying and learning important information, monitoring, and evaluating (Afflerbach & Cho, 2009). Each of these declarative macro-strategies has multiple procedural micro-strategies. For example, when using a monitoring strategy, students monitor the degree and nature of their comprehension of a current passage by referencing exogenous sources, using knowledge established previously. Similar micro-strategies exist for the identifying and evaluating macro-strategies. Other macro-strategies have been developed more recently (Cho & Afflerbach, 2017; Leu et al., 2015), including (1) determining the purpose and forming questions in order to read multiple texts; (2) locating information within multiple, digital sources on the Internet; (3) critically evaluating the information for trustworthiness, usefulness, and epistemic stance; (4) synthesizing the information into a coherent, responsive text; and (5) sharing that text within a discipline-based community to confirm one’s understanding.
Hyperlinks

An additional challenge when reading single digital texts or multiple texts online is the phenomenon of hyperlinks. Hyperlinks have been found to interrupt text processing, particularly among low domain knowledge students and students with nonspecific goals (Chen & Yen, 2013). However, allowing low domain knowledge students to choose their own path can improve their motivation to read. Digital texts have differed significantly from traditional print texts in their nonlinear organization, caused by hyperlinks. The flexibility hyperlinks provide for navigating the text in whatever path one chooses often creates an ill-structured domain requiring complex processing (Spiro, Feltovich, Jacobson, & Coulson, 1991). Consequently, strategic navigation toward meaning making within digital text requires the reader to monitor how information is presented, combined, and understood while he or she integrates additional information presented through hyperlinks. Strategic reading has been particularly salient when learning through multiple digital texts hyperlinked together. Accomplished hyperlinked readers continually make decisions regarding which hyperlink path to choose, which depth of meaning to construct, and how to determine the credibility and trustworthiness of the sources (Bråten, Braasch, Strømsø, & Ferguson, 2015). Each hyperlink must be selected and evaluated to fit into this meaning-making model, subsequently affecting the cognitive load. Prior knowledge mediates this processing, as it reduces the choices of which links the student accesses (Burin, Barreyro, Saux, & Irrazábal, 2015). Weak readers, on the other hand, often get lost in the preponderance of hyperlinks (Shang, 2016) or multitask by navigating social media (Baron, Calixte, & Havewala, 2017).
However, when students using hypertexts monitor themselves through self-regulated learning strategies, choose effective metacognitive behaviors, identify where problems exist, decide when to adjust learning strategies, or evaluate the validity of the sources and the effectiveness of the strategic approach, they are much more effective (Greene et al., 2015). To improve comprehension and motivation even more, researchers have embedded hyperlinks with analysis or synthesis questions and found them to be productive (Hathorn & Rawson, 2012; Yang, 2010). Thus, how and when hyperlinks are used affects the reader’s comprehension and learning with digital texts.
Multiple Document Literacies

While Web 2.0 has allowed for a more democratic inclusion of voices on the shared Internet, there is a need for critical reading due to the proliferation of “disinformation” and “fake news” (cf. Allcott & Gentzkow, 2017; Brewer, Young, & Morreale, 2013). As noted in the most recent National Assessment of Educational Progress data (National Center for Education Statistics, 2016), only 6 percent of 12th graders are able to critique and evaluate multiple documents. For those who graduate with a two-year or four-year degree, only 23 percent and 40 percent, respectively, are able to critique multiple documents (Baer, Cook, & Baldi, 2006). Researchers have addressed learning through multiple sources (Britt & Rouet, 2012). From a cognitive perspective, Rouet and Britt (2011) developed a document literacy model with embedded strategies to guide strategic learning. Other researchers have taught students how to process multiple documents by examining “source nodes” that are present in all texts, such as content, form, setting, author, and rhetorical goals nodes (Anmarkrud, Bråten, & Strømsø, 2014). Students who use author nodes to evaluate the author’s epistemic stance tend to accept some sources and not others (Bråten, Britt, Strømsø, & Rouet, 2011). Subsequently, academic literacy learning strategies can be applied across multiple documents to make critical meaning.
Academic Literacy Learning Strategies

This chapter explores the types of strategies for reading and learning from print or digital academic texts. We agree with Afflerbach, Pearson, and Paris (2008) that the more generic term “reading strategies” refers to “deliberate, goal-directed attempts to control and modify the reader’s efforts to decode text, understand words, and construct meanings of text” (p. 368), while “reading skills” are “automatic actions that result in decoding and comprehension with speed, efficiency, and fluency and usually occur without awareness of the components or control involved” (Afflerbach et al., 2008, p. 368). For working with text in specific disciplines, Bulgren, Deshler, and Lenz (2007) emphasized prior knowledge as essential to comprehension of and learning from text. They recommended both general and disciplinary strategies for learning and reading. Thus, in this chapter, the term “academic literacy learning strategies” broadens the notion of reading comprehension to learning from text, be it print or digital. When reading and learning from academic texts in college classrooms, students have a repertoire of effective strategies to choose from. These strategies have been of interest to a variety of researchers, who have categorized them in a variety of frameworks. Nist-Olejnik and Holschuh (2014) identified commonly used rehearsal strategies, such as reviewing and self-testing, and more elaborative meaning-making strategies, such as questioning, note-taking, and annotating. Other strategies include those for summarizing the text, glossing, mapping, and a host of combinational strategies. Pedagogical discussions of effective reading strategies have included self-questioning, predicting, visualizing, inferring, using text structure, and summarizing (Cartwright, 2015). Bulgren et al.
(2007) note questioning, summarizing, identification of key ideas, use of graphic organizers, and the understanding of cause-effect relationships as essential disciplinary strategies for learning and reading. Dunlosky, Rawson, Marsh, Nathan, and Willingham (2013) identified 10 commonly used college study strategies: elaborative interrogation, self-explanation, summarization, highlighting/underlining, using keyword mnemonics, forming imagery for text, rereading, self-testing, a schedule of distributed practice, and a schedule of interleaved practice. Simsek and Hooper (1992), in their assessment of the learning strategies of university students in Turkey, identified clusters of strategies for rehearsal, elaboration, organization, metacognition, and motivation that encompassed a full range of strategies based on Weinstein and Mayer’s (1986) cognitive classifications; their research supported previous findings that successful students employed a greater number of strategies and had more successful learning experiences than unsuccessful students.
Mulcahy-Ernt and Caverly
This chapter focuses on explicitly taught academic literacy learning strategies identified for making meaning from college texts, not only in college developmental reading courses but also in college courses in specific discipline areas. Though several of these strategies were reviewed in the prior editions of this chapter, additional strategies are included here. Of note are studies conducted in non-Western countries with applications for non-English speakers. While there has been a dearth of recent research about specific study strategies in Western countries, particularly in the United States, it is noteworthy that recent strategy research has been occurring in non-Western countries. The strategies noted in Table 12.1 are discussed in the subsequent sections of this chapter.

Table 12.1 Framework of Study-Reading Strategies for Print and Digital Texts

Rehearsal
Basic learning techniques for print: techniques for repeating information to remember, such as common memorizing, rereading the text, or self-testing
Basic learning strategies for digital text: selecting pop-up, marginal, or in-text glosses; marking text through highlighting or underlining; making digital notes by repeating text in the margin
Complex learning strategies for print: strategies for marking material to be learned, such as underlining or highlighting
Complex learning strategies for digital text: strategies for information literacy, marking source material for its source location, source date, and source content

Elaboration
Basic learning techniques for print: techniques for generating mental images to remember, such as imaging
Complex learning strategies for print: strategies for integrating new information with prior knowledge, such as generative note-taking, elaborative interrogation, questioning, annotating, or summarizing
Complex learning strategies for digital text: strategies for integrating new information with prior knowledge, such as student-generated annotating, glossing, self-questioning, noting the author source, summarizing with others online, and synthesizing discrete notes into a coherent whole

Organizational
Basic learning techniques for print: techniques for grouping lists of items, such as mnemonics
Complex learning strategies for print: strategies for recognizing, recalling, and recreating the structure of the information, such as outlining, mapping, or using other graphic organizers
Complex learning strategies for digital text: strategies for mapping the organization of the text, such as outlining, graphing, and mapping

Monitoring
Complex learning strategies for print: combinational strategies, such as SQ3R, PLAN, ROWAC, and EVOKER, for orchestrating solo strategies and monitoring one's progress (through self-regulation) toward a learning goal
Complex learning strategies for digital text: think-alouds and elaborations for monitoring comprehension

Affective and Motivational Strategies
Complex learning strategies for print: volitional strategies, including attention, concentration, and mindfulness
Complex learning strategies for digital text: collaborations with others on digital texts
Strategic Study-Reading
Rehearsal Strategies: Underlining/Highlighting
Underlining and highlighting are reported as the most widely used study-reading strategies (Bell & Limber, 2010; Gurung, Weidert, & Jeske, 2010). They are considered to be similar strategies (Fowler & Barker, 1974) since, whether the text is underlined or marked with a highlighter, the student selects what he/she considers to be important for remembering. As a rehearsal strategy, underlining/highlighting is rated as low in utility (Dunlosky et al., 2013). The effectiveness of this strategy depends on the student's reading expertise and the student's prior knowledge about the text content. Bell and Limber (2010) noted that low-skilled readers more often choose this strategy, often buy textbooks already marked, and are less capable of choosing relevant information (Gurung et al., 2010). Benefits of highlighting/underlining are seldom reported; an exception is when the "relevant" text is highlighted (Blanchard & Mikkelson, 1987; Johnson, 1988), such as when the relevant text passages are marked by the author, helping the student focus on the essential sections.
Basic Rehearsal Strategies with Digital Glosses
Digital glosses provide basic rehearsal-level strategic learning (cf. Weinstein & Mayer, 1986) through various types of hyperlinks (i.e., pop-up, marginal, or in-text) that deliver to the student word definitions, word pronunciations, word translations into multiple languages, annotations linking to multimodal representations of the words, teacher-created prompts providing monitoring questions, and/or cognitive maps depicting the organization and citations to other information sources (Dalton & Strangman, 2013). Chen (2016) found that, among English as a Second Language (ESL) students, glosses varied in their strategic rehearsal benefits. Pop-up hyperlink glosses were useful for active readers in low-stakes tasks. Marginal hyperlink glosses assisted in second language conversations. In-text hyperlink glosses provided information supporting the comprehension of reluctant readers with limited proficiency.
Complex Rehearsal Strategies for Digital Texts
When reading digital texts either individually or in groups, students must first learn how to find sources on the Internet that fit their questions or task. This includes how to locate information within multiple digital databases using search engines and Boolean logic and how to explore the topic without anxiety (Gross & Latham, 2013; Kuhlthau, 2004). Analyzing the task, using library search tools, and evaluating the results are complex rehearsal processes.
Self-Testing
Nist-Olejnik and Holschuh (2014) described the benefits of study strategies from a cognitive perspective, noting how rehearsal strategies, including self-testing, help the student remember and retrieve information that is studied. Their CARDS (Cognitive Aids for Rehearsing Difficult Subjects) provided a strategy for remembering vocabulary. In this strategy, students develop vocabulary cards that include the key term, definition, examples, and key points to remember. After creating the study cards, the student reviews and self-tests to see what is remembered. Van Blerkom and Mulcahy-Ernt (2005) provided a similar study strategy for discipline-specific vocabulary and included on the cards the student's paraphrase of the definition and self-created mnemonics so that the student could make meaning of the textbook definition and remember it. Nilson (2013) noted the efficacy of self-testing for recalling and reviewing information that needs to be remembered. As a strategy for self-regulated learning, self-testing provides immediate
feedback about what has been learned and what still needs to be learned. Self-testing provides a strategy for practicing what needs to be retrieved from memory. Flippo (2015) provides and demonstrates many practical applications of these and other self-testing strategies appropriate for use with secondary as well as postsecondary students. While self-testing is a strategy for learning from textbooks, digital apps similarly provide mobile platforms for students to test themselves on self-selected vocabulary and disciplinary concepts.
Complex Elaborative Strategies: Generative Note-Taking
As in the previous examples in which students generated their own notes for retesting and remembering, the essential feature was that students created their own paraphrases and in so doing integrated what they knew about the topic with what they needed to learn about it. The worth of utilizing prior knowledge in order to learn new information is well documented in schema research (Anderson, 1984; Ensar, 2015) and vocabulary research (Nagy, Townsend, Lesaux, & Schmitt, 2012). According to Weinstein and Mayer's (1986) classification, as an elaboration strategy for complex learning tasks, generative note-taking is considered worthwhile due to the active processing involved in making connections between the information to be learned and the information already known. Wilson (2012) studied the note-taking processes of non-native speakers and surmised that essential to effective note-taking were learning how to use the language of academic text and learning to walk the fine line of avoiding plagiarism. Dunlosky et al. (2013) reported that when students provided self-explanations about what they had learned, their learning was enhanced; the assumption was that the effectiveness rested with the integration of new information with prior knowledge. Similarly, Linderholm, Therriault, and Kwon (2014) found significant benefits of self-explanation for reading comprehension, specifically with multiple science texts. Flippo (2015) demonstrates the use of generative note-taking and learning strategies throughout her book, referring to some of the recommended practices as note-making and condensing notes.
However, as noted in the previous discussion about prior knowledge, the effectiveness of these elaborative academic learning strategies depends on the interaction of the student’s familiarity with the discipline, the student’s familiarity with the strategy, the student’s understanding of the criterion task, and the difficulty of the text itself.
Complex Elaborative Strategies: Self-Questioning
While the purpose of self-testing generally is to rehearse, such as for an objective test, self-questioning has been one of the more widely recommended strategies for learning from academic texts. Also named "elaborative interrogation," the strategy has been deemed moderately effective in previous reviews of the literature (Dunlosky et al., 2013). Teaching students to generate "why" questions at the sentence level about a stated fact in a text has been shown to foster learning. Pressley et al.'s (1987) seminal research on elaborative interrogation showed that college students using this strategy significantly outperformed control groups, 72 percent to 37 percent. Seifert (1993) found large effect sizes in similar studies (d = 0.83 to d = 2.57) for sentence recall; however, since college students must process and recall text at more than the sentence level, a sentence-level strategy may not be practical. A key to effective elaborative interrogation is for students to generate an elaboration from their prior knowledge. A strong moderator of elaborative interrogation is the strength of the student's prior knowledge (24 percent recall for strong-prior-knowledge students versus 12 percent for weak-prior-knowledge students; Woloshyn, Willoughby, Wood, & Pressley, 1990). Therefore, this strategy is not as effective when students have low prior knowledge. Elaborative interrogation works effectively for recalling factual material but not for meaning making beyond literal comprehension.
During the last decade, research about elaborative interrogation has contextualized this strategy in discipline-specific courses. Pease (2012) found that when college students reading a college chemistry textbook asked elaborative interrogation "why" questions, they performed better in quantitative chemistry problem-solving than those who just reread the text. Likewise, Smith, Holliday, and Austin (2010) found that when college students enrolled in a first-year science course read challenging informational text and used an elaborative interrogation strategy in which they recalled background material about the topic, they comprehended the science better than a control group who just reread the text passages. Ozgungor and Guthrie (2004) found that when students became engaged in higher-order questioning by answering "why" rather than "what" questions through elaborative interrogation, they were able to recall more information, identify more accurate inferences, and create more coherent mental representations of the text than students who merely reread the text for understanding. In this study, "why" questions were embedded in the text; the benefit of using elaborative interrogation proved higher for those college students who had less prior knowledge about the topic. Buehl (2011) provided a self-questioning taxonomy as a metacognitive strategy for understanding academic texts in different disciplines. The premise of self-questioning is that it promotes thinking like those in the discipline and leads to deeper questioning of complex disciplinary texts. A caveat for this strategy, though, is that developing readers need to hear the think-alouds of experienced readers in the field who model the types of questions that lead to deep thinking about the discipline. In other words, developing readers need to learn the essential questions that frame disciplinary knowledge in a specific field.
Flippo (2015) noted the use of self-questioning as a viable opportunity for students to review materials to be learned, pose questions, and answer questions, combined with the dynamics of small study-group discussion and participation.
Complex Elaborative Strategies for Digital Texts
When students use self-questioning with digital texts, they evaluate each source for the author's qualifications, as well as judge the source for trustworthiness and usefulness. Anmarkrud et al. (2014) demonstrated that students can judge a source for its authenticity. Students use complex organizational processing to synthesize information in digital texts to create a coherent text model. Throughout this process, students monitor what they read, use peer reviews to share their texts and receive feedback, and then disseminate their findings within the discipline-based community.
Complex Elaborative Strategies: Annotating
Since underlining/highlighting as a strategy for reading and learning from academic text is not rated highly (Dunlosky et al., 2013), strategies that promote close, critical reading are preferred. According to Beers and Probst (2013), the goal of close reading is to attend to the text itself, to focus on the reader's own feelings and experiences, and to consider other readers' interpretations. To do so, the reader needs to focus on the text, reread it, analyze it, and critically evaluate it. Nist-Olejnik and Holschuh (2014) recommended annotation as a strategy for reading and learning from academic texts; in this strategy, readers write directly on the page and note key ideas, definitions, examples, relevant lists, possible test questions, key details, and relevant relationships among concepts. Their recommendations mirror the REAP strategy (Eanet, 1976), in which the student reads (R) the text, encodes it (E), annotates it (A), then ponders it (P); annotations include summary notes, thesis notes, critical notes, and question notes. The utility of the annotations is that the student can bring the annotated questions to class for clarification, note
essential vocabulary and ideas that need to be remembered, and raise critical points that may be discussed in class or in assigned papers. Vacca, Vacca, and Mraz (2017) provided recommendations to students for using a text-marking system as annotations; they listed common symbols and notations that students can use for identifying concepts in a text, understanding text structure, noting questions, locating main ideas and supporting evidence, and noting essential vocabulary. In a study unique to digital texts, Tseng, Yeh, and Yang (2015) reported the use of online annotations in which Chinese English as a Foreign Language (EFL) students created four types of annotations: marking important vocabulary, marking unknown vocabulary words through explanatory notes, adding notes explaining the relationships between sentences and paragraphs, and adding summary notes with a personal reflection. They found that students' annotations that marked or explained vocabulary reached only surface-level comprehension (Kintsch, 1998), while annotations that explained relationships, summarized, or reflected reached deeper, textbase-level or situation-level comprehension. Chen and Liu (2012) found that learner-generated glosses in the form of annotations can be an elaborative strategy, improving comprehension more than faculty-provided annotations. For learner-generated annotation glosses to be useful, the learner must create quality annotations. In sum, deeper processing through annotations, whether in print or digital text, improves the level of comprehension.
Digital Glossing
While another term for a text annotation is a text gloss, digital texts provide the tools for explaining vocabulary and text concepts through both words and pictures. The affordances within mobile devices have significantly increased the complexity of multimodal representations of concepts, ideas, and processes. If these "re-presentations" from one mode to another are cohesive, there can be a "multimedia effect" (Mayer, 2009a). That is, comprehension is improved when students study texts that have multimodal representations with both print and graphic representations, which explain one another, in comparison to either mode separately. Through multiple studies of the multimedia effect, Mayer (2009b) documented increased understanding and recall with strong statistical effect sizes (averaging d > 1.39). This understanding is strengthened specifically when students focus on the spatial recall necessary in disciplines such as the hard sciences or business.
Complex Elaborative Strategies: Summarizing
Summarization as a strategy promotes deep processing of the text and requires the writer to recognize the text structure, select main ideas, locate supporting details that provide evidence for the main premise of the text, and develop a condensed version of the text. Research about its effectiveness as a learning strategy is mixed. Hebert, Gillespie, and Graham (2013) found that summarization was better than answering questions on a free recall measure. Cordero-Ponce (2000) examined the effects of summarization training for foreign language learners on their comprehension of French expository text passages and also found that summarization was an effective strategy. In addition, Graham and Hebert (2010), in the Carnegie report on effective writing instruction, noted that students' comprehension of science, social science, and language arts texts improves when students write summaries of what they read. On the other hand, Dunlosky et al. (2013) found that summarization had low utility, mostly due to the amount of time it takes to teach students how to use it. While some students benefit from summarization, additional research is needed to evaluate the conditions of its utility. Since writing summaries is a complex writing task, the variability of the effectiveness of this strategy may be due to the complexity
of the interaction among the student's background discipline knowledge needed to write a good summary, the student's knowledge of how to write a summary, and the time needed to write the summary. Consideration should be given to measuring the strategic effectiveness of summarizing with concurrent think-aloud protocols or retrospective reports (Perin, Grant, Raufman, & Kalamkarian, 2017). Interestingly, in a study of the reading strategies of Iranian students completing their postgraduate studies in an English-speaking country, summarization was not the preferred strategy because of the difficulty of manipulating the English language (Ebrahimi, 2012), thus showing that summarization requires a sophisticated understanding of writing and language.
Complex Organizational Strategies: Mapping
Mapping, as noted in prior editions of this chapter (Caverly & Orlando, 1991; Caverly et al., 2000; Mulcahy-Ernt & Caverly, 2009), has been deemed one of the most effective reading strategies. As a complex organizational strategy, mapping includes the use of graphic organizers, such as flowcharts, time lines, tables, and T-charts, and provides a visual means of showing the relationships among text ideas (Vacca et al., 2017). Nesbit and Adesope's (2006) meta-analysis of studies using concept maps showed that they are effective for knowledge retention and transfer. Concept maps have been used in a wide variety of disciplines. For example, Doorn and O'Brien (2007) used concept mapping in an introductory statistics course; students using concept maps reported a greater gain in the area of "active study." Liu, Chen, and Chang (2010) reported using a concept-mapping learning strategy with EFL college readers; this mapping strategy yielded greater benefits for the lower reading-level group than for the higher group. For digital texts, while mapping online is very similar to mapping on paper, apps for laptops and mobile devices simplify creating maps as well as sharing maps during collaborative learning.
Combinational Systems
In contrast to the solo strategies described earlier, combinational systems combine multiple solo strategies into one system. Combinational systems are algorithmic when multistep strategies guide the reader through regulated learning steps, following formulaic reading and learning strategies in a required order. For example, SQ3R (Robinson, 1946) requires students to follow specific strategic reading steps with every type of text. Combinational systems are heuristic when micro-strategies are used through a series of steps; the reader self-regulates within each step, choosing which micro-strategy to use, monitoring performance, and reflecting on the effectiveness of the chosen strategy.
SQ3R and Its Progenies
One of the more popular algorithmic systems for college study-reading has been SQ3R (Robinson, 1946) and its subsequent progenies, including SQ4R and SQ5R. The strategies inherent in SQ3R (Survey the topic headings and summary, turn topic headings into Questions, Read to answer the questions, Recite to recall the main points and answers to the questions, and Review the main points) include worthwhile elaborative interrogation, self-testing, and reviewing strategies. However, the effectiveness of this system, as with the solo strategies, depends on the interaction of the reader's prior discipline knowledge, strategy knowledge, text variables, and task variables. According to Sticht (2002), SQ3R was created by Professor Francis Robinson at the Ohio State University when he was asked to head a newly formed Learning and Study Skills program to train World War II personnel in reading strategies; Sticht ironically refers to SQ3R as "the reading formula that helped win World War II" (p. 18).
Later modifications to the original SQ3R emerged through recommendations to add steps, such as wRite to create SQ4R (Smith, 1961). Here the additional R step explicitly asked the student to write responses to self-generated questions. In an attempt to incorporate the effectiveness of self-regulated learning (Zimmerman & Schunk, 2011), a further morphing added a fifth R to form the SQ5R reading system: Survey, Question, Read, Record, Recite, Reflect, Review (Bepko Learning Center, 2016). For over 70 years, up to the present day, SQ3R has been widely used as a preferred study-reading strategy, though empirical research about its benefits has been mixed throughout its history. Johns and McNamara (1980) reviewed the effectiveness of SQ3R and concluded that the empirical evidence demonstrating its effectiveness was lacking. Huber's (2004) review of the research some 20 years later concluded similarly but noted that one benefit of this strategy, in comparison to other comprehension strategies, was that SQ3R favored student independent learning. Almost a decade later, Carlston (2011) noted that the existing research about SQ3R was anecdotal and lacked ecological validity; however, Carlston's research in a college introductory psychology class showed that students who used SQ3R performed better on recall exams than those who did not use the strategy. This study called for additional research about SQ3R, especially since the strategy is time-consuming and may not be as effective as higher-processing strategies, such as concept mapping. Reviews of SQ3R research conclude that the effectiveness of this strategy depends on the prior knowledge of the student, the task demands, and the instructional variables. More recent research in the last decade has studied the use and effectiveness of SQ3R in discipline-specific courses.
Artis (2008) endorsed the benefits of active reading and the use of SQ3R in marketing courses to promote the self-regulated learning strategies of previewing the text, actively reading to comprehend the text, and using review strategies to evaluate comprehension. Li, Fan, Huang, and Chen (2014) studied the use of an e-book instructional system for mobile devices to support students learning how to use SQ3R. The app reminded readers of each step of SQ3R and provided examples. As students chose nodes for creating headings, questioning, highlighting, commenting, or reciting, the app created a concept map for the student. Subsequent research concluded that SQ3R was not suitable for all students and recommended that teachers assign the app to poorer readers to allow them adaptive levels of control when reading e-books. Interestingly, this study demonstrated how adaptive technology could benefit the development of any student's self-regulated strategies. However, the complex nature of using an algorithmic academic literacy strategy with students who were not educated in how to use the app interfered with its success. Students in this study did not plan their reading time well; some ignored the prompts to complete the steps of SQ3R. Another drawback was that students could select only what they deemed important content, and the annotations creating the real-time map became a distraction. This research exposed the side effects of technology that did not provide students the flexibility to self-question or to generate their own maps (cf. Bertsch, Pesta, Wiscott, & McDaniel, 2007). Follow-up research for SQ4R and SQ5R is needed. The underlying processes in SQ3R are similar in concept to those of a number of other study-reading algorithmic systems.
These include ROWAC (Read, Organize, Write, Actively Read, Correct Predictions; Roe, Stoodt-Hill, & Burns, 2007), developed to emphasize organizing ideas; SQRQCQ (Survey, Question, Read, Question, Compute, Question; Fay, 1965), used for comprehending word problems in mathematics; and SQRC (State, Question, Read, Conclude; Sakta, 1999), developed to emphasize critical thinking, voicing a viewpoint, and preparing to defend that perspective in a class discussion. Other combinational algorithmic systems were created to help students develop study-reading strategies: S-RUN (Survey, Read, Underline, Notate; Bailey, 1988) to emphasize note-taking; S-RUN-R (Survey, Read, Underline, Notate, Review; van Blerkom & Mulcahy-Ernt, 2005)
developed by van Blerkom to combine Bailey's system with a final review step for studying difficult text material; and P2R (Preview, Read Actively, Review; van Blerkom, van Blerkom, & Bertsch, 2006), developed as a more condensed version for studying texts of easy to average difficulty. As noted by McGuire (2011), Frank L. Christ created the Preview-Lecture-Review-Study (PLRS) Learning Cycle (Christ, 1988), a study strategy used more for studying in general than for studying while reading. Cook, Kennedy, and McGuire (2013) adapted this strategy as Preview, Attend, Review, Study, and Assess. In another study, of SOAR (Jairam, Kiewra, Rogers-Kasson, Patterson-Hazley, & Marxhausen, 2014), whose components are Select (key information), Organize (key information), Associate (connect what is read with prior knowledge), and Regulate (practice testing), the findings indicated that SOAR was more effective than SQ3R since students created meaningful and memorable relationships between the new material and their prior knowledge.
Predict, Organize, Rehearse, Practice, Evaluate
Other combinational systems have been created to include writing as an essential study-reading strategy within the algorithmic systems in order to select, organize, and recall essential information. Tested with college freshmen in developmental reading/study courses, PORPE (Predict, Organize, Rehearse, Practice, Evaluate; Simpson, 1986; Simpson, Hayes, Stahl, & Connor, 1988) was developed as a strategy for creating well-developed essay responses. In this reading-study system, students read passages and completed five steps: Predict potential essay questions, Organize key ideas in their own words, Rehearse key ideas, Practice recall of key ideas in analytical writing tasks, and Evaluate the completeness, accuracy, and appropriateness of their writing. Subsequent research demonstrated that those who used PORPE significantly outperformed the control group on immediate and delayed essay and comprehension exams.
Pre-Plan, List, Activate, Evaluate
PLAE (Pre-Plan, List, Activate, Evaluate) initiated heuristic academic literacy strategy systems. It was first introduced by Simpson and Nist (1984) and was designed to guide students through four steps of planning toward regulating cognition. Informed by research from Brown, Bransford, Ferrara, and Campione (1982), Simpson and Nist proposed that independent planning involved the learner predicting and defining tasks to be accomplished, scheduling appropriate learning strategies to accomplish these tasks, and checking the learning outcomes for effectiveness and efficiency. Built upon this strong theoretical framework, they proposed PLAE as four heuristic steps, beginning with Pre-planning, in which the student identifies the professor's purpose for reading and the task he/she is to accomplish. In the next step, List, the student recalls a list of reading strategies he/she might choose to meet the identified task and chooses a plan of action. The Activating step occurs when the student activates the strategic plan of action and monitors the effectiveness of the chosen strategy, determining whether understanding and remembering are occurring and whether there are any interferences requiring modifications. Finally, in the Evaluating step, the student evaluates his/her success in meeting the goal. Subsequent research has documented student independence and the transfer of this system to other courses (Simpson & Nist, 1984).
Predict, Locate, Add, and Note
A reading strategy reported in the literature is PLAN (Predict, Locate, Add, and Note; Caverly, Mandeville, & Nicholson, 1995). PLAN is a combinational heuristic strategy developed to incorporate effective solo strategies when reading expository texts. Predicting involves skimming to
create a tentative concept map. Locating involves activating one's prior knowledge on the topic as well as self-questioning each node, adding a check mark if it is old information and a question mark if it is new information. Adding involves a close reading, using a variety of close reading strategies, in order to assimilate and accommodate the information in the map where the check marks and question marks are found. Noting involves representing what is understood by restructuring the map to better represent the rhetorical structure, composing a summary of what is read, and reflecting on the process to adjust the strategy for the next text. Caverly, Mandeville, and Nicholson (1995) compared first-semester students who learned PLAN (i.e., "Takers") to those who delayed enrolling (i.e., "Skippers") in a developmental reading course, as well as against a randomly assigned control group (i.e., "Control") of those who were not required to enroll in the developmental reading course. After four semesters, results demonstrated that the Takers who learned to use PLAN were similar to the Control group in overall grade point average and grade in American History (i.e., a reading-intensive core curriculum course) and had significantly outperformed the Skippers, who withdrew after two semesters. In addition, the Takers significantly outperformed the Control group in retention. This initial study documented the benefits of PLAN with a small sample. Caverly, Nicholson, and Radcliffe (2004) replicated the study with larger numbers of students and found pretest/posttest growth in cognitive, metacognitive, and affective measures, though no difference in self-efficacy. In a second study, transfer to an American history course provided positive retention. This confirmed that PLAN was effective for success in a reading-intensive course. Teaching PLAN at a two-year college provided a third replication with a much larger sample (Caverly, Bower, Liu, & Jung, 2013).
Through a regression discontinuity protocol, a group of Takers (n = 2417) who enrolled in the developmental reading course was compared to a propensity-score-matched group of Passers (n = 2415) on a state-mandated test. Takers significantly outperformed Passers on variables such as persistence, gateway-course passing, number of credits accumulated, and graduation rates. Caverly and his colleagues have completed other replication studies on PLAN with seventh- and eighth-grade students (Radcliffe, Caverly, Peterson, & Emmons, 2004) and fifth-grade students (Radcliffe, Caverly, Hand, & Franke, 2008) in the United States, as well as with seventh- and eighth-grade chemistry students in Iraq (Al-Mosawi, Kadim, & Subhi, 2015).
Reading Multiple Documents Online

Heuristic academic literacy strategies have also been created to manage and comprehend multiple digital texts when making meaning while learning online. Cho and Afflerbach (2017) proposed a four-stage macro-strategy building upon Rouet and Britt's (2011) theoretical model. Similarly, Leu, Kinzer, Coiro, Castek, and Henry (2013) proposed a dual-level theory in which new literacies (lowercase) represent ever-changing digital strategic tools, such as search engines or mobile apps, while New Literacies (uppercase) represent strategies for social practice. From this research, Leu et al. proposed a five-stage macro-strategy: reading to define important questions, reading to locate online information, reading to critically evaluate online information, reading to synthesize, and reading and writing to share understanding. Leu, Forzani, Rhoads, Maykel, Kennedy, and Timbrell (2015) updated these macro-strategies, focusing on the new literacies of online research and comprehension.
Conclusions and Implications for Practice

The implications of research on academic literacy learning strategies are twofold, for learning and for teaching. First, student goals should focus on becoming strategic self-regulated learners who are
Strategic Study-Reading
clear about their goals for reading and learning from text and who select and use study strategies that promote deep thinking about the discipline. Students also need to monitor the effectiveness of their selected strategies in consideration of the constraints of the task assignments, the difficulty and type of texts to be read, and the amount of time it takes to use them. The overall goal for teachers is to provide the instructional supports essential for students to develop the habits of mind for learning in their discipline and for reading and learning from text, whether the text is in print or digital form. This symbiosis of learning and teaching in an academic community has important implications for students and teachers.
Recommendations for Students

Students have much choice in their selection of strategies in consideration of their disciplines. While it clearly takes much effort and time to read and study from text, the research about effective study strategies concludes the following:
1 While rehearsal strategies, such as underlining and highlighting, may be initially appealing as strategies for learning from text, more effective strategies involve elaborative interrogations that require asking the "why," not just the "what," questions about the text. Questions that promote analysis and critical inquiry about the discipline lead to deeper thinking about the domain being studied and often to more success in future academic literacy task demands.
2 While research has not identified a single proven combinational study system that all students should use, effective systems have much in common. Previewing, or surveying a text before reading, helps readers connect the ideas in the text to their prior knowledge about the topic so that new knowledge can be integrated with it. Utilizing an active reading strategy that involves questioning and composing responses promotes thinking about the discipline. Self-testing with self-generated notes helps the reader reflect on, evaluate, and monitor what has been read.
3 An effective study strategy involves the use of mapping, since this strategy involves a conceptualization of discipline knowledge that shows how ideas are related to each other by representing the knowledge in a different multimodal form. The process of organizing, then describing and representing, the types of conceptual relationships within a domain helps the reader to think about and later to recall discipline knowledge. Inherent in this process is the reader's cognitive processing of connecting the new knowledge with prior knowledge, which is essential for learning in a discipline.
4 At this point, the research has not shown whether strategies such as hyperlinks or embedded annotations (or glosses) within digital technologies help the reader learn from text more than traditional print-based strategies do. While hyperlinks and glosses can improve vocabulary among second language learners, using hyperlinks to build prior knowledge yields greater understanding. Therefore, the selection of the strategy depends on the reader's own choice and comfort when using traditional or digital texts. Ultimately, the depth of thinking that occurs when reading in print or in digital forms is the critical outcome.
Recommendations for Teachers

1 Developmental college students may benefit from instructor modeling and coaching for the cultivation of the habits of mind in a discipline and the development of students' self-efficacy. Teachers need to model, teach, and demonstrate self-regulated learning strategies so that students can learn how to develop questions, how to annotate, and how to test themselves in a discipline.
2 An effective means of learning both self-regulation and the strategies that are best fitted to the reading task is the gradual release of responsibility instructional model (Pearson, 2011; Pearson & Gallagher, 1983). Modeling with academic texts, coaching students through a variety of discipline-based texts, and then requiring students to apply these strategies to their reading in other classes has been found to be an effective instructional protocol (Nash-Ditzel, 2010).
3 All study strategies are moderated by students' level of prior knowledge in the domain they are reading about as well as their ability to access that knowledge. Teaching students how to activate their prior knowledge through building vocabulary has been found to be necessary and effective (Cromley & Azevedo, 2007; Rupley, Nichols, & Blair, 2008). Concepts become the cognitive hooks through which new knowledge can be assimilated and accommodated with prior knowledge. Teachers can provide annotations in multiple digital texts from a variety of disciplines, guide students to websites to build vocabulary, have them view representations of concepts and processes, have them follow the text through cognitive maps or create their own, and require student reflections and creation of examples. Modeling effective reading strategies and learning from texts provides examples for students. Following up with multiple opportunities for continued practice helps students see the critical connections so important for learning. Thus, teachers can help students acquire not only declarative knowledge about a discipline but also the procedural knowledge that demonstrates how experts think and use domain knowledge.
4 As instructional coaches, a key role of teachers is to provide feedback to students. Specific feedback about the quality, elaboration, and accuracy of the maps that students create in response to a text is one beneficial type.
When students create annotations, summaries, or other written study materials, teachers can provide feedback about the content of the materials and about the students' written texts in consideration of the way discipline knowledge is communicated. Another, more general type of feedback is to use think-alouds to model how the teacher thinks about the discipline.
5 In consideration of students' prior knowledge and reading abilities, the teacher should select academic texts that are considerate of students' background knowledge and provide instructional supports that clarify the essential questions in the discipline. Likewise, the teacher should provide sufficient conceptual background information so that when students work with the texts, they will know which key ideas they need to select, annotate, and remember.
6 Clearly, these recommendations take much time and effort to implement. A final recommendation, then, is to model effective study and reading time-management schedules so that students may be able to complete the course readings, follow up with relevant questions about them, and prepare requisite study materials for long-term use in the content domain.
Recommendations for Future Research

While students have a plethora of strategies to select from for reading and studying academic texts, there is still a need for much more empirical research about solo and combinational systems, particularly for digital texts. Continued research is needed on the application of these strategies in countries outside the United States, research that addresses significant cultural considerations for learning and studying from text, whether print or digital. Continued research about the worth of strategies in different disciplines is also needed, particularly in consideration of individual differences in students' prior knowledge and reading ability. Likewise, there is a need for much more empirical research on learning and studying from digital texts in contrast to paper texts, particularly given the growth in the delivery of online
instruction and of digital course textbooks. As technology provides newer platforms for reading digital texts and for accessing multimodal texts, new tools for literacy learning strategies become available; research is needed in this area. There is much growth in the types of digital apps for highlighting, annotating, note-taking, managing time, studying vocabulary, and cognitive mapping; however, much more research is needed on their underlying theoretical assumptions and their effectiveness for long-term study, particularly as academic texts move further into digital worlds. Additional research is needed to test the Weinstein and Mayer (1986) model with digital texts, since applications of this model to digital texts have not been answered in the literature. During the past two decades, there has been much attention, instructional effort, and research directed at the role of self-regulated learning strategies and their importance for college students. A related area that holds much promise for future research is mindfulness, particularly as it relates to the habits of mind for reading and studying in the different disciplines. Likewise, as research in the field of neuropsychology delves into the physiology of the brain for learning and remembering, it informs educators about the neural network pathways that impact learning and studying. This interdisciplinary research holds much promise for the future.
References and Suggested Readings

Afflerbach, P., & Cho, B. Y. (2009). Identifying and describing constructively responsive comprehension strategies in new and traditional forms of reading. In S. E. Israel & G. G. Duffy (Eds.), Handbook of research on reading comprehension (pp. 69–90). New York, NY: Routledge. Afflerbach, P., Pearson, P. D., & Paris, S. G. (2008). Clarifying differences between reading skills and reading strategies. The Reading Teacher, 61(5), 364–373. Al-Mosawi, F. U. H., Kadim, U. A., & Subhi, M. S. (2015). The reliability of using PLAN strategy in developing the creative thinking of the second intermediate class students in the acquisition of chemistry (PLAN). Basic Education College Magazine for Educational and Humanities Sciences, 24, 380–401. Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Stanford, CA: Stanford University. Retrieved from web.stanford.edu/~gentzkow/research/fakenews.pdf Anderson, R. C. (1984). Role of the reader's schema in comprehension, learning, and memory. In R. C. Anderson, J. Osborn, & R. J. Tierney (Eds.), Learning to read in American schools: Basal readers and content texts (pp. 243–258). Hillsdale, NJ: Erlbaum. Anmarkrud, Ø., Bråten, I., & Strømsø, H. I. (2014). Multiple-documents literacy: Strategic processing, source awareness, and argumentation when reading multiple conflicting documents. Learning and Individual Differences, 30, 64–76. doi:10.1016/j.lindif.2013.01.007 Artis, A. B. (2008). Improving marketing students' reading comprehension with the SQ3R method. Journal of Marketing Education, 30(2), 130–137. Baer, J. D., Cook, A. L., & Baldi, S. (2006). The literacy of America's college students. Washington, DC: American Institutes for Research. Retrieved from ERIC Database (ED 518670): http://files.eric.ed.gov.libproxy.txstate.edu/fulltext/ED518670.pdf. Bailey, N. (1988). S-RUN: Beyond SQ3R. Journal of Reading, 32(2), 170–171. Baron, N. S., Calixte, R.
M., & Havewala, M. (2017). The persistence of print among university students: An exploratory study. Telematics and Informatics, 34(5), 590–604. doi:10.1016/j.tele.2016.11.008 Beers, K., & Probst, R. E. (2013). Notice & note: Strategies for close reading. Portsmouth, NH: Heinemann. Bell, K. E., & Limber, J. E. (2010). Reading skill, textbook marking, and course performance. Literacy Research and Instruction, 49, 56–67. doi:10.1080/19388070802695879 Bepko Learning Center. (2016). SQ5R reading system. Bepko Learning Center Academic Success Strategies. Indianapolis, IN: Indiana University-Purdue University Indianapolis. Retrieved from https://blc.iupui.edu/success-coaching/academic-success-strategies/note-taking/index.html. Bertsch, S., Pesta, B. J., Wiscott, R., & McDaniel, M. A. (2007). The generation effect: A meta-analytic review. Memory & Cognition, 35, 201–210. Blanchard, J., & Mikkelson, V. (1987). Underlining performance outcomes in expository text. Journal of Educational Research, 80(4), 197–201. Bråten, I., Braasch, J. L. G., Strømsø, H. I., & Ferguson, L. E. (2015). Establishing trustworthiness when students read multiple documents containing conflicting scientific evidence. Reading Psychology, 36(4), 315–349. doi:10.1080/02702711.2013.864362
Bråten, I., Britt, M. A., Strømsø, H. I., & Rouet, J. F. (2011). The role of epistemic beliefs in the comprehension of multiple expository texts: Toward an integrated model. Educational Psychologist, 46(1), 48–70. doi:10.1080/00461520.2011.538647 Brewer, P. R., Young, D. G., & Morreale, M. (2013). The impact of real news about “fake news”: Intertextual processes and political satire. International Journal of Public Opinion Research, 25(3), 323–343. doi:10.1093/ijpor/edt015 Britt, M. A., & Rouet, J. F. (2012). Learning with multiple documents: Component skills and their acquisition. In J. R. Kirby & M. J. Lawson (Eds.), Enhancing the quality of learning: Dispositions, instruction, and learning processes (pp. 276–314). New York, NY: Cambridge University. Brown, A. L., Bransford, J. D., Ferrara, R. A., & Campione, J. C. (1982). Learning, remembering and understanding (Technical Report 244). Urbana-Champaign, IL: University of Illinois. Buehl, D. (2011). Developing readers in academic disciplines. Newark, DE: International Reading Association. Bulgren, J., Deshler, D. D., & Lenz, B. K. (2007). Engaging adolescents with LD in higher order thinking about history concepts using integrated content enhancement routines. Journal of Learning Disabilities, 40(2), 121–133. Burgess, M. L., Price, D. P., & Caverly, D. C. (2012). Digital literacies in multi-user virtual environments among college level developmental readers. Journal of College Reading & Learning, 42, 13–30. Burin, D. I., Barreyro, J. P., Saux, G., & Irrazábal, N. C. (2015). Navigation and comprehension of digital expository texts: Hypertext structure, previous domain knowledge, and working memory capacity. Electronic Journal of Research in Educational Psychology, 13(3), 529–550. Butcher, K. R., & Kintsch, W. (2013). Text comprehension and discourse processing. In A. F. Healy, R. W. Proctor, & I. B. Weiner (Eds.), Handbook of psychology (2nd ed., Vol. 4, Experimental psychology, pp. 578–604). 
Hoboken, NJ: John Wiley & Sons Inc. Cantrell, S. C., Correll, P., Clouse, J., Creech, K., Bridges, S., & Owens, D. (2013). Patterns of self-efficacy among college students in developmental reading. Journal of College Reading and Learning, 44(1), 8–34. Carlston, D. L. (2011). Benefits of student-generated note packets: A preliminary investigation of SQ3R implementation. Teaching of Psychology, 38(3), 142–146. Cartwright, K. B. (2015). Executive skills and reading comprehension: A guide for educators. New York, NY: Guilford. Caverly, D. C., Bower, P., Liu, L., & Jung, J. H. (2013, November). Measures of effectiveness in a two-year college developmental literacy program. Paper presented at the College Reading and Learning Association, Boston, MA. Caverly, D. C., Mandeville, T. F., & Nicholson, S. (1995). PLAN: A study-reading strategy for informational text. Journal of Adolescent & Adult Literacy, 39(3), 190–199. Caverly, D. C., Nicholson, S., & Radcliffe, R. (2004). The effectiveness of strategic reading instruction at the college level for college developmental readers. Journal of College Reading and Learning, 35(1), 25–49. Caverly, D. C., & Orlando, V. P. (1991). Textbook study strategies. In R. F. Flippo & D. C. Caverly (Eds.), Teaching reading and study strategies at the college level (pp. 86–165). Newark, DE: International Reading Association. Retrieved from ERIC Database. (ED326834) Caverly, D. C., Orlando, V. P., & Mullen, J. L. (2000). Textbook study strategies. In R. F. Flippo & D. C. Caverly (Eds.), Handbook of college reading and study strategy research (pp. 105–147). Mahwah, NJ: Lawrence Erlbaum and Associates. Chen, C. J., & Liu, P. L. (2012). Comparisons of learner-generated versus instructor-provided multimedia annotations. Turkish Online Journal of Educational Technology – TOJET, 11(4), 72–83. Chen, I. J. (2016). Hypertext glosses for foreign language reading comprehension and vocabulary acquisition: Effects of assessment methods.
Computer Assisted Language Learning, 29(2), 413–426. Chen, I. J., & Yen, J. C. (2013). Hypertext annotation: Effects of presentation formats and learner proficiency on reading comprehension and vocabulary acquisition in foreign languages. Computers & Education, 63, 416–423. doi:10.1016/j.compedu.2013.01.005 Cho, B. Y., & Afflerbach, P. (2017). An evolving perspective of constructively responsive reading comprehension strategies in multilayered digital text environments. In S. E. Israel & G. G. Duffy (Eds.), Handbook of research on reading comprehension (Vol. 2, pp. 109–134). New York, NY: Routledge. Christ, F. L. (1988). Strengthening your study skills. In R. Zarn (Ed.), Getting the most out of your university experience (pp. 5–10). Long Beach, CA: California State University Long Beach. Cleary, T., & Zimmerman, B. J. (2000). Self-regulation differences during athletic practice by experts, nonexperts, and novices. Journal of Applied Sport Psychology, 13, 61–82. Cook, E., Kennedy, E., & McGuire, S. Y. (2013). Effect of teaching metacognitive learning strategies on performance in general chemistry courses. Journal of Chemical Education, 90, 961–967.
Cope, B., & Kalantzis, M. (2013). “Multiliteracies”: New literacies, new learning. In M. R. Hawkins (Ed.), Framing languages and literacies: Socially situated views and perspectives (pp. 105–134). New York, NY: Routledge. Cordero-Ponce, W. L. (2000). Summarization instruction: Effects on foreign language comprehension and summarization of expository texts. Reading Research and Instruction, 39(4), 329–350. Corno, L., & Snow, R. E. (2001). Conative individual differences in learning. In J. M. Collis & S. Messick (Eds.), Intelligence and personality: Bridging the gap in theory and measurement (pp. 121–138). Mahwah, NJ: Lawrence Erlbaum Associates. Cromley, J. G., & Azevedo, R. (2007). Testing and refining the direct and inferential mediation model of reading comprehension. Journal of Educational Psychology, 99, 311–325. Dalton, B., & Strangman, N. (2013). Improving struggling readers’ comprehension through scaffolded hypertexts and other computer-based literacy programs. In M. C. McKenna, L. D. Labbo, R. D. Kieffer, & D. Reinking (Eds.), International handbook of literacy and technology (Vol. II, pp. 75–92). New York, NY: Routledge. Retrieved from http://site.ebrary.com/lib/txstate/docDetail.action?docID=10647686. Doorn, D., & O’Brien, M. (2007). Assessing the gains from concept mapping in introductory statistics. International Journal for the Scholarship of Teaching and Learning, 1(2), 1–19. Duckworth, A. L., Peterson, C., Matthews, M. D., & Kelly, D. R. (2007). Grit: Perseverance and passion for long-term goals. Journal of Personality and Social Psychology, 92, 1087–1101. doi:10.1037/0022-3514.92.6.1087 *Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving students’ learning with effective learning techniques: Promising directions from cognitive and educational psychology. Psychological Science in the Public Interest, 14(1), 4–58. Retrieved from http://psi.sagepub.com/ lookup/doi/10.1177/1529100612453266. Eanet, M. G. 
(1976). An investigation of the reap reading/study procedure: Its rationale and effectiveness. In P. D. Pearson & J. Hansen (Eds.), Reading: Disciplined inquiry in process and practice (pp. 229–232). Clemson, SC: National Reading Conference. Ebrahimi, S. S. (2012). Reading strategies of Iranian postgraduate English students living at ESL context in the first and second language. Paper presented at the International Conference on Education and Management Innovation, Singapore. Ensar, F. (2015). Schema-based text comprehension. Educational Research and Reviews, 10(18), 2568–2574. Fay, L. (1965). Reading study skills: Math and science. In J. A. Figurel (Ed.), Reading and inquiry (pp. 93–94). Newark, DE: International Reading Association. Fletcher, J., Najarro, A., & Yelland, H. (2015). Fostering habits of mind in today’s students: A new approach to developmental education. Sterling, VA: Stylus. Flippo, R. F. (with Gaines, R., Rockwell, K. C., Cook, K., & Melia, D.) (2015). Studying and learning in a high-stakes world: Making tests work for teachers. Lanham, MD: Rowman & Littlefield. Fowler, R. L., & Barker, A. S. (1974). Effectiveness of highlighting for retention of textual material. Journal of Applied Psychology, 59, 358–384. *Gee, J. P. (2012). Social languages, situated meanings, and cultural models. In J. P. Gee (Ed.), Sociolinguistics and literacies: Ideology in discourses (4th ed., pp. 87–111). London, England: Routledge. *Graham, S., & Hebert, M. (2010). Writing to read: Evidence for how writing can improve reading. A Carnegie corporation time to act report. New York, NY: Alliance for Excellent Education, Carnegie Foundation. Retrieved from www.carnegie.org/publications/?q=Writing+to+read&per_page=25&per_page=25. Greene, J. A., Mason Bolick, C., Caprino, A. M., Deekens, V. M., McVea, M., Seung, Y., & Jackson, W. P. (2015). Fostering high-school students’ self-regulated learning online and across academic domains. High School Journal, 99(1), 88–106. 
Gross, M., & Latham, D. (2013). Addressing below proficient information literacy skills: Evaluating the efficacy of an evidence-based educational intervention. Library & Information Science Research, 35(3), 181–190. doi:10.1016/j.lisr.2013.03.001 Gurung, R. A. R., Weidert, J., & Jeske, A. (2010). Focusing on how students study. Journal of the Scholarship of Teaching and Learning, 10, 28–35. Harper, J. D. I., Burrows, G. E., Moroni, J. S., & Quinnell, R. (2015). Mobile botany: Smart phone photography in laboratory classes enhances student engagement. American Biology Teacher, 77(9), 699–702. Hartley, J. (1986). Improving study skills. British Educational Research Journal, 12(2), 111–123. Hathorn, L. G., & Rawson, K. A. (2012). The roles of embedded monitoring requests and questions in improving mental models of computer-based scientific text. Computers & Education, 59(3), 1021–1031. Hebert, M., Gillespie, A., & Graham, S. (2013). Comparing effects of different writing activities on reading comprehension: A meta-analysis. Reading and Writing: An Interdisciplinary Journal, 26(1), 111–138. doi:10.1007/s11145-012-9386-3
Holschuh, J. L., & Paulson, E. J. (2013). The terrain of college reading. College Reading and Learning Association. Retrieved from www.crla.net/index.php/publications/crla-white-papers. Huber, J. A. (2004). A closer look at SQ3R. Reading Improvement, 41(2), 108–112. Hughes, J. M. (2017). Digital making with "at-risk" youth. International Journal of Information & Learning Technology, 34(2), 102–113. doi:10.1108/IJILT-08-2016-0037 Jairam, D., Kiewra, K. A., Rogers-Kasson, S., Patterson-Hazley, M., & Marxhausen, K. (2014). SOAR versus SQ3R: A test of two study systems. Instructional Science, 42, 409–420. Johns, J. L., & McNamara, L. P. (1980). The SQ3R study technique: A forgotten research target. Journal of Reading, 23(8), 704–708. Johnson, L. L. (1988). Effects of underlining textbook sentences on passage and sentence retention. Reading Research and Instruction, 28(1), 18–32. Joordens, J. C. A., d'Errico, F., Wesselingh, F. P., Munro, S., de Vos, J., Wallinga, J., & Roebroeks, W. (2014). Homo erectus at Trinil on Java used shells for tool production and engraving. Nature, 518(7538), 228–231. doi:10.1038/nature13962 *Kintsch, W. (1998). Comprehension: A paradigm for cognition. New York, NY: Cambridge University Press. Kuhlthau, C. C. (2004). Seeking meaning: A process approach to library and information services (2nd ed.). Westport, CT: Libraries Unlimited. *Lea, M. R., & Street, B. V. (2006). The "academic literacies" model: Theory and applications. Theory Into Practice, 45(4), 368–377. doi:10.1207/s15430421tip4504_11 *Leu, D. J., Forzani, E., Rhoads, C., Maykel, C., Kennedy, C., & Timbrell, N. (2015). The new literacies of online research and comprehension: Rethinking the reading achievement gap. Reading Research Quarterly, 50(1), 37–59. doi:10.1002/rrq.85 *Leu, D. J., Kinzer, C. K., Coiro, J. L., Castek, J., & Henry, L. A. (2013). New literacies: A dual-level theory of the changing nature of literacy, instruction, and assessment. In D. E. Alvermann, N. Unrau, & R. B.
Ruddell (Eds.), Theoretical models and processes of reading (6th ed., pp. 1150–1181). Newark, DE: International Reading Association. Li, L. Y., Fan, C. Y., Huang, D. W., & Chen, G. D. (2014). The effects of the e-book system with the reading guidance and the annotation map on the reading performance of college students. Journal of Educational Technology & Society, 17(1), 320–331. Linderholm, T., Therriault, D. J., & Kwon, H. (2014). Multiple science text processing: Building comprehension skills for college student readers. Reading Psychology, 35(4), 332–356. doi:10.1080/02702711.2012.726696 Liu, P. L., Chen, C. J., & Chang, Y. J. (2010). Effects of a computer-assisted concept mapping learning strategy on EFL college students: English reading comprehension. Computers & Education, 54(2), 436–445. doi:10.1016/j.compedu.2009.08.027 Mayer, R. E. (2009a). Multimedia learning (2nd ed.). Cambridge, UK: Cambridge University. Mayer, R. E. (2009b). The science of learning: Determining how multimedia learning works. In Author (Ed.), Multimedia learning (2nd ed., pp. 57–83). Cambridge, UK: Cambridge University. Mulcahy-Ernt, P., & Caverly, D. C. (2009). Strategic study-reading. In R. F. Flippo & D. C. Caverly (Eds.), Handbook of college reading and study strategy research (2nd ed., pp. 177–198). New York, NY: Routledge. Nagy, W., Townsend, D., Lesaux, N., & Schmitt, N. (2012). Words as tools: Learning academic vocabulary as language acquisition. Reading Research Quarterly, 47(1), 91–108. doi:10.1002/rrq.011 Najarro, A. (2015). Developing students’ self-efficacy. In J. Fletcher, A. Najarro, & H. Yelland (Eds.), Fostering habits of mind in today’s students: A new approach to developmental education (pp. 165–197). Sterling, VA: Stylus. Nash-Ditzel, S. (2010). Metacognitive reading strategies can improve self-regulation. Journal of College Reading & Learning, 40(2), 45–63. National Center for Educational Statistics. (2016). 
The nation’s report card: 2015 - mathematics & reading at grade 12 (NCES 2016108). Washington, DC: Institute for Education Sciences, U.S. Department of Education. Retrieved from https://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2016108. Nesbit, J. C., & Adesope, O. O. (2006). Learning with concept and knowledge maps: A meta-analysis. Review of Educational Research, 76(3), 413–448. New London Group. (1996). A pedagogy of multiliteracies: Designing social futures. Harvard Educational Review, 66, 60–93. Nielson, E. K. (2017). A collective case study of developmental literacy students participating in a mindfulness-based intervention: An investigation of mindfulness, self-compassion, affect, and effort (Unpublished doctoral dissertation). San Marcos, TX: Texas State University. Nilson, L. B. (2013). Creating self-regulated learners: Strategies to strengthen students’ self-awareness and learning skills. Sterling, VA: Stylus.
Nist-Olejnik, S., & Holschuh, J. P. (2014). College success strategies. Boston, MA: Pearson. Ozgungor, S., & Guthrie, J. (2004). Interactions among elaborative interrogation, knowledge, and interest in the process of constructing knowledge from text. Journal of Educational Psychology, 96(3), 437–443. Paulson, E., & Bauer, L. (2011). Goal setting as an explicit element of metacognitive reading and study strategies for college reading. NADE Digest, 5(3), 41–49. Pearson, P. D. (2011). Toward the next generation of comprehension instruction: A coda. In H. Daniels (Ed.), Comprehension going forward (pp. 243–253). Portsmouth, NH: Heinemann. Pearson, P. D., & Gallagher, M. (1983). The instruction of reading comprehension. Contemporary Educational Psychology, 8, 317–344. Pease, R. S. (2012). Using elaborative interrogation enhanced worked examples to improve chemistry problem solving (Doctoral dissertation). Retrieved from ProQuest. (3517610) Perin, D., Grant, G., Raufman, J., & Kalamkarian, H. S. (2017). Learning from student retrospective reports: Implications for the college developmental classroom. Journal of College Reading & Learning, 47(2), 77–98. doi:10.1080/10790195.2017.1286956 Pintrich, P. R., & De Groot, E. (1990). Motivational and self-regulated learning components of classroom academic performance. Journal of Educational Psychology, 82(1), 33–40. *Pintrich, P. R., & Garcia, T. (1994). Self-regulated learning in college students: Knowledge, strategies, and motivation. In P. R. Pintrich, D. R. Brown, & C. E. Weinstein (Eds.), Student motivation, cognition, and learning (pp. 193–214). Hillsdale, NJ: Erlbaum. Pressley, M., McDaniel, M. A., Turnure, J. E., Wood, E., & Ahmad, M. (1987). Generation and precision of elaboration: Effects on intentional and incidental learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 13, 291–300. Radcliffe, R., Caverly, D. C., Hand, J., & Franke, D. (2008). 
Improving reading in a middle school science classroom. Journal of Adolescent & Adult Literacy, 51(5), 398–408. Radcliffe, R., Caverly, D. C., Peterson, C. L., & Emmons, M. (2004). Improving textbook reading in a middle school science classroom. Reading Improvement, 41(3), 145–156. Robinson, F. P. (1946). Effective study (2nd ed.). New York, NY: Harper & Row. Roe, B. D., Stoodt-Hill, B. D., & Burns, P. C. (2007). Secondary school literacy instruction: The content areas (9th ed.). Boston, MA: Houghton Mifflin. Rouet, J. F., & Britt, M. A. (2011). Relevance processes in multiple document comprehension. In M. T. McCrudden, J. P. Magliano, & G. Schraw (Eds.), Text relevance and learning from text (pp. 19–52). Charlotte, NC: Information Age. Rouet, J. F., Britt, M. A., Mason, R., & Perfetti, C. A. (1996). Using multiple sources of evidence to reason about history. Journal of Educational Psychology, 88(3), 478–493. Rupley, W. H., Nichols, W. D., & Blair, T. R. (2008). Language and culture in literacy instruction: Where have they gone? The Teacher Educator, 43(3), 48–56. Sakta, C. G. (1999). SQRC: A strategy for guiding reading and higher level thinking. Journal of Adolescent & Adult Literacy, 42(4), 265–269. Seifert, T. L. (1993). Effects of elaborative interrogation with prose passages. Journal of Educational Psychology, 85, 642–651. *Shanahan, C., Shanahan, T., & Misischia, C. (2011). Analysis of expert readers in three disciplines: History, mathematics, and chemistry. Journal of Literacy Research, 43(4), 393–429. Shang, H. F. (2016). Online metacognitive strategies, hypermedia annotations, and motivation on hypertext comprehension. Journal of Educational Technology & Society, 19(3), 321–334. Simpson, M. L. (1986). PORPE: A writing strategy for studying and learning in the content areas. Journal of Reading, 29, 407–414. Simpson, M. L., Hayes, C. G., Stahl, N., & Connor, R. T. (1988). An initial validation of a study strategy system.
Journal of Reading Behavior, 20(2), 149–180. Simpson, M. L., & Nist, S. L. (1984). PLAE: A model for planning successful independent learning. Journal of Reading, 28(3), 218–223. Simsek, A., & Hooper, S. (1992). The effects of cooperative versus individual videodisc learning on student performance and attitudes. International Journal of Instructional Media, 19, 209–218. Smith, B. L., Holliday, W. G., & Austin, H. W. (2010). Students’ comprehension of science textbooks using a question-based reading strategy. Journal of Research in Science Teaching, 47(4), 363–379. Smith, D. (1961). Learning to learn. New York, NY: Harcourt Brace Jovanovich. Spiro, R. J., Feltovich, P. J., Jacobson, M. J., & Coulson, R. L. (1991). Cognitive flexibility, constructivism, and hypertext: Random access instruction for advanced knowledge acquisition in ill-structured domains. Educational Technology, 31(1), 24–33.
213
Mulcahy-Ernt and Caverly
Sticht, T. G. (2002). The reading formula that helped win World War II. Reading Today, October/November 2002, 18. Taraban, R., Rynearson, K., & Kerr, M. (2000). College students’ academic performance and self-reports of comprehension strategy use. Reading Psychology, 21(4), 283–308. Tseng, S. S., Yeh, H. C., & Yang, S. H. (2015). Promoting different reading comprehension levels through online annotations. Computer Assisted Language Learning, 28(1), 41–57. Vacca, R. T., Vacca, J. A. L., & Mraz, M. (2017). Content area reading: Literacy and learning across the curriculum (11th ed.). New York, NY: Pearson Education. van Blerkom, D. L., & Mulcahy-Ernt, P. I. (2005). College reading and study strategies. Belmont, CA: Thomson Wadsworth. van Blerkom, D. L., van Blerkom, M. L., & Bertsch, S. (2006). Study strategies and generative learning: What works? Journal of College Reading and Learning, 37(1), 7–18. *Weinstein, C. E., & Mayer, R. E. (1986). The teaching of learning strategies. In M. C. Wittrock (Ed.), Handbook of research on teaching (pp. 315–327). New York, NY: Macmillian. Weinstein, C. E., Ridley, D. S., Dahl, T., & Weber, E. S. (1988). Helping students develop strategies for learning. Educational Leadership, 46(4), 17–19. Wilson, K. (2012). Note-taking in the academic writing process of non-native students: Is it important as a process or a product? In R. Hodges, M. L. Simpson, & N. A. Stahl (Eds.), Teaching study strategies in developmental education (pp. 153–166). Boston, MA: Bedford/St. Martin’s. Woloshyn, V. E., Willoughby, T., Wood, E., & Pressley, M. (1990). Elaborative interrogation facilitates adult learning of factual paragraphs. Journal of Educational Psychology, 82, 513–524. Wolters, C. A., & Hussain, M. (2015). Investigating grit and its relations with college student’s self-regulated learning and academic achievement. Metacognition Learning, 10, 292–311. Yang, Y. F. (2010). 
Developing a reciprocal teaching/learning system for college remedial reading instruction. Computers & Education, 55(3), 1193–1201. Young, D. B., & Ley, K. (2002). Brief report: Self-efficacy of developmental college students. Journal of College Reading & Learning, 33(1), 21–31. Zimmerman, B. J. (1989). The social cognitive view of self-regulated academic learning. Journal of Educational Psychology, 81(3), 329–339. *Zimmerman, B. J. (2002). Becoming a self-regulated learner: An overview. Theory Into Practice, 41(2), 64–70. Zimmerman, B. J., & Schunk, D. H. (2011). Handbook of self-regulation of learning and performance. Milton Park, Abingdon, GB: Taylor & Francis.
214
13 Linguistically Diverse Students
Christa de Kleine, Notre Dame of Maryland University
Rachele Lawton, Community College of Baltimore County
This chapter provides research and classroom strategies that can be applied in a variety of college-level contexts—English as a Second Language (ESL), developmental, and credit-bearing courses—to support the success of linguistically diverse students. Philosophically, it is problematic to describe certain students as “linguistically diverse” simply because their languages may not align with the expectations of higher education. Therefore, we aim both to acknowledge the need for a change in attitude toward linguistic diversity and the pedagogical practices that certain attitudes engender, and to provide strategies to support the academic language and literacy development needed for success in college. Within this approach, any student who brings a language or variety other than “mainstream” (or “standard”) English to the college classroom is considered linguistically diverse, not because those languages or varieties are insufficient for educational purposes but because language ideologies position them as inferior. We begin with an overview of linguistic diversity in the U.S. First, we provide descriptions of the students that comprise this heterogeneous group and the challenges they face at the postsecondary level. Then, we offer strategies to support linguistically diverse students, beginning with those that can be applied broadly and concluding with those recommended for the specific students described. Finally, we articulate a need for future research and offer recommendations.
Linguistic Diversity in the U.S.
The student population in the U.S., both at the kindergarten through 12th grade (K-12) and the college levels, has become increasingly linguistically diverse in recent years. In 2010, approximately 10 percent of all K-12 students were designated as English Language Learners (ELLs) (Wright, 2015). As the growth of the ELL population significantly outpaces general K-12 student enrollment, the current expectation is that by 2025, a quarter of the K-12 student population will be ELLs (TESOL International Association, 2013). These trends are reflected at the college level too (Kanno & Harklau, 2012), where numbers are expected to increase dramatically in the next few decades. In addition, U.S. institutions of higher education host more international students than those in any other country. In 2015–2016, 1,043,839 international students enrolled at U.S. colleges and universities. Although it is unclear what percentage of these students are fully English proficient, these numbers also show a steadily increasing trend (Institute of International Education, 2016). Furthermore, linguistically diverse students include those who speak
a dialect or variety of English different from the mainstream variety used in education, although these students may not always be acknowledged as linguistically diverse in educational settings. While no precise data exist on the number of students (in K-12 or postsecondary education) whose home language is a nonmainstream variety of English, there is little doubt that linguistically diverse students—broadly defined as students whose home language is not mainstream English—constitute a significant percentage of college-level students today. Such demographic trends carry crucial implications for college classrooms.
Understanding Linguistically Diverse Students and Their Challenges
Students’ language backgrounds are crucial at the college level, and in education in general, because language is the main tool used in instruction and assessment, in addition to playing a central role in the formation of identities and social relationships (DiCerbo, Anstrom, Baker, & Rivera, 2014). Thus, students whose language skills are not aligned with the language of instruction in higher education—because their home language is either a nonmainstream (or nonstandard) dialect of English (Charity Hudley & Mallinson, 2011; Rickford, Sweetland, Rickford, & Grano, 2013) or a language other than English (Wright, 2015)—face additional challenges at the college level. Indeed, research on the performance of linguistically diverse students at the college level paints a dismal portrait of academic underachievement among this subgroup (Almon, 2012), calling for improved understanding among educators.
Students for Whom English Is a Second Language
Students for whom English is a second language (henceforth, L2 students) form a highly diverse group whose members vary in terms of educational background, home language, home language literacy skills, cultural background, socioeconomic status, and English language proficiency levels—all of which are factors that carry the potential to influence academic success significantly (Wright, 2015). Within this broad group, subgroups can be distinguished, each with its own academic challenges. A general distinction can be made between “international” L2 and “immigrant” L2 students (Belcher, 2012). International L2 students are those who come to the U.S. on a student visa after having finished secondary school in their home countries, often with the primary goal of pursuing postsecondary education in the U.S. They tend to be economically privileged (compared to immigrant students) and highly motivated for their U.S. studies. Crucially, international students often enroll in U.S. colleges with strong academic preparation already in place (de Kleine & Lawton, 2015), endowing them with two significant advantages over other ESL students. First, they have developed their initial academic literacy skills in their home language, resulting in a solid literacy foundation that can subsequently transfer to a second language (Cummins, 2000). Second, the solid educational backgrounds of international students may have included instruction in formal English and thus have prepared them for the academic language of English-medium colleges and universities (in fact, a percentage of these students will be sufficiently English proficient upon arrival; those students are not included in this discussion). Nonetheless, international L2 students may experience linguistic challenges not faced by their monolingual English-speaking counterparts.
Academic writing skills in particular have been identified as a challenge (Belcher, 2012; de Kleine & Lawton, 2015; Ferris, 2011). Developing writing skills is always more challenging in a second language; attaining grammatical accuracy, in particular, is vastly more difficult in a second language than in a first (DeKeyser, Alfi-Shabtay, & Ravid, 2010). Developing sufficient vocabulary to succeed at the postsecondary level is also a challenge for most international students as they need to develop a vast
academic vocabulary as well as discipline-specific vocabularies. Further challenges related to written language skills include developing a familiarity with the features of American discourse prevalent in textbooks and essays (Ferris, 2009). Most ESL courses at the college level focus primarily on written language skills (Ferris, 2009), and with the solid educational foundation international students typically bring to the classroom, these students are well-positioned to benefit from ESL instruction in the U.S. It should be borne in mind, though, that the written language challenges of international students are likely more pronounced in non-ESL courses, where instructors may not always be familiar with the second-language-specific challenges of academic writing, such as U.S. conventions of written academic prose and grammatical accuracy. Academic ESL courses tend to focus much less on spoken language skills development than on writing skills (Ferris, 2009), presumably on the assumption that spoken language is less relevant for academic success or less of an obstacle. However, research by Ferris and Tagg (1996) showed that international students may also struggle with listening and speaking skills. Using a survey administered to instructors in various academic disciplines at four tertiary institutions, Ferris and Tagg found that international L2 students were less prone to oral participation in the classroom, which the instructors believed was (at least partly) rooted in cultural differences between these students and the general student population. As compared to international L2 students, immigrant L2 students reside in the U.S. permanently; they may be first- or second-generation immigrants and have grown up in homes and communities where a language other than English is the dominant language. In many cases, these students originally came to the U.S. for reasons not primarily associated with education.
As a group, immigrant L2 students have socioeconomic and educational backgrounds that vary much more than those of international L2 students (Ferris, 2009; Roberge, Siegal, & Harklau, 2009). Also, the length of time in the U.S. varies among immigrant L2 students and impacts the development of their English skills. For example, more recently arrived immigrant L2 students may not have studied English in school in their home countries, so their writing and speaking skills may be equally underdeveloped (Thonus, 2014). On the other hand, many are “Generation 1.5” or “(long-term) resident L2” students, broadly defined as students who grew up with a non-English home language, arrived in the U.S. as children or adolescents, and thus received (part of) their K-12 education in the U.S., typically having graduated from high school here (Doolan, 2017). These students’ oral and aural language skills are often comparable to those of monolingual speakers—in fact, their spoken language is often indistinguishable from that of monolingual English-speaking students, as language acquisition theory would predict for early arrivals (Scovel, 1988). However, for many Generation 1.5 students, their academic language skills remained underdeveloped in K-12, the result of having had to develop their literacy skills in a second rather than first language, often without sufficient additional ESL instruction to bridge the home-school language gap (Thomas & Collier, 2002). Thus, once enrolled in college, such students often find their academic language skills to be inadequate (Doolan, 2013, 2017), resulting in a pattern of academic underperformance (Kanno & Harklau, 2012).
Generation 1.5 students’ writing skills, in particular, have often been cited as unique when compared to those of both monolingual students and international students (de Kleine & Lawton, 2015), with Generation 1.5 students’ writing derived from their informal-style oral fluency but frequently lacking the grammatical and textual characteristics of formal-style academic writing (de Kleine, Lawton, & Woo, 2014). Because the discrepancy between Generation 1.5 students’ oral and academic English may confound college educators, we emphasize those students in this chapter.
Students Who Speak Nonmainstream Varieties of English
Students whose home language is a variety of nonmainstream English fall into two groups: World English speakers and speakers of U.S.-based dialects. World English speakers hail from countries
where English was typically introduced as a result of past (often British) colonization and where, today, a standardized variety of English is (one of) the official language(s) used for education and other purposes, usually alongside other, local nonmainstream varieties of English. Immigration from World English-speaking countries has increased dramatically in the past two decades, which is currently reflected in increased college enrollment. Having been exposed to some form of English in their home countries, World English-speaking students typically display solid oral English skills, but their written English skills—as measured by academic language standards in the U.S.—are often lacking (de Kleine, 2006, 2009, 2015; Nero, 2001). This is particularly the case for students from countries where so-called “restructured” varieties, i.e., pidginized and creolized forms of English, are spoken, as in West Africa (in particular, Sierra Leone, Liberia, and Ghana) and in Caribbean countries such as Jamaica, Trinidad and Tobago, and Guyana. This is largely explained by the structural differences between these students’ native creolized variety of English and mainstream American English. Research has suggested that these students may fail to perceive subtle yet important differences between their native creolized English and mainstream American English, and as a result, they do not acquire some or all of the structural properties that set mainstream American English apart from the students’ home varieties (de Kleine, 2009, 2015). For instance, in an analysis of 346 writing samples from Creole English-speaking Sierra Leonean and Liberian secondary school students in ESL classes in the U.S., de Kleine (2009) found that these students’ writings displayed different plural and tense marking patterns (“errors” from a mainstream American English perspective), all of which could be traced to Creole English patterns.
Importantly, these patterns barely changed when comparing lower- to advanced-level ESL students, whereas other aspects of their writing did improve considerably, thus suggesting a lack of perception with regard to certain grammatical distinctions between their native English variety and mainstream American English. As is the case with many other immigrant students, the K-12 education of World English-speaking students may have insufficiently addressed their linguistic needs because ESL (and mainstream) instructors typically do not possess the specific knowledge of creolized English patterns needed to modify ESL instruction effectively (de Kleine, 2008). Thus, once World English-speaking students enter college, their academic literacy skills are often seen as insufficient; furthermore, these students may be misinterpreted by instructors who may not recognize that they are actually using another, fully legitimate, variety of English rather than producing language fraught with errors (according to the conventions of mainstream, academic English). Adding to these challenges is the fact that World English-speaking students perceive themselves as native speakers of English (which they are, albeit of a variety other than that of U.S. classrooms) and thus may be resistant to ESL instruction (Nero, 2014). Speakers of nonmainstream dialects of American English are, like ESL students, linguistically distinct, although, being U.S.-born, they may not have been acknowledged as such at the college level. Yet an extensive body of research dating back as far as the 1960s has demonstrated convincingly that U.S.-born students who grow up with a variety of English that is different from mainstream English are at a distinct disadvantage in school (Rickford et al., 2013) and may experience linguistic challenges that are somewhat similar to challenges facing students who speak another language at home (Abbate-Vaughn, 2009; Siegel, 2010).
Research in K-12 settings has amply demonstrated that the literacy skills of speakers of nonmainstream English lag behind those of speakers of mainstream English, with most of the evidence in the U.S. coming from speakers of African American English (AAE).1 It is now generally accepted among linguists that AAE and other nonmainstream English varieties, such as Appalachian English, form rule-governed systems and are, from a linguistic perspective, different but not inferior (Lippi-Green, 2012). Thus, nonmainstream varieties (or “dialects”) are in themselves no obstacle to academic success. Rather, for children who grow up with nonmainstream English as their primary language variety, it is the discrepancy between their home and school language varieties that may have a negative impact on reading
skills development (Craig & Washington, 2006), writing skills development (Fogel & Ehri, 2000), and overall academic achievement (Wheeler & Swords, 2010). It is important to highlight that similar dialect effects are found in educational (and sociopolitical) settings worldwide, including in various countries in Europe, in Australia, and in the Caribbean (de Kleine, 2015; Siegel, 2010). The majority of empirical research available on the literacy skills of speakers of nonmainstream English is at the K-12 level, and most of this suggests that literacy and resulting academic challenges among nonmainstream English-speaking students are not the result of dialect use per se (Craig & Washington, 2006) but instead appear to be rooted in negative attitudes toward nonmainstream language (Wheeler, 2016), lack of teacher knowledge of language variation (Wheeler, Cartwright, & Swords, 2012), or a combination of the two (Champion, Cobb-Roberts, & Bland-Stewart, 2012). Studies conducted at the postsecondary level examining the effects of AAE and other dialects on literacy skills and academic achievement are not quite as numerous; empirical studies on language skills, in particular, are rare, with a few exceptions. Gilyard and Richardson (2001), for instance, identified a strong prevalence of AAE discourse features in the writings of AAE-speaking students, a finding echoed in the work of Balester (1993); both publications make a strong case for building on these discourse patterns to improve students’ academic writing skills. A few other studies have identified issues similar to those in the literature on K-12 students.
For instance, Williams (2012), who investigated sociolinguistic language awareness among college writing instructors, found that educators may not have sufficient knowledge of language and language variation, and even if they do, they often lack the specific linguistic knowledge needed to address issues of linguistic variation effectively in students’ writings or to build effectively on students’ existing linguistic abilities. The bulk of the literature on nonmainstream English speakers at the college level has been in the developmental (basic) writing course setting, where such students tend to be overrepresented (Rickford et al., 2013). Rather than analyzing student language skills, the great majority of recent publications on postsecondary settings have focused on issues of language, power, and pedagogy, with particular attention to “translanguaging” practices, in a general context of advocacy for students’ rights to use their own language varieties (along with mainstream English) in their writings (Canagarajah, 2013; Gilyard, 2016; Horner, Lu, Royster, & Trimbur, 2011; Lu & Horner, 2016).
Strategies to Support Linguistically Diverse Students
Thus far, we have highlighted the heterogeneity of students described as linguistically diverse and noted that a discrepancy between one’s home language and the language of the classroom may impede success at the college level. To support linguistically diverse students—those who speak English as a second language and those who speak nonmainstream varieties of English—we advocate a “linguistically informed” approach to instruction (Charity Hudley & Mallinson, 2011; de Kleine & Lawton, 2015; Wheeler & Swords, 2006). In our view, such an approach embraces the linguistic diversity that all students bring to the classroom through culturally responsive teaching, described later, acknowledges that the concept of “standard” or “mainstream” language is ideological rather than linguistic, and encourages the expansion of all linguistic repertoires, not just those of linguistically diverse students, though their success is the focus of this chapter.
Culturally Responsive Teaching
Culturally responsive approaches to teaching acknowledge that the relationship among language, culture, and learning is a complex one in which “one-size-fits-all” mentalities or methodologies are unacceptable (Gay, 2010). Instructors identify and respond to their students’ needs, so culturally responsive teaching should also be linguistically responsive. The skills that students bring to the classroom, including their home languages, are viewed as useful instructional resources, and
competency in more than one communication system is seen as a resource and a necessity for living in pluralistic societies (Gay, 2010). For example, within a culturally responsive framework, instructors can acknowledge that language variation is the norm even though proficiency in mainstream (or standard) English, a privileged variety that is accepted as “correct” and is unarguably associated with social and political power, is expected in academic contexts. Such judgments about “correctness” are based on sociopolitical considerations rather than linguistic grounds (Wheeler & Swords, 2004, p. 473), so instructors could adopt the term “standardized” rather than “standard” or “mainstream” English to avoid suggesting that a single standard variety of English exists regardless of social norms, registers, or situational contexts (Charity Hudley & Mallinson, 2011). Culturally responsive teaching should also be rooted in equity and, as Gorski (2016) argues, too tight a focus on culture as a response to diversity can discourage a commitment to more equitable educational experiences. Therefore, we advocate an approach in which teachers are racially and linguistically just, not merely culturally sensitive. Culturally responsive teaching has often emphasized racially underserved students (Ladson-Billings, 1995), but all linguistically diverse students (and all students in general) can benefit from an approach to teaching that draws on cultural (including linguistic) backgrounds and foregrounds equity. For example, in a small experimental study, Chen and Yang (2017) found that culturally responsive teaching practices increased ESL students’ classroom participation. In addition, Al-Amir (2010) notes that a culturally responsive environment can help diverse learners feel secure enough to practice the target language, make errors, and correct themselves without embarrassment.
Finally, culturally responsive pedagogical practices should be informed by knowledge of students and subject matter, thereby demonstrating an appreciation for the rich diversity in students’ backgrounds (Charity Hudley & Mallinson, 2011, 2014; Villegas & Lucas, 2002). For instance, instructors can choose texts around themes that relate to students’ knowledge and experiences, such as multicultural content that provides role models whose linguistic backgrounds are diverse (National Council of Teachers of English, 2006). Reading and writing opportunities and cooperative, collaborative activities that promote discussion can be introduced, along with the modeling of expectations, to help bridge the gap between school and the world outside of it (National Council of Teachers of English, 2006). In addition, low-risk opportunities to read and write allow students to express themselves without concern over “correct” grammar, punctuation, and other conventions of academic English. Journal writing is an effective example that may also provide instructors with valuable insights about their students’ lives and previous education experiences. By learning about students’ backgrounds, instructors can more effectively build relationships and provide useful feedback (Lucas, Villegas, & Freedson-Gonzalez, 2008). To learn more about students’ academic and language learning backgrounds, Reynolds (2009) also suggests conducting a diagnostic interview.
Linguistically Informed Feedback
Academic writing is a key criterion for entry to academic studies and successful completion of college degree programs, and studies have shown that writing is the linguistic challenge that plagues students the most (Kanno & Grosik, 2012). Therefore, instructors’ written feedback constitutes an important form of support for linguistically diverse students (see Bitchener & Ferris, 2012). We advocate Charity Hudley and Mallinson’s (2011) “linguistically informed” approach over a “correctionist” approach (see also Reynolds, 2009; Wheeler & Swords, 2006) because identifying every error can be both exhausting for instructors and discouraging and counterproductive for students. Rather, a small number of language patterns can be addressed at a time, which helps students understand their use of specific patterns (Smith & Wilhelm, 2007). Reading an entire paper to identify the issues that most impair one’s understanding is advisable; then, the most frequently occurring issues can be emphasized (Reynolds, 2009). Charity Hudley and Mallinson
(2011) caution against making assumptions about what students know, especially if it has not been taught explicitly, so instructors may need to provide examples of certain types of errors before they refer to them in written feedback. One-on-one, in-person feedback may allow instructors to learn more about students’ language backgrounds and become familiar with the non-standardized English features in students’ writing, which they can refer to as patterns of language variation rather than “errors” (Charity Hudley & Mallinson, 2011).
Academic Language Development
All linguistically diverse students need opportunities to develop academic language skills, which can occur within a culturally and linguistically responsive framework. Though there is no universally accepted definition of academic language, it is widely acknowledged that developing academic language is more challenging than developing social language (Cummins, 2000). To support the development of academic language, educators should focus on the communicative functions of language and the heavily contextualized language used in teaching academic subjects (Wiley, 2005). Because language and literacy development takes place in specific social contexts, instructors should provide academic socialization into specific literacy practices as opposed to English proficiency that is not tied to a particular context (ibid.). Students need repeated, well-supported opportunities to develop academic language in specific contexts, and instructors should be able to identify the language demands inherent in classroom tasks and use scaffolds to facilitate the completion of academic tasks. Scaffolds may be extralinguistic supports, such as visual tools; supplements to written texts, including study guides that outline major concepts; students’ native languages (or varieties), which may involve peer interaction and translation to address comprehension challenges; and other purposeful activities that allow students to interact with classmates and negotiate meaning (Lucas et al., 2008). Instructors can also phrase and then rephrase statements using the academic jargon that students must learn (Charity Hudley & Mallinson, 2011). Although instructors whose primary responsibility is to teach academic subjects (i.e., those outside of ESL and developmental courses) cannot be expected to become experts on language, they can identify the specific characteristics of the language of their disciplines to make them more explicit for ESL students (Lucas et al., 2008).
Carroll and Dunkelblau’s (2011) research found that ESL students need opportunities to achieve linguistic accuracy, acquire academic vocabulary, and develop critical thinking skills in order to meet the academic language demands of credit-bearing college courses. Challenging assignments that require students to engage with texts, summarize, paraphrase and cite sources, and critically reflect on ideas will prepare linguistically diverse students for the reading, writing, and thinking tasks they encounter in college.
Students for Whom English Is a Second Language
International L2 Students
The aforementioned broad strategies apply to international L2 students, whose primary challenges lie in developing academic language skills, including grammatical accuracy. To support international students, instructors should develop some understanding of second language acquisition and second language writing as well as translanguaging practices, in which ESL writers’ differences are viewed as a resource, dominant standards are questioned, and flexibility regarding mistakes and correctness is permitted (see García & Wei, 2014; Horner et al., 2011). Furthermore, because writing and referencing sources as practices are embedded in the culture in which they occur (Abbate-Vaughn, 2009), international students may be unfamiliar with plagiarism concerns
Christa de Kleine and Rachele Lawton
and mystified by the importance placed on citing sources and acknowledging ownership of ideas (Ortmeier-Hooper, 2013). Therefore, instructors should provide strategies for avoiding plagiarism as it is defined in U.S. college classrooms. Furthermore, because international students may also struggle with listening and speaking skills, particularly as they relate to the norms of U.S. college classrooms which place value on interaction, they need opportunities for the oral and aural development of both social and academic English. Repeated, scaffolded opportunities to interact with classmates and adjust to sharing opinions in class will provide the much-needed exposure to English at the college level.
Immigrant L2 Students

As mentioned earlier, the range of linguistic needs is much wider for immigrant L2 students than it is for international L2 students. More recently arrived immigrant students may need opportunities to develop both spoken (oral and aural) language and academic reading and writing skills, but it is academic language in particular that typically presents the main challenge at the college level, even more so for students who lack solid L1 literacy skills. In particular, the linguistic needs of Generation 1.5 students may remain unaddressed in college classrooms, and depending on the institution, those students may be placed into developmental, ESL, or composition courses. Doolan's research (which focused solely on error patterns) has examined the writing of both developmental (Doolan, 2014) and first-year composition students (Doolan, 2013, 2017), and concluded overall that the patterns of Generation 1.5 students are more comparable to L1 than L2 writers. Error patterns alone, however, provide an incomplete picture, and other studies have shown that Generation 1.5 writing more closely resembles L2 writing (Doolan & Miller, 2012). Still more studies found that when examined more holistically, Generation 1.5 student writing may be more like L1 student writing (Di Gennaro, 2016). Though empirical studies' findings have been conflicting, generally speaking, Generation 1.5 students need opportunities to develop academic reading and writing skills and may benefit from approaches that involve explicit language instruction, emphasizing grammar and providing opportunities to practice target structures, though we caution against overemphasizing grammar instruction. Generation 1.5 students may also need opportunities to develop language awareness so that they can better edit their own written work.
Differentiation will likely be necessary to support Generation 1.5 students’ needs, in ESL, developmental, and composition courses alike, and writing centers may have the potential to provide intervention strategies, including teaching students the metalanguage and sociopragmatic conventions of writing, affirming students’ cultural and linguistic heritage, balancing grammar corrections with rhetorical concerns, and offering explicit direction rather than appealing to native speaker intuitions (Thonus, 2003; see also Thonus, 2014). Generation 1.5 students may not identify as ESL and thus resist placement in ESL classes, which is understandable from the perspective of student identity, though ESL instructors may be better prepared to provide the explicit language instruction that Generation 1.5 students often require. Bunch and Kibler (2015) suggest that Generation 1.5 students may benefit from alternatives to developmental or ESL courses, such as models that integrate ESL curriculum and disciplinary content and other accelerated approaches, such as learning communities. Regardless of placement, instructors should validate Generation 1.5 students’ linguistic abilities as fully functional bilinguals able to use each of their languages effectively for different purposes (Valdés, 2003).
Students Who Speak Nonmainstream Varieties of English

World English-speaking students, particularly those from countries where restructured varieties are spoken (de Kleine, 2006, 2009), experience challenges with written academic English that are at least partly rooted in their failure to perceive the differences between the varieties of English.
Linguistically Diverse Students
For that reason, they need opportunities to understand the structural differences between their native varieties of English and the English required for academic success in U.S. colleges. The first step in this process is for World English-speaking students—and all other nonmainstream English speakers—to develop language awareness, i.e., to learn that all human languages and language varieties have grammatical and sound systems and lexicons, that none is better than another, and that all are systematic and complex in their own way. Exploring structural patterns in one’s own language variety, for instance through analyzing music lyrics, can help students realize that nonmainstream English is fully grammatical, albeit different from “standard” English grammar. This understanding then lays the foundation for writing instruction, as students begin to see that many of the “errors” they may be producing are actually patterns in their home language varieties. U.S.-born speakers of nonmainstream varieties of English can similarly benefit from a language awareness approach that allows them to develop a deeper understanding of language variation, including the properties that distinguish different varieties. Rejecting “traditional” responses to nonmainstream language varieties, which “correct, repress, eradicate, or subtract student language that differs from the standard written target,” Wheeler and Swords (2004, p. 473) propose that instructors employ contrastive analysis, a discovery-based approach that contrasts the grammatical patterns of nonmainstream English and mainstream English. This approach incorporates code-switching, i.e., choosing a variety that is appropriate for given situations, purposes, and audiences rather than right or wrong (Ball & Muhammad, 2003). 
For example, discussions about when and why we change the way we use language may be useful since effective literacy instruction should equip students for real-world situations that require communication with diverse speakers of different languages and language varieties (Smitherman & Villanueva, 2003). To employ contrastive analysis, instructors can also familiarize themselves with common grammar patterns in their students' writing, both to support the development of student writing and to build relationships. They can also incorporate literature written in different varieties, assignments in which students write in their own varieties, and discussions about the similarities and differences between varieties. Finally, linguistically informed feedback, discussed earlier, will help ensure that students who speak non-standardized varieties of English do not feel devalued or discouraged (Charity Hudley & Mallinson, 2011). It is important to note criticisms of contrastive analysis/code-switching for maintaining that "non-mainstream" varieties are only appropriate for use in the home or social settings (Williams-Farrier, 2016). Educators should reject approaches that position academic English as superior. Rather, they can draw on contrastive analysis/code-switching and position academic English as a complement to home languages rather than a replacement. That involves validating students' home languages by using them alongside academic English in the classroom.
Conclusions and Recommendations for Future Research

This chapter has highlighted the heterogeneity of linguistically diverse students and advocated a linguistically informed approach to instruction for all students whose home language differs from the language expected in college classrooms. Unfortunately, there is currently a lack of research on the academic performance of segments of this rapidly growing population at the college level (Kanno & Cromley, 2013; Kanno & Harklau, 2012), with hardly any research at the community college level (Almon, 2012), in particular for Generation 1.5 and AAE-speaking students. More research is needed to provide a better understanding of linguistically diverse students' experiences at the college level, particularly empirical studies that offer pedagogical strategies, and longitudinal studies that provide insights about the successes and challenges of linguistically diverse students, particularly under-researched populations. Finally, given the increase in linguistic diversity in college classrooms, all educators must be prepared to develop linguistically informed classroom practices that support the success of their students.
Note

1. Other terms used for this language variety are African American Vernacular English (AAVE), Black English, and Ebonics.
References

Abbate-Vaughn, J. (2009). Addressing diversity. In R. F. Flippo & D. C. Caverly (Eds.), Handbook of college reading and study strategy research (2nd ed., pp. 289–313). New York, NY: Routledge.
Al-Amir, M. (2010). Humanizing TESOL curriculum for diverse adult ESL learners in the age of globalization. International Journal of the Humanities, 8(8), 103–112.
Almon, C. (2012). Retention of English learner students at a community college. In Y. Kanno & L. Harklau (Eds.), Linguistic minority students go to college: Preparation, access and persistence (pp. 184–200). New York, NY: Routledge.
Balester, V. M. (1993). Cultural divide: A study of African-American college-level writers. Portsmouth, NH: Boynton/Cook/Heinemann.
Ball, A. F., & Muhammad, R. J. (2003). Language diversity in teacher education and in the classroom. In G. Smitherman & V. Villanueva (Eds.), Language diversity in the classroom: From intention to practice (pp. 76–88). Carbondale, IL: Southern Illinois University Press.
Belcher, D. (2012). Considering what we know and need to know about second language writing. Applied Linguistics Review, 3(1), 131–150. doi:10.1515/applirev-2012-0006
Bitchener, J., & Ferris, D. R. (2012). Written corrective feedback in second language acquisition and writing. New York, NY: Routledge.
Bunch, G. C., & Kibler, A. K. (2015). Integrating language, literacy, and academic development: Alternatives to traditional English as a second language and remedial English for language minority students in community colleges. Community College Journal of Research and Practice, 39(1), 20–33. doi:10.1080/10668926.2012.755483
Canagarajah, A. S. (2013). Literacy as translingual practice: Between communities and classrooms. New York, NY: Routledge.
Carroll, J., & Dunkelblau, H. (2011). Preparing ESL students for "real" college writing: A glimpse of common writing tasks ESL students encounter at one community college. Teaching English in the Two-Year College, 38, 271–281.
Champion, T. B., Cobb-Roberts, D., & Bland-Stewart, L. (2012). Future educators' perceptions of African American Vernacular English (AAVE). Online Journal of Education Research, 1(5), 80–89.
*Charity Hudley, A. H., & Mallinson, C. (2011). Understanding English language variation in U.S. schools. New York, NY: Teachers College Press.
Charity Hudley, A. H., & Mallinson, C. (2014). We do language: English language variation in the secondary English classroom. New York, NY: Teachers College Press.
Chen, D., & Yang, X. (2017). Improving active participation of ESL students: Applying culturally responsive teaching strategies. Theory and Practice in Language Studies, 7(1), 79–86. doi:10.17507/tpls.0701.10
Craig, H. K., & Washington, J. A. (2006). Malik goes to school: Examining the language skills of African American students from preschool–5th grade. New York, NY: Psychology Press.
Cummins, J. (2000). Language, power, and pedagogy: Bilingual children in the crossfire. Clevedon, UK: Multilingual Matters.
de Kleine, C. (2006). West African World English speakers in U.S. classrooms: The role of West African Pidgin English. In S. Nero (Ed.), Dialects, Englishes, creoles, and education (pp. 205–232). Mahwah, NJ: Erlbaum.
de Kleine, C. (2008). ESL teachers' perceptions of World English speakers. Paper presented at the International TESOL Convention, Washington, DC, April 3.
de Kleine, C. (2009). Sierra Leonean and Liberian students in ESL programs in the U.S.: The role of Creole English. In J. Kleifgen & G. C. Bond (Eds.), The languages of Africa and the diaspora: Educating for language awareness (pp. 178–198). Clevedon, UK: Multilingual Matters.
de Kleine, C. (2015). Learning and negotiating standard English in the U.S. classroom: Overlooked challenges of Creole English speakers. International Journal of Literacies, 23(1), 31–39.
*de Kleine, C., & Lawton, R. (2015). Meeting the needs of linguistically diverse students at the college level. College Reading & Learning Association. Retrieved from www.crla.net/images/whitepaper/Meeting_Needs_of_Diverse_Students.pdf
de Kleine, C., Lawton, R., & Woo, M. (2014). Unique or not? An analysis of error patterns in the writings of generation 1.5 students. Presentation at the 13th Symposium on Second Language Writing, Tempe, AZ, November 15.
DeKeyser, R., Alfi-Shabtay, I., & Ravid, D. (2010). Cross-linguistic evidence for the nature of age effects in second language acquisition. Applied Psycholinguistics, 31, 413–438. doi:10.1017/S0142716410000056
Di Gennaro, K. (2016). Searching for differences and discovering similarities: Why international and resident second-language learners' grammatical errors cannot serve as a proxy for placement into writing courses. Assessing Writing, 29, 1–14. doi:10.1016/j.asw.2016.05.001
DiCerbo, P. A., Anstrom, K. A., Baker, L. L., & Rivera, C. (2014). A review of the literature on teaching academic English to English language learners. Review of Educational Research, 84, 446–482.
Doolan, S. (2013). Generation 1.5 writing compared to L1 and L2 writing of first year composition. Written Communication, 30, 135–163. doi:10.1177/0741088313480823
Doolan, S. M. (2014). Comparing language use in the writing of developmental generation 1.5, L1, and L2 tertiary students. Written Communication, 31(2), 215–247. doi:10.1177/0741088314526352
Doolan, S. M. (2017). Comparing patterns of error in generation 1.5, L1 and L2 FYC writing. Journal of Second Language Writing, 35, 1–17. doi:10.1016/j.jslw.2016.11.002
Doolan, S. M., & Miller, D. (2012). Generation 1.5 written error patterns: A comparative study. Journal of Second Language Writing, 21(1), 1–22. doi:10.1016/j.jslw.2011.09.001
*Ferris, D. (2009). Teaching college writing to diverse student populations. Ann Arbor, MI: University of Michigan Press.
Ferris, D. (2011). Treatment of error in second language student writing. Ann Arbor, MI: University of Michigan Press.
Ferris, D., & Tagg, T. (1996). Academic oral communication needs of EAP learners: What subject-matter instructors actually require. TESOL Quarterly, 30(1), 31–58.
Fogel, H., & Ehri, L. C. (2000). Teaching elementary students who speak Black English vernacular to write in standard English: Effects of dialect transformation practice. Contemporary Educational Psychology, 25, 212–235.
García, O., & Wei, L. (2014). Translanguaging: Language, bilingualism and education. Basingstoke, UK: Palgrave Macmillan.
*Gay, G. (2010). Culturally responsive teaching: Theory, research, and practice (2nd ed.). New York, NY: Teachers College Press.
Gilyard, K. (2016). The rhetoric of translingualism. College English, 78(3), 284–289.
Gilyard, K., & Richardson, E. (2001). Students' right to possibility: Basic writing and African American rhetoric. In A. Greenbaum (Ed.), Insurrections: Approaches to resistance in composition studies (pp. 37–51). Albany, NY: SUNY Press.
Gorski, P. (2016). Rethinking the role of "culture" in educational equity: From cultural competence to equity literacy. Multicultural Perspectives, 18(4), 221–226. doi:10.1080/15210960.2016.1228344
*Horner, B., Lu, M., Royster, J. J., & Trimbur, J. (2011). Language difference in writing: Toward a translingual approach. College English, 73, 303–321.
Institute of International Education. (2016). Open doors data: International students. Retrieved from www.iie.org/Research-and-Publications/Open-Doors/Data/International-Students
Kanno, Y., & Cromley, J. G. (2013). English language learners' access to and attainment in postsecondary education. TESOL Quarterly, 47(1), 89–121. doi:10.1002/tesq.49
*Kanno, Y., & Grosik, S. (2012). Immigrant English learners' transitions to university: Student challenges and institutional policies. In Y. Kanno & L. Harklau (Eds.), Linguistic minority students go to college: Preparation, access, and persistence (pp. 130–147). New York, NY: Routledge.
*Kanno, Y., & Harklau, L. (Eds.). (2012). Linguistic minority students go to college: Preparation, access, and persistence. New York, NY: Routledge.
*Ladson-Billings, G. (1995). But that's just good teaching! The case for culturally relevant pedagogy. Theory into Practice, 34(3), 159–165.
*Lippi-Green, R. (2012). English with an accent: Language, ideology, and discrimination in the United States (2nd ed.). New York, NY: Routledge.
Lu, M. Z., & Horner, B. (2016). Introduction: Translingual work. College English, 78(3), 207–218.
Lucas, T., Villegas, A. M., & Freedson-Gonzalez, M. (2008). Linguistically responsive teacher education: Preparing classroom teachers to teach English language learners. Journal of Teacher Education, 59, 361–373.
National Council of Teachers of English. (2006). NCTE position paper on the role of English teachers in educating English language learners (ELLs). Retrieved from www.ncte.org/positions/statements/teacherseducatingell
Nero, S. (2001). Englishes in contact: Anglophone Caribbean students in an urban college. Cresskill, NJ: Hampton Press.
Nero, S. (2014). Classroom encounters with Caribbean Creole English: Language, identities, pedagogy. In A. Mahboob & L. Barratt (Eds.), Englishes in multilingual contexts (pp. 33–46). Dordrecht, the Netherlands: Springer.
Ortmeier-Hooper, C. (2013). The ELL writer: Moving beyond basics in the secondary classroom. New York, NY: Teachers College Press.
*Reynolds, D. W. (2009). One on one with second language writers: A guide for writing tutors, teachers, and consultants. Ann Arbor, MI: University of Michigan Press.
Rickford, J. R., Sweetland, J., Rickford, A. E., & Grano, T. (2013). African American, Creole, and other vernacular Englishes in education: A bibliographic resource (Vol. 4). New York, NY: Routledge.
*Roberge, M., Siegal, M., & Harklau, L. (2009). Generation 1.5 in college composition: Teaching academic writing to U.S.-educated learners of ESL. New York, NY: Routledge.
Scovel, T. (1988). A time to speak: A psycholinguistic inquiry into the critical period for human speech. Rowley, MA: Newbury House.
Siegel, J. (2010). Second dialect acquisition. New York, NY: Cambridge University Press.
Smith, M. W., & Wilhelm, J. D. (2007). Getting it right: Fresh approaches to teaching grammar, usage, and correctness. New York, NY: Scholastic.
Smitherman, G., & Villanueva, V. (Eds.). (2003). Language diversity in the classroom: From intention to practice. Carbondale, IL: Southern Illinois University Press.
TESOL International Association. (2013, March). Overview of the common core state standards initiatives for ELLs. Alexandria, VA: Author.
Thomas, W. P., & Collier, V. (2002). A national study of school effectiveness for language minority students' long-term academic achievement. Santa Cruz, CA: Center for Research on Education, Diversity & Excellence, University of California.
Thonus, T. (2003). Serving generation 1.5 learners in the university writing center. TESOL Journal, 12(1), 17–24.
*Thonus, T. (2014). Tutoring multilingual students: Shattering the myths. Journal of College Reading & Learning, 44(2), 200–213. doi:10.1080/10790195.2014.906233
Valdés, G. (2003). Expanding definitions of giftedness: Young interpreters of immigrant background. Mahwah, NJ: Lawrence Erlbaum Associates.
Villegas, A. M., & Lucas, T. (2002). Preparing culturally responsive teachers: Rethinking the curriculum. Journal of Teacher Education, 53(1), 20–32.
*Wheeler, R. S. (2016). "So much research, so little change": Teaching standard English in African American classrooms. Annual Review of Linguistics, 2, 367–390. doi:10.1146/annurev-linguistics-011415-040434
Wheeler, R. S., & Swords, R. (2004). Codeswitching: Tools of language and culture transform the dialectally diverse classroom. Language Arts, 81, 470–480.
*Wheeler, R. S., & Swords, R. (2006). Code-switching: Teaching standard English in urban classrooms. Urbana, IL: National Council of Teachers of English.
Wheeler, R. S., & Swords, R. (2010). Code-switching lessons: Grammar strategies for linguistically diverse writers: Grades 3–6. Portsmouth, NH: Firsthand Heinemann.
Wheeler, R. S., Cartwright, K. B., & Swords, R. (2012). Factoring AAVE into reading assessment and instruction. The Reading Teacher, 65(6), 416–425. doi:10.1002/TRTR.01063
Wiley, T. G. (2005). Literacy and language diversity in the United States (2nd ed.). Washington, DC: Center for Applied Linguistics.
Williams, K. C. (2012). The role of instructors' sociolinguistic language awareness in college writing courses: A discourse analytic/ethnographic approach (Doctoral dissertation). Retrieved from Georgetown Digital Repository at https://repository.library.georgetown.edu/handle/10822/557714
Williams-Farrier, B. J. (2016). Signifying, narrativizing, and repetition: Radical approaches to theorizing African American language. Meridians: feminism, race, transnationalism, 15(1), 218–242. doi:10.2979/meridians.15.1.12
*Wright, W. E. (2015). Foundations for teaching English language learners: Research, theory, policy, and practice (2nd ed.). Philadelphia, PA: Caslon Publishing.
14

Study and Learning Strategies

Claire Ellen Weinstein
The University of Texas at Austin

Taylor W. Acee
Texas State University
This chapter highlights the importance of strategic and self-regulated learning for college student success and the development of lifelong learners. The authors review early research on study and learning strategies and show how this research led to the development of more comprehensive and dynamic models of learning that emphasize interactions among cognitive, metacognitive, motivational, affective, behavioral, and environmental factors. One such model is discussed: the Model of Strategic Learning (MSL). The MSL organizes various factors that underlie learning into four major components: skill, will, self-regulation, and the academic environment. Finally, the authors discuss approaches for teaching and assessing strategic learning in college.
Study and Learning Strategies

Societies across the globe are experiencing a growing need for their citizens to become more strategic and self-regulated lifelong learners who can proactively develop knowledge and skills on their own to meet the rapidly evolving demands of the global economy and modern workforce (McGarrah, 2014; Pew Research Center, 2016). However, as the need for a more highly educated, skilled, and adaptive workforce grows, so does the number of students entering postsecondary education who are not sufficiently prepared to reap the benefits that college has to offer. For example, a national study on high school graduates who took the ACT (a commonly used college entrance examination) found that 61 percent met college-readiness benchmarks in English, 44 percent did so in reading, 41 percent did so in mathematics, 36 percent did so in science, and 26 percent did so in all four areas (ACT, 2016). The combination of increasing enrollments in higher education, high proportions of students entering college academically underprepared, and consistently low graduation rates (Aud et al., 2011) has led to broader definitions of college readiness (Conley, 2007) and a stronger focus on strategic and self-regulated learning (Weinstein & Acee, 2013). Although content knowledge as well as skills in reading, writing, and mathematics are necessary for college success, they are not sufficient—students must also use learning strategies to actively improve and take responsibility for their own learning (Fong, Zientek, Ozel, & Phelps, 2015; Robbins, Oh, Le, & Button, 2009). Learning strategies include any cognitive, metacognitive, motivational, affective, or behavioral process or action that facilitates the generation of meaningful and retrievable memories, and increases the probability of learning and the transfer of learning to new situations (Weinstein & Acee, 2013; Weinstein, Husman, & Dierking, 2000).
In this chapter, we review research on learning strategies; describe the MSL; discuss approaches for teaching and assessing strategic learning; and describe ways in which strategic learning instruction can be incorporated into various programs, courses, and interventions within postsecondary educational settings.
A Brief History of Learning Strategies

The quest for improving learning and memory can be traced back to the ancient Greeks, with the study of mnemonics and Socratic teaching methods. For example, according to Cicero (106–43 BCE), Simonides (556–468 BCE) introduced a mnemonic technique, the method of loci, after he was able to remember the guests at a banquet by recalling where they sat around the table (Thomas, 2017). Simonides postulated that memory of an ordered list of items could be enhanced by selecting an interconnected set of locations, i.e., loci (e.g., landmarks on a commonly traveled path), and then associating images of the to-be-remembered items with those locations. Mnemonics are artificial memorization techniques, such as song mnemonics (e.g., the ABC song) and the first-letter method (e.g., HOMES to remember the five Great Lakes), that can aid recall of information (although it is noted that these techniques are not designed to facilitate deep-level learning on more complex learning tasks; Weinstein & Mayer, 1986). The ancient Greeks developed and used various mnemonic techniques, but systematic empirical investigations of learning strategies did not begin until the late 1960s and the 1970s, when models of learning were shifting from viewing learners as passive receptacles of knowledge to viewing them as autonomous individuals who construct knowledge through active information processing (Weinstein et al., 2000). Early research on learning strategies focused on understanding the cognitive processes underlying mnemonic systems and their effects on recall (Wood, 1967). In the 1970s, this body of work was enriched by Wittrock's (1974) theory of generative learning, which postulated that individuals can enhance their learning by actively connecting new information with prior knowledge and elaborating this information with detail, new concepts, and idiosyncratic thought.
Empirical investigations also provided evidence that active information processing was critical to learning and could be taught. For example, Weinstein (1975, 1978) showed that imaginal and verbal elaboration strategies could be taught to students and used to improve learning on free-recall, paired-associate, and reading comprehension tasks. Flavell’s (1979) work on metacognition added another layer to learning strategies research because it implied that learning could be enhanced by thinking about, monitoring, and controlling one’s own thinking and use of strategies. Researchers in the mid-1980s came to recognize that cognitive learning strategies were critical for effective learning but insufficient to produce lasting, transformative improvements in students’ learning and academic achievement, in part because findings indicated that students were not likely to use cognitive learning strategies on their own in nonexperimental contexts (Pressley & McCormick, 1995; Zimmerman, 2008). As research on learning strategies grew, definitions of learning strategies expanded from a narrower focus on cognitive information processing strategies to a broader focus on metacognitive, motivational, and affective strategies. In a seminal publication on learning strategies, Weinstein and Mayer (1986) proposed five general categories of learning strategies: rehearsal strategies (basic and complex), elaboration strategies (basic and complex), organizational strategies (basic and complex), comprehension monitoring strategies, and affective and motivational strategies (see Chapter 12 in this volume for a discussion of these strategies in postsecondary reading contexts). Zimmerman (2008) pointed to a symposium at the American Educational Research Association, and corresponding special issue (Zimmerman, 1986), as a critical historical moment in which researchers converged around the goal of integrating theory and
research on “learning strategies, metacognitive monitoring, self-concept perceptions, volitional strategies, and self-control” (p. 167). Contemporary models of strategic and self-regulated learning (e.g., Pintrich, 2004; Weinstein et al., 2000; Zimmerman, 2000) emphasize the idea that effective learning emerges from interactions among cognitive, metacognitive, motivational, affective, behavioral, and environmental factors. Moreover, these models recognize the learner as an autonomous agent who has the power to proactively and intentionally use a wide range of strategies to improve learning and performance. Although these models are generic and applicable to learning across a life span, they have frequently been used to inform research and practice in postsecondary educational settings. Next, we briefly describe general categories of cognitive learning strategies proposed by Weinstein and Mayer (1986), and then go on to describe Weinstein’s MSL (see Figure 14.1), which expands on her earlier work.
Cognitive Learning Strategies

Weinstein and Mayer (1986) summarized much of the early research on learning strategies and proposed a taxonomy of learning strategies. Here, we review three categories of cognitive learning strategies that operate directly on the information to be learned: rehearsal, elaboration, and organizational strategies. Rehearsal strategies involve repeating and holding in one's mind the information that is to be learned (Weinstein & Mayer, 1986; Weinstein, Acee, & Jung, 2011). Examples of rehearsal strategies include rereading assigned texts, listening over and over again to recorded lectures, restating definitions of words verbatim, repeating the steps in a process over and over again, highlighting material in a text, copying information verbatim into notes, and creating and cycling through flash cards. Although these strategies can help college students select information to be learned and hold that information in working memory for further elaboration, they are not particularly useful for generating new ideas and establishing meaningful connections among ideas. Passive rehearsal strategies involve mindless repetition that does not require effortful cognitive processing and, therefore, does not result in meaningful learning and long-term retention of information. Active rehearsal strategies involve more active engagement with the information one is trying to learn, which creates further opportunities to elaborate on this information (Ornstein, Medlin, Stone, & Naus, 1985; Simpson, Olejnik, Tan, & Supattathum, 1994). For example, repeating a key principle verbatim may help one apply the principle when solving problems and enrich one's understanding of the principle. Elaboration strategies involve actively processing, transforming, or adding to the information one is trying to learn and integrating this information with prior knowledge (Weinstein & Mayer, 1986; Weinstein et al., 2011).
Examples of elaboration strategies include the following: paraphrasing, summarizing, creating analogies, compare-and-contrast strategies, generating and answering test questions, visualizing a process unfolding, applying information, and teaching material to someone else. Paraphrasing and summarizing are among the simplest types of elaboration strategies. Unlike mindless repetition, which does not involve active processing (e.g., one can restate something verbatim and have no idea what it means), paraphrasing and summarizing involve a higher degree of cognitive processing because they require students to use their own ideas and idiosyncratic thinking to describe the information they are trying to learn in their own words. Accordingly, summarizing information in their own words has been found to help increase underprepared college students’ learning (King, 1992). More effortful forms of elaboration, such as self-explanation (Chi, De Leeuw, Chiu, & LaVancher, 1994) and teaching material to someone else (for research on the benefits of tutoring on tutors, see Cohen, Kulik, & Kulik, 1982; Roscoe & Chi, 2007), can help students integrate new information with
229
Claire Ellen Weinstein and Taylor W. Acee
existing knowledge and make the information more accessible for later recall across a range of simple and more complex learning tasks (Pizzimenti & Axelson, 2015; Van Rossum & Schenk, 1984; Willoughby, Wood, Desmarais, Sims, & Kalra, 1997). Elaboration strategies can also help students to identify gaps in their understanding and correct misunderstandings more effectively and efficiently.

Organization strategies are another form of elaboration; they involve actively processing information and organizing or reorganizing it, typically in a graphic form (Weinstein & Mayer, 1986; Weinstein et al., 2011). Examples of organization strategies include the following: creating outlines, concept maps, concept matrixes, cause-effect diagrams, relational diagrams, hierarchical graphic organizers, sequential graphic organizers, and cyclical graphic organizers (Cummins, Kimbell-Lopez, & Manning, 2015). Using a scaffolding procedure to teach undergraduate students how to create and use graphic organizers has been found to increase their use of graphic organizers when taking notes and their course performance (Robinson, Katayama, Beth, Odom, Hsieh, & Vanderveen, 2006). Graphically organizing (or reorganizing) information requires students to actively process information and think critically about ways to create a visual display or graphic representation of that information. This process can be more or less complex. For example, creating an outline of the material to be learned does not require complex cognitive processing, whereas creating an original cause-effect diagram does. Accordingly, higher-level cognitive processing should result in deeper understanding and learning. However, creating a complex graphic organizer can take a considerable amount of time, and students need to know under which conditions it is both effective and efficient to use time-intensive organization strategies (a point we will return to later when discussing different types of strategy knowledge).
Model of Strategic Learning

Strategic learners are goal-directed, autonomous learners who have the skill, will, and self-regulation needed to survive and thrive in different postsecondary educational contexts (Weinstein et al., 2000). The four components of the MSL are skill, will, self-regulation, and the academic environment, and each component comprises various elements, or variables, that have been found to affect learning and success. For inclusion in the model, each element listed under the components had to meet four criteria: (a) there had to be evidence suggesting that the element related to academic success, (b) the nature of the relationship had to be causative, (c) the element had to account for a meaningful amount of variation in students’ success, and (d) the element had to be amenable to change through an educational intervention. The MSL is an emergent model, meaning that successful learning emerges in the interactions among the various elements within the model and cannot be traced back to any one element—the whole is greater than the sum of its parts. In addition, the MSL emphasizes learning strategies related to students’ skill, will, and self-regulation that are under their direct control and can be proactively and intentionally used by students to improve their own learning and academic success. The MSL also highlights environmental factors that are not under students’ direct control but of which students should nevertheless generate awareness in order to be more strategic. What follows is a brief description of each component of the MSL.
Skill

The skill component of the MSL emphasizes college students’ knowledge about and skill in effectively and efficiently using a variety of learning strategies and thinking skills, such as rehearsal, elaboration, and organizational strategies; identifying important information for reaching
Study and Learning Strategies
[Figure 14.1 is a diagram of the Model of Strategic Learning (© C.E. Weinstein, 2006). At its center is the learner, with individual differences, situated within the requirements of the current learning activity, assignment, or test. Three components surround the learner. SKILL: knowledge about self as learner, the nature of the academic task, learning strategies and skills, content (prior knowledge), and the learning context; skill in using learning strategies and skills, identifying important information for reaching learning goals (e.g., finding main ideas), reading and listening comprehension, note-taking and note-using, preparing for and taking tests, and using reasoning and problem-solving skills. WILL: setting, analyzing, and using goals; future time perspective; motivation for achievement (i.e., academic learning goals, interests, and values); emotions and feelings about learning (e.g., curiosity, worry and anxiety, apathy, joy, anger, and excitement); beliefs (e.g., enabling/self-sabotaging beliefs, academic self-efficacy, and attributions for academic outcomes); commitments to reaching goals; and creating a positive mind-set toward learning and avoiding self-sabotaging thoughts and behaviors. SELF-REGULATION: time managing/dealing with procrastination; concentrating; comprehension monitoring; a systematic approach to learning and accomplishing academic tasks (e.g., setting goals, reflecting, brainstorming and creating a plan, selecting, implementing, monitoring, and formatively evaluating progress, modifying if necessary, and summatively evaluating outcomes); coping with academic worry and anxiety; managing motivation for learning and achievement; volitional control (managing commitment and intention); and academic help seeking. Around the outside of the diagram are factors in the academic environment: social context/support, teacher beliefs/expectations, and available resources.]
Figure 14.1 Weinstein’s MSL organizes malleable intraindividual factors that affect students’ learning under three components—skill, will, and self-regulation. These factors are under students’ direct control and amenable to change through educational intervention. Around the outside of the rectangle, the MSL depicts factors in the academic environment about which students can generate knowledge to be more strategic, even though these factors are outside of their direct control. Within the triangle, the MSL recognizes that each learner has a unique set of individual differences that may affect their learning and response to intervention. The MSL purports that strategic learning is an emergent property resulting from interactions among the academic environment and students’ skill, will, self-regulation, and individual differences.
learning goals; reading and listening comprehension; note-taking and note-using; preparing for and taking tests; and using reasoning and problem-solving (see Figure 14.1). Strategic and self-regulated learning involves developing declarative knowledge about a variety of different learning strategies, learning how to use these strategies, and determining which strategies, or combination of strategies, work best under different conditions (Weinstein & Acee, 2013). In addition to knowledge of learning strategies and skills, the skill component of the MSL also comprises other types of knowledge. These are as follows: knowledge of one’s self as a learner (e.g., knowing about one’s learning strengths, weaknesses, study habits, learning preferences, and interests across different academic tasks and contexts), knowledge of the academic task (e.g., knowing about the requirements and performance demands of different academic tasks), prior knowledge about the content one is trying to learn (e.g., knowing about one’s level of prior knowledge, how prior knowledge affects learning, and how to strategically activate prior knowledge to increase learning), and knowledge of current and future contexts in which the information one is learning could be transferred and applied (e.g., knowing that the content in an algebra course is applicable to one’s current introductory physics course and the calculus course one will need to
take in the future). Theory and research on self-regulated learning acknowledge the important influence of these types of knowledge on learning and performance (see Zimmerman & Schunk, 2011). For example, Winne (2011) recognized the importance of generating knowledge of the academic task in relation to the individual student, the broader context, and available strategies for accomplishing the task. Research has also emphasized the important role of prior knowledge in student learning; for example, students have been found to benefit differentially from note-taking (Wetzels, Kester, van Merrienboer, & Broers, 2011), learning strategies (Wetzels, Kester, & van Merrienboer, 2011), and instructional methods (Acuña, Rodicio, & Sánchez, 2011) based on their levels of prior knowledge. In sum, skill refers to developing critical types of knowledge and learning how to effectively and efficiently use learning strategies and other thinking skills to reach learning goals under different circumstances.
Will

Will refers to the “wanting to” of learning and includes various motivational and affective factors related to strategic learning (e.g., beliefs, values, goals, and emotions) that may contribute to or detract from learning and academic achievement in college. Examples of elements under the will component of the MSL include the following: setting, analyzing, and using goals; elaborating and expanding one’s future time perspective; making learning personally meaningful or interesting; developing enabling beliefs and deconstructing self-sabotaging beliefs; and generating a positive mindset toward learning. Motivation has been defined as “the process whereby goal-directed activity is instigated and sustained” (Schunk, Meece, & Pintrich, 2014, p. 377) and as “a person’s willingness to exert physical or mental effort in pursuit of a goal or outcome” (VandenBos, 2007, p. 594). Indexes of motivation include choice, effort, and persistence, but motivation also involves underlying psychological factors that influence motivated behaviors, such as students’ beliefs (e.g., self-efficacy and attribution beliefs), values (e.g., interest and utility value), goals (Eccles & Wigfield, 2002), and academic emotions (Pekrun, 2006). Moreover, college students can use motivational regulation strategies to intentionally influence their own motivation, for example, by modifying their beliefs, values, goals, emotions, and the environment. Wolters (2003) synthesized research on motivational regulation strategies and proposed the following strategies: self-consequating, goal-oriented self-talk, interest enhancement, environmental structuring, self-handicapping, attribution control, efficacy management, and emotion regulation. Self-consequating refers to self-imposed rewards and punishments for reaching, or failing to reach, academic goals (e.g., making going to a party contingent on finishing a paper).
Goal-oriented self-talk involves using internal dialogue to make the reasons for task engagement more salient and motivational (e.g., telling oneself that continuing to study is important for a future career and making one’s family proud). Interest enhancement involves intentionally trying to increase one’s immediate enjoyment or situational interest in a task (e.g., making studying into a game and incorporating fantasy or creativity into a task). Environmental structuring refers to controlling one’s environment in order to reduce the probability of being distracted (e.g., choosing to study in a quiet location in the library and turning off one’s cell phone before studying). Self-handicapping refers to the creation of obstacles that make task success less likely (e.g., partying the night before a test and not studying) in order to create alternative reasons for failing and to avoid attributing failure to low ability. Although this strategy is not advisable, it may be motivational to the extent that it helps to protect students’ self-worth and keeps them from completely withdrawing from academic activities. Attribution control involves intentionally attributing one’s successes and failures to factors that are under one’s direct control in order to increase motivation (e.g., not getting caught up in blaming the instructor and the test for one’s low performance, and emphasizing effort and use of learning strategies as reasons for succeeding on a lab report). Efficacy management
refers to the deliberate modification of one’s self-efficacy (i.e., confidence in one’s capabilities to successfully perform academic tasks). Proximal goal setting is one strategy that students can use to increase their self-efficacy. This strategy involves breaking a larger task down into simpler parts that are easier to manage and setting specific goals for accomplishing each part. Emotion regulation as a motivational regulation strategy involves controlling one’s emotional experiences in order to generate and sustain effort to reach academic goals (e.g., controlling one’s breathing when experiencing test anxiety and searching for positive self-evaluations when feeling a low sense of belonging). Acee and Weinstein (2010) proposed value-reappraisal strategies as another type of motivational regulation strategy that college students could use to make academic tasks more personally valuable. These strategies include the following: generating rationales for why an academic task is personally meaningful or useful, imagining future possible selves in which having developed knowledge and skills in a course would be worthwhile, and contrasting the pros and cons of task engagement. Research has also shown that college students who are at risk of low performance may benefit from social-psychological interventions that aim to help them develop a growth mindset toward learning (Aditomo, 2015) and make personal relevance connections with the material they are trying to learn (Harackiewicz, Canning, Tibbetts, Priniski, & Hyde, 2015). The will component of the MSL helps to organize important factors that influence students’ motivation and strategies students can use to generate and sustain motivation. Developing skill and will are critical aspects of strategic learning, but students must also actively manage their skill and will as well as the entire learning process.
Self-Regulation

Self-regulation involves actively planning, monitoring, and controlling one’s thoughts, feelings, and behaviors in order to reach self-set academic goals (Weinstein & Acee, 2013). Accordingly, self-regulated learners are goal directed, use effective and efficient learning strategies, monitor and evaluate their goal progress, and learn from their mistakes and successes. Self-regulation can occur on global and real-time levels. On a more global level, for example, college students may actively manage their time over multiple weeks in order to allocate enough resources to study for all of their courses. They may also monitor their progress and adapt their plans as time unfolds. On a more micro level, for example, students may monitor their attention and comprehension on a specific learning activity by periodically checking to see if they are paying attention and learning what they are studying. The self-regulation component of the MSL includes elements such as time managing, focusing attention and maintaining concentration over time, comprehension monitoring, using a systematic approach to learning, and academic help seeking (see Figure 14.1). Key to self-regulation is using a systematic approach for reaching learning goals, developing (and refining) learning strategies for attaining these goals, and automatizing effective and efficient study routines for different situations. Zimmerman (2000) proposed three cyclical phases of self-regulation: forethought (e.g., setting an academic goal and strategically planning how to reach it), performance/volitional control (e.g., implementing strategies for reaching an academic goal, monitoring goal progress, and regulating one’s thoughts, feelings, and behaviors toward the goal), and self-reflection (e.g., summatively evaluating goal attainment, managing reactions to success or failure, and reflecting about which strategies worked and which did not).
Self-regulation is cyclical in that each phase feeds into the next, and the self-reflection phase feeds forward into the forethought phase for similar future goals. For example, during the self-reflection phase, students might determine that creating a concept map is a useful learning strategy for studying economics and subsequently decide to use this strategy again in their next economics course. Building on Zimmerman’s model, Pintrich (2004) postulated that students can intentionally regulate their cognition, motivation/affect, behavior, and environment during each self-regulatory
phase. Systematically cycling through these self-regulatory phases can help college students to develop and refine their use of learning strategies related to both skill and will. For example, Bol, Campbell, Perez, and Yen (2016) designed a self-regulated learning intervention for students in a community college developmental mathematics course. Significant differences between the intervention and control groups suggested that training in self-regulated learning improved students’ mathematics achievement and use of learning strategies related to metacognitive self-regulation and time/study environmental management. In sum, the self-regulation component of the MSL emphasizes processes for managing and regulating one’s own learning as well as developing systematic approaches for the successful completion of academic tasks and the development of more effective and efficient learning strategies and study routines.
Academic Environment

Unlike the other components of the MSL, the academic environment is usually not under the student’s control (Weinstein & Acee, 2013), but students’ perceptions of it can influence their studying behaviors (Sun & Richardson, 2016). Accordingly, it is important that students develop knowledge about elements within the academic environment so they can understand both the constraints and the opportunities it provides. For example, different college instructors often have different expectations for successful performance on exams, papers, class participation, and other course requirements. Students need to identify these expectations so they can use elements from the skill and self-regulation components to adjust their study efforts appropriately. Learning how to take advantage of available resources for facilitating learning and dealing with comprehension problems is another important element of the academic environment component of the MSL. Most postsecondary educational settings provide student support services, such as learning centers, tutoring centers, reading and math labs, and advising/counseling resources. It is very helpful for students to know about these resources and the services they provide so they can access them if they encounter difficulties with elements from the other three components. Another element in the academic environment component is knowing about and using social context and support to help achieve academic success. Getting to know a couple of students in one’s classes, forming or joining study groups, and turning to friends and family for emotional support are all examples of ways students can use their social environments to help them strategically focus their efforts and succeed in college. Finally, understanding the requirements of different academic tasks and how to prepare for them is another important element of the academic environment component of the MSL.
For example, college students need to know how to write a paper and what is required for the papers in each of their courses. They need to be clear about what material from a course they will be expected to know for an exam and what type of exam they will be taking (e.g., short-answer test, essay test, multiple-choice test, etc.). Knowledge about the elements of the academic environment can help students strategically navigate, utilize, and integrate into the academic and social environments of postsecondary learning contexts.
Strategy Instruction

Teaching students learning strategies related to skill, will, and self-regulation involves using direct instruction (e.g., telling students what the strategy is and how it can help them to be more effective and efficient learners), modeling (e.g., demonstrating how to use the strategy), and guided practice with feedback (e.g., providing guidance as students practice the strategy, gradually removing support until they can do it on their own, and providing them with informative feedback on their strategy development; Weinstein & Acee, 2013). These instructional methods can help college
students learn about various strategies, get practice using them, and eventually learn how to use them on their own across different situations. There are three primary types of strategy knowledge that can be targeted in instruction: declarative (i.e., knowing about learning strategies), procedural (i.e., knowing how to effectively and efficiently use learning strategies), and conditional (i.e., knowing the conditions under which particular learning strategies are more or less effective and efficient; Weinstein & Acee, 2013). Another facet of strategy instruction is to help students develop a repertoire of learning strategies. Having a repertoire of learning strategies can help students to flexibly adapt to a variety of academic and everyday learning situations. Also, developing a repertoire of strategies is important so that when a comprehension error occurs or learning is not proceeding at an efficient pace, the student has other strategies to use.
Strategic Learning Assessments

Researchers have used a number of different methods for assessing students’ strategic and self-regulated learning, such as trace data (Hadwin, Nesbit, Jamieson-Noel, Code, & Winne, 2007), think-aloud protocols, checklists, and questionnaires (Zimmerman, 2008). Strategic learning assessments have been used for the purposes of basic research, program evaluation, improving instruction, and enhancing students’ self-awareness. Within the context of college teaching and learning, strategic learning assessments help to provide (a) an initial snapshot of students’ status as strategic learners, (b) a tool to help students become more aware of their own strengths and areas needing improvement, (c) a basis for prescribing interventions and individualizing instruction, (d) a pretest-posttest assessment to both demonstrate student progress and identify areas needing further development, and (e) an evaluation tool to help determine the effectiveness of interventions for students. Diagnostic-prescriptive self-report questionnaires have most often been used to assess strategic learning in college contexts, and some of the most common assessments include the Learning and Study Strategies Inventory (LASSI; Weinstein, Palmer, & Acee, 2016a), the Motivated Strategies for Learning Questionnaire (Pintrich, Smith, Garcia, & McKeachie, 1991), the Approaches and Study Skills Inventory for Students (Entwistle, Tait, & McCune, 2000), and the Study Process Questionnaire (Biggs, Kember, & Leung, 2001). Although these instruments all address learning and motivational processes and strategies, the test framework and operational definitions of each scale are different. A review and comparison of each instrument is beyond the scope of this chapter. For our purposes here, we focus on the LASSI because it is based on the MSL described above. The LASSI (Weinstein et al., 2016a, 3rd ed.)
is a 60-item diagnostic-prescriptive self-report inventory with 10 scales that measure different factors related to students’ skill, will, and self-regulation in postsecondary educational and training contexts (Weinstein, Palmer, & Acee, 2016b). The LASSI provides students with national norms that can help them generate self-awareness about their strengths and the areas they may want to target for improvement. The LASSI scales primarily falling under the skill component of the MSL are Information Processing, Selecting Main Ideas, and Test Strategies. These scales assess students’ use of learning strategies and study skills for identifying, acquiring, and self-generating new information and skills and demonstrating their understanding on exams and other postsecondary assessments. The LASSI scales most related to the will component of strategic learning are Attitude, Motivation, and Anxiety. These scales assess students’ attitudes and interest in college, their commitment and self-discipline to expend effort and stay on track, and the extent to which anxiety and worry interfere with their learning and performance. The LASSI scales most related to the self-regulation component of strategic learning are Concentration, Self-Testing, Time Management, and Using Academic Resources. These scales assess how students self-manage the entire learning process by focusing their attention and maintaining their concentration on academic tasks; monitoring their comprehension and checking to see if they have met the learning demands for their courses; strategically managing
their time to reach their learning goals; and using learning support resources, such as instructors’ office hours, study groups, review sessions, learning centers, writing centers, mentors, and tutors. The early development work, pilot testing, and field testing of the LASSI helped to establish reliability and validity evidence (Weinstein, 1987). For example, content validity evidence for each scale was established using expert raters and interviews with students and users, and construct-irrelevant variance was reduced by removing items that correlated with social desirability. Furthermore, evidence of test-retest and internal-consistency reliability was provided (Weinstein, 1987). Since the development of the LASSI (Weinstein, Schulte, & Palmer, 1987, 1st ed.), researchers have used and examined the psychometric properties of the LASSI in various postsecondary contexts with students of diverse cultural and linguistic backgrounds using different editions, versions, and translations of the LASSI. Studies have provided reliability (Weinstein, 1987; Weinstein & Palmer, 2002; Weinstein et al., 2016b) and validity (Carson, 2011; Flowers, Bridges, & Moore, 2012; Marrs, Sigler, & Hayes, 2009) evidence supporting the LASSI. Other studies have found the LASSI to be sensitive to pretest-posttest changes in response to interventions (Dill et al., 2014; Nist, Mealey, Simpson, & Kroc, 1990). However, results have been mixed regarding the factor structure of the LASSI (Cano, 2006; Finch, Cassady, & Jones, 2016; Yip, 2013).
Strategic Learning Interventions

Strategic learning interventions may be offered in a variety of different forms, vary in depth, and be designed for college students in general or specific subgroups (Weinstein & Acee, 2013). For example, less-intensive interventions may include voluntary workshops and handouts on strategic learning offered by a learning center or other program on campus. Strategic learning instruction may also be systematically integrated into training programs for tutors, supplemental instructors, mentors, and advisors so that they can teach students learning strategies and help them cultivate attitudes and beliefs that will facilitate their learning of course content (Acee, Weinstein, Jordan, Dearman, & Fong, 2012; Acee, Weinstein, Dacy, Han, & Clark, 2012). Another approach is for content instructors to use a metacurriculum that involves teaching strategic learning within a credit-bearing college course, such as economics (e.g., De Corte & Masui, 2004) or developmental mathematics (e.g., Mireles, Acee, & Gerber, 2014). In addition, online interventions, such as gStudy (Winne et al., 2006) and the Becoming a Strategic Learner: LASSI Instructional Modules (Weinstein, Woodruff, & Awalt, 2002), have been used to teach students to become more strategic and self-regulated learners. The most intensive and comprehensive strategic learning interventions are semester-long, credit-bearing, learning-to-learn courses, and the pairing of learning-to-learn courses with a content course that uses a metacurriculum.
Learning-to-learn courses (also referred to as learning frameworks or strategic learning courses) are designed to (a) teach students conceptual frameworks of how learning works; (b) facilitate students’ effective and efficient use of learning strategies related to skill, will, and self-regulation; (c) prompt students to apply strategic learning principles to a variety of academic and everyday learning situations; (d) support students’ autonomous motivation and self-growth; and (e) encourage help-seeking and academic integration. Research has suggested that learning-to-learn courses can help college students improve their use of cognitive and motivational strategies (Hofer & Yu, 2003) and contribute to students’ success in college (Weinstein et al., 1997; for a more detailed review of research on these courses, see Chapter 17 in this volume). For example, the senior author developed a three-credit, learning-to-learn course at the University of Texas at Austin called Individual Learning Skills (EDP 310). This multi-section, coordinated course used the MSL as the guiding conceptual framework, the LASSI as a pretest-posttest assessment, and the LASSI Instructional Modules as a major source of course content. Course instructors were graduate students who participated in extensive and ongoing training. In a study examining the effectiveness of this course, Weinstein et al. (1997) found
that the general population of students from the university who did not take the learning-to-learn course had a five-year graduation rate of 55 percent, whereas students who took the course had a five-year graduation rate of 71 percent, despite having significantly lower verbal and quantitative SAT scores (the SAT is a commonly used college entrance exam).
Conclusion

Fostering strategic and self-regulated learning is critical to college students’ academic success and the development of a workforce that is more flexible and adaptive to the changing demands of the modern world. Helping students to build a repertoire of learning strategies is one way to prepare them to continue learning throughout their lives. Improving students’ ability to meaningfully encode information and retrieve it from memory involves teaching them information processing strategies, such as rehearsal, elaboration, and organization strategies. To be successful, however, students must also self-generate motivation to learn and self-manage the entire learning process. Teaching students to improve their motivation for learning may involve providing them with strategies to create personally meaningful learning experiences and cultivate more positive attitudes, beliefs, mindsets, and goals for learning. In order to facilitate students’ self-regulation, instruction should guide students in using a systematic approach to learning that involves planning, implementing, monitoring, and evaluating their strategic approaches for reaching academic goals. The MSL helps to organize many important factors that underlie learning and suggests that successful learning emerges in the interaction of students’ skill, will, self-regulation, and the academic environment. Instructors, advisors, counselors, mentors, academic coaches, tutors, and program coordinators working at postsecondary institutions could use the MSL and data from learning strategies assessments to inform the development and refinement of strategic learning interventions and activities.
References and Suggested Readings

Acee, T. W., & Weinstein, C. E. (2010). Effects of value-reappraisal intervention on statistics students’ motivation and performance. The Journal of Experimental Education, 78, 487–512. doi:10.1080/00220970903352753
Acee, T. W., Weinstein, C. E., Dacy, B., Han, C., & Clark, D. A. (2012). Motivational perspectives on student learning. In K. Agee & R. Hodges (Eds.), Handbook for training peer tutors and mentors (pp. 35–38). Mason, OH: Cengage Learning.
Acee, T. W., Weinstein, C. E., Jordan, M. E., Dearman, J. K., & Fong, C. J. (2012). Self-regulated learning: Helping students manage their own learning. In K. Agee & R. Hodges (Eds.), Handbook for training peer tutors and mentors (pp. 39–42). Mason, OH: Cengage Learning.
ACT. (2016). The condition of college and career readiness 2016. Retrieved from www.act.org/content/dam/act/unsecured/documents/CCCR_National_2016.pdf
Acuña, S. R., Rodicio, H. G., & Sánchez, E. (2011). Fostering active processing of instructional explanations of learners with high and low prior knowledge. European Journal of Psychology of Education, 26(4), 435–452. doi:10.1007/s10212-010-0049-y
Aditomo, A. (2015). Students’ response to academic setback: Growth mindset as a buffer against demotivation. International Journal of Educational Psychology, 4(2), 198–222. doi:10.17583/ijep.2015.1482
Aud, S., Hussar, W., Kena, G., Bianco, K., Frohlich, L., Kemp, J., & Tahan, K. (2011). The condition of education 2011 (NCES 2011-033). U.S. Department of Education, National Center for Education Statistics. Washington, DC: U.S. Government Printing Office.
Biggs, J., Kember, D., & Leung, D. Y. P. (2001). The revised two-factor Study Process Questionnaire: R-SPQ-2F. British Journal of Educational Psychology, 71, 133–149. doi:10.1348/000709901158433
Bol, L., Campbell, K. D. Y., Perez, T., & Yen, C. J. (2016). The effects of self-regulated learning training on community college students’ metacognition and achievement in developmental math courses. Community College Journal of Research and Practice, 40(6), 480–495. doi:10.1080/10668926.2015.1068718
Cano, F. (2006). An in-depth analysis of the Learning and Study Strategies Inventory (LASSI). Educational and Psychological Measurement, 66(6), 1023–1038. doi:10.1177/0013164406288167
Carson, A. D. (2011). Predicting student success from the LASSI for Learning Online (LLO). Journal of Educational Computing Research, 45(4), 399–414. doi:10.2190/EC.45.4.b
Claire Ellen Weinstein and Taylor W. Acee
Chi, M. T. H., De Leeuw, N., Chiu, M.-H., & Lavancher, C. (1994). Eliciting self-explanations improves understanding. Cognitive Science, 18(3), 439–477. doi:10.1207/s15516709cog1803_3
Cohen, P. A., Kulik, J. A., & Kulik, C. C. (1982). Educational outcomes of tutoring: A meta-analysis of findings. American Educational Research Journal, 19(2), 237–248. doi:10.3102/00028312019002237
Conley, D. T. (2007). Redefining college readiness. Eugene, OR: Educational Policy Improvement Center.
Cummins, C., Kimbell-Lopez, K., & Manning, E. (2015). Graphic organizers: Understanding the basics. The California Reader, 49(1), 14–22.
De Corte, E., & Masui, C. (2004). The CLIA-model: A framework for designing powerful learning environments for thinking and problem solving. European Journal of Psychology of Education, 19, 365–384.
Dill, A. L., Justice, C. A., Minchew, S. S., Moran, L. M., Wang, C. H., & Weed, C. B. (2014). The use of the LASSI (The Learning and Study Strategies Inventory) to predict and evaluate the study habits and academic performance of students in a learning assistance program. Journal of College Reading and Learning, 45, 20–34. doi:10.1080/10790195.2014.906263
Eccles, J. S., & Wigfield, A. (2002). Motivational beliefs, values and goals. Annual Review of Psychology, 53, 109–132. doi:10.1146/annurev.psych.53.100901.135153
Entwistle, N., Tait, H., & McCune, V. (2000). Patterns of response to an approaches to studying inventory across contrasting groups and contexts. European Journal of Psychology of Education, 15(1), 33–48. doi:10.1007/BF03173165
Finch, W. H., Cassady, J. C., & Jones, J. A. (2016). Investigation of the latent structure of the Learning and Study Strategies Inventory. Journal of Psychoeducational Assessment, 34(1), 73–84. doi:10.1177/0734282915588443
Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34(10), 906–911. doi:10.1037/0003-066X.34.10.906
Flowers, L. A., Bridges, B. K., & Moore, J. L. (2012). Concurrent validity of the Learning and Study Strategies Inventory (LASSI): A study of African American precollege students. Journal of Black Studies, 43(2), 146–160. doi:10.1177/0021934711410881
Fong, C. J., Zientek, L. R., Ozel, Z. E. Y., & Phelps, J. M. (2015). Between and within ethnic differences in strategic learning: A study of developmental mathematics students. Social Psychology of Education, 18, 55–74. doi:10.1007/s11218-014-9275-5
Hadwin, A. F., Nesbit, J. C., Jamieson-Noel, D., Code, J., & Winne, P. H. (2007). Examining trace data to explore self-regulated learning. Metacognition and Learning, 2, 107–124. doi:10.1007/s11409-007-9016-7
Harackiewicz, J. M., Canning, E. A., Tibbetts, Y., Priniski, S. J., & Hyde, J. S. (2015). Closing achievement gaps with a utility-value intervention: Disentangling race and social class. Journal of Personality and Social Psychology. Advance online publication. doi:10.1037/pspp0000075
Hofer, B. K., & Yu, S. L. (2003). Teaching self-regulated learning through a “learning-to-learn” course. Teaching of Psychology, 30(1), 30–33.
King, A. (1992). Comparison of self-questioning, summarizing, and notetaking-review as strategies for learning from lectures. American Educational Research Journal, 29(2), 303–323. doi:10.3102/00028312029002303
Marrs, H., Sigler, E., & Hayes, K. (2009). Study strategy predictors of performance in introductory psychology. Journal of Instructional Psychology, 36(2), 125–133.
McGarrah, M. W. (2014). Lifelong learning skills for college and career readiness: An annotated bibliography. Retrieved from www.ccrscenter.org/sites/default/files/Lifelong%20Learning%20Skills%20for%20College%20and%20Career%20Readiness.pdf
Mireles, S. V., Acee, T. W., & Gerber, L. N. (2014). FOCUS: Sustainable mathematics successes. Journal of Developmental Education, 38(1), 26–30.
Nist, S. L., Mealey, D. L., Simpson, M. L., & Kroc, R. (1990). Measuring the affective and cognitive growth of regularly admitted and developmental studies students using the learning and study strategies inventory (LASSI). Reading Research and Instruction, 30(1), 44–49. doi:10.1080/19388079009558032
Ornstein, P. A., Medlin, R. G., Stone, B. P., & Naus, M. J. (1985). Retrieving for rehearsal: An analysis of active rehearsal in children’s memory. Developmental Psychology, 21(4), 633–641. doi:10.1037/0012-1649.21.4.633
Pekrun, R. (2006). The control-value theory of achievement emotions: Assumptions, corollaries, and implications for educational research and practice. Educational Psychology Review, 18, 315–341. doi:10.1007/s10648-006-9029-9
Pew Research Center. (2016). The state of American jobs: How the shifting economic landscape is reshaping work and society and affecting the way people think about the skills and training they need to get ahead. Retrieved from www.pewsocialtrends.org/2016/10/06/1-changes-in-the-american-workplace
*Pintrich, P. R. (2004). A conceptual framework for assessing motivation and self-regulated learning in college students. Educational Psychology Review, 16(4), 385–407. doi:10.1007/s10648-004-0006-x
Study and Learning Strategies
Pintrich, P. R., Smith, D. A., Garcia, T., & McKeachie, W. J. (1991). A manual for the use of the Motivated Strategies for Learning Questionnaire (MSLQ) (Technical Report No. 91-B-004). Ann Arbor, MI: National Center for Research to Improve Postsecondary Teaching and Learning.
Pizzimenti, M. A., & Axelson, R. D. (2015). Assessing student engagement and self-regulated learning in a medical gross anatomy course. Anatomical Sciences Education, 8, 104–110.
Pressley, M., & McCormick, C. B. (1995). Advanced educational psychology: For educators, researchers, and policymakers. New York, NY: HarperCollins.
Robbins, S. B., Oh, I. S., Le, H., & Button, C. (2009). Intervention effects on college performance and retention as mediated by motivational, emotional, and social control factors: Integrated meta-analytic path analyses. Journal of Applied Psychology, 94(5), 1163–1184. doi:10.1037/a0015738
Robinson, D. H., Katayama, A. D., Beth, A., Odom, S., Hsieh, Y.-P., & Vanderveen, A. (2006). Increasing text comprehension and graphic note taking using a partial graphic organizer. The Journal of Educational Research, 100(2), 103–111. doi:10.3200/JOER.100.2.103-111
Roscoe, R. D., & Chi, M. T. H. (2007). Understanding tutor learning: Knowledge-building and knowledge-telling in peer tutors’ explanations and questions. Review of Educational Research, 77(4), 534–574. doi:10.3102/0034654307309920
Schunk, D. H., Meece, J. L., & Pintrich, P. R. (2014). Motivation in education (4th ed.). New York, NY: Pearson.
Simpson, M. L., Olejnik, S., Tan, A. Y., & Supattathum, S. (1994). Elaborative verbal rehearsals and college students’ cognitive performance. Journal of Educational Psychology, 86, 267–278. doi:10.1037/0022-0663.86.2.267
Sun, H., & Richardson, J. T. E. (2016). Students’ perceptions of the academic environment and approaches to studying in British postgraduate business education. Assessment & Evaluation in Higher Education, 41(3), 384–399. doi:10.1080/02602938.2015.1017755
Thomas, N. J. T. (2017). Mental imagery. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. Retrieved from https://plato.stanford.edu/archives/spr2017/entries/mental-imagery
Van Rossum, E. J., & Schenk, S. M. (1984). The relationship between learning conception, study strategy, and learning outcome. British Journal of Educational Psychology, 54, 73–83. doi:10.1111/j.2044-8279.1984.tb00846.x
VandenBos, G. R. (Ed.). (2007). APA dictionary of psychology. Washington, DC: American Psychological Association.
Weinstein, C. E. (1975). Learning of elaboration strategies (Unpublished doctoral dissertation). University of Texas at Austin, Austin, TX.
Weinstein, C. E. (1978). Elaboration skills as a learning strategy. In H. F. O’Neil, Jr. (Ed.), Learning strategies (pp. 31–55). New York, NY: Academic Press.
Weinstein, C. E. (1987). LASSI user’s manual. Clearwater, FL: H&H Publishing.
*Weinstein, C. E., & Acee, T. W. (2013). Helping college students become more strategic and self-regulated learners. In H. Bembenutty, T. J. Cleary, & A. Kitsantas (Eds.), Applications of self-regulated learning across diverse disciplines: A tribute to Barry J. Zimmerman (pp. 197–236). Charlotte, NC: Information Age Publishing.
Weinstein, C. E., & Mayer, R. E. (1986). The teaching of learning strategies. In M. Wittrock (Ed.), Handbook of research on teaching (3rd ed., pp. 315–327). New York, NY: Macmillan.
Weinstein, C. E., & Palmer, D. R. (2002). User’s manual for those administering the Learning and Study Strategies Inventory (2nd ed.). Clearwater, FL: H&H Publishing.
Weinstein, C. E., Acee, T. W., & Jung, J. H. (2011). Self-regulation and learning strategies. New Directions for Teaching & Learning, 2011(126), 45–53. doi:10.1002/tl.443
Weinstein, C. E., Hanson, G., Powdrill, L., Roska, L., Dierking, D., & Husman, J. (1997). The design and evaluation of a course in strategic learning. Retrieved from www.umkc.edu/cad/nade/nadedocs/97conpap/cwcpap97.htm
Weinstein, C. E., Husman, J., & Dierking, D. R. (2000). Self-regulation interventions with a focus on learning strategies. In M. Boekaerts, P. Pintrich, & M. Zeidner (Eds.), Handbook of self-regulation (pp. 724–747). San Diego, CA: Academic Press.
Weinstein, C. E., Palmer, D. R., & Acee, T. W. (2016a). Learning and Study Strategies Inventory (3rd ed.). Clearwater, FL: H&H Publishing.
Weinstein, C. E., Palmer, D. R., & Acee, T. W. (2016b). LASSI user’s manual: Learning and Study Strategies Inventory third edition. Clearwater, FL: H&H Publishing.
Weinstein, C. E., Schulte, A., & Palmer, D. R. (1987). The Learning and Study Strategies Inventory. Clearwater, FL: H&H Publishing.
Weinstein, C. E., Woodruff, T., & Awalt, C. (2002). Becoming a strategic learner: LASSI instructional modules. Clearwater, FL: H&H Publishing.
Wetzels, S. A. J., Kester, L., & van Merrienboer, J. J. G. (2011). Adapting prior knowledge activation: Mobilisation, perspective taking, and learners’ prior knowledge. Computers in Human Behavior, 27, 16–21. doi:10.1016/j.chb.2010.05.004
Wetzels, S. A. J., Kester, L., van Merrienboer, J. J. G., & Broers, N. J. (2011). The influence of prior knowledge on the retrieval-directed function of note taking in prior knowledge activation. British Journal of Educational Psychology, 81, 274–291. doi:10.1348/000709910X517425
Willoughby, T., Wood, E., Desmarais, S., Sims, S., & Kalra, M. (1997). Mechanisms that facilitate the effectiveness of elaboration strategies. Journal of Educational Psychology, 89(4), 682–685. doi:10.1037/0022-0663.89.4.682
Winne, P. H. (2011). A cognitive and metacognitive analysis of self-regulated learning. In B. J. Zimmerman & D. H. Schunk (Eds.), Handbook of self-regulation of learning and performance: Educational psychology handbook series (pp. 15–32). New York, NY: Routledge.
Winne, P. H., Nesbit, J. C., Kumar, V., Hadwin, A. F., Lajoie, S. P., Azevedo, R., & Perry, N. E. (2006). Supporting self-regulated learning with gStudy software: The learning kit project. Technology, Instruction, Cognition and Learning, 3, 105–113.
Wittrock, M. C. (1974). Learning as a generative process. Educational Psychologist, 11(2), 87–95. doi:10.1080/00461527409529129
Wolters, C. A. (2003). Regulation of motivation: Evaluating an underemphasized aspect of self-regulated learning. Educational Psychologist, 38, 189–205. doi:10.1207/S15326985EP3804_1
Wood, G. (1967). Mnemonic systems in recall. Journal of Educational Psychology, 58(6), 1–27. doi:10.1037/h0021519
Yip, M. C. W. (2013). The reliability and validity of the Chinese version of the Learning and Study Strategies Inventory (LASSI-C). Journal of Psychoeducational Assessment, 31(4), 396–403. doi:10.1177/0734282912452835
Zimmerman, B. J. (Ed.). (1986). Special issue on self-regulated learning [Special issue]. Contemporary Educational Psychology, 11, 305–427.
*Zimmerman, B. J. (2000). Attaining self-regulation: A social cognitive perspective. In M. Boekaerts, P. R. Pintrich, & M. Zeidner (Eds.), Handbook of self-regulation (pp. 13–39). San Diego, CA: Academic Press.
Zimmerman, B. J. (2008). Investigating self-regulation and motivation: Historical background, methodological developments, and future prospects. American Educational Research Journal, 45(1), 166–183. doi:10.3102/0002831207312909
Zimmerman, B. J., & Schunk, D. H. (Eds.). (2011). Handbook of self-regulation of learning and performance: Educational psychology handbook series. New York, NY: Routledge.
15 Test Preparation and Test Taking
Rona F. Flippo
University of Massachusetts Boston
Victoria Appatova
University of Cincinnati Clermont College
David M. Wark
University of Minnesota
This chapter reviews and reexamines test preparation and test taking at the postsecondary level. It contains a historical overview of more than 50 years of research on test preparation, test-wiseness and test-taking skills, coaching for tests, and test anxiety. In an era of transition to new formats for testing students’ knowledge, it is important to learn lessons from a half-century of research on objective and essay tests in order to assess its applicability to new testing approaches now and in the future. A section on implications for practice contains suggestions on how instructors can integrate the strategies of preparing for and taking tests into the curriculum, and how students can apply them. Finally, we include a summary of implications for future research in these areas.
Test Preparation and Test Performance

Achievement on tests is a critical component for attaining access to and successfully negotiating advanced educational and occupational opportunities. Standardized tests are increasingly used for a variety of high-stakes purposes, and that tendency continues to expand (Flippo, 2015). Students must perform acceptably on tests to pass their courses and receive credit. Students expecting to receive financial aid must have appropriate grades and test scores to qualify. Admission to graduate and professional schools depends largely on test scores. Greater accountability in educational and professional domains has led to requirements for ongoing demonstration of competence. Some occupations require tests to advance or simply to remain employed in a current position. Many professionals must pass tests to qualify for licensure, certification, and recertification in their fields. Considering all the ways in which test scores can affect lives, knowing the techniques of preparing for and taking tests appears essential. That information, along with the methods of teaching it, should be part of every reading and study skills instructor’s professional tool kit and embedded in instructional activities across the academic curriculum.

The research literature supports the idea that special instruction in preparing for and taking a test can improve performance and result in higher test scores within the college curriculum. Over the past 50 years, studies have consistently shown positive effects among various student populations using a variety of approaches. Many publications at the end of the 20th century demonstrated
that standardized tests are also amenable to test practice and training. To name only a few, scores on the Scholastic Aptitude Test (SAT; Slack & Porter, 1980), the Graduate Record Examination (GRE; Swinton & Powers, 1983), and the National Board of Medical Examiners (NBME) examinations (Frierson, 1991; Scott, Palmisano, Cunningham, Cannon, & Brown, 1980) were shown to increase after use of a variety of training approaches. More recent research literature of the 21st century emphasizes the idea that test preparation should not distract educators and students from deeper learning (e.g., Flippo, 2015; Gulek, 2003) and draws the public’s attention to the fact that methods of preparation and their effectiveness differ significantly among demographic groups (e.g., Buchmann, Condron, & Roscigno, 2010; Chung-Herrera, Ehrhart, Ehrhart, Solamon, & Kilian, 2009; Ellis & Ryan, 2003; Wilkinson & Wilkinson, 2013).

The literature covers many distinct topics under the broad categories of test preparation and test taking, including philosophical orientations; specific coaching for certain tests; special skills, such as reducing test anxiety; and test-wiseness strategies. In this chapter, we review the research and application literature relevant to these areas for the postsecondary, college, and advanced-level student. Some of the studies reviewed were conducted with younger student populations. We include those when findings or implications are useful to postsecondary students or to reading and study skills specialists. Instruction in test preparation and test taking can make a difference in some students’ scores. The literature shows that students from different populations, preparing for tests that differentiate at both high and low levels of competence, may improve their scores using a number of training programs. This chapter explains and extends these results.
Test-Wiseness and Test-Taking Skills

Formulations of Test-Wiseness

Test-wiseness is a meaningful but often misunderstood concept in psychological measurement. In fact, the notion of test-wiseness is poorly defined and often used as ammunition in the battle over the value of objective testing. Some vocal opponents of objective testing have claimed that high-scoring students may be second-rate and superficial, performing well because they are cynically test-wise (Faunce & Kenny, 2004; Maylone, 2004). The concept of test-wiseness was first put forth by Thorndike (1951) in regard to the effect that persistent and general characteristics of individuals may have on test scores and test reliability. Specifically, Thorndike claimed the following:

Performance on many tests is likely to be in some measure a function of the individual’s ability to understand what he is supposed to do on the test. Particularly, as the test situation is novel or the instructions complex, this factor is likely to enter in. At the same time, test score is likely to be in some measure a function of the extent to which the individual is at home with tests and has a certain amount of sagacity with regards to tricks of taking them. (p. 569)

Millman, Bishop, and Ebel (1965) further developed the idea and suggested that lack of test-wiseness may be a source of measurement error. They defined test-wiseness as

a subject’s capacity to utilize the characteristics and formats of the test and/or the test taking situation to receive a high score. Test-wiseness is logically independent of the examinee’s knowledge of the subject matter for which the items are supposedly measures. (p. 707)
Millman et al. (1965) and Sarnacki (1979) presented early reviews of the concept and taxonomy of test-wiseness. Contemporary formulations view test-wiseness as a broad collection of skills that, in combination with content knowledge, promote optimal test performance. Test-taking skills can be viewed as test-wiseness in action. Test-wiseness, on the other hand, is a broader category that refers to cognitive skills, general test-taking skills, and other attributes that contribute to exam scores beyond students’ knowledge of the content being tested. Test-wise students develop test-taking skills by applying a host of test-taking strategies across tests. They know that changing answers after some reflection, and narrowing down the possible correct responses, can generally improve their scores. They never leave questions blank when there is no penalty for guessing. They maintain good timing on exams so as to correctly answer the greatest number of questions in the allotted time. They plan learning and study activities that they anticipate will match the way in which they are asked to apply information on the test. They utilize reasoning and problem-solving skills in the context of the testing situation and consider all that they know in relation to the information being tested. They address test questions using approaches similar to those employed in prior situations requiring the recall and application of the information being tested. They attend to all factors that influence test performance (Flippo, 2015).
Impact of Test-Wiseness on Test Performance

Consistency of performance on a specific type of test was observed in a study by Bridgeman and Morgan (1996). They studied the relationship between scores on the essay and multiple-choice portions of Advanced Placement (AP) tests and compared student performance with scores obtained on similarly formatted exams. Results indicated that students in the high multiple-choice/low essay test score group performed much better on other multiple-choice tests than the low multiple-choice/high essay test score group, and vice versa. Bridgeman and Morgan concluded that the different test measures were measuring different constructs.

Direct measurement of test-wiseness was undertaken in a study with undergraduate business students (Geiger, 1997). A positive association between test-wiseness scores and performance on both multiple-choice and non-multiple-choice (problem) exam items was observed. The author also noted that test-wiseness scores for upper-level students were higher than those of introductory students and concluded that there may be a maturation effect on test-wiseness, even at the college level. The higher ability level of the subjects in the latter study appears to be associated with higher levels of test-wiseness. More recent studies are consistent with these findings. For example, Hayati and Ghojogh (2008) demonstrated that high-performing students are more test-wise than low-performing students. The same study also found no significant gender difference in the use of test-wiseness strategies.
Test-Wiseness and Student Demographics

The test-wiseness of different demographic groups has also been addressed in the literature; however, a more thorough exploration is needed. Ellis and Ryan (2003) showed that the difference in cognitive-ability test performance between Caucasian and African-American college students can be partly explained by the greater use of ineffective test-taking strategies by the African-American students. Consistent with these findings are the results obtained by Dollinger and Clark (2012), who found that the use of ineffective test-taking strategies in their study accounted for 19 to 25 percent of the variance originally explained by race.
In a study that assessed test-wiseness and explored possible differences between culture groups, a test-wiseness questionnaire was administered to Canadian and other international pharmacy students and graduates (Mahamed, Gregory, & Austin, 2006). Results revealed significant differences between senior Canadian pharmacy students and international pharmacy graduates, with the international graduates demonstrating a lower level of recognition and response to test-wise cueing strategies (grammatically correct answer choice, strong modifiers, and excess specificity). The authors concluded that, in their study, the North American students had test-wiseness skills that were less prevalent in international graduates.
Reasons for Teaching and Learning Test-Wiseness

Some readers may question the necessity or propriety of teaching test-wiseness. However, Millman et al. (1965) suggest that a lack of test-wiseness is a source of measurement error. Consequently, teaching students to be test-wise should increase test validity. Green and Stewart (1984) indicate that test-wiseness includes a combination of learned and inherent abilities. It would seem, then, to be the obligation of educational institutions to provide instruction in test-taking skills. If students are test-wise, their scores will better reflect the underlying knowledge or skill being tested rather than sensitivity to irrelevant aspects of the test. With the advent of high-stakes testing and expectations for a successful application of knowledge on standardized exams, awareness of the importance of test-wiseness has only increased. Numerous discussions have evolved regarding how to provide test-wiseness instruction to students at all levels (see Glenn, 2004; Hoover, 2002; Lam, 2004; Saunders & Maloney, 2004; Volante, 2006).
Variety of Factors Contributing to Test-Wiseness

Some researchers have attempted to determine the strategies used by high-scoring test takers. Although Paul and Rosenkoetter (1980) found no significant relationship between completion time and test scores, they did find that better students generally finish examinations faster. There were exceptions, however. Some low scorers finished early, and some high scorers took extra time to contemplate answers. Other studies have further supported the lack of a relationship between speed and performance on exams. Lester (1991) looked at student performance on undergraduate abnormal psychology multiple-choice exams and found no association between time spent on the exams and scores obtained. Likewise, no relationship was found between level of performance and time taken to complete statistics exams in a graduate-level course (Onwuegbuzie, 1994). High scorers seemingly have two strategies: know the material well enough to go through the test very quickly, or go through the test slowly, checking, changing, and verifying each answer. Either strategy seems to be an effective approach.

In an effort to determine what test-taking strategies are used by A students compared with those used by C and F students, McClain (1983) asked volunteers taking a multiple-choice exam in an introductory psychology course to verbalize their test-taking procedures while taking the exam. She found that, unlike the C or F students, the A students consistently looked at all alternative answers and read the answers in the order in which they were presented in the test. They also anticipated answers to more questions than did the lower-scoring students. In addition, they were more likely to analyze and eliminate incorrect alternatives to help determine the correct answer. The A students also skipped more questions they were unsure of (coming back to them later) than did the C and F students.
On a later exam, some of the C and F students who adopted the strategies characteristic of the A students reported an improvement in their exam scores. Kim and Goetz (1993) sought to determine effective exam-taking strategies by examining the types of marks made on test sheets by students on multiple-choice exams in an undergraduate
educational psychology course. Among the different categories identified in the study, the use of answer option elimination marks was found to be significantly related to students’ test scores, with increased test scores associated with greater frequency of marking of eliminated options. It was also noted that test markings increased as question difficulty increased. The authors proposed that markings on tests could serve to facilitate the retrieval of information from long-term memory, assist students in focusing on important information, and decrease information load; they further concluded that training in the use of marking strategies might improve test scores. LoSchiavo and Shatz (2002) found that better test performance was associated with using such test-marking techniques as highlighting key concepts, marking items requiring additional consideration, and drawing pictures or charts.

Huck (1978) was interested in what effect knowledge of an item’s difficulty would have on students’ strategies. His hypothesis was that students might read certain items more carefully if they were aware of how difficult those items had been for previous test takers. The study revealed that knowing the difficulty of an item had a significant and positive effect on test scores. It is not clear, however, how the students used that information to improve their scores.

Xie and Andrews (2013) studied a “washback” effect on test preparation strategies (“washback” is a term commonly used in language testing to describe the impact of testing on teaching and learning). Using expectancy-value theory, the authors found that higher endorsed task value and higher expectation of test success jointly contributed to greater engagement in test preparation. Also, prior familiarization with the test design and difficulty was related to increased self-regulation in test preparation and the use of more sophisticated test-taking skills (Xie & Andrews, 2013).
Another study by Xie (2015) found that time spent on test preparation increases with the perceived weight of the test, and that test takers engage more in learning activities and focused test preparation if they perceive the test’s validity favorably. The significant impact of the anticipated value of a test on test-taking effort and test performance was also shown by Cole, Bergin, and Whittaker (2008). The authors concluded that if students do not perceive the importance or usefulness of an exam, their effort suffers, and so do their test scores.

Anticipated test difficulty in association with anticipated test format has also been studied in relation to test performance. Thiede (1996) researched the effect of anticipating recall versus recognition test items on level of exam performance. Results indicated that superior performance was associated with anticipating recall test items regardless of the actual item type used on the test. It was proposed that this might be related to increased encoding of associations and increased effort in preparing for a recall versus a recognition test, given the perception that a recall test is more difficult than a recognition test. Similarly, students were found to prepare more thoroughly and use more efficient learning strategies when they anticipated open-ended questions as opposed to only multiple-choice items (Balch, 2007). Partly because of the common misperception that multiple-choice questions are relatively easy, students are reported to prefer multiple-choice tests over short-answer or essay tests (Tozoglu, Tozoglu, Gurses, & Dogar, 2004). These findings are consistent with the later study by Sommer and Sommer (2009), who concluded that students feel less comfortable taking all-essay tests than taking multiple-choice or short-answer exams.
However, a study conducted by Skinner (2009) shows that most students do not perform considerably better on multiple-choice than on essay questions (or vice versa), which may be explained by increased effort in preparing for an intimidating test. The impact of prior knowledge was investigated in association with reading tests. Chang (1978) found that a significant number of the undergraduate students he tested were able to correctly answer questions about passages on a standardized reading comprehension test without seeing the text. Some authors would say that the questions could be answered independently of the passages. Chang, on the other hand, attributed the students’ success to test-wiseness. Blanton and Wood (1984) designed a specific four-stage model to teach students what to look for when taking
Flippo, Appatova, and Wark
reading comprehension tests, making the assumption that students could be taught to use effective test-wiseness strategies for reading comprehension tests. A similar investigation was undertaken by Powers and Leung (1995). They conducted a study to determine the extent to which verbal skills versus test-wiseness or other such skills were being utilized to answer reading comprehension questions on the SAT. Test takers were asked to answer sets of reading questions without the reading passages. Results indicated that students were able to attain a level of performance that exceeded chance level on the SAT reading questions. However, it was noted that the strategies for answering questions without the reading passages reflected use of verbal reasoning rather than test-wiseness skills. Specifically, students were observed to attend to consistencies within the question sets and to use this information to reconstruct the theme of the missing passage. In a study of strategies for taking essay tests, Cirino-Gerena (1981) found that higher-scoring students reported using the following strategies: quoting books and articles, rephrasing arguments several times, rephrasing the questions, and including some irrelevant material in the answer. The most common strategy used by all students, however, was that of expressing opinions similar to those of the teacher. It appears that at least some aspects of test-wiseness develop with age. Slakter, Koehler, and Hampton (1970) reported that fifth graders were able to recognize and ignore absurd options in test items. This is a fundamental strategy, one whose appearance demonstrates an increasing sense of test-wiseness. In the same study they looked at another basic strategy, eliminating two options that mean the same thing. Being able to recognize a similarity is developmentally and conceptually more advanced than recognizing an absurdity. 
Not surprisingly, these authors found that the similar option strategy did not appear until the eighth grade. In relation to the use of marking on tests as a means of facilitating retrieval from long-term memory, metacognitive research has suggested that younger students would be less strategic in their use of test marking in comparison with older students (Flavell, 1985). In addition to the wide variety of factors contributing to test-wiseness reviewed in this chapter, other factors have also been examined in the past 50 years: recognizing cues, changing answers, and retesting. Selected studies within these areas provide a historic overview that may be useful for current and future researchers.
Recognizing Cues

A test-wiseness skill that attracted much attention between the 1960s and 1980s was the ability to recognize cues in alternative answers (or stems) and use them to eliminate inappropriate choices. With the advent of publisher-generated standardized tests and course packets, test items now contain very few flaws. As a result, the focus on cue recognition has drastically diminished since the 1980s. This section covers earlier studies that contain some valid recommendations applicable in certain test situations. Starting with the foundational work of Millman et al. (1965), test writers have been alerted to possible cues in test items that may provide test takers with helpful hints for eliminating inappropriate alternatives, irrespective of the students’ knowledge of the content. Some test constructors may, for example, write a stem and the correct answer, and generate two good foils. Stumped for a good third foil, a test writer may take the easy way out by restating one of the false foils. A test-wise student, however, spots the ruse and rejects both similar alternatives. Or, perhaps, the correct answer is the most complete and hence the longest. These and other cues can appear in a variety of test formats. Some cues have to do with the position or length of the correct answer. Inexperienced test writers have a tendency to hide the correct alternative in the B or C position of a multiple-choice
Test Preparation and Test Taking
alternative set, perhaps thinking that the correct choice would stand out in the A or D position and be too obvious. Jones and Kaufman (1975) looked at the position and length of alternatives on objective tests to determine their effects on responses. They found that students involved in their research project were more likely to pick a correct response because of its B or C position than because of its length in relation to the other choices. However, both cues had an effect; apparently, some students are alert to the possibility of such cues. A study by Flynn and Anderson (1977) investigated four types of cues and their effects on students’ scores on tests measuring mental ability and achievement. The four cues were (1) options that were opposites, so that if one were correct, the other would be incorrect; (2) longer correct options; (3) use of specific determiners; and (4) resemblance between the correct option and an aspect of the stem. Two studies focused on technical wording as a cue. In one, Strang (1977) found that nontechnically worded options were chosen more often than technically worded options regardless of the students’ familiarity with the content. In the second study (Strang, 1980), the students had more difficulty with recall items in which the incorrect option was technically worded. Strang suggested that this difficulty might spring from students’ tendency to memorize technical terms when studying for multiple-choice tests; they would thus use technical words as cues to a correct choice. Smith (1982) pointed out one of the principles of objective item construction: Every distracter must be plausible if the item is to discriminate between students who know the content of the test domain and those who do not. Smith (1982) also contributed to test-wiseness cues research with the notion of convergence, suggesting that test-wise students look for the dimensions that underlie the alternatives and know how to find the point at which they converge.
The author randomly divided a group of high school students and gave the experimental group two hours of instruction in finding the convergence point, while the control group had two hours of general test-taking instruction. The experimental group scored significantly higher on the verbal subscale of the SAT, and Smith attributed this gain to the convergence training. Although cue recognition has lost much of its historical significance, test-wiseness does seem to imply a certain level of sensitivity to cues in test items. Some cues found in objective tests (e.g., inter-item cues, qualifying and absolute words, grammatical agreement, word associations, and synonyms) may still be addressed by instructors (Flippo, 2015). However, Flippo cautions teachers against focusing too much on cue-recognition strategies, suggesting that they may distract students from learning the content. It is possible that other kinds of cues involving deeper cognitive processes (e.g., an instructor’s emphasis on certain parts of a lecture that logically could appear on the test) may remain an area of interest for researchers in the field.
Changing Answers

There is a false but persistent notion in the test-taking field that a student’s first answer is likely to be correct. The implication is that one should stay with the first choice, since changing answers is likely to lead to a lower score. Contrary to this belief, research indicates that changing answers produces higher test scores (Fischer, Hermann, & Kopp, 2005; Geiger, 1997; Lynch & Smith, 1975; Milia, 2007; Nieswiadomy, Arnold, & Garza, 2001; Shatz & Best, 1987; Skinner, 2009; Smith, Coop, & Kinnard, 1979). These studies confirm earlier research findings that changing answers is, in fact, a mark of test-wiseness. The research on this point is remarkably consistent. When looking at the accuracy of student perceptions of changing test answers, Geiger (1991) found that students had negative perceptions of answer changing and that they underestimated its benefits to their exam scores. The majority of students (65 percent) underestimated the benefit of this strategy, whereas 26 percent correctly perceived the outcome of its use and 9 percent overestimated the outcome.
To begin, it should be clear that answer changing is not a frequent event. In Milia’s study (2007), more than half of the participants changed at least one answer; however, answer switching was identified in only 1.7 percent of cases in an undergraduate course and in 2.4 percent of cases in a postgraduate course. Answer changing is not a random event either. Lynch and Smith (1975) found a significant correlation between the difficulty of an item and the number of students who changed the answer to that item. They suggested that other items on the test may have helped the students reconsider their answers for the more difficult items. It seems possible that changes produce higher scores because later items help students recall information they did not remember the first time through. Shatz and Best’s (1987) findings show that a fairly clear and logical reason to change an answer (such as a misread question or a clue discovered later in the test) leads to a good chance of producing a wrong-to-right change. On the other hand, if a student is unsure of an answer, agonizing over alternatives may not be warranted, since the likelihood of making a wrong-to-right choice is not significant (Shatz, 1985). As a means of gaining insight into the reasons behind changing answers, Schwarz, McMorris, and DeMers (1991) conducted personal interviews with students in graduate-level college courses. Six reasons for changing answers were identified, in order of frequency: rethought and conceptualized a better answer (26 percent), reread and understood the question better (19 percent), learned from a later item (8 percent), made clerical corrections (8 percent), remembered more information (7 percent), and used clues (6 percent). Fischer et al. (2005) showed the most common initial answer change to be from incorrect to correct; however, the majority of second and third answer changes were from correct to incorrect.
The authors concluded that students should reexamine the answers they doubt on exams, and that overall examination performance can be expected to increase as long as answers are changed only once. For the most part, higher-scoring students gained more points by changing answers than did their lower-scoring colleagues (e.g., Penfield & Mercer, 1980). Only one study (Smith et al., 1979) found that the lower-scoring group benefited more from their answer changes. In general, the higher-scoring students make more changes and are more likely to switch from a wrong to a right answer (Ferguson, Kreiter, Peterson, Rowat, & Elliott, 2002; Lynch & Smith, 1975; Milia, 2007; Penfield & Mercer, 1980). Scherbaum, Blanshetyn, Marshall-Wolp, McCue, and Strauss (2011) investigated stereotype threat, a type of test anxiety, and concluded that minority participants under conditions of stereotype threat initially selected incorrect responses more frequently than other participants and changed these responses less often, thus engaging in “debilitating test-taking behaviors” (p. 372). The studies that looked into the answer-changing patterns of males and females (Milia, 2007; Penfield & Mercer, 1980) did not find a significant difference in score gains as a function of gender. However, male students were found to be more likely than female students to perceive answer changing as beneficial (Geiger, 1991). As illustrated in a study by Ferguson et al. (2002), the advent of computer-based testing has made it possible to investigate a broader range of test-taking behaviors in relation to answer changing. A software product was utilized that enabled gathering of data for each examinee and the test group on response time and on initial and changed responses, as well as data for classical item analysis. Overall exam performance was observed to improve, though only slightly, when answers were changed.
Increased time was spent on the items that were changed, and students were more likely to change answers on the more difficult items. There is strong support that changing answers is an effective test strategy when, after some reflection or a review of previous responses, the student thinks changing is a wise idea. In general, high-scoring students make more changes and benefit the most.
Retesting

A final area of test-wiseness research covered in this chapter delves into the effects of simply repeating a test in the original or parallel form. As summarized in Karpicke, Butler, and Roediger (2009), basic research on human learning and memory has shown that practicing retrieval of information (by testing the information) has powerful effects on learning and long-term retention. A score on a retest reflects a number of effects: regression to the mean, measurement error, and the increased information gained by study between tests. Additionally, part of the difference between the original and retest scores is due to a type of test-wiseness. An instructor may give several tests during a course, and students may begin to see a pattern in the types of questions asked. The research on retesting starts with the premise that the actual taking of the test helps students develop certain strategies for taking similar tests at a later time. Some of the retesting research involves typical classroom exams. Other studies cover the effects of repeated testing on standardized measures of intelligence, personality, or job admission. A study by Zimmer and Hocevar (1994), investigating the effects of repeated testing over the course of a term, yielded positive effects on the performance of undergraduate teacher education students. Although the study notes the differences in achievement, the precise factors underlying such differences were not determined. Such factors may have included increased focus on distributed learning of course material (prompted by repeated evaluations) and improved test-wiseness. Another study presented similar findings on the benefits of retesting (Roediger & Karpicke, 2006). Two experiments of study and retesting sessions were conducted with undergraduate students. Both studies showed that immediate testing on a reviewed passage promoted better long-term retention than simple restudying of the passage.
The authors theorized that testing, in comparison to studying, not only provided additional exposure to the material but also provided practice on the skill required on the future test. Practicing, during learning, the skills required for retrieval on the test is thus seen as enhancing retention and performance. McDaniel, Roediger, and McDermott (2007) extended work in this area by examining the types of tests and the timing of performance feedback that yielded the strongest effects of retesting. The authors reviewed three experiments that used educationally relevant materials (e.g., brief articles, lectures, college course materials). Their findings consistently support the use of production tests (short answer or essay) on initial testing, with feedback soon after testing, for promoting increased learning and retention. In summary, it seems that retesting, without any explicit content tutoring, can have positive effects on certain scores. The gain may be due in part to regression upward toward the mean or in part to a test-specific type of test-wiseness. However, research also supports the view that repeated exposure to and retrieval of information contributes to these effects.
Testing Approaches in the 21st Century

In the past two decades, higher education has experienced some significant transformations in the testing approaches commonly applied to assess student learning. Pencil-and-paper testing and Scantron sheets have become almost nonexistent on college campuses, replaced by computer-based testing. As part of the flipped classroom pedagogy, online testing occurring outside of class is becoming more prevalent than in-class testing. Publisher-designed tests accompanying e-texts have become a convenient alternative to instructor-designed tests. Alternative course assessments, such as e-portfolios, self-assessments, and exit interviews, increasingly substitute for traditional tests, both formative and summative. Some of the older testing approaches have been widely discussed by researchers (e.g., computer-based testing); others have not yet been thoroughly investigated (e.g., online testing). The following studies attest to the continuous metamorphosis of this area and the potential impact that various testing formats may have on student learning and test performance.
Epstein, Epstein, and Brosvic (2001) compared the then-traditional multiple-choice Scantron answer sheet to an Immediate Feedback Assessment Technique (IFAT, incorporating an answer-until-correct format) on unit and final examination performance in an Introductory Psychology course. The two testing formats were used on the unit test, with students using the IFAT therefore having the correct answer as their final answer on each item. All students then used Scantron answer sheets for the final exam. Results showed that percent correct for initial answer choices on the unit tests did not differ by type of test form, and there was no difference between the two groups in correctly responding to new items on the final exam. However, when examining questions on the final that had been repeated from the unit exams, it was observed that students using the IFAT on the unit exam answered these items significantly more accurately than students who had used the Scantron form. Dihoff, Brosvic, Epstein, and Cook (2004) investigated the IFAT with undergraduate students preparing for classroom exams by use of practice exams. Their results revealed increased performance on exams with the use of immediate rather than delayed feedback on practice exams. They also determined that the use of immediate self-corrective feedback led to greater retention of factual information over the academic semester. Further support for an answer-until-correct format came from Roediger and Marsh (2005), who identified both positive and negative effects of traditional multiple-choice tests on students’ knowledge. While there was an overall positive testing effect (student performance on a final exam was highest after having previously taken a multiple-choice exam), negative effects also occurred. They observed that cued-recall tests could be answered with incorrect information after taking a multiple-choice test.
Apparently, students sometimes perceived incorrect distracters as correct and thus acquired false knowledge from the multiple-choice exam. This effect was found to increase with the number of alternative answer options. Collaborative testing is a group (versus individual) method of student assessment. A variety of positive effects have been observed with the utilization of this testing format, and justifications for collaborative testing have been recorded (Hurren, Rutledge, & Garvin, 2006). The following studies demonstrate the cognitive and noncognitive effects that have been observed. Zimbardo, Butler, and Wolfe (2003) studied cooperative (collaborative) testing in college students. They found that scores on college examinations improved significantly when students were given the opportunity to select test partners. In addition, the testing promoted positive attitudes toward testing and learning, such as that “(a) knowledge can be, or should be, shared with fellow students; (b) that differences in opinion could be rationally negotiated; and (c) that cooperative learning procedures can be enjoyable and productive” (p. 120). The use of cooperative testing can be viewed as the next step in the development of a traditional pedagogical approach – required study group participation – in which test performance among minority students (compared with others in the same college without study group participation) was enhanced (Garland & Treisman, 1993). The positive effect of increased undergraduate student performance on quizzes with use of collaborative testing was observed in a study by Rao, Collins, and DiCarlo (2002). In a later study, members of this research group also observed increased understanding and retention of course content with utilization of collaborative testing (Cortright, Collins, Rodenbaugh, & DiCarlo, 2003). Following completion of a traditional individual exam, students in experimental Group A were assigned to pairs and answered selected questions from the exam.
The same procedure was used on an exam four weeks later but with Group B (rather than Group A) being exposed to the experimental condition. A final exam was given four weeks later, with both groups taking the test in traditional individual format and then answering a subset of randomly selected questions from the second exam. It was observed that student performance on exams was increased with
collaborative testing and that when students answered the subsets of questions in groups rather than individually, the information was retained for the final exam. Results of a study that found no significant differences in performance on a final exam following collaborative testing during unit exams have been reported by Lusk and Conklin (2003). However, they noted that comprehension and retention were comparable between the individual and collaborative test groups, thus quelling possible concerns that students tested under group conditions could pass courses without learning the material. The authors also identified positive experiences during collaborative testing (e.g., opportunities for collaboration, decreased test anxiety) that would further support its use for testing. Russo and Warren (1999) have provided guidelines for collaborative testing. They include directions for both teachers and learners (i.e., “introduce the concept of collaboration on exams at the beginning of the semester and reinforce it during test reviews; make sure students understand that wrong answers can come from other students, as well as correct ones”; p. 20). Such guidelines reinforce the role of the teacher in providing not only valid, reliable assessments of student learning but also direction for test-taking skills. Technology advances at the end of the 20th century led to increased use of computers in learning and testing, and to the integration of learning into assessment modalities. These occurred in relation to computer-based tests and practice exams for academic courses, as well as testing resources for standardized high-stakes exam preparation. An early concern about the move to computer-based tests was the possibility of differential effects in comparison to paper-and-pencil testing. A number of studies have been conducted to investigate this.
Performance on computer-based tests has been shown to be equivalent to performance on paper-and-pencil tests (MacCann, Eastment, & Pickering, 2002; Pomplun, Frey, & Becker, 2002). DeAngelis (2000) found that students performed as well as or better on computer-based exams. An investigation of the role of gender also found no significant performance differences (Kies, Williams, & Freund, 2006). Attention to integrating traditional test-taking behavior capacities into computer-based testing (i.e., marking questions for later review, eliminating answer options, adding notes to questions) has occurred and may have contributed to students’ adaptation to computerized tests (Peterson, Gordon, Elliott, & Kreiter, 2004). Improvement in test performance with the use of computer-based practice questions has been observed and appreciated (Cooper, 2004; Gretes & Green, 2000). Computer-based and web-based practice question sources are widely available to students preparing for high-stakes exams (for example, see Junion-Metz, 2004). The increased testing capabilities associated with computer-based questions (e.g., use of images and other multimedia) have been found to be positively received by students (Hammoud & Barclay, 2002; Hong, McLean, Shapiro, & Lui, 2002; Mattheos, Nattestad, Falk-Nilsson, & Attstrom, 2004) and by educators who strive to get a better sense of their students’ learning (Hammoud & Barclay, 2002; Khan, Davies, & Gupta, 2001). With the advent of computer-based and online testing, a number of additional concerns for students, teachers, and educational institutions have emerged. For example, test scoring errors by major testing companies on major high-stakes examinations have resulted in the reporting of incorrect scores for college applicants.
These errors have had irreversible and negative consequences (i.e., non-admittance to schools of choice, which would have offered admission if correct scores had been reported; increased student distress; unnecessary time spent preparing to retake exams) as well as unfair ones (i.e., failure to report corrected scores for students receiving inflated scores) (Cornell, Krsnick, & Chiang, 2006; Hoover, 2006). Security concerns and other computer-testing challenges at the state and school district levels have also been identified (Neugent, 2004; Seiberling, 2005). These realities add new layers of responsibility for ensuring that examinations accurately measure and report the knowledge levels of test takers.
An approach to addressing the limitations of written examinations has been the development of other assessment modalities. While assessment of discrete knowledge is possible through written examinations, the assessment of skills has led to the creation of methods that focus on authentic demonstration of skills and competencies reflective of learning achievements. Two examples are the assessment of competencies in the problem-based learning experience and the assessment of health sciences learners by use of the Objective Structured Clinical Examination (OSCE). These approaches are included here to broaden consideration of testing in relation to alternative assessment methods and the factors that influence student learning and performance with these types of testing. A study by Sim, Azila, Lian, Tan, and Tan (2006) evaluated the usefulness of a tool for assessing the performance of students in problem-based tutorials. The areas assessed included participation and communication skills, cooperation/team-building skills, comprehension/reasoning skills, and knowledge/information-gathering skills. The problem-based learning rating form was found to be a feasible and reliable assessment tool when the criteria guidelines were followed judiciously by faculty tutors. Such a process-oriented assessment extends the demonstration of learning beyond discrete answering of questions on written examinations to demonstration of abilities in ongoing authentic learning activities. Richardson and Trudeau (2003) have noted the importance of providing direction to students in problem-based learning activities and suggest an orientation session that delineates effective problem-based group learning strategies, the format of the written exercises, and the expectations of professors regarding the assignments. Such direction will be essential to students’ preparation and successful performance (as measured by ratings on process forms) in problem-based learning activities.
Another example of skills assessment is the OSCE, utilized increasingly in health science education programs to assess skills to be performed in clinical work. Students rotate through multiple stations staffed by standardized patients (individuals trained to portray patients with specific medical findings). Tasks for each patient station are established and serve as the basis of standardized patients’ ratings of students. Evaluation with an OSCE for second-year medical students identified differences in level of performance by domain (interpersonal and technical skills earning higher ratings than interpretive or integrative skills) and differences in performance levels on stations by training site (Hamann et al., 2002). Importantly, these findings not only reflected the levels of student learning but also provided direction for refinement of the curriculum to improve areas identified as weaker by student performance. This utilization of OSCE data for student learning and feedback, as well as curriculum evaluation, appears to be common and is observed in studies in various health science programs (Duerson, Romrell, & Stevens, 2000; Rentschler, Eaton, Cappiello, McNally, & McWilliam, 2007; Townsend, McIlvenny, Miller, & Dunn, 2001). In terms of added value for students, the utilization of OSCE assessment has also been found to promote greater levels of realistic self-assessment and achievement in a specific clinical competence, appears to stimulate student learning (Schoonheim-Klein et al., 2006), and is generally perceived as a positive experience by learners (Duerson et al., 2000; Rentschler et al., 2007; Tervo et al., 1997). The OSCE assessment format has been modified for use in genuine patient encounters (Kogan, Bellini, & Shea, 2002; Norcini, Blank, Duffy, & Fortna, 2003) and in electronic formats (Alnasir, 2004; El Shallaly & Ali, 2004; Nackman, Griggs, & Galt, 2006; Zary, Johnson, Boberg, & Fors, 2006).
Directions on preparing for and participating in an OSCE assessment are regularly made available to learners by their professional programs.
Instructional Approaches to Test-Wiseness and Test-Taking Skills

There are some empirical findings on attempts to teach test-wiseness strategies. Wasson (1990) conducted a study comparing the effectiveness of a specialized test-taking workshop with test-taking instruction embedded within a college survival skills course. The workshop content
was based on the model of Hughes, Schumaker, Deshler, and Mercer (1988). Results indicated a significant difference on the Comparative Guidance and Placement Program English placement exam between the scores of workshop students and those of students receiving in-class test-taking instruction. Results support the use of more intensive and specialized programming in test-taking skills for low-achieving college students. Similar progress in academic achievement was observed in a study by Frierson (1991) that investigated the effects of test-taking and learning team interventions on the classroom performance of nursing students. The treatment conditions included a group with combined test-taking skills and learning team activities, a test-taking skills group, and a comparison group. Learning team activities included regular cooperative review of course materials; test-taking skills activities consisted of instruction in general test-taking and utilization of trial tests for practice and self-assessment. Results reflected significant differences in grade point average (GPA) at the end of the semester between the combined intervention group and the other groups, and between the test-taking and comparison groups. In addition, determination of effect sizes revealed that 89 percent of the participants in the combined test-taking and learning team groups had GPAs higher than the comparison group’s mean GPA and that 67 percent of the test-taking group participants had GPAs higher than the comparison group mean. Comprehensive test-taking skills instructional materials are available online, providing students with ongoing access to resources. Colleges and universities provide these on their websites through learning center or counseling center sites and/or as posted by individual course instructors. In addition, a variety of public service student guides are available.
The increased availability of test-taking skills resources can be viewed as positive. However, the usefulness of these materials and the exam outcomes associated with their use have not been evaluated. In summary, the identification of test-taking strategies and methods to teach them continue to be areas needing further development. Efforts at all academic levels have been made to disseminate information on preparing for exams (Flippo, 2015), general test-taking tips (Gloe, 1999), test-taking for a special subject area (Schwartz, 2004), expanding the network of individuals who support good test-taking (Curriculum Review, 2004), and coaching for noncognitive factors affecting performance (Curriculum Review, 2005). This work should continue and be expanded. Good test takers possess a variety of strategies, at least some of which require a certain level of cognitive development. Although the idea of teaching test-taking strategies is intuitively acceptable, few researchers have reported success in interventions aimed solely at test-taking strategies. Perhaps the techniques take a long time to learn or require more intensive instruction. It is also possible that individual differences, such as personality, anxiety level, and intelligence, affect the application of test-wiseness skills in actual testing situations. Alternatively, adequate content knowledge, studying the most productive materials in the most productive ways, critical thinking, and reading skills may be required to benefit most from test-wiseness (Flippo, 2015). Test-wiseness therefore may more accurately be understood as necessary, but not sufficient, for optimal test performance.
Conclusions The literature on test-wiseness and test-taking skills over the past 50 years supports several conclusions. Test-wiseness is a cognitive and behavioral construct that includes test-taking skills, cognitive skills, and other personal attributes that together may impact test performance irrespective of the test taker's content knowledge. Test-wiseness manifests itself when test-taking skills are applied on a particular test. Test-wiseness has been shown to have a positive correlation with test performance. Test-taking skills that can be explicitly taught are often referred to as test-taking strategies. A host of factors may contribute to a student's test-wiseness (e.g., time spent on a test, types of marks students use on a test, knowledge of a test's difficulty and significance, understanding
Flippo, Appatova, and Wark
idiosyncrasies of different test formats, and ability to apply prior knowledge). The ability to recognize cues in a test item is a test-wiseness skill that is no longer of primary importance for researchers or practitioners, although certain strategies (e.g., identification of inter-item cues or grammatical agreement) may still become a teachable moment in the classroom. Answer-changing behavior is another test-wiseness skill, and changing answers is a generally effective test-taking strategy. Finally, simple retesting, even without any formal review of content, can have a small but positive impact on scores. Since test-taking skill instruction was shown to be helpful in some test situations, a variety of coaching programs exist to better prepare students for tests.
Coaching to Prepare College Students for Tests Interpretations of Coaching and Related Controversy Coaching is a controversial area in test preparation, partly because the term is not adequately defined. No uniform definition of coaching has been achieved since the early studies by Anastasi (1981) and Messick (1981), which noted the lack of an agreed-upon meaning for the term. A coaching program can include any combination of interventions related to general test-taking, test familiarization, drill and practice on sample test items, motivational encouragement, subject matter review, or cognitive skill development, all focused on optimizing test performance on a specific standardized exam. Special modules, such as test anxiety reduction, may also be included (Samson, 1985). Because the operational definition of coaching is so varied, it evokes a range of reactions and raises a variety of issues. This chapter uses a widely permissive definition and includes studies that involve any test preparation or test-taking technique, in addition to formal instruction in the knowledge content of a test. One of the main issues raised by coaching is a problem of social policy. On the one hand, Anastasi (1988) stated that individuals who have deficient educational backgrounds are more likely to reap benefits from special coaching than those who have had superior educational opportunities and who are already prepared to do well on tests. On the other hand, the argument is that students from economically disadvantaged schools or families cannot afford expensive coaching courses (Nairn, 1980). If effective methods of test preparation are not available to all, certain test takers would have an unfair advantage over others (Powers, 1993). Consequently, decisions based on the results of tests when some students have had coaching and some have not are inherently inequitable.
The quality and timeliness of coaching available to different demographic groups becomes of particular concern given results of studies like Ellis and Ryan's (2003). Their "troubling finding" (Ellis & Ryan, 2003, p. 2618) revealed that African-American college students reported participating in more test preparation courses before college but used less effective test-taking strategies in college compared to Caucasian students. This investigation is consistent with a later study by Chung-Herrera et al. (2009), who found that African-Americans report more self-initiated test preparation than Caucasians in a job context. The authors speculate that a possible reason might be higher levels of cultural mistrust among African-Americans, who may believe that they must work harder to succeed (Chung-Herrera et al., 2009). A comprehensive report on coaching or "shadow education" (a term used by many authors for a variety of educational activities that occur outside of formal schooling and are intended to help students get ahead within the formal education system) was published by Buchmann et al. (2010) and triggered both support and critique (e.g., Alon, 2010; Grodsky, 2010). Although the authors may disagree on the definition and scope of shadow education and the effect of different types of coaching on the standardized test scores of various student populations, some of their concerns
are consistent: Commercial coaching offers additional possible advantages for already privileged students. Buchmann et al. (2010) raised awareness that shadow education is a globally spreading phenomenon that can negatively impact formal educational systems in terms of both equity and quality (Organisation for Economic Co-operation and Development, 2006). Buchmann et al. (2010) confirmed previous findings that racial/ethnic minorities are more likely than whites to utilize some types of coaching, that college-oriented minority students are generally more motivated to use test preparation aids than their white counterparts, and that females demonstrate a significantly greater likelihood of using all forms of test preparation than males. Studies of shadow education that reveal no correlation between coaching and test performance (e.g., Wilkinson & Wilkinson, 2013) implicitly raise an even broader ethical issue related to the existence of commercial programs that are not effective. Exploring the effects of preparation courses on the Undergraduate Medicine and Health Sciences Admissions Test (UMAT), Wilkinson and Wilkinson (2013) questioned the value of both commercial (in this study, MedEntry, a UMAT preparation provider) and noncommercial coaching (in this investigation, tutoring offered by the students' residence hall), since neither was found to significantly affect UMAT scores. Another, more technical debate focuses on the problem of transfer. What is transferred from the coaching to the test-taking and, ultimately, to the performance being assessed or predicted? Anastasi (1988) believed that the closer the resemblance between the test content and the coaching material, the greater the improvement in test scores. However, the more restricted the instruction is to specific test content, the less valid the score is as an indicator of criterion performance.
Similarly, if skill development in test-taking tricks affects score improvement, then a question arises about the degree to which test scores are indicative of academic abilities versus the ability to take tests (Powers, 1993). In essence, the argument is that coaching reduces the validity of the test. The counterargument is that coaching can enhance the construct validity of the test by eliminating error variance resulting from anxiety or an unfamiliar test format (Chung-Herrera et al., 2009). A third issue is that of maximal student development. One consideration is the cost of coaching programs in terms of the academic opportunities that go unaccessed when time, energy, and financial resources are committed to test-coaching courses (Powers, 1993). Another concern is the value of the types of skills promoted by coaching. Green (1981) suggested that certain types of coaching should, in fact, become long-term teaching strategies. The notion is that comprehension and reasoning skills should be taught at the elementary and secondary levels and that school programs should integrate skill development with the development of knowledge. Schools also should prepare students to manage anxiety around test-taking and other evaluative situations, not simply familiarize them with test formats and test-taking skills. Note that the social policy, transfer of training, and student development arguments make a common assumption: Coaching does have a real, observable effect. If not, there would be no reason to fear that many underprivileged students are disadvantaged by their inability to afford coaching classes. Similarly, if coaching were not associated with gains in certain important test scores, there would be no need to debate whether the gain signified an increase in some basic underlying aptitude or whether the schools should take on the responsibility of coaching. These arguments do not settle the debate.
In fact, they raise a basic question: How effective is coaching?
Effects of Coaching Consider the SAT. Coffman (1980), writing from a perspective of 17 years of experience at the Educational Testing Service, recalled thousands of studies on the SAT and concluded that, although
it is difficult to differentiate teaching from coaching, "there is some evidence… that systematic instruction in problem-solving skills of the sorts represented by SAT items may improve not only test performance but also the underlying skills the test is designed to assess" (p. 11). Anastasi (1981) reported that the College Board, concerned about ill-advised commercial coaching, has conducted well-controlled research and has also reviewed the results of other studies in this area. The samples included white and minority students from urban and rural areas and from public and private schools. The general conclusion was that intensive drill on test items similar to those on the SAT does not produce greater gains in test scores than those earned by students who retake the SAT after a year of regular high school instruction. However, some scholars conclude that coaching is effective if an intensive short program produces the same gain as a year's study. Anastasi (1988) also noted that major testing organizations (e.g., College Board, GRE Board) investigate new item types for their susceptibility to coaching. When performance on certain item types can be significantly raised by short-term instruction or drill, those item types are not retained in the operational forms of the test. This would thus appear to circumvent attempts to improve scores solely through coaching on test-taking strategies for discrete item types. This assurance, negating the susceptibility of high-stakes exam performance to the use of item cues, has been supported by a study that investigated the presence of correct answer cues (longest answer, midrange value, one of two similar choices, one of two opposite choices) and incorrect answer cues (inclusionary language, grammatical mismatch) in preparatory materials for a credentialing exam (Gettig, 2006).
He reviewed question and answer sets in the preparation manual for the Pharmacotherapy board certification examination (i.e., questions that could be assumed to be surrogates for the certification exam). Results indicated that application of test-taking cues alone could not replace adequate studying as a determinant of successful examination performance. In a comprehensive review and meta-analysis of 48 studies on the effectiveness of coaching for the SAT, Becker (1990) investigated the effect of coaching for studies that employed pre- and posttest comparisons, regardless of whether the studies incorporated a comparison group. Becker also looked at whether coaching effects differed between the math and verbal sections of the SAT. It was found that longer coaching programs result in greater score increases than shorter programs, that the effects of coaching for the math section of the SAT (SAT-M) were greater than the effects of coaching for the verbal section (SAT-V), and that coaching effects in more scientifically rigorous studies (e.g., those that control for factors such as regression, self-selection, and motivational differences) are reduced in comparison to studies in which such factors are not controlled for (e.g., studies that merely compare score gains of coached students with national norms). After investigating only studies that employed comparison groups – studies that could be ascertained to provide the most rigorous evaluations of coaching effects – Becker (1990) determined that "we must expect only modest gains from any coaching intervention" (p. 405): average gains of approximately 9 points for the SAT-V and 19 points for the SAT-M. In reviewing studies on the effects of SAT coaching, Powers (1993) concluded that coaching programs tend to have a small effect on SAT-V scores and a modest effect on SAT-M scores.
Among studies that controlled for growth, practice, and other factors common to students who have been coached and those who have not, the median SAT-V and SAT-M score gains were found to be 3 points and 17 points, respectively. Yates (2001) reported an improvement in SAT scores of 93 points over prior PSAT and SAT scores for minority students who completed a two-week college residential SAT workshop. Students participated in SAT preparation sessions for approximately six hours per day. Activities included review of verbal and math manuals and lists of high-yield vocabulary words, as well as instruction
on test-taking strategies and practice on full-length practice SAT exams. This program represented an effort to address the achievement gap between minority and white students. It was one of several efforts by the state of South Carolina to improve SAT scores, efforts that have been successful as evidenced by a 40-point increase in the average SAT score during 1982–1992 and a 43-point gain from 1992 to 2002 (Hamilton, 2003). Buchmann et al.'s report (2010) indicated that the most affluent students are significantly more likely to enroll in private courses, such as those offered by Princeton Review and Kaplan – a strategy that, according to the authors, corresponds to SAT score gains of about 30–40 points. Grodsky (2010) argues that shadow education in preparation for college entrance exams is a relatively ineffective means of improving test scores if private preparation courses are separated in the statistical analysis from public preparation courses. The author concludes that private courses and tutors (i.e., individualized coaching) add only about 4.5 and 11.5 points, respectively, to the SAT scores students could earn were they to take a publicly available preparation course at their high school. Individualized coaching was found to have a positive effect on the performance of undergraduate college students with learning disabilities; however, the impact was measured on GPA, not specifically on test performance (Allsopp, Minskoff, & Bolt, 2005). Individualized coaching was also shown to be effective for undergraduate nontraditional students, improving their self-advocacy, study skills, persistence rates during the treatment period, and year-to-year retention (Bettinger & Baker, 2011, 2014). Research on coaching for graduate-level and/or specialized disciplinary standardized tests also presents mixed conclusions on the effect of coaching.
Swinton and Powers (1983) studied university students to determine the effects of special preparation on GRE analytical scores and item types. They coached students by offering familiarization with the test. Their results showed that scores may improve on practice items similar to those found on the test. The authors contend that if the techniques learned in coaching are retained, students may improve performance both on the GRE itself and in graduate school. Early studies that involved the NBME indicated positive results from coaching (Scott et al., 1980; Weber & Hamer, 1982). However, later studies have not supported the effectiveness of commercial test-coaching programs on NBME examinations. Werner and Bull (2003) utilized medical student scores on the NBME Comprehensive Basic Science Examination (CBSE) to predict scores on the United States Medical Licensing Examination (USMLE) Step 1. Scores received on the CBSE were then analyzed in relation to the subsequent Step 1 scores. Similar results were obtained by Zhang, Rauchwarger, Toth, and O'Connell (2004), who examined the relationships between Step 1 performance, method of preparation for Step 1, and performance in medical school. The effect of preparation method was not significant. They concluded that performance on Step 1 is related to medical school learning and performance, not to the method used to prepare for Step 1. McGaghie, Downing, and Kubilius (2004) reviewed and evaluated the results of 11 empirical studies on commercial test preparation courses for preadmission and medical education. The studies included preparation courses for the Medical College Admissions Test (MCAT), the former NBME Part 1, and the USMLE Step 1. The authors concluded that the effectiveness of commercial test preparation courses in medicine has not been demonstrated. A study related to the NBME was conducted specifically with ethnic minority medical students.
It has been observed that the fail rate for first-time minority test takers on the NBME has been several times that of nonminority students (Frierson, 1991). Frierson (1984) investigated the effectiveness of an intervention program for minority students, aimed at improving scores and pass rates, that included instruction in effective test-taking strategies, practice trial exams, self-assessment based on trial exams, and cooperative participation in learning teams. Analysis of the results revealed that mean exam scores and pass rate differences between minority and nonminority
students were not statistically significant. Also, the difference in pass rates between minority students in this year and the previous year (without the intervention program) was statistically significant. Thus, it appears that a multicomponent intervention (versus a more limited test-coaching approach) aimed at improving performance on NBME examinations can be effective. The results of a multicomponent, problem-based remediation for nursing students who failed the Health Education Systems, Inc. (HESI) exam twice have been reported by English and Gordon (2004). Successful performance on the HESI is a requirement for advancement to the senior year of nursing school. The remediation program included an initial assessment of students' HESI performance, learning styles, and self-assessed exam difficulties, followed by review sessions, visualization and guided imagery, and instruction in test-taking strategies. All students performed successfully on the HESI following completion of the program. The following studies examined the effect of planned preparation by medical students for an OSCE. Mavis (2000) investigated student preparation by means of a brief survey that students completed immediately before an end-of-year OSCE. Survey items gathered information on study time and approaches (reviewing a physical exam textbook, class notes, and supplemental course readings) as well as perceptions of confidence, preparedness, and anxiety. Results indicated that there was no relationship between preparation time and performance. The author concluded that "prior academic performance rather than preparatory studying time is a better predictor of OSCE outcomes" (p. 812). A more recent study by Wilkinson and Wilkinson (2013) also found no relationship between total time spent preparing for the UMAT and test performance.
However, a structured intervention that focused on students' self-identified needs and incorporated study skills and a practice OSCE had more positive results (Beckert, Wilkinson, & Sainsbury, 2003). The curriculum was determined by a needs assessment, and students were invited to participate in the course design. Student performance on end-of-year examinations (both OSCE and written multiple-choice examinations) was reported to be significantly enhanced over the previous year's performance and in comparison to students from other schools taking the same exams. Again, a broader approach, with focus on both study and examination techniques and based on student needs, resulted in improved outcomes. Two meta-analyses looked at the effect of coaching on achievement test scores and on a variety of aptitude tests. Samson (1985) summarized 24 studies involving elementary and secondary students. Bangert-Drowns, Kulik, and Kulik (1983) reviewed 25 studies, mostly of secondary and college students, that looked at the effects of coaching for aptitude tests other than the SAT. Thirteen studies were common to the two papers. Both reports came to surprisingly similar conclusions. Samson (1985) found that across all types of treatments, the average effect size of coaching was 0.33 (i.e., among all students involved in any type of treatment, the average gain was 0.33 standard deviations). Thus, the average coached student moved from the 50th percentile to the 63rd. Bangert-Drowns et al. (1983) found similar results: across all variables, the average effect size was 0.25, representing a gain from the 50th to the 60th percentile. Both analyses concurred in the main finding that coaching is associated with significant gains in test scores. Both studies also found the same secondary relationships. The first is that length of treatment made an important difference in the effectiveness of a coaching program.
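The percentile shifts quoted in these meta-analyses follow directly from the effect sizes under a normal model: an average coached student at effect size d lands at the Φ(d) percentile of the uncoached distribution, where Φ is the standard normal cumulative distribution function. A quick check of the reported figures, assuming normality:

```python
from statistics import NormalDist

def percentile_after_coaching(effect_size_d: float) -> int:
    """Percentile of the average treated student within the untreated
    distribution, assuming both distributions are normal with equal
    variance (Phi(d), expressed as a percentile)."""
    return round(100 * NormalDist().cdf(effect_size_d))

print(percentile_after_coaching(0.33))  # Samson (1985): 50th -> 63rd
print(percentile_after_coaching(0.25))  # Bangert-Drowns et al. (1983): 50th -> 60th
```

Both reported percentile gains are reproduced exactly, which suggests the original authors used this same normal-model conversion.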
In the Samson (1985) study, coaching raised the average score from the 50th to the 57th percentile after 1–2 hours, to the 64th percentile after 3–9 hours, and back to the 62nd percentile after more than 9 hours. In the Bangert-Drowns et al. (1983) summary, the increases were to the 61st percentile after 1–2 hours, the 59th percentile after 3–6 hours, and the 64th percentile after 7 or 8 hours. Apparently, a program of between 6 and 9 hours is most effective. The general effect of coaching seems to be slightly greater for the younger students in the Samson study, which may be explained by the greater test-wiseness of older students. The results of both studies agree that coaching can be effective.
The other secondary effect was type of treatment. For Samson (1985), general test-taking skills, such as following directions, making good use of time, and using answer sheets correctly, make up the content of an effective program. Those skills would be very appropriate for younger students who have not had much practice with objective testing formats. In the Bangert-Drowns et al. (1983) study, the effective treatments focused not on simple test-taking mechanics but on "intensive, concentrated 'cramming' on sample test questions" (p. 578). Another study investigated a variety of coaching programs with elementary and secondary students (Faunce & Kenny, 2004). Analyses of the data indicated that coaching secondary students to improve end-of-year exam performance was not effective, which aligns with Welsh, Eastwood, and D'Agostino's (2014) finding that test preparation was not beneficial for elementary and secondary students relative to meeting state standards. However, a significant effect of coaching on the outcome of a specific test (e.g., an entrance examination for a Gifted and Talented program) was observed with younger children (Faunce & Kenny, 2004). The authors cautioned educational test designers to be aware of the effect of test-taking coaching on specialized exams with younger students, an effect that appears to disappear as students get older and gain more exposure to such testing formats. In addition to special in-class programs for improving performance on standardized exams, commercial test-coaching companies provide resources online (e.g., www.kaptest.com) and in print (e.g., Junion-Metz, 2004), including diagnostic tests, practice questions, and tips on effective test preparation. Exam preparation resources have also been developed for handheld electronic devices, making examination preparation very portable (Hoover, 2005).
As with test-taking strategy materials available electronically, the specific usefulness and performance outcomes associated with the use of these materials have not been determined.
Conclusions Although levels of effectiveness vary, coaching can impact test performance under certain conditions. Studies of commercial and other coaching courses have implications for test-preparation programming sponsored by educational institutions. Interventions structured around students' individualized needs tend to be more effective than non-individualized instruction. Coaching also appears to have a greater impact on student learning at earlier stages of the educational continuum. Instruction should be consistent with the school's curriculum and should provide a framework for review of the basic material taught and a focus on the underlying cognitive skills being tested (i.e., problem-solving and reasoning skills). This type of preparation would be a learning and thinking experience, rather than simply a crash course or cramming strategy to pass an exam. In addition to content review, coaching should include familiarization with the test format and cover specific processes for recalling and applying knowledge and skills as dictated by the types of items to be encountered. Anxiety reduction or motivation enhancement should be part of the curriculum, if appropriate.
Test Anxiety A major problem that some students face in taking tests is test anxiety. Test-anxious students often earn lower scores on classroom tests than their ability would predict. The highly anxious student may have done a creditable job of preparation, using all the appropriate study techniques. Up to the moment of the exam, the student may be able to summarize and report content and demonstrate other necessary skills. However, in the actual test situation, when it counts, this student fails to perform. A student may also experience interference with learning, as anticipatory anxiety around a future test promotes avoidance behavior and impedes concentration and follow-through on early learning tasks. Regardless of the precise way in which test anxiety is manifested in the individual student, there is well-documented evidence (Elliott, DiPerna, Mroch, & Lang, 2004; Embse & Hasson, 2012; Seipp, 1991; Tobias, 1985; Zeidner, 1998) of an inverse relationship between test anxiety and academic performance. The typical test-anxious student may show distress in one or more of the following ways: physiologically (excessive perspiration, muscular tension, accelerated heartbeat), intellectually (forgetting, incorrect response fixation), or emotionally (worry, self-degradation). Reports consistently show that approximately 10–40 percent of students exhibit various degrees of anxiety in various assessment contexts (Ergene, 2003; Putwain & Daly, 2014; Segool, Carlson, Goforth, Embse, & Barterian, 2013; Zeidner & Matthews, 2010).
Test Anxiety Theories Test anxiety, as a scientific concept, is approximately 65 years old. Since the classic work by Mandler and Sarason (1952), the investigation of test anxiety has blossomed. Multiple studies and meta-analyses (chronologically) by Allen (1971), Wildemouth (1977), Allen, Elias, and Zlotlow (1980), Tryon (1980), Schwarzer, Ploeg, and Spielberger (1989), Hagtvet and Johnsen (1992), Jones and Petruzzi (1995), Cassady (2010), Zeidner and Matthews (2010), Embse and Hasson (2012), and Maloney, Sattizahn, and Beilock (2014) exemplify the theoretical and empirical growth of the field. Recent studies on anxiety in general and its impact on cognitive performance (Beilock & Carr, 2005; Eysenck, Derakshan, Santos, & Calvo, 2007; Eysenck, Payne, & Derakshan, 2005; Maloney et al., 2014; Ramirez & Beilock, 2011; Zeidner & Matthews, 2010) have confirmed earlier approaches specific to test anxiety. Over the past 15 years, anxiety has been clearly shown to affect the cognitive processing center of the working memory system, decrease attentional control, and increase the extent to which processing is influenced by the stimulus-driven attentional system. Although test anxiety correlates positively with general anxiety, it is a distinct construct (Zeidner & Matthews, 2010). An increase in test anxiety has been reported in association with the increasing dominance of high-stakes tests (Cizek & Burg, 2006; Embse & Hasson, 2012; Embse, Schultz, & Draughn, 2015; Putwain, 2007, 2008a, 2008b; Segool et al., 2013). Test anxiety is consistently shown to be significantly higher in female students than in male students (Putwain, 2007; Putwain & Daly, 2014; Rosário et al., 2008). Significant gender differences are present in both the worry and emotionality components (Putwain, 2007), although gender differences are considerably more pronounced on the emotionality component than on the worry component of test anxiety (Zeidner & Matthews, 2010).
Essentially similar to test anxiety is stereotype threat (a term introduced by Steele and Aronson, 1995), which also impairs verbal working memory resources (Beilock, Rydell, & McConnell, 2007; Schmader, Johns, & Forbes, 2008). A study by Appel, Kronberger, and Aronson (2011) revealed how stereotypes interfere with test preparation among women in science, technology, engineering, and mathematics. The authors found that women's note-taking activities were impaired under stereotype threat and that stereotype threat impaired women's performance in evaluating the notes of others. The authors also expressed concern that, since the quality of student notes has been shown to be a predictor of test performance (Peverly et al., 2007), impaired note quality may result in impaired test performance (Appel et al., 2011). However, gender differences were not reported for the test anxiety-assessment performance relationship (Putwain, 2008b), which may be explained by compensatory factors, such as test-wiseness, applied by female students. A meta-analysis of research on the effect of stereotype threat on the mathematics scores of females and minorities can be found in Smith and Hung (2008).
Test Preparation and Test Taking
Since the seminal work of Steele and Aronson (1995), it has been clearly established that minority students underperform compared with their white peers as a result of stereotype threat. In Putwain’s (2007) study, ethnic and socioeconomic factors were identified as significant predictors of variance in test anxiety scores. Students from a white ethnic background reported lower test anxiety than students from Asian, black, or other ethnic backgrounds, and higher levels of test anxiety were reported by groups from lower socioeconomic backgrounds. A differential test anxiety-assessment performance relationship was reported for socioeconomic background (Putwain, 2008b). Although research has suggested that test anxiety may increase based on contextual factors, such as school or class setting (Goetz, Preckel, Zeidner, & Schleyer, 2008), studies have been inconclusive in this respect. Embse and Hasson (2012), for example, found no significant difference in test anxiety levels between urban and suburban schools. English as a native or non-native language of students did not predict variance in test anxiety scores (Putwain, 2007). Test anxiety has been found to have a negative correlation with parents’ educational level and a positive correlation with procrastination habits (Rosário et al., 2008), less effective study skills and note-taking skills (Cassady, 2004), and learning disability (Whitaker Sena, Lowe, & Lee, 2007). Much of the research has been aimed at understanding physiological aspects of test anxiety, behavioral responses, and test anxiety treatments leading to improved academic performance. This section focuses specifically on those treatment techniques that have been shown to improve grades among college students. Liebert and Morris (1967) conducted a foundational study which identified two components of test anxiety – emotionality and worry.
Another, “social” component of test anxiety has recently been proposed within what is termed the biopsychosocial model, or cognitive behavioral model, of test anxiety (Embse, Mata, Segool, & Scott, 2014; Lowe & Lee, 2007; Lowe et al., 2007; Segool, Embse, Mata, & Gallant, 2014): a perceived potential negative reaction from important others (e.g., peers, teachers, family) as a result of poor performance. The effect of this “social” component on learning was found to be similar to that of the worry component (Thomas, Cassady, & Finch, 2017). Emotionality, or excessive physiological arousal, may or may not be detrimental to student performance. A moderate level of arousal leads to one’s best performance (Beilock & Carr, 2005; Chamberlain, Daly, & Spalding, 2011; Gaeddert & Dolphin, 1981; Rath, 2008) and is often referred to as facilitative anxiety, as opposed to debilitating anxiety (Harrington, 2015; Raffety, Smith, & Ptacek, 1997). The optimal level of arousal for any given task depends on a person’s history, physiology, and state of health. If emotionality goes beyond that optimal level, performance may begin to deteriorate. However, emotionality is not a universally negative variable; as posited by Pekrun (1992), other emotions may be no less important to learning and performance than anxiety. For example, positive emotions (e.g., enjoyment, hope, pride) appear necessary for developing intrinsic and ongoing motivation. Also, there may be negative emotions other than extreme anxiety (e.g., boredom, hopelessness) that are detrimental to learning and achievement through reducing task motivation. For the purposes of this writing, however, we focus on the emotional component of anxiety and its relationship to academic performance. Worry, the other factor, is seen as invariably detrimental to test performance. The high-anxiety student has internal responses that interfere with optimal test performance.
Hollandsworth, Galazeski, Kirkland, Jones, and Norman (1979) cleverly documented the kinds of internal statements made during a test by high- and low-anxiety students. Calm people recall themselves saying things like “I was thinking this was pretty easy,” “I was just thinking about the questions mostly,” or “I always love doing things like these little designs.” Their comments contrast strongly with those recalled by anxious students: “I decided how dumb I was,” or “My mother would say […] don’t set bad examples because I’m watching you.” These internal statements may reduce performance by interfering with task-relevant thoughts, and they may also increase emotionality. Nelson and Knight’s study (2010) suggested a treatment for worry – positive thinking about successful experiences right before taking a test – which results in more positive emotions, a higher level of optimism, less test anxiety, perceptions of the test as a challenge rather than a threat, and better performance on the test. Embse et al. (2015) summarize prior studies of the relationship between test anxiety, test performance, and teacher messaging – threat-based (messages focusing on fear of the negative consequences of test failure) or facilitating (messages highlighting students’ efficacy or expectation for high performance). Although fear appeals have been shown to have a significant relationship with higher test anxiety and lower test performance, a causal relationship has not been proven (Putwain & Best, 2011, 2012; Putwain & Remedios, 2014; Putwain & Symes, 2011). Embse et al.’s (2015) study examined 487 university students and the influence they experienced from fear and efficacy appeals. Their results suggest that fear appeals significantly harm student test performance relative to efficacy appeals, even when controlling for the impact of intrinsic motivation on test anxiety. However, student anxiety did not appear to explain the relationship between fear appeals and lowered test performance. Additionally, their investigation demonstrates that test anxiety is significantly related to test performance, both negatively (as a result of cognitive obstruction or physiological tenseness) and positively (as a result of social derogation), which they claim to be consistent with prior research (Embse & Hasson, 2012; Embse & Witmer, 2014).
The authors suggest that the underlying mechanism of the relationship between fear appeals, test anxiety, and test performance remains inconclusive. Putwain and Remedios (2014) also emphasize the need to study the contribution of motivation and test anxiety to the relationship between fear appeals and test performance. Both fear and efficacy appeals need to be studied as potential inhibitors or enhancers of test anxiety and test performance. Another important theory about variables affecting test anxiety was put forward by Wine (1971), who noted the importance of how students direct their attention. According to her analysis, calm students pay most attention to test items. Anxious students, on the other hand, attend to their internal states, their physiological arousal, and especially their negative self-talk. In essence, high-anxiety students focus their attention internally rather than externally on the examination, and they are more distracted by worry from the cognitive tasks of tests than are low-anxiety students (Sarason, 1988; Wine, 1982). Wine (1982) was able to reduce test-anxiety effects by showing students how to attend to the test, and not to their internal states. The attentional processes of high-anxiety students have also been studied in relation to general level of distractibility during tests (Alting & Markham, 1993). It was found that under evaluative test conditions, high test-anxiety students were significantly more distracted by nonthreatening stimuli present in the test environment than were low test-anxiety students. The authors suggested that cognitive interference from worry may need to be supplemented by considering the role of other types of distracting stimuli for high test-anxious students. In summary, there are three general approaches to test anxiety. The physiological or behavioral approach stresses the disruptive effects of arousal and emotionality.
Treatment is geared toward helping students relax and desensitizing them to their presumed fear of tests and evaluations. The second approach flows from the worry or cognitive component of test taking. Students are taught how to change the way they think and talk about themselves in a test situation. The third approach involves teaching test anxious students to focus on the exam, to use good test-taking skills, and to ignore distracting internal and external stimuli. It can be noted, however, that treatment approaches that include elements to address the worry or the emotionality components of test anxiety in combination with study skills training have been shown to be most efficacious in decreasing test anxiety, with accompanying increases in academic performance.
Other Test Anxiety Theories and Treatments

Using instructor-controlled hypnosis is an alternative way to reduce test anxiety. Typically, students receive posthypnotic suggestions to be calm and to apply best practice techniques during a test. This approach has yielded positive results in a number of studies with college students (Boutin, 1989; Boutin & Tosi, 1983; Hebert, 1984; Sapp, 1991; Stanton, 1984, 1994). The technique has also been used to help highly hypnotizable students increase their interest (but not comprehension scores) for difficult, technical text (Mohl, Finigan, & Scharff, 2016). An alternative approach involves eyes-open, alert active self-hypnosis (Wark, 1998). The students are fully in charge and can work independently of any instructor. Using alert hypnosis, students can be more effective when reading, listening to lectures, making notes, or taking tests anywhere – in a classroom, library, or exam room. In a learning skills course (Wark, 1996) and a case report (Wark, 2006), Wark demonstrated that alert self-hypnosis training was associated with improved test scores. Variations of the alert self-hypnosis technique have yielded positive results in a variety of studies (De Vos & Louw, 2006; Schreiber, 1997; Schreiber & McSweeney, 2004; Schreiber & Schreiber, 1998; Wark, 2011). How does the process work? Recent research (Faymonville, Boly, & Laureys, 2006; Jiang, White, Greicius, Waelde, & Spiegel, 2016; Rainville et al., 1999; Wark, 2015) agrees that hypnosis has both inhibiting and facilitating effects, involving different parts of the brain. One result is decreased activity in the dorsal anterior cingulate cortex, considered a center for anxiety (Raz, Shapiro, Fan, & Posner, 2002). In addition, there may be increased neurological connections between the forward part of the brain and the insula.
The effect of this increase is that the person pays less attention to what is going on in the environment around them, and more attention to their own internal physiological signals and condition. The shift following hypnotic induction is thought to reduce daydreaming and make more resources available for memory and problem-solving. There may be a sense of increased brightness or salience for whatever is in the focus of attention, perhaps the wording of a test item. A test-anxious student, using alert self-hypnosis, may be able to shift away from worry about future consequences and experience an enhanced sense of calm, a problem-free withdrawal from the external world and the people in it. The technique may be a way of focusing attention during testing and excluding negative thoughts (Wine, 1971, 1982). A student using hypnosis for a test is calm, focused on a particular question or item, and ready to carry out a suggestion. For the well-prepared student, this may be a suggestion to apply previously learned best practices, such as recalling and visualizing material from a textbook, remembering hearing the instructor’s lecture, thinking about what the instructor is trying to teach, or experiencing the pleasure of learning new material (for a list of specific practices, see Implications for Practice). When the student is using self-hypnosis, there are no conflicting suggestions, high emotional reactions, or worries about the future, which can reduce memory, learning, and academic achievement (Yerkes & Dodson, 1908). This treatment for test anxiety is increasingly available in counseling centers and clinics staffed by professionals skilled in the application of hypnotherapy techniques. Observational learning from a model student is an example of a social learning theory approach to test-anxiety reduction. Horne and Matson (1977) had a group of high-anxiety students listen to a series of tapes purporting to be group sessions of test-anxious patients.
Over the course of a 10-week treatment, students heard three tapes, in which the model students expressed progressively less concern about test panic. During the sessions in which no tapes were played, counselors verbally reinforced the subjects’ non-anxious self-reports. Students in other groups were treated by desensitization, flooding (asking students to imagine test failure), or study skills counseling. Horne and Matson found that modeling, desensitization, and study skills training were more effective than flooding in producing grade improvements and reducing test anxiety. On the other hand, McCordick, Kaplan, Finn, and Smith (1979), comparing modeling with cognitive treatment and study skills, found that no treatment in their study was effective in improving grades. As these researchers admit, “the ideal treatment for test anxiety is still elusive” (p. 420).
Test Anxiety and Alternative Testing Procedures

The previously cited research on test anxiety has generally been in relation to traditional academic assessment methods. This section will focus on test anxiety in the realm of other, alternative assessment modalities. In terms of collaborative (cooperative) testing, Mitchell and Melton (2003) reported faculty observations that “students immediately appeared less anxious, both verbally and nonverbally, prior to and during the exam, as well as prior to the posting of exam grades” (p. 96). Zimbardo et al. (2003) received positive results on student evaluation forms following participation in collaborative testing activities, with 81 percent of the Introductory Psychology student participants indicating reduced test anxiety during study and 88 percent indicating reduced test anxiety during testing. Empirical research findings on reduced test anxiety in collaborative learning have also been reported (Meinster & Rose, 1993). Dibattista and Gosse (2006) investigated the possible effects of the IFAT on test anxiety. They were concerned that test-anxious undergraduate students might be disadvantaged by the use of the IFAT. Instruments to measure trait anxiety and test anxiety (the Revised Test Anxiety Scale) were administered prior to using the IFAT. Following initial use of the IFAT, students completed a questionnaire with items related to general acceptance of the test format and items on anxiety in the test situation. Results showed that students’ preference for the IFAT was not related to test anxiety or to any measure of performance. They concluded that the IFAT does not discriminate against students based on their level of test anxiety. Additional studies are needed in this area to further support these findings. Test anxiety in relation to performance on Standardized Patient OSCE examinations and on multiple-choice examinations was the focus of a study by Reteguiz (2006).
Test anxiety was measured by use of the Test Attitude Inventory (TAI). Medical students’ anxiety levels were assessed after the completion of the two clerkship (OSCE) examinations and the multiple-choice exam. Though greater levels of anxiety were expected on the skills exams, results showed equal levels of anxiety on the written and skills exams. Unlike the inverse relationship between test anxiety and academic performance consistently found in research, no relationship between levels of test anxiety and performance was observed in this study.
Conclusions

The field of test anxiety continues to evolve and expand as new discoveries are made in the areas of human learning and performance. These conclusions can be expected to change over time. Currently, the problem of test anxiety would seem to be best addressed in the classroom through teaching students better ways to study and take tests, and improved methods for exerting active self-control over their own processes of preparing for and taking exams. Instructors can make some environmental changes to reduce test-anxiety effects. Teachers who do their best to reduce tension, project hope and kindness, and model efficiency rather than panic are also exercising good preventive counseling. In addition, it appears essential to promote positive academic self-concepts. Of course, those students who exhibit extreme levels of debilitating test anxiety should be referred for professional assistance. The real challenge is to find and provide appropriate interventions that assist students in reducing test anxiety and that promote optimal learning and performance.
Implications for Practice

We have reviewed three aspects of the process of preparing for and taking examinations. The construct of test-wiseness presents a complex situation. High-scoring students report using some strategies to good effect. A presumed mechanism accounts for at least part of the test-wiseness effect: a student’s sensitivity to the various cues to the correct answer left by unpracticed item writers. Test-wise students apparently use a system of strategies to gain an advantage. To some extent, then, the strategies take advantage of certain errors in item construction and measurement. Test-wise students also seem to take risks and make guesses. In addition, test-wise students may be applying broader conceptual and reasoning skills in the test situation. Both the sensitivity to cues and the test-taking techniques appear to be teachable. Although there is significant variability in the level of effectiveness observed in studies of coaching, positive results have been obtained. Students of a wide range of abilities have been shown to profit from certain kinds of coaching programs (especially multicomponent programs). The consensus from measurement experts is that the more disadvantaged and deficient a student’s background, the greater the impact of test coaching. Finally, we reviewed the status of test anxiety as an aspect of test preparation and test taking. Test anxiety has been identified and studied for more than 65 years. In that period, research on the evaluation and treatment of the test-anxious student has continued to move ahead. It is now possible to teach students how to avoid the personal effects of anxiety, and to teach instructors how to arrange testing to reduce the likelihood that anxiety will adversely affect test scores. How might reading and learning skills professionals use the information presented here?
Perhaps by incorporating it into work with an individual student, by creating a test preparation unit in a class, or by developing a systematic program that is open to a wide audience. In any case, the actual form of the program will depend on the nature of the students, the needs of the institution, and the resources available. What follows is a set of suggested components for any program. Some of the suggestions are strongly supported by research evidence. Others are based on our own teaching and clinical experience.
Study Skills

Most test preparation programs assume that students know how to study. If there is any reason to think otherwise, the program must have a study skills component. Moreover, test preparation strategies are also good learning strategies which enhance students’ overall learning (Flippo, 2002b). The literature on study and learning skills instruction and on specific techniques for reading and studying textbook material is summarized in previous chapters. Without reiterating it here, we can say that only research-based strategies proven to ignite student learning (e.g., Willis, 2006) should be used in the process of test preparation. Such general study skills as getting organized, planning study time, and prioritizing tasks are critical parts of efficient test preparation (Flippo, 2015; Gettinger & Seibert, 2002).
Content Review

The review of successful test preparation programs is consistent on one point. Good programs are not simple content cram courses. They must be planned as an integrated package of experiences. In most cases, the presentation team is an interdisciplinary one. A reading and learning skills specialist instructs students in learning skills and test-wiseness strategies using specific content area information and materials needed for the test preparation. Depending on staff make-up, either the learning skills specialist or a psychologist helps students learn techniques to reduce test anxiety. Additionally, there must be a subject matter expert (SME) on the team.
The SME must be knowledgeable both in the content area of the test and in the pedagogical strategies appropriate to the subject. He or she must know where students typically have trouble. If it is with the conceptual aspect, the SME must be prepared to offer important ideas at a more basic level. If the problem is computational, there must be guided practice to make the applications clear. If the problems are perceptual, the SME must teach the necessary discrimination that a competent student should demonstrate. Independent content review for tests – within or beyond test preparation programs – should involve practicing such metacognitive strategies as self-testing and other types of repeated information retrieval proven to have significant impact on long-term retention (Karpicke et al., 2009).
Test Practice and Test Taking

The collection of suggestions for taking exams is vast. This chapter has reviewed the impact and value of many of them. Which techniques to teach in a particular situation is a decision for the reading and learning skills specialist. Beyond general research-based learning strategies relevant to test preparation and test taking (Flippo, 2002a, 2015; Hodges, Simpson, & Stahl, 2012; Willis, 2006), instructors can find a body of techniques that apply to specific item types. There is, for example, a set of general strategies for various objective items; specific strategies for multiple-choice, matching, completion, and true/false items; general strategies for essay test preparation; specific strategies for process and trace, compare and contrast, and discussion questions; as well as suggestions for group practice to prepare for these test questions from class lectures, readings, and notes (Flippo, 2002a, 2015). Additional suggestions exist for approaching objective items (e.g., LoSchiavo & Shatz, 2002) and answering essay questions (e.g., Raygor & Wark, 1980). Excellent sources on this topic are available. They should be consulted for management procedures (Flippo, 1984) and specific examples to illustrate techniques. The works by Boyd (1988), Flippo (2015), Jalongo, Twiest, and Gerlach (1996), Majors (1997), and Raygor and Wark (1980) are all appropriate for postsecondary and college students.
Test Anxiety

What can we conclude about the most effective ways to reduce test anxiety and increase grades for college students? A meta-analysis of 56 studies of test-anxiety reduction programs (Ergene, 2003) demonstrated that clients who complete treatment fare better than 74 percent of individuals who do not receive treatment. The author found that the most effective treatments combine skill-focused approaches with behavioral and cognitive approaches. Combined approaches produced high effect sizes, while techniques like meditation, physical exercise alone, Gestalt therapy, and humanistic counseling produced small effect sizes. Research literature has some clear suggestions for instructors and learning specialists who can help their students avoid test panic by incorporating the following approaches (see detailed explanation of items 1–3 in Flippo, 2015):
1 Help students prepare themselves mentally for a test by visualizing their ultimate goal and consciously establishing its connection to the test. Many studies have shown a direct correlation between clear, ultimate goals and resulting success (e.g., Andriessen, Phalet, & Lens, 2006; Cukras, 2006; Gabriele, 2007).
2 Emphasize the importance of getting physically in shape for a test, such as eating nutritious food, maintaining regular sleep habits, using breaks for physical activities or quiet relaxation time, and practicing basic breathing exercises. Studies show that irregular sleep patterns and inadequate breakfast lead to lower academic performance (Trockel, Barnes, & Egget, 2000).
On the other hand, physical exercise and healthy food are associated with higher levels of academic performance and self-esteem (Kristjánsson, Sigfúsdóttir, & Allegrante, 2010).
3 Teach physical relaxation techniques that can help overcome test-related anxiety (Casbarro, 2004; Gates, 2005; Supon, 2004; Viadero, 2004).
4 Teach methods for organizing course content through creating and systematizing course notes, which may increase the level of confidence with the course material and thus become crucial for test preparation (Barbarick & Ippolito, 2003; Cifuentes & Hsieh, 2003; Kobayashi, 2006).
5 Train students in cognitive self-instruction. Teach students to be aware of any negative internal self-talk and to counter it with a positive mind-set. Have students practice a self-instructional script that contains instructions on how to view a test as a challenge rather than a threat (Jamieson, Mendes, Blackstock, & Schmader, 2010; Maloney et al., 2014), recollect prior successful experiences, use test-wiseness strategies, focus on exam items, and give gentle self-support. Use interventions, such as expressive writing, proven to alleviate negative self-talk (Ramirez & Beilock, 2011).
6 Teach behavior self-control techniques. Have students select a specific place for study and write precise goals for time and number of pages to read or problems to solve. Keep a chart of the number of hours spent in study and the goals met. Contract for rewards to be taken only when the goals are met. The payoff may be tangible or verbal self-reinforcement.
7 Develop students’ test-wiseness skills by emphasizing strategies used by calm, high-performing students. Anxious students should be taught a checklist of steps to recall during a test (e.g., plan time, eliminate similar options, look for associations, look for specific determiners). Note that the literature gives no support for test-wiseness as an isolated treatment. Instruction in test-wiseness seems to work only when combined with other interventions.
Specific techniques to reduce stereotype threat are summarized by Smith and Hung (2008): lessening the importance of the task, reducing the salience of the stereotype, providing excuses for poor performance, claiming the test is not susceptible to the stereotype, altering ability conceptions from static to fluid, presenting people with successful role models from their own group, and blurring group identity. In one study, just teaching women about stereotype threat and its debilitating effect allowed the researchers to eliminate the impact of stereotype threat on the participants’ performance (Johns, Schmader, & Martens, 2005). Some institutions may be planning a structured program to combat test anxiety. More details of an effective treatment program for test-anxious students can be found in Boutin and Tosi (1983); Wark (1996, 1998, 2017); and Wark, Bennett, Emerson, and Ottenheimer (1981). Mindfulness techniques may be considered for inclusion in such programs. They have recently acquired attention and popularity due to benefits shown for students’ performance on high-stakes quizzes and exams by reducing their cognitive test anxiety (Bellinger, DeCaro, & Ralston, 2015). Some teachers may want to screen a class to pick out the students who are at risk for test anxiety. Those students identified by the screening can be referred for group or individual attention. Dozens of anxiety-measuring instruments are analyzed in Zeidner and Matthews (2010). The one specifically recommended for test anxiety is Sarason’s (1984) questionnaire, which has four scales assessing the following interrelated but separate aspects of vulnerability to test anxiety: bodily reactions (physical symptoms of anxiety, such as a racing heart and an upset stomach); tension (negative emotions, such as feeling nervous and jittery); worry (concern about failing during tests); and test-irrelevant thinking (external, personal concerns distracting from the test).
Evaluating an individual for test anxiety is essentially a clinical activity. Both test anxiety and study skills tests are helpful in this evaluation. Each gives some additional information that can lead to a diagnosis. Part of the process should be obtaining a history of school experiences and conducting an assessment to determine recent anxiety reactions to test taking.
Flippo, Appatova, and Wark
Recommendations for Future Research

Suggestions for future research in test preparation and test taking were implicit in many of the sources reviewed for this writing. From an informal summary across the sources, specific areas of concern seem to emerge. One is best characterized as a broad educational focus. Development of broadly applied intellectual skills, work habits, and problem-solving strategies would provide education – rather than coaching or short-term cramming – to pass certain test items (Flippo, 2015). Flippo (2014) suggested that test-wiseness training should start in the elementary grades and continue beyond, giving students continuous opportunities to practice their test preparation and test-taking skills as they experience real test situations in school. The evolving work in traditional and what we have referred to as alternative testing procedures has brought to light the positive effects of retesting in promoting learning. Advances in this area are demonstrated by the work of Roediger and Karpicke (2006) and others. The benefit of providing learning experiences that incorporate testing activities (test-enhanced learning) is well supported. Further investigation into test-enhanced learning and the standardization of instructional methods for integrating this approach at all levels of learning is strongly encouraged. Future research is needed in assessing the new testing procedures addressed earlier in the section Testing Procedures in the 21st Century. Specifically, preparation for online testing and relevant test-taking strategies must be investigated. The usefulness of publicly available online test preparation materials, and the exam outcomes associated with their use, also need to be addressed. If continued research can provide better strategies for test preparation, perhaps some of the negative aspects of testing can be reduced. More importantly, test-wiseness research may lead to new and effective methods of teaching and learning.
In the foreseeable future, tests will remain a fact of life for anyone moving up the educational ladder. It is interesting to consider how learning might change if many of the negative aspects of testing were removed.
References and Suggested Readings

Allen, G. J. (1971). Effectiveness of study counseling and desensitization in alleviating test anxiety in college students. Journal of Abnormal Psychology, 77, 282–289.
Allen, G. J., Elias, M. J., & Zlotlow, S. F. (1980). Behavioral interventions for alleviating test anxiety: A methodological overview of current therapeutic practices. In I. G. Sarason (Ed.), Test anxiety: Theory, research, and application. Hillsdale, NJ: Erlbaum.
Allsopp, D. H., Minskoff, E. H., & Bolt, L. (2005). Individualized course-specific strategy instruction for college students with learning disabilities and ADHD: Lessons learned from a model demonstration project. Learning Disabilities Research & Practice, 20(2), 103–118. doi:10.1111/j.1540-5826.2005.00126.x
Alnasir, F. A. (2004). The Watched Structure Clinical Examination (WASCE) as a tool of assessment. Saudi Medical Journal, 25(1), 71–74.
Alon, S. (2010). Racial differences in test preparation strategies: A commentary on shadow education, American style: Test preparation, the SAT and college enrollment. Social Forces, 89(2), 463–474. doi:10.1353/sof.2010.0053
Alting, T., & Markham, R. (1993). Test anxiety and distractibility. Journal of Research in Personality, 27, 134–137.
Anastasi, A. (1981). Diverse effects of training on tests of academic intelligence. In W. B. Schrader (Ed.), New directions for testing and measurement. San Francisco, CA: Jossey-Bass.
Anastasi, A. (1988). Psychological testing. New York, NY: Macmillan Publishing Company.
Andriessen, I., Phalet, K., & Lens, W. (2006). Future goal setting, task motivation and learning of minority and non-minority students in Dutch schools. British Journal of Educational Psychology, 76(4), 827–850. doi:10.1348/000709906X148150
Appel, M., Kronberger, N., & Aronson, J. (2011). Stereotype threat impairs ability building: Effects on test preparation among women in science and technology. European Journal of Social Psychology, 41(7), 904–913.
doi:10.1002/ejsp.835 Balch, W. R. (2007). Effects of test expectation on multiple-choice performance and subjective ratings. Teaching of Psychology, 34(4), 219–225. doi:10.1080/00986280701700094
Test Preparation and Test Taking
Bangert-Drowns, R. L., Kulik, J. K., & Kulik, C. C. (1983). Effects of coaching programs on achievement test performance. Review of Educational Research, 53, 571–585.
Barbarick, K. A., & Ippolito, J. A. (2003). Does the number of hours studied affect exam performance? Journal of Natural Resources and Life Sciences Education, 32, 32–35.
Becker, B. J. (1990). Coaching for the scholastic aptitude test: Further synthesis and appraisal. Review of Educational Research, 60, 373–417.
Beckert, L., Wilkinson, T. J., & Sainsbury, R. (2003). A needs-based study and examination skills course improves students’ performance. Medical Education, 37(5), 424–428.
Beilock, S. L., & Carr, T. H. (2005). When high-powered people fail: Working memory and “choking under pressure” in math. Psychological Science, 16, 101–105. doi:10.1111/j.0956-7976.2005.00789.x
Beilock, S. L., Rydell, R. J., & McConnell, A. R. (2007). Stereotype threat and working memory: Mechanisms, alleviation, and spillover. Journal of Experimental Psychology: General, 136(2), 256–276. doi:10.1037/0096-3445.136.2.256
Bellinger, D. B., DeCaro, M. S., & Ralston, P. A. S. (2015). Mindfulness, anxiety, and high-stakes mathematics performance in the laboratory and classroom. Consciousness and Cognition, 37, 123–132. doi:10.1016/j.concog.2015.09.001
Bettinger, E., & Baker, R. (2011). The effects of student coaching in college: An evaluation of a randomized experiment in student mentoring. Cambridge, MA: National Bureau of Economic Research.
Bettinger, E. P., & Baker, R. B. (2014). The effects of student coaching: An evaluation of a randomized experiment in student advising. Educational Evaluation and Policy Analysis, 36(1), 3–19. doi:10.3102/0162373713500523
Blanton, W. E., & Wood, K. D. (1984). Direct instructions in reading comprehension test-taking skills. Reading World, 24, 10–19.
Boutin, G. E. (1989). Treatment of test anxiety by rational stage directed hypnotherapy: A case study. Australian Journal of Clinical Hypnotherapy and Hypnosis, 10(2), 65–72.
Boutin, G. E., & Tosi, D. J. (1983). Modification of irrational ideas and test anxiety through rational stage directed hypnotherapy. Journal of Clinical Psychology, 39(3), 382–391.
*Boyd, R. T. C. (1988). Improving your test-taking skills. Washington, DC: American Institutes for Research.
Bridgeman, B., & Morgan, R. (1996). Success in college for students with discrepancies between performance on multiple-choice and essay tests. Journal of Educational Psychology, 88(2), 333–340.
*Buchmann, C., Condron, D. J., & Roscigno, V. J. (2010). Shadow education, American style: Test preparation, the SAT and college enrollment. Social Forces, 89(2), 435–461. doi:10.1353/sof.2010.0105
Casbarro, J. (2004). Reducing anxiety in the era of high-stakes testing. Principal, 83(5), 36–38.
Cassady, J. C. (2004). The influence of cognitive test anxiety across the learning–testing cycle. Learning and Instruction, 14(6), 569–592. doi:10.1016/j.learninstruc.2004.09.002
*Cassady, J. C. (2010). Anxiety in schools: The causes, consequences, and solutions for academic anxieties. New York, NY: Peter Lang.
Chamberlain, S., Daly, A. L., & Spalding, V. (2011). The fear factor: Students’ experiences of test anxiety when taking A-level examinations. Pastoral Care in Education, 29(3), 193–205. doi:10.1080/02643944.2011.599856
Chang, T. (1978). Test wiseness and passage-dependency in standardized reading comprehension test items. Dissertation Abstracts International, 39(10), 6084.
Chung-Herrera, B. G., Ehrhart, K. H., Ehrhart, M. G., Solamon, J., & Kilian, B. (2009). Can test preparation help to reduce the Black–White test performance gap? Journal of Management, 35(5), 1207–1227. doi:10.1177/0149206308328506
Cifuentes, L., & Hsieh, Y. J. (2003). Visualization for construction of meaning during study time: A qualitative analysis. International Journal of Instructional Media, 30(4), 407–417.
Cirino-Gerena, G. (1981). Strategies in answering essay tests. Teaching of Psychology, 8(1), 53–54.
Cizek, G. J., & Burg, S. S. (2006). Addressing test anxiety in a high-stakes environment: Strategies for classrooms and schools. Thousand Oaks, CA: Corwin Press.
Coffman, W. E. (1980). The scholastic aptitude test: A historical perspective. College Board Review, 117, A8–A11.
Cole, J. S., Bergin, D. A., & Whittaker, T. A. (2008). Predicting student achievement for low stakes tests with effort and task value. Contemporary Educational Psychology, 33(4), 609–624. doi:10.1016/j.cedpsych.2007.10.002
Cooper, S. (2004). Computerized practice tests boost student achievement. T.H.E. Journal, 32(2), 58–59.
Cornell, D. G., Krsnick, J. A., & Chiang, L. (2006). Students’ reactions to being wrongly informed of failing a high-stakes test: The case of the Minnesota Basic Standards Test. Educational Policy, 20(5), 718–751.
Flippo, Appatova, and Wark
Cortright, R. N., Collins, H. L., Rodenbaugh, D. W., & DiCarlo, S. E. (2003). Student retention of course content is improved by collaborative-group testing. Advances in Physiology Education, 27(3), 102–108.
Cukras, G. G. (2006). The investigation of study strategies that maximize learning for underprepared students. College Teaching, 54(1), 194–197. doi:10.3200/CTCH.54.1.194-197
Curriculum Review. (2004). Use parent nights to improve student test-taking skills. Curriculum Review, 43(5), 6.
Curriculum Review. (2005). Maximize testing-day performance with tips for student diet, dress and exercise. Curriculum Review, 44(5), 7.
DeAngelis, S. (2000). Equivalency of computer-based and paper-and-pencil testing. Journal of Allied Health, 29(3), 161–164.
De Vos, H. M., & Louw, D. A. (2006). The effect of hypnotic training programs on the academic performance of students. American Journal of Clinical Hypnosis, 49(2), 101–112.
DiBattista, D., & Gosse, L. (2006). Test anxiety and the immediate feedback assessment technique. The Journal of Experimental Education, 74(4), 311–327.
Dihoff, R. E., Brosvic, G. M., Epstein, M. L., & Cook, M. J. (2004). Provision of feedback during preparation for academic testing: Learning is enhanced by immediate but not delayed feedback. Psychological Record, 54(2), 207–231.
Dollinger, S. J., & Clark, M. H. (2012). Test-taking strategy as a mediator between race and academic performance. Learning and Individual Differences, 22(4), 511–517. doi:10.1016/j.lindif.2012.03.010
Duerson, M. C., Romrell, L. J., & Stevens, C. B. (2000). Impacting faculty teaching and student performance: Nine years’ experience with the objective structured clinical examination. Teaching and Learning in Medicine, 12(4), 176–182.
El Shallaly, G., & Ali, E. (2004). Use of video-projected structured clinical examination (ViPSCE) instead of the traditional oral (viva) examination in the assessment of final year medical students. Education for Health, 17(1), 17–26.
Elliott, S. N., DiPerna, J. C., Mroch, A. A., & Lang, S. C. (2004). Prevalence and patterns of academic enabling behaviors: An analysis of teachers’ and students’ ratings for a national sample of students. School Psychology Review, 33(2), 302–309.
Ellis, A. P. J., & Ryan, A. M. (2003). Race and cognitive-ability test performance: The mediating effects of test preparation, test-taking strategy use and self-efficacy. Journal of Applied Social Psychology, 33(12), 2607–2629. doi:10.1111/j.1559-1816.2003.tb02783.x
Embse, N. v. d., & Hasson, R. (2012). Test anxiety and high-stakes test performance between school settings: Implications for educators. Preventing School Failure: Alternative Education for Children and Youth, 56(3), 180–187. doi:10.1080/1045988X.2011.633285
Embse, N. v. d., Mata, A. D., Segool, N., & Scott, E. (2014). Latent profile analyses of test anxiety: A pilot study. Journal of Psychoeducational Assessment, 32(2), 165–172. doi:10.1177/0734282913504541
Embse, N. v. d., Schultz, B. K., & Draughn, J. D. (2015). Readying students to test: The influence of fear and efficacy appeals on anxiety and test performance. School Psychology International, 36(6), 620–637. doi:10.1177/0143034315609094
Embse, N. v. d., & Witmer, S. E. (2014). High-stakes accountability: Student anxiety and large-scale testing. Journal of Applied School Psychology, 30(2), 132–156. doi:10.1080/15377903.2014.888529
English, J. B., & Gordon, D. K. (2004). Successful student remediation following repeated failures on the HESI exam. Nurse Educator, 29(6), 266–268.
Epstein, M. L., Epstein, B. B., & Brosvic, G. M. (2001). Immediate feedback during academic testing. Psychological Reports, 88, 889–894.
Ergene, T. (2003). Effective interventions on test anxiety reduction: A meta-analysis. School Psychology International, 24(3), 313–328. doi:10.1177/01430343030243004
Eysenck, M. W., Derakshan, N., Santos, R., & Calvo, M. G. (2007). Anxiety and cognitive performance: Attentional control theory. Emotion, 7(2), 336–353. doi:10.1037/1528-3542.7.2.336
Eysenck, M. W., Payne, S., & Derakshan, N. (2005). Trait anxiety, visuospatial processing, and working memory. Cognition and Emotion, 19, 1214–1228.
Faunce, G., & Kenny, D. T. (2004). Effects of academic coaching on elementary and secondary school students. The Journal of Educational Research, 98(2), 115–126.
Faymonville, M., Boly, M., & Laureys, S. (2006). Functional neuroanatomy of the hypnotic state. Journal of Physiology – Paris, 99(4), 463–469. doi:10.1016/j.jphysparis.2006.03.018
Ferguson, K. J., Kreiter, C. D., Peterson, M. W., Rowat, J. A., & Elliott, S. T. (2002). Is that your final answer? Relationship of changed answers to overall performance on a computer-based medical school course examination. Teaching and Learning in Medicine, 14(1), 20–23.
Fischer, M. R., Hermann, S., & Kopp, V. (2005). Answering multiple-choice questions in high-stakes medical examinations. Medical Education, 39, 890–894.
Flavell, J. H. (1985). Cognitive development. Englewood Cliffs, NJ: Prentice-Hall.
*Flippo, R. F. (1984). A test bank for your secondary/college reading lab. Journal of Reading, 27(8), 732–733.
Flippo, R. F. (2002a). Study skills and strategies. In B. J. Guzzetti (Ed.), Literacy in America: An encyclopedia of history, theory, and practice (Vol. 2, pp. 631–632). Santa Barbara, CA: ABC-CLIO.
Flippo, R. F. (2002b). Test preparation. In B. J. Guzzetti (Ed.), Literacy in America: An encyclopedia of history, theory, and practice (Vol. 2, pp. 650–651). Santa Barbara, CA: ABC-CLIO.
Flippo, R. F. (2014). Assessing readers: Qualitative diagnosis and instruction (2nd ed.). New York, NY: Routledge/Taylor & Francis; Newark, DE: International Reading Association.
*Flippo, R. F. (with Gaines, R., Rockwell, K. C., Cook, K., & Melia, D.). (2015). Studying and learning in a high-stakes world: Making tests work for teachers. Lanham, MD: Rowman & Littlefield.
Flynn, J., & Anderson, B. (1977, Summer). The effects of test item cue sensitivity on IQ and achievement test performance. Educational Research Quarterly, 2(2), 32–39.
Frierson, H. T. (1984). Impact of an intervention program in minority medical students’ National Board Part I performance. Journal of the National Medical Association, 76, 1185–1190.
Frierson, H. T. (1991). Intervention can make a difference: The impact on standardized tests and classroom performance. In W. R. Allen, E. G. Epps, & N. Z. Haniff (Eds.), College in black and white: African American students in predominantly white and historically black public universities (pp. 225–238). Albany: State University of New York Press.
Gabriele, A. J. (2007). The influence of achievement goals on the constructive activity of low achievers during collaborative problem solving. British Journal of Educational Psychology, 77(1), 121–141. doi:10.1348/000709905X89490
Gaeddert, W. P., & Dolphin, W. D. (1981). Effects of facilitating and debilitating anxiety on performance and study effort in mastery-based and traditional courses. Psychological Reports, 48(3), 827–833. doi:10.2466/pr0.1981.48.3.827
Garland, M., & Treisman, U. P. (1993). The mathematics workshop model: An interview with Uri Treisman. Journal of Developmental Education, 16, 14–16.
Gates, G. S. (2005). Awakening to school community: Buddhist philosophy for educational reform. The Journal of Educational Thought (JET) / Revue de la Pensée Éducative, 39(2), 149–173.
Geiger, M. A. (1991). Changing multiple-choice answers: Do students accurately perceive their performance? Journal of Experimental Education, 59(3), 250–257.
Geiger, M. A. (1997). An examination of the relationship between answer changing, testwiseness, and examination performance. The Journal of Experimental Education, 66(1), 49–60.
Gettig, J. P. (2006). Investigating the potential influence of established multiple-choice test-taking cues on item response in a pharmacotherapy board certification examination preparatory manual: A pilot study. Pharmacotherapy, 26(4), 558–562.
Gettinger, M., & Seibert, J. K. (2002). Contributions of study skills to academic competence. School Psychology Review, 31(3), 350–366.
Glenn, R. E. (2004). Teach kids test-taking tactics. The Education Digest, 70(2), 61–63.
Gloe, D. (1999). Study habits and test-taking tips. Dermatology Nursing, 11(6), 439–449.
Goetz, T., Preckel, F., Zeidner, M., & Schleyer, E. (2008). Big fish in big ponds: A multilevel analysis of test anxiety and achievement in special gifted classes. Anxiety, Stress & Coping, 21, 185–198.
Green, B. F. (1981). Issues in testing: Coaching, disclosure, and ethnic bias. In W. B. Schrader (Ed.), New directions for testing and measurement. San Francisco, CA: Jossey-Bass.
Green, D. S., & Stewart, O. (1984). Test wiseness: The concept has no clothes. College Student Journal, 18(4), 416–424.
Gretes, J. A., & Green, M. (2000). Improving undergraduate learning with computer-assisted assessment. Journal of Research on Computing in Education, 33(1), 46–54.
Grodsky, E. (2010). Learning in the shadows and in the light of day: A commentary on “Shadow education, American style: Test preparation, the SAT and college enrollment.” Social Forces, 89(2), 475–481. doi:10.1353/sof.2010.0063
Gulek, C. (2003). Preparing for high stakes testing. Theory into Practice, 42(1), 42–50.
Hagtvet, K. A., & Johnsen, T. B. (1992). Advances in test anxiety research: Volume 7. Amsterdam/Lisse: Swets & Zeitlinger.
Hamann, C., Volkan, K., Fishman, M. B., Silvestri, R. C., Simon, S. R., & Fletcher, S. W. (2002). How well do second-year students learn physical diagnosis? Observational study of an objective structured clinical examination (OSCE). BMC Medical Education, 2(1), 1. doi:10.1186/1472-6920-2-1
Hamilton, K. (2003). Testing’s pains and gains. Black Issues in Higher Education, 20(8), 26–27.
Hammoud, M. M., & Barclay, M. L. (2002). Development of a web-based question database for students’ self-assessment. Academic Medicine, 77(9), 925.
Harrington, C. (2015). Student success in college: Doing what works! (2nd ed.). Boston, MA: Cengage Learning.
Hayati, A. M., & Ghojogh, A. N. (2008). Investigating the influence of proficiency and gender on the use of selected test-wiseness strategies in higher education. English Language Teaching, 1(2). doi:10.5539/elt.v1n2p169
Hebert, S. W. (1984). A simple hypnotic approach to treat test anxiety in medical students and residents. Journal of Medical Education, 59(10), 841–842.
*Hodges, R., Simpson, M. L., & Stahl, N. A. (2012). Teaching study strategies in developmental education: Readings on theory, research, and best practice. Boston, MA, and New York, NY: Bedford/St. Martin’s.
Hollandsworth, J. G., Galazeski, R. C., Kirkland, K., Jones, G. E., & Norman, L. R. V. (1979). An analysis of the nature and effects of test anxiety: Cognitive, behavior, and physiological components. Cognitive Therapy and Research, 3(2), 165–180.
Hong, C. H., McLean, D., Shapiro, J., & Lui, H. (2002). Using the internet to assess and teach medical students in dermatology. Journal of Cutaneous Medical Surgery, 6(4), 315–319.
Hoover, E. (2005). Test-preparation companies go portable with new products. The Chronicle of Higher Education, 51(49), 37.
Hoover, E. (2006). College Board clashes with N.Y. lawmaker over report on SAT scoring snafu. The Chronicle of Higher Education, 52(46), 1.
Hoover, J. P. (2002). A dozen ways to raise students’ test performance. Principal, 81(3), 17–18.
Horne, A. M., & Matson, J. L. (1977). A comparison of modeling, desensitization, flooding, study skills, and control groups for reducing test anxiety. Behavior Therapy, 8, 1–8.
Huck, S. (1978). Test performance under the condition of known item difficulty. Journal of Educational Measurement, 15(1), 53–58.
Hughes, C. A., Schumaker, J. B., Deschler, D. D., & Mercer, C. D. (1988). The test-taking strategy. Lawrence, KS: Excellent Enterprises.
Hurren, B. L., Rutledge, M., & Garvin, A. B. (2006). Team testing for individual success. Phi Delta Kappan, 87(6), 443–447.
Jalongo, M. R., Twiest, M. M., & Gerlach, G. J. (1996). The college learner: How to survive and thrive in an academic environment. Columbus, OH: Merrill.
Jamieson, J. P., Mendes, W. B., Blackstock, E., & Schmader, T. (2010). Turning the knots in your stomach into bows: Reappraising arousal improves performance on the GRE. Journal of Experimental Social Psychology, 46(1), 208–212. doi:10.1016/j.jesp.2009.08.015
Jiang, H., White, M. P., Greicius, M. D., Waelde, L. C., & Spiegel, D. (2016). Brain activity and functional connectivity associated with hypnosis. Cerebral Cortex, 27(8), 4083–4093. doi:10.1093/cercor/bhw220
Johns, M., Schmader, T., & Martens, A. (2005). Knowing is half the battle: Teaching stereotype threat as a means of improving women’s math performance. Psychological Science, 16(3), 175–179. doi:10.1111/j.0956-7976.2005.00799.x
Jones, L., & Petruzzi, D. C. (1995). Test anxiety: A review of theory and current treatment. Journal of College Student Psychotherapy, 10(1), 3–15.
Jones, P., & Kaufman, G. (1975). The differential formation of response sets by specific determiners. Educational and Psychological Measurement, 35(4), 821–833.
Junion-Metz, G. (2004). Testing, testing. School Library Journal, 50(1), 34.
Karpicke, J. D., Butler, A. C., & Roediger, H. L., III. (2009). Metacognitive strategies in student learning: Do students practise retrieval when they study on their own? Memory, 17(4), 471–479. doi:10.1080/09658210802647009
Khan, K. S., Davies, D. A., & Gupta, J. K. (2001). Formative assessment using multiple true-false questions on the internet: Feedback according to confidence about correct knowledge. Medical Teacher, 23(2), 158–163.
Kies, S. M., Williams, B. D., & Freund, G. G. (2006). Gender plays no role in student ability to perform on computer-based examinations. BMC Medical Education, 6(1), 57. doi:10.1186/1472-6920-6-57
Kim, Y. H., & Goetz, E. T. (1993). Strategic processing of test questions: The test marking responses of college students. Learning and Individual Differences, 5(3), 211–218. doi:10.1016/1041-6080(93)90003-B
Kobayashi, K. (2006). Combined effects of note-taking/-reviewing on learning and the enhancement through interventions: A meta-analytic review. Educational Psychology, 26(3), 459–477.
Kogan, J. R., Bellini, L. M., & Shea, J. A. (2002). Implementation of the mini-CEX to evaluate medical students’ clinical skills. Academic Medicine, 77(11), 1156–1157.
Kristjánsson, Á., Sigfúsdóttir, I., & Allegrante, J. P. (2010). Health behavior and academic achievement among adolescents: The relative contribution of dietary habits, physical activity, body mass index, and self-esteem. Health Education & Behavior, 37(1), 51–64. doi:10.1177/1090198107313481
Lam, L. T. (2004). Test success, family style. Educational Leadership, 61(8), 44–47.
Lester, D. (1991). Speed and performance on college course examinations. Perceptual and Motor Skills, 73, 1090.
Liebert, R. M., & Morris, L. W. (1967). Cognitive and emotional components of test anxiety: A distinction and some initial data. Psychological Reports, 20, 975–978.
LoSchiavo, F., & Shatz, M. (2002). Students’ reasons for writing on multiple-choice examinations. Teaching of Psychology, 29(2), 138–140.
Lowe, P. A., & Lee, S. W. (2007). Factor structure of the test anxiety inventory for children and adolescents (TAICA) scores across gender among students in elementary and secondary school settings. Journal of Psychoeducational Assessment, 26(3), 231–246. doi:10.1177/0734282907303773
Lowe, P. A., Lee, S. W., Witteborg, K. M., Prichard, K. W., Luhr, M. E., Cullinan, C. M., … Janik, M. (2007). The test anxiety inventory for children and adolescents (TAICA): Examination of the psychometric properties of a new multidimensional measure of test anxiety among elementary and secondary school students. Journal of Psychoeducational Assessment, 26(3), 215–230. doi:10.1177/0734282907303760
Lusk, M., & Conklin, L. (2003). Collaborative testing to promote learning. Journal of Nursing Education, 42(3), 121–124.
Lynch, D., & Smith, B. (1975). Item response changes: Effects on test scores. Measurement and Evaluation in Guidance, 7(4), 220–224.
MacCann, R., Eastment, B., & Pickering, S. (2002). Responding to free response examination questions: Computer versus pen and paper. British Journal of Educational Technology, 33, 173–188.
Mahamed, A., Gregory, P. A. M., & Austin, Z. (2006). “Testwiseness” among international pharmacy graduates and Canadian senior pharmacy students. American Journal of Pharmaceutical Education, 70(6), 1–6. doi:10.5688/aj7006131
Majors, R. E. (1997). Is this going to be on the test? Upper Saddle River, NJ: Gorsuch Scarisbrick.
Maloney, E. A., Sattizahn, J. R., & Beilock, S. L. (2014). Anxiety and cognition. Wiley Interdisciplinary Reviews: Cognitive Science, 5(4), 403–411. doi:10.1002/wcs.1299
Mandler, G., & Sarason, S. B. (1952). A study of anxiety and learning. Journal of Abnormal and Social Psychology, 47, 166–173.
Mattheos, N., Nattestad, A., Falk-Nilsson, E., & Attstrom, R. (2004). The interactive examination: Assessing students’ self-assessment ability. Medical Education, 38(4), 378–389.
Mavis, B. E. (2000). Does studying for an objective structured clinical examination make a difference? Medical Education, 34(10), 808–812.
Maylone, N. (2004). Do tests show more than “test think”? Education Digest: Essential Readings Condensed for Quick Review, 69(8), 16–20.
McClain, L. (1983). Behavior during examinations: A comparison of A, C, and F students. Teaching of Psychology, 10(2), 69–71.
McCordick, S. M., Kaplan, R. M., Finn, M. E., & Smith, S. H. (1979). Cognitive behavior modification and modeling for test anxiety. Journal of Consulting and Clinical Psychology, 47(2), 419–420.
McDaniel, M. A., Roediger, H. L., & McDermott, K. B. (2007). Generalizing test-enhanced learning from the laboratory to the classroom. Psychonomic Bulletin & Review, 14(2), 200–206.
McGaghie, W. C., Downing, S. M., & Kubilius, R. (2004). What is the impact of commercial test preparation courses on medical examination performance? Teaching and Learning in Medicine, 16(2), 202–211.
Meinster, M. O., & Rose, K. C. (1993, March). Cooperative testing in introductory-level psychology courses. Teaching of psychology: Ideas and innovations. Proceedings of the Annual Conference on Undergraduate Teaching of Psychology, Ellenville, NY.
Messick, S. (1981). The controversy over coaching: Issues of effectiveness and equity. In W. B. Schrader (Ed.), New directions for testing and measurement (pp. 35–46). San Francisco, CA: Jossey-Bass.
Milia, L. D. (2007). Benefiting from multiple-choice exams: The positive impact of answer switching. Educational Psychology, 27(5), 607–615.
*Millman, J. C., Bishop, C. H., & Ebel, R. (1965). An analysis of test wiseness. Educational and Psychological Measurement, 25, 707–727.
Mitchell, N., & Melton, S. (2003). Collaborative testing: An innovative approach to test taking. Nurse Educator, 28(2), 95–97.
Mohl, J. C., Finigan, D. M., & Scharff, L. M. (2016). The effect of a suggestion to generate interest in a reading in highly hypnotizable people: A promising use in education. International Journal of Clinical and Experimental Hypnosis, 64(2), 239–260. doi:10.1080/00207144.2016.1131592
Nackman, G. B., Griggs, M., & Galt, J. (2006). Implementation of a novel web-based objective structured clinical evaluation. Surgery, 140(2), 206–211.
Nairn, A. (1980). The reign of ETS: The corporation that makes up minds. Washington, DC: Learning Research Project.
Nelson, D. W., & Knight, A. E. (2010). The power of positive recollections: Reducing test anxiety and enhancing college student efficacy and performance. Journal of Applied Social Psychology, 40(3), 732–745. doi:10.1111/j.1559-1816.2010.00595.x
Neugent, L. W. (2004). Getting ready for online testing. T.H.E. Journal, 12(34), 36.
Nieswiadomy, R. M., Arnold, W. K., & Garza, C. (2001). Changing answers on multiple-choice examinations taken by baccalaureate nursing students. Journal of Nursing Education, 40(3), 142–144.
Norcini, J. J., Blank, L. L., Duffy, F. D., & Fortna, G. S. (2003). The Mini-CEX: A method for assessing clinical skills. Annals of Internal Medicine, 138, 476–481.
Onwuegbuzie, A. J. (1994). Examination-taking strategies used by college students in statistics courses. College Student Journal, 28(2), 163–174.
Organisation for Economic Co-operation and Development. (2006). Demand-sensitive schooling? Evidence and issues. Paris: OECD Publishing. doi:10.1787/9789264028418-en
Paul, C., & Rosenkoetter, J. (1980). Relationship between completion time and test score. Southern Journal of Educational Research, 12(2), 151–157.
Pekrun, R. (1992). The impact of emotions on learning and achievement: Towards a theory of cognitive/motivational mediators. Applied Psychology: An International Review, 41(4), 359–376.
*Penfield, D., & Mercer, M. (1980). Answer changing and statistics. Educational Research Quarterly, 5(5), 50–57.
Peterson, M. W., Gordon, J., Elliott, S., & Kreiter, C. (2004). Computer-based testing: Initial report on extensive use in a medical school curriculum. Teaching and Learning in Medicine, 16(1), 51–59.
Peverly, S. T., Ramaswamy, V., Brown, C., Sumowski, J., Alidoost, M., & Garner, J. (2007). Skill in lecture note-taking: What predicts? Journal of Educational Psychology, 99, 167–180.
Pomplun, M., Frey, S., & Becker, D. F. (2002). The score equivalence of paper-and-pencil and computerized versions of a speeded test of reading comprehension. Educational & Psychological Measurement, 62, 337–354.
Powers, D. E. (1993). Coaching for the SAT: A summary of the summaries and an update. Educational Measurement: Issues and Practice, 12(2), 24–30, 39.
Powers, D. E., & Leung, S. W. (1995). Answering the new SAT reading comprehension questions without the passages. Journal of Educational Measurement, 32(2), 105–129.
Putwain, D. W. (2007). Test anxiety in UK schoolchildren: Prevalence and demographic patterns. British Journal of Educational Psychology, 77(3), 579–593. doi:10.1348/000709906X161704
Putwain, D. W. (2008a). Do examinations stakes moderate the test anxiety–examination performance relationship? Educational Psychology, 28(2), 109–118.
Putwain, D. W. (2008b). Test anxiety and GCSE performance: The effect of gender and socio-economic background. Educational Psychology in Practice, 24(4), 319–334. doi:10.1080/02667360802488765
Putwain, D. W., & Best, N. (2011). Fear appeals in the primary classroom: Effects on test anxiety and test grade. Learning and Individual Differences, 21(5), 580–584. doi:10.1016/j.lindif.2011.07.007
Putwain, D. W., & Best, N. (2012). Do highly test anxious students respond differentially to fear appeals made prior to a test? Research in Education, 88, 881–910. doi:10.7227/rie.88.1.1
Putwain, D. W., & Daly, A. L. (2014). Test anxiety prevalence and gender differences in a sample of English secondary school students. Educational Studies, 40(5), 554–570.
Putwain, D. W., & Remedios, R. (2014). The scare tactic: Do fear appeals predict motivation and exam scores? School Psychology Quarterly, 29(4), 503–516. doi:10.1037/spq0000048
Putwain, D. W., & Symes, W. (2011). Teachers’ use of fear appeals in the mathematics classroom: Worrying or motivating students? British Journal of Educational Psychology, 81(3), 456–474. doi:10.1348/2044-8279.002005
Raffety, B. D., Smith, R. E., & Ptacek, J. T. (1997). Facilitating and debilitating trait anxiety, situational anxiety, and coping with an anticipated stressor: A process analysis. Journal of Personality and Social Psychology, 72(4), 892–906. doi:10.1037/0022-3514.72.4.892
Rainville, P., Hofbauer, R. K., Paus, T., Duncan, G. H., Bushnell, M. C., & Price, D. D. (1999). Cerebral mechanisms of hypnotic induction and suggestion. Journal of Cognitive Neuroscience, 11(1), 110–125. doi:10.1162/089892999563175
Ramirez, G., & Beilock, S. L. (2011). Writing about testing worries boosts exam performance in the classroom. Science, 331(6014), 211–213. doi:10.1126/science.1199427
Rao, S. P., Collins, H. L., & DiCarlo, S. E. (2002). Collaborative testing enhances student learning. Advances in Physiology Education, 26(1), 37–41.
Rath, S. (2008). Converting distress into stress. Social Science International, 24(1), 98–103.
*Raygor, A. L., & Wark, D. M. (1980). Systems for study (2nd ed.). New York, NY: McGraw-Hill.
Raz, A., Shapiro, T., Fan, J., & Posner, M. I. (2002). Hypnotic suggestion and the modulation of Stroop interference. Archives of General Psychiatry, 59(12), 1155–1161. doi:10.1001/archpsyc.59.12.1155
Rentschler, D. D., Eaton, J. E., Cappiello, J., McNally, S. F., & McWilliam, P. (2007). Evaluation of undergraduate students using objective structured clinical evaluation. Journal of Nursing Education, 46(3), 135–139.
Reteguiz, J. (2006). Relationship between anxiety and standardized patient test performance in the medicine clerkship. Journal of General Internal Medicine, 21, 415–418.
Richardson, K., & Trudeau, K. J. (2003). A case for problem-based collaborative learning in the nursing classroom. Nurse Educator, 28(2), 83–88.
Roediger, H. L., & Karpicke, J. D. (2006). Test-enhanced learning: Taking memory tests improves long-term retention. Psychological Science, 17(3), 249–255.
Roediger, H. L., & Marsh, E. J. (2005). The positive and negative consequences of multiple-choice testing. Journal of Experimental Psychology, 31(5), 1155–1159.
Rosário, P., Núñez, J. C., Salgado, A., González-Pienda, J. A., Valle, A., Joly, C., & Bernardo, A. (2008). Test anxiety: Associations with personal and family variables. Psicothema, 20(4), 563–570.
Russo, A., & Warren, S. H. (1999). Collaborative test taking. College Teaching, 47(1), 18–20.
Samson, G. E. (1985). Effects of training in test-taking skills on achievement test performance. Journal of Educational Research, 78, 261–266.
Sapp, M. (1991). Hypnotherapy and test anxiety: Two cognitive-behavioral constructs. The Australian Journal of Clinical Hypnotherapy and Hypnosis, 12(1), 25–31.
Sarason, I. G. (1984). Stress, anxiety, and cognitive interference: Reactions to tests. Journal of Personality and Social Psychology, 46, 929–938.
Sarason, I. G. (1988). Anxiety, self-preoccupation and attention. Anxiety Research, 1, 3–8.
Sarnacki, R. (1979). An examination of test wiseness in the cognitive test domain. Review of Educational Research, 49(2), 252–279.
Saunders, T., & Maloney, K. (2004). Minority scholars: Diversity and achievement. Principal Leadership, 5(4), 39–41.
Scherbaum, C. A., Blanshetyn, V., Marshall-Wolp, E., McCue, E., & Strauss, R. (2011). Examining the effects of stereotype threat on test-taking behaviors. Social Psychology of Education, 14(3), 361–375. doi:10.1007/s11218-011-9154-2
Schmader, T., Johns, M., & Forbes, C. (2008). An integrated process model of stereotype threat effects on performance. Psychological Review, 115, 336–356.
Schoonheim-Klein, M. E., Habets, L. L. M. H., Aartman, I. H. A., Vleuten, C. P. M. v. d., Hoogstraten, J., & Velden, U. v. d. (2006). Implementing an objective structured clinical examination (OSCE) in dental education: Effects on students’ learning strategies. European Journal of Dental Education, 10(4), 226–235. doi:10.1111/j.1600-0579.2006.00421.x
Schreiber, E. H. (1997). Use of group hypnosis to improve college students’ achievement. Psychological Reports, 80(2), 636–638. doi:10.2466/pr0.1997.80.2.636
Schreiber, E. H., & McSweeney, P. A. (2004). Use of group hypnosis to improve academic achievement of college freshmen. Australian Journal of Clinical and Experimental Hypnosis, 32(2), 153–156.
Schreiber, E. H., & Schreiber, K. N. (1998). Use of hypnosis and Jacobson’s relaxation techniques for improving academic achievement of college students. Perceptual and Motor Skills, 86(1), 85–86. doi:10.2466/pms.1998.86.1.85
Schwartz, A. E. (2004). Scoring higher on math tests. The Education Digest, 69(8), 39–43.
Schwarz, S. P., McMorris, R. F., & DeMers, L. P. (1991). Reasons for changing answers: An evaluation using personal interviews. Journal of Educational Measurement, 28(2), 163–171.
Schwarzer, R., Ploeg, H. M. v. d., & Spielberger, C. D. (1989). Advances in test anxiety research: Volume 6. Berwyn, PA: Swets North America.
Scott, C., Palmisano, P., Cunningham, R., Cannon, N., & Brown, S. (1980). The effects of commercial coaching for the NBME Part 1 examination. Journal of Medical Education, 55(9), 733–742.
275
Flippo, Appatova, and Wark
Segool, N. K., Carlson, J. S., Goforth, A. N., Embse, N. v. d., & Barterian, J. A. (2013). Heightened test anxiety among young children: Elementary school students’ anxious responses to high-stakes testing. Psychology in the Schools, 50, 489–499. doi:10.1002/pits.21689 Segool, N. K., Embse, N. v. d., Mata, A. D., & Gallant, J. (2014). Cognitive behavioral model of test anxiety in a high-stakes context: An exploratory study. School Mental Health, 6(1), 50–61. doi:10.1007/ s12310-013-9111-7 Seiberling, C. (2005). Cyber security: A survival guide. Technology & Learning, 25(7), 31–36. Seipp, B. (1991). Anxiety and academic performance: A meta-analysis of findings. Anxiety Research, 4(1), 27–41. Shatz, M. A. (1985). Students’ guessing strategies: Do they work? Psychological Reports, 57, 1167–1168. Shatz, M. A., & Best, J. B. (1987). Students’ reasons for changing answers on objective tests. Teaching of Psychology, 14(4), 241–242. doi:10.1207/s15328023top1404_17 Sim, S., Azila, N., Lian, L., Tan, C., & Tan, N. (2006). A simple instrument for the assessment of student performance in problem-based learning tutorials. Annals of the Academy of Medicine, Singapore, 35, 634–641. Skinner, N. F. (2009). Academic folk wisdom: Fact, fiction and falderal. Psychology Learning & Teaching, 8(1), 46–50. doi:10.2304/plat.2009.8.1.46 Slack, W. V., & Porter, D. (1980). The scholastic aptitude test: A critical appraisal. Harvard Educational Review, 50, 154–175. Slakter, M. J., Koehler, R. A., & Hampton, S. H. (1970). Grade level, sex, and selected aspects of test w iseness. Journal of Educational Measurement, 7, 119–122. Smith, C. S., & Hung, L. (2008). Stereotype threat: Effects on education. Social Psychology of Education, 11(3), 243–257. doi:10.1007/s11218-008-9053-3 Smith, J. (1982). Converging on correct answers: A peculiarity of multiple-choice items. Journal of Educational Measurement, 19(3), 211–220. Smith, M., Coop, R., & Kinnard, P. W. (1979). 
The effect of item type on the consequences of changing answers on multiple-choice tests. Journal of Educational Measurement, 16(3), 203–208. Sommer, R., & Sommer, B. A. (2009). The dreaded essay exam. Teaching of Psychology, 36(3), 197–199. doi:10.1080/00986280902959820 Stanton, H. E. (1984). Changing the experience of test anxiety. International Journal of Eclectic Psychotherapy, 3(2), 23–28. Stanton, H. E. (1994). Self-hypnosis: One path to reduced test anxiety. Contemporary Hypnosis, 11(1), 14–18. Steele, C. M., & Aronson, J. (1995). Stereotype threat and the intellectual test performance of African A mericans. Journal of Personality and Social Psychology, 69(5), 797–811. doi:10.1037/0022-3514.69.5.797 Strang, H. (1977). The effects of technical and unfamiliar options on guessing on multiple-choice test items. Journal of Educational Measurement, 14(3), 253–260. Strang, H. (1980). The effects of technically worded options on multiple-choice test performance. Journal of Educational Research, 73(5), 262–265. Supon, V. (2004). Implementing strategies to assist test-anxious students. Journal of Instructional Psychology, 31(4), 292–296. Swinton, S. S., & Powers, D. E. (1983). A study of the effects of special preparation of GRE analytical scores and item types. Journal of Educational Psychology, 75(1), 104–115. Tervo, R. C., Dimitrievich, E., Trugillo, A. L., Whittle, K., Redinius, P., & Wellman, L. (1997). The objective structured clinical examination (OSCE) in the clinical clerkship: An overview. South Dakota Journal of Medicine, 50(5), 153–156. Thiede, K. W. (1996). The relative importance of anticipated test format and anticipated test difficulty on performance. The Quarterly Journal of Experimental Psychology, 49 A (4), 901–918. doi:10.1080/027249896392351 Thomas, C. L., Cassady, J. C., & Finch, W. H. (2017). Identifying severity standards on the cognitive test anxiety scale: Cut score determination using latent class and cluster analysis. 
Journal of Psychoeducational Assessment. doi:10.1177/0734282916686004 *Thorndike, R. L. (1951). Reliability. In E. F. Lindquist (Ed.), Educational measurement (pp. 560–620). Washington, DC: American Council on Education. Tobias, S. (1985). Test anxiety: Interference, defective skills and cognitive capacity. Educational Psychologist, 3, 135–142. Townsend, A. H., McLlvenny, S., Miller, C. F., & Dunn, E. V. (2001). The use of an objective structured clinical examination (OSCE) for formative and summative assessment in a general practice clinical attachment and its relationship to final medical school examination performance. Medical Education, 35, 841–846.
276
Test Preparation and Test Taking
Tozoglu, D., Tozoglu M. D., Gurses A., & Dogar, C. (2004). The students’ perceptions: Essay versus multiple-choice type exams. Journal of Baltic Science Education, 6, 52–59. Retrieved from Academic Search Premiere database. Trockel, M. T., Barnes, M. D., & Egget, D. L. (2000). Health-related variables and academic performance among first-year college students: Implications for sleep and other behaviors. Journal of American College Health, 49(3), 125–131. doi:10.1080/07448480009596294 Tryon, G. S. (1980). The measurement and treatment of test anxiety. Review of Educational Research, 2, 343–372. Viadero, D. (2004). Researchers explore ways to lower students’ stress. Education Week, 23(38), 8. Volante, L. (2006). Toward appropriate preparation for standardized achievement testing. Journal of Educational Thought, 40(2), 129–144. Wark, D. M. (1996). Teaching college students better learning skills using self-hypnosis. American Journal of Clinical Hypnosis, 38(4), 277–287. Wark, D. M. (1998). Alert hypnosis: History and applications. In W. J. Matthews, & J. H. Edgette (Eds.), Creative thinking and research in brief therapy: Solutions, strategies, narratives (pp. 387–406). Philadelphia, PA: Brunner/Mazel. Wark, D. M. (2006). Alert hypnosis: A review and case report. American Journal of Clinical Hypnosis, 48(4), 291–300. *Wark, D. M. (2011). Traditional and alert hypnosis for education: A literature review. American Journal of Clinical Hypnosis, 54(2), 96–106. merican Wark, D. (2015). Traditional and alert hypnotic phenomena: Development through anteriorization. A Journal of Clinical Hypnosis, 57, 254–266. Wark, D. M. (2017). The induction of eyes-open alert waking hypnosis. In M. P. Jensen (Ed.), The art and science of hypnotic induction: Favorite methods of master clinicians (pp. 90–113). Kirkland, WA: Denny Creek Press. Wark, D. M., Bennett, J. M., Emerson, N. M., & Ottenheimer, H. (1981). Reducing test anxiety effects on reading comprehension of college students. In G. 
H. McNinch (Ed.), Comprehension: Process and product (pp. 60–62). Athens, GA: American Reading Forum. Wasson, B. (1990). Teaching low-achieving college students a strategy for test taking. College Student Journal, 24(4), 356–360. Weber, D. J., & Hamer, R. M. (1982). The effect of review course upon student performance on a standardized medical college examination. Evaluation and the Health Professions, 5(3), 35–43. Welsh, M. E., Eastwood, M., & D’Agostino, J. V. (2014). Conceptualizing teaching to the test under standards-based reform. Applied Measurement in Education, 27(2), 98–114. doi:10.1080/08957347.2014.880439 Werner, L. S., & Bull, B. S. (2003). The effect of three commercial coaching courses on Step One USMLE performance. Medical Education, 37(6), 527–531. Whitaker Sena, J. D., Lowe, P. A., & Lee, S. W. (2007). Significant predictors of test anxiety among students with and without learning disabilities. Journal of Learning Disabilities, 40(4), 360–376. doi:10.1177/00222 194070400040601 Wildemouth, B. (1977). Test anxiety: An extensive bibliography. Princeton, NJ: Educational Testing Service. Wilkinson, T. M., & Wilkinson, T. J. (2013). Preparation courses for a medical admissions test: Effectiveness contrasts with opinion. Medical Education, 47(4), 417–424. doi:10.1111/medu.12124 Willis, J. (2006). Research-based strategies to ignite student learning: Insights from a neurologist and classroom teacher. Alexandria, VA: Association for Supervision and Curriculum Development. Wine, J. (1971). Test anxiety and direction of attention. Psychological Bulletin, 76, 92–104. Wine, J. (1982). Evaluation anxiety-a cognitive attentional construct. In H. W. Krohne & L. Laux (Eds.), Achievement, stress and anxiety (pp. 217–219). Washington, DC: Hemisphere. Xie, Q. (2015). Do component weighting and testing method affect time management and approaches to test preparation? A study on the washback mechanism. System, 50, 56–68. doi:10.1016/j.system. 
2015.03.002 Xie, Q., & Andrews, S. (2013). Do test design and uses influence test preparation? Testing a model of washback with structural equation modeling. Language Testing, 30(1), 49–70. doi:10.1177/0265532212442634 Yates, E. L. (2001). SAT boot camp teaches students the rules of the test-ta king game. Black Issues in Higher Education, 18(1), 30–31. Yerkes, R. M., & Dodson, J. (1908). The relation of strength of stimulus to rapidity of habit-formation. Journal of Comparative Neurology and Psychology, 18, 459–482. Zary, N., Johnson, G., Boberg, J., & Fors, U. (2006). Development, implementation and pilot evaluation of a web-based virtual patient case simulation environment – Web-SP. BMC Medical Education, 6, 10.
277
Flippo, Appatova, and Wark
*Zeidner, M. (1998). Test anxiety: The state of the art. New York, NY: Plenum Press. Zeidner, M., & Matthews, G. (2010). Anxiety 101. New York, NY: Springer Publishing Company. Zhang, C., Rauchwarger, A., Toth, C., & O’Connell, M. (2004). Student USMLE Step 1 Preparation and Performance. Advances in Health Sciences Education, 9, 291–297. Zimbardo, P. G., Butler, L. D., & Wolfe, V. A. (2003). Cooperative college examinations: More gain, less pain when students share information and grades. The Journal of Experimental Education, 71(2), 101–125. Zimmer, J. W., & Hocevar, D. J. (1994). Effects of massed versus distributed practice of test taking on achievement and test anxiety. Psychological Reports, 74, 915–919.
278
Part IV
Programs and Assessment
David R. Arendale, University of Minnesota
The central themes of these chapters are the programmatic approaches for reading and study strategies, the assessment of students and types of assessments, and the reading tests available and used by various college programs. The chapters in this section are essential to the national conversation about who is ready for college, how to determine this, and what programs and services students need. The section begins with the Bridge Programs chapter, authored by David R. Arendale and Nue Lor Lee. This chapter traces the growing sophistication of academic and social programs designed to improve transitions into and throughout the college experience for a diverse student population. Next, through their Program Management chapter, Karen S. Agee, Russ Hodges, and Amarilis M. Castillo explore the complexity and nuance of effective management of programmatic approaches to support student academic success. They identify the growth of national certifications and standards to guide these programs. Jan Norton and Karen S. Agee provide a natural continuation of this topic in their Program Assessment chapter. They identify the assessment of student-learning outcomes (SLOs) as an essential component of program assessment. Complementing the previous macro approach to evaluation is Tina Kafka's chapter on Student Assessment. She identifies assessments that differentiate the learning environment for individual students. The final chapter in this section is Reading Tests by Rona Flippo, Sonya Armstrong, and Jeanne Shay Schumm. The authors provide a sweeping and comprehensive review of reading tests used with college students. They explore how the assessments use different criteria to determine the readiness of college aspirants to read and understand their college texts and other sources.
16 Bridge Programs
David R. Arendale, University of Minnesota
Nue Lor Lee, University of Michigan
Introduction
Bridge programs are designed to ease the transition of students from secondary education to postsecondary institutions and are customized to assist particular subpopulations of students to be successful. They were originally created for students with higher rates of academic difficulty and withdrawal than the general student population. At that time, the focus of most bridge programs was the preparation of high school students for the increased academic rigor they would encounter in college. In recent years, a variety of purposes have been added such that bridge programs now intend to meet the needs of culturally diverse students who are underrepresented in college, increase student success in particular college degree programs, and increase the number of students who aspire to college entry and are prepared for college. Bridge programs range from an intensive program of coordinated courses and noncredit learning experiences to a single academic term course or a workshop lasting a day or two. One example of the expanded role of college bridge programs is their use at multiple transition points throughout secondary and postsecondary education. Previously, these programs were a programmatic effort by colleges to span high school graduation and fall college matriculation. Transition presents different challenges for students, especially first-generation college, historically underrepresented, and economically disadvantaged students who lack the social capital commonly held by privileged students (Engle & Tinto, 2008). More than academic competency development, bridge programs often deal with psychosocial factors that impact academic success (Allen & Bir, 2011–2012; Tinto, 1993, 2012).
A review of the professional literature revealed a strong emphasis on bridge programs supporting higher student success in science, technology, engineering, and mathematics (STEM), since these fields seek increased enrollment and graduation among the aforementioned demographic groups. Bridge programs have been common since the founding of U.S. postsecondary institutions in the 1600s. At that time, most students attending college were white males from affluent backgrounds; women and students of color were not afforded the same opportunities to attend college. Most white male students lacked formal education, and as a result many were placed in private schools after enrolling in private colleges. During the 1800s, academic preparatory academies were created for them to gain the prerequisite skills necessary for admission to college. In the early 1900s, one purpose of junior colleges was acting as a bridge program to prepare
students academically for transfer to senior institutions. For some institutions today, requirements to pass prerequisite remedial- or developmental-level courses before enrollment in the corresponding college-level course are a manifestation of bridge programs (Arendale, 2010). Additional information on this topic is available in History, Chapter 1. Bridge programs became less important for economically and socially privileged students since they often attended high-quality private schools and came from families that had graduated from college for generations. These family members provided mentorship, guidance, and financial resources for the privileged to be successful. Today, a new demographic of students aspires to college but lacks the social capital enjoyed by others. These new college enrollees are increasingly culturally diverse, economically disadvantaged, and without family members who completed college. Those factors place them at risk of not aspiring to or completing college (Braxton, Doyle, Hartley, Hirschy, Jones, & McLendon, 2014; Tinto, 2012; Ward, Siegel, & Davenport, 2012). These students benefit from bridge programs, which help them to have equal access to the benefits of college. This curricular opportunity is important for increased access and social justice in an increasingly diverse U.S. (Damico, 2016; Greenfield, Keup, & Gardner, 2013; Upcraft, Gardner, Barefoot, & Associates, 2005). Other researchers challenge the claim that social capital uniformly advantages some students over others, noting that privileged students face their own unique psychosocial challenges (Kingston, 2001; Luthar & Latendresse, 2005). Tinto's (1993, 2012) model of student departure broadly states that academic and social integration, or lack thereof, is a powerful predictor of students' dropping out of college.
The model helps explain the efficacy of bridge programs in addressing clusters of integration events that influence student departure: (1) academic and social adjustment to a new environment, (2) failure to meet academic standards after prior success in high school, (3) incongruence between the previous student comfort zone and the new institutional culture and environment, and (4) social isolation in a new environment. Each time students encounter a transition point in the education journey (high school to college, first-year experience, undergraduate to graduate or professional school), these four event clusters endanger college graduation and goal attainment. Bridge programs often incorporate one or more of these event clusters in their program design. Further confirmation of the importance of bridge programs comes from Steele's (1997) groundbreaking research on stereotype threat as a major barrier for women and African Americans in school. His general theory of domain identification states that students are more successful academically if they perceive themselves as part of the school and its environment. Steele identified stereotype threat, which occurs when others perceive a student to be at risk academically, despite previous academic achievement, simply because of the student's gender or ethnicity. Steele's research found that women and African Americans who embraced the negative stereotype scored lower on standardized tests. Bridge programs can support higher achievement for underrepresented students by creating a cohort of students to support their transition into college and by pairing academically rigorous programs, such as STEM, with a supportive cohort group.
Organization of This Chapter
Bridge programs have grown in sophistication and are now sequenced throughout students' academic journeys. This chapter examines six transition zones: (1) middle school to high school; (2) high school to the first year of college, a period that includes bridge programs during high school before graduation and matriculation in college; (3) orientation prior to college classes, including single-day and extended orientation programs; (4) the first-year experience in college; (5) transition from community college to four-year college; and (6) undergraduate college to graduate or professional school. Middle school bridge programs are included in this chapter since academic experiences at this level have been demonstrated to influence students' success at the following
transition points. Colleges and external funding agencies are often involved with secondary school programs to increase the pool of viable postsecondary applicants. Within these six zones, institution-specific bridge programs have been highly customized based on demographics, academic program content, academic rigor, and institutional priorities for graduating students of a particular profile. Most bridge programs are unique to a single institution. While evaluation studies have been published on the bridge programs identified in this chapter, most institutions create their own programs due to unique local needs and the lack of detailed instructions on how to implement models developed by others. Some bridge programs have been implemented at many institutions nationally. An example is the federally funded TRIO programs, which serve 800,000 students at nearly 3,000 institutions, with different approaches at each of the aforementioned six zones. A deeper examination of those TRIO programs is contextualized for the student populations served and the academic programs most commonly pursued by the students. For example, at the University of Minnesota, the TRIO Student Support Services (SSS) program is hosted in the College of Education and Human Development (CEHD). While these college students are free to pursue any academic program, most enroll in programs offered by CEHD, and SSS services are customized to support their success.
Bridge: Middle School to High School
Influences on high school graduation and college admission occur much earlier than previously believed (Knaggs, Sondergeld, & Schardt, 2015). For students and parents, commitment to college begins as early as late elementary or middle school. This commitment is especially important for families with no members who attended college; the resulting lack of role models, mentors, and information regarding financial aid is a significant social capital deficit (Ehlert, Finger, Rusconi, & Solga, 2017). Goals for these programs are cultivating awareness of the possibility of attending college and of the availability of financial aid. The target student population includes first-generation college students, economically disadvantaged students, and students of color. It is for these reasons that the federally funded Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP) was created. The program operates through more than 40 educational partnerships and states, serving nearly 120,000 students. GEAR UP serves cohorts of students enrolled in low-income schools, beginning in middle school and continuing until high school graduation. Common activities include peer mentoring, tutoring, field trips, financial aid information, federal and state financial aid application assistance, college admission assistance, and summer academies. Services are provided for students and their family members. A national study of 173 institutions found that GEAR UP involvement increased students' college-readiness compared with nonparticipants from other impoverished schools (Bausmith & France, 2012; Knaggs, Sondergeld, & Schardt, 2015; Maxwell, 2015). Evaluation studies are limited to quasi-experimental designs since all students at a participating middle school must be served by the GEAR UP program. A similar middle school bridge program is TRIO's Educational Talent Search (ETS) program.
ETS is similar in approach to the federally funded GEAR UP programs. Bowden and Belfield (2015) conducted a benefit-cost and cost-effectiveness analysis of ETS. They found that program benefits exceeded costs, but cost-effectiveness varied widely across program sites.
Bridge: High School to College First Year
Bridge Programs during High School
These college preparation bridge programs occur during the regular high school day or as after-school activities offered by an external agency. An example of the regular school
day approach was reported by Barnett, Fay, and Pheatt (2016), who studied credit-bearing transition courses offered by high school teachers during students' senior years in California, New York, Tennessee, and West Virginia. The target student population included academically underprepared students predicted to enroll in developmental-level courses when admitted to college. While there was variability in delivery, the courses shared the common goal of college-readiness in mathematics and reading. Improved outcomes in these college-readiness skills appeared promising, but the qualitative assessments and the diversity of implementation among the states limited generalizability. The evaluations did not follow the high school students into college, so academic performance and persistence-to-graduation data were not collected. The largest embedded approach is dual high school–college enrollment (What Works Clearinghouse, 2017), which is used widely in the U.S. The national review found positive results for high school students who completed college courses, especially the historically underrepresented. Positive outcomes included higher academic achievement in high school, increased high school graduation rates, higher college matriculation rates, and improved college-readiness. The second approach is external to the high school curriculum. The federally funded TRIO Upward Bound (UB) program provides an assortment of enrichment curriculum, after-school tutoring, Saturday workshops for the family, field trips, counseling, college-readiness coursework, college admission test preparation, college application assistance, and college visits. More than 60,000 students are served by UB across the nation. Students must meet at least two of the following eligibility criteria: low income, physical disability, and first-generation college aspirant. As with all other TRIO programs, cultural or ethnic background is not a criterion for admission.
Most UB participants enter the program as ninth graders (Epps et al., 2016). UB evaluation studies often examined high school grade point average (GPA), high school graduation, and college enrollment. These studies have reported mixed results: short-term benefits cited by Mathematica Policy Research (1997), moderate effectiveness by McLure and Child (1998), and stronger evidence based on multi-institutional studies by Seftor, Mamun, and Schirm (2009) and Thomas (2014). A weakness of most TRIO evaluation studies is their limitation to a single institution rather than multi-institutional, regional, or national evaluations; another criticism is their employment of quasi-experimental design and qualitative measures. The federal Department of Education (DOE) conducted a series of controversial national UB studies between 1998 and 2005. The TRIO community resisted pure experimental design since it requires recruiting eligible students for admission and then randomly accepting one group into UB (treatment) while denying other students (control). TRIO advocates argued that UB-eligible students were a vulnerable population since they experienced historic discrimination and came from disadvantaged backgrounds. They argued that denial of service increased students' likelihood of college failure and, as a result, of facing a lifetime of diminished quality of life. Some colleges hosting UB programs prohibited involvement with the DOE studies since it violated their institutional review board rules for treatment of vulnerable student populations. Proponents of experimental design argued against the practice of using wait-listed students as the control group (a quasi-experimental design), since it did not create an acceptable comparison group formed by random assignment to treatment and control conditions. TRIO advocates raised moral objections to the denial of services to eligible students.
Proponents of random assignment argued that the short-term sacrifice of some students in the control group was worthwhile, since positive evaluation results increased the likelihood of stable or increased federal funding; moreover, the denied students could seek other campus services to help them succeed. However, this endangered the study, since control group students may have received similar services, thereby depressing the differences in outcomes between the two groups. The UB community stopped the final DOE study by successfully lobbying Congress, which passed legislation ending it. While the TRIO community celebrated the suspension
of the study, little has been done since to conduct rigorous state or national studies. As mentioned elsewhere in this chapter, most evaluation studies of TRIO programs are single-institution quasi-experimental studies, often reported in student dissertations rather than in rigorous peer-reviewed education journals. Some bridge programs in this category focus on particular demographic groups or students pursuing specific college majors. Katz, Barbosa-Leiker, and Benavides-Vaello (2015) reported success in increasing enrollment of American Indian and Hispanic students in a Bachelor of Science in Nursing (BSN) program. A two-week summer residential program was offered to high school students. The approach was intensive, with a math and science curriculum, work in a human anatomy laboratory, a field visit to a local hospital, and group discussions among participants on leaving home, emotional support, motivation, and coping with the academic rigor of college.
Bridge Programs during Summer Following High School Graduation
Summer bridge programs following high school graduation differ based on the types of students served. All participating students in these programs were accepted for college enrollment at a particular institution. In the first two types of summer bridge programs, the students were already academically prepared for college. However, both groups could benefit from the following outcomes: higher rates of academic achievement, increased persistence to academic program completion, and higher college graduation rates. Only in the final bridge program type was developmental-level curriculum needed, since students were predicted to be at risk of not completing college. The first type of bridge program focuses on improving outcomes of historically underrepresented students pursuing diverse postsecondary degree programs; institutions or particular academic departments seek to increase the diversity of their admitted students. A second type is narrowly focused on an academic program, such as STEM. In this case, the institutional priority may be to increase diversity in the program or increase graduation rates of enrolled students. The final approach focuses on students admitted conditionally due to low scores on standardized admissions assessments and other measures. Bir and Myrick (2015) examined the impact of a summer residential bridge program before first-time fall enrollment at a historically Black institution. Students in the program were conditionally admitted African American males and females who were assessed for placement in developmental-level courses during fall term. They entered the summer program with significantly lower admission test scores and high school grades in comparison with a nonparticipant control group.
The students enrolled in a five-week course in mathematics and English along with a Saturday academy that addressed issues such as conflict resolution, financial literacy, and social networking. The curriculum was based on an evaluation of factors influencing African American academic outcomes. Mixed-gender learning communities were created, with out-of-class social and developmental activities under the guidance of a mentor who lived in the residence hall with the students. The female bridge students achieved statistically significantly higher college GPAs and persisted to the second and third year at higher rates. Cabrera, Miner, and Milem (2013) reported on the New Start Summer Program at the University of Arizona, which since 1969 has served more than 13,000 students, most from historically underrepresented backgrounds. Significant outcomes included higher first-year GPA and persistence rates. The article also explored the challenges of accounting for variables when calculating statistical significance. Most often cited are bridge programs preparing students for STEM majors. In a national study, Seymour and Hewitt (1997) identified variables that influence undergraduates, especially women, to switch majors out of the hard sciences in which they are enrolled, such as chemistry and physics. Their study has been frequently cited by subsequent research on the unsupportive culture for women and other underrepresented students in the sciences and has influenced the rise of bridge programs represented in this chapter. Boedeker, Bicer, Capraro, Capraro,
285
David R. Arendale and Nue Lor Lee
Morgan, and Barroso (2015) reported success with an intensive two-week STEM summer camp. Students worked on projects eight hours daily on robotics, bridge building, solar power, and other activities. These students were not at risk academically but represented students not often enrolled in STEM majors. Assessment revealed significant improvements in reading and writing skills expected for first-year college courses. Matriculation rates for college were statistically higher for the participants. Other studies report similar findings (Constan & Spicer, 2015; Dagley, Georgiopoulos, Reece, & Young, 2016; Lesik, 2015; Rahm & Moore, 2015; Snipes, Huang, Jaquet, & Finkelstein, 2015; Stolle-McAllister, 2011). Creating student cohorts before the fall term begins provides supportive classmates of similar backgrounds who can encourage one another as they journey together toward graduation (Patton, Renn, Guido, & Quaye, 2016). Brown (2015) reported on a six-week residential program for underprepared students of color at a HBCU in the Southeast. Students enrolled in math and composition courses coupled with daily tutoring sessions. They worked with faculty and staff mentors, participated in skill workshops, and attended symposiums on psychosocial issues. The qualitative study focused on participant’s perceptions of confidence with academic skills and ability to persist to graduation. Themes that emerged from the bridge experience included the importance of peer networks and developing new approaches to studying and help-seeking. Wathington et al. (2011) reported on an evaluation study from eight colleges of a summer bridge program for students determined at risk academically. Participants were more likely to enroll in college-level courses with rigorous demands in math, reading, and writing than the cohort of nonparticipants. Program components included accelerated instruction in math, reading, and/or writing, tutoring, and a $400 stipend.
Bridge: First-Year College Experience

First-year college programs have a long history in American higher education. During the late 1800s, programs with noncredit courses were established at Boston University, the University of Michigan, and Oberlin College (Gordon, 1989). The first credit-bearing course was offered at Reed College in 1911 (Fitts & Swift, 1928). Hunter and Linder (2005) analyzed national surveys of first-year courses and found that approximately 75 percent of institutions offered them.

These programs follow three approaches. The first is enrollment in a single course during the first semester of college; this model is sometimes referred to as extended orientation. The second approach is a comprehensive First-Year Experience (FYE) program involving a coordinated curriculum of multiple courses and noncourse activities. The final approach is a holistic model providing comprehensive services for students enrolled in the first two years of college. This approach often focuses on historically underrepresented students (Greenfield et al., 2013; Upcraft et al., 2005).

The first approach is first-year seminars and extended orientation courses during the fall semester of a student's initial enrollment in college. The content varies among these courses: student transition issues, study strategies, career exploration, campus resources, and other topics. First-year seminars often have the additional features of a small enrollment of 15 or fewer and a discussion format ensuring high interaction among students and the course instructor. For more information about the possible curriculum of orientation courses, consult other chapters in this handbook, including Strategic Study-Reading (Chapter 12), Study and Learning Skills (Chapter 14), and Test Preparation and Test Taking (Chapter 15).

Berry (2014) examined extended orientation courses at 45 institutions through a meta-analysis of their individual studies.
Berry's study found statistically significant improvements in first-term and first-year GPA and higher persistence to the second year. The outcomes were greater when a single faculty member taught the course than when it was taught by teams of faculty and staff or by staff alone. Regarding persistence rates, outcomes were higher for students of color. Soria, Clark, Lingren Clark, and Koch (2013) reported similar outcomes, and participants also reported a greater sense of belonging than nonparticipants. Upcraft et al. (2005) identified a variety of evidence-based first-year seminars and courses.
Bridge Programs
The second approach is coordinated and comprehensive FYE programs. These programs employ a combination of learning communities involving several classes, service learning projects, common book readings and discussions, first-year seminars and extended orientation courses, and other activities. The National Resource Center for the First-Year Experience and Students in Transition is hosted at the University of South Carolina (http://sc.edu/fye/). Greenfield et al. (2013) identified successful models of FYE programs across the U.S.

An example of the third approach, targeting services during the first and second years of college, is the federally funded TRIO SSS Programs, which serve a demographic similar to that of UB programs. Operating at more than 1,000 institutions and serving nearly a quarter million students nationwide, their common activities are tutoring; economic literacy; supportive groups of peers; academic advising; counseling; direct instruction in reading, writing, and other content areas; assistance with locating and applying for financial aid; and help with applying to graduate school. Chaney, Muraskin, Cahalan, and Goodwin (1998) conducted a rigorous evaluation of SSS participation and found a statistically significant improvement in student retention rates. Contributing variables for SSS efficacy included peer tutoring and the social support created by grouping SSS students into cohorts.
Bridge: Two-Year College to Four-Year College

As reported in Chapter 11 of this handbook, Academic Preparedness, more than half of postsecondary students begin their careers at community colleges, with many aspiring to transfer to a senior institution despite their often lower levels of academic preparation. Much of the professional literature has identified the problem of transfer shock (Hill, 1965) for these students, which results in lower GPAs and higher dropout rates. A review of the professional literature identified few solutions offered by colleges, and there were few evaluation studies of transfer programs of any type in the recent professional literature.

Chrystal, Gansemer-Topf, and Laanan (2013) conducted a qualitative study of community college transfers to Iowa State University. The program served all transfer students regardless of their previous levels of academic achievement or demographic background. Several themes emerged, including adjustment to the new campus and feelings of social isolation. The researchers noted that the institution must do more than place information online, encourage voluntary interaction with an academic advisor or other student affairs staff member, and expect students to access the myriad student services on a large college campus. The researchers also noted that there was often little, if any, structured interaction between transfer students and college staff. As a result, the institution did not create a cohort of similar transfer students to provide support.

Townsend and Wilson (2006) found that most institutional energy for transfer students has been limited to developing articulation agreements for credit acceptance by the senior institution, with little effort devoted to academic and social integration of the transfer student into the four-year college. Lack of attention to integration and the other factors (Tinto, 1993, 2012) previously described in this chapter is associated with decisions to drop out of college.
Mitchell, Alozie, and Wathington (2015) found limited success for transfer students in a summer bridge program offered at the two-year institution before matriculation to the senior institution.
Bridge: Undergraduate to Graduate or Professional School

These bridge programs have grown in response to continuing high rates of noncompletion of doctoral degrees and institutional goals for increased diversity among students earning doctoral degrees and professional school licensure, such as in dentistry and medicine. While some students in these programs are academically underprepared, most participants are well prepared academically and historically underrepresented (for example, females, students of color, and first-generation college students), which places them at risk as described earlier in this chapter. Most often, students in these programs are selected through a holistic assessment process from voluntary applicants. The programs are divided
into three approaches: (1) preparing upper-class undergraduates for the transition to graduate school, (2) a bridge program between undergraduate graduation and matriculation in graduate school, and (3) a comprehensive approach beginning with a summer bridge before matriculation and continuing through graduate school.

One of the most successful bridge program models at the upper-division undergraduate level is the federally funded TRIO Ronald E. McNair Post-baccalaureate Achievement Program (McNair). McNair students are first-generation college students with economic need or students historically underrepresented in graduate and professional school education. McNair operates at more than 150 institutions. Common activities are as follows: (1) academic advisement, (2) counseling, (3) supportive groups of peers, (4) involvement with research through a summer internship, (5) mentorship by a professor, and (6) other scholarly writing and presentation activities. Exstrom (2003) identified the importance of a supportive group of peers for pursuing graduate education as a first-generation college student. Gittens (2014) identified important student factors that improved the doctoral experience: increased self-confidence, social and academic integration into the campus environment, and development of an identity as a competent postsecondary learner.

The second approach, a bridge program between undergraduate and graduate school, was reported by Hall, Harrell, Cohen, Miller, Phelps, and Cook (2015). They identified a successful one-year postbaccalaureate training program that increased the application to and success of historically underrepresented students in biomedical sciences doctoral programs. Acceptance into doctoral programs doubled for participating students in comparison with rates before the program was created. Components included development of presentation skills, time management, scientific writing, and development of a supportive peer group.
Jackson-Smith (2015) reported on the challenges posed by the intersecting identities of culture and gender for the success of STEM students. Her research focused on African American women pursuing STEM majors. Summer bridge participation increased the students' confidence levels and eased their adjustment to the graduate school culture.

The third approach, beginning prior to graduate school and continuing through it, was reported by Hodapp and Woodle (2017). The American Physical Society (APS) Bridge Program serves underrepresented students pursuing doctorates in physics at institutions across the U.S. Some activities occurred before graduate enrollment, and others continued while students completed the doctoral degree. Key components of APS are as follows: (1) a clearinghouse of available doctoral bridge programs, (2) financial aid to help students relocate, (3) financial literacy training to manage finances and make wise choices with financial aid, (4) development of a supportive peer group, (5) mentorship by multiple faculty members, and (6) progress monitoring to graduation.
Bridge: Throughout the Academic Pipeline

The Health Careers Opportunity Program (HCOP) promotes recruitment of students from disadvantaged backgrounds into health and allied health profession programs in the U.S. Hosts for these federally funded programs are graduate and professional health-science schools. Common activities include financial aid information, ACT/SAT/MCAT exam preparation, counseling, mentoring, and primary care field trips. These programs begin as early as middle school and continue through professional school. A study by Kosobuski, Whitney, Skildum, and Prunuske (2017) found significant increases in knowledge gained and in confidence for success among historically underrepresented students pursuing medical careers at the University of Minnesota.
Areas for Further Research

While evaluation studies of STEM bridge programs have been numerous and robust, other approaches need periodic rigorous studies at the quasi-experimental or experimental design level. This is especially true of the TRIO programs described earlier in the chapter. The Council for Opportunity in Education (COE), the national association representing TRIO, must work with the U.S. Department of Education to develop an acceptable plan for conducting periodic national studies. The same is true for FYE programs; there is an overreliance on single studies published in peer-reviewed journals or dissertations. With the large numbers of students transferring from community colleges to four-year institutions, more research is needed to identify evidence-based practices to help bridge the gap that imperils college graduation. The few studies cited in the professional literature focus on small student populations at a few institutions scattered across the U.S.
Implications for Practitioners

The research cited in this chapter generates several implications for practitioners and policymakers at secondary and postsecondary institutions. First, practice must be guided by research and by the national standards based upon it. Over the past four decades, the Council for the Advancement of Standards in Higher Education (CAS) has published national standards in nearly 50 areas of the student experience in college. CAS is composed of leaders from over 40 professional associations who collaborate on the creation, continuous revision, and endorsement of the standards to their members and the wider college community (CAS, 2015). While many CAS standards are helpful in guiding bridge programs, the following are of direct application: orientation programs, parent and family programs, transfer student programs, and TRIO and other educational opportunity programs.

A second implication is the need for an introspective assessment of the institution's current bridge programs. In particular, institutions would benefit from asking whether programs are offered at all transition points in the student learning experience. It does little good to support a first-year student's transition into the institution but ignore transitions for transfer students, students entering rigorous academic degree programs, and students in graduate and professional school. In addition, with the increasing diversity of the student population, are the bridge programs provided in a culturally sensitive manner, and do they address issues for historically underrepresented students in college? The investment expense for some of the bridge programs in this chapter is high, yet most college budgets are not projected to receive more state or federal funds; often, the amount from those sources is flat or declining.
Institutional leaders must ask the hard question: if the evidence-based approaches described in this chapter are not being offered at their institution, why not? Can colleges and society accept the current rate of dropouts, or of students switching from high-demand degree programs like STEM into other college degree programs?

A third implication of the research is the need to increase the rigor and frequency of program evaluation. As indicated in the previous section on areas for further research, much work remains in this area. Evaluation studies need to be repeated on a periodic basis rather than relying on a single study conducted when the practice is first implemented. These studies are needed both for practitioners to make program revisions and for institutional policymakers to make difficult budget decisions on funding an expanded number of bridge programs in the midst of dwindling fiscal resources.

Finally, more professional development of the leaders and staff of these bridge programs is needed. First, they need to establish their own best practice centers that identify, validate, and disseminate evidence-based practices for their programs. An example is the center established by the Educational Opportunity Association, a regional COE association (http://besteducationpractices.org). Second, most bridge staff need to develop more expertise in conducting evaluation studies through enrollment in college courses in this area.
Conclusions

Significant growth has occurred in bridge programs at critical transition points throughout the secondary and postsecondary education pipeline. While program activities are diverse, they often share common approaches: (1) attention to cognitive and psychosocial factors, (2) peer tutoring, (3) mentoring by staff and especially faculty members, (4) development of student cohorts for mutual support and encouragement, (5) rigorous academic content, and (6) strong institutional investment in the bridge programs through personnel, facilities, and budget. While the growth of STEM-related bridge programs is commendable, colleges need to expand offerings for other groups, such as transfer students and students in other academic majors, who too often depart silently, without detection or intervention by the institution, and do not graduate at the end of a long and difficult academic journey. The diversity of college graduates must match that of first-year students if higher education is to contribute to an equitable future for the nation.
References and Suggested Readings

Allen, D. F., & Bir, B. (2011–2012). Academic confidence and summer bridge learning communities: Path analytic linkages to student persistence. Journal of College Student Retention, 13(4), 519–548.
*Arendale, D. R. (2010). Access at the crossroads: Learning assistance in higher education. ASHE Higher Education Report 35(6). San Francisco, CA: John Wiley & Sons. doi:10.1002/aehe.3506
Barnett, E. A., Fay, M. P., & Pheatt, L. (2016). Implementation of high school-to-college transition courses in four states. New York, NY: Community College Research Center, Teachers College, Columbia University. Retrieved from ccrc.tc.columbia.edu/media/k2/attachments/high-school-college-transition-four-states.pdf
Bausmith, J. M., & France, M. (2012). The impact of GEAR UP on college readiness for students in low income schools. Journal of Education for Students Placed at Risk, 17(4), 234–246. doi:10.1080/10824669.2012.717036
Berry, M. S. (2014). The effectiveness of extended orientation first year seminars: A systematic review and meta-analysis. (Unpublished doctoral dissertation). University of Louisville, Louisville, KY. Retrieved from ir.library.louisville.edu/etd/105/
Bir, B., & Myrick, M. (2015). Summer bridge's effects on college student success. Journal of Developmental Education, 39(1), 22–28, 30.
Boedeker, P., Bicer, A., Capraro, R. M., Capraro, M. M., Morgan, J., & Barroso, L. (2015). STEM summer camp follow up study: Effects on students' SAT scores and postsecondary matriculation. Proceedings of the Frontiers in Education Conference: Launching a New Vision in Engineering Education, 1875–1882. New York, NY: IEEE. doi:10.1109/FIE.2015.7344330
Bowden, A. B., & Belfield, C. (2015). Evaluating the Talent Search TRIO program: A benefit-cost analysis and cost-effectiveness analysis. Journal of Benefit-Cost Analysis, 6(3), 572–602.
*Braxton, J. M., Doyle, W. R., Hartley III, H. V., Hirschy, A. S., Jones, W. A., & McLendon, M. K. (2014).
Rethinking college student retention. San Francisco, CA: Jossey-Bass.
Brown, L. (2015). Hear our stories: An examination of the external factors and motivating forces that help underprepared students succeed. (Unpublished doctoral dissertation). The University of North Carolina at Greensboro, Greensboro, NC. Retrieved from libres.uncg.edu/ir/uncg/f/Brown_uncg_0154D_11784.pdf
Cabrera, N. L., Miner, D. D., & Milem, J. F. (2013). Can a summer bridge program impact first-year persistence and performance? A case study of the New Start Summer Program. Research in Higher Education, 54, 481–498. doi:10.1007/s11162-013-9286-7
Chaney, B., Muraskin, L. D., Cahalan, M. W., & Goodwin, D. (1998). Helping the progress of disadvantaged students in higher education: The federal Student Support Services Program. Educational Evaluation and Policy Analysis, 20(3), 197–215.
Chrystal, L. L., Gansemer-Topf, A., & Laanan, F. S. (2013). Assessing students' transition from community college to a four-year institution. Journal of Assessment and Institutional Effectiveness, 3(1), 1–18.
Constan, Z., & Spicer, J. (2015). Maximizing future potential in physics and STEM: Evaluating a summer program through a partnership between science outreach and education research. Journal of Higher Education, 19(2), 117–135.
Council for the Advancement of Standards in Higher Education (CAS). (2015). CAS professional standards for higher education (9th ed.). Fort Collins, CO: Author.
Dagley, M., Georgiopoulos, M., Reece, A., & Young, C. (2016). Increasing retention and graduation rates through a STEM learning community. Journal of College Student Retention: Research, Theory & Practice, 18(2), 167–182. doi:10.1177/1521025115584746
Damico, J. J. (2016). Breaking down barriers for low-income college bound students: A case-study of five college access programs. Journal of the European Teacher Education Network, 11, 150–162. Retrieved from jeten-online.org/index.php/jeten/article/view/84
Ehlert, M., Finger, C., Rusconi, A., & Solga, H. (2017). Applying to college: Do information deficits lower the likelihood of college-eligible students from less-privileged families to pursue their college intentions? Evidence from a field experiment. Social Science Research, 67, 193–212. doi:10.1016/j.ssresearch.2017.04.005
Engle, J., & Tinto, V. (2008). Moving beyond access: College success for low-income, first-generation students. Washington, DC: The Pell Institute for the Study of Opportunity in Higher Education, Council for Opportunity in Education. Retrieved from ERIC database. (ED504448).
Epps, S. R., Jackson, R. H., Olsen, R. O., Shivji, A., Roy, R., & Garcia, D. J. (2016). Upward Bound at 50: Reporting on implementation practices today (NCEE 2017–4005). Washington, DC: National Center for Education Evaluation, Institute of Education Sciences, U.S. Department of Education.
Exstrom, B. (2003). A case study of McNair program participant experiences. (Unpublished doctoral dissertation). University of Nebraska, Lincoln. Retrieved from digitalcommons.unl.edu/dissertations/AAI3116571/
Fitts, C. T., & Swift, F. H. (1928). The construction of orientation courses for college freshmen. University of California Publications in Education 1897–1929, 2(3), 145–250.
Gittens, C. B. (2014). The McNair program as a socializing influence on doctoral degree attainment. Peabody Journal of Education, 89(3), 368–379.
Gordon, V. N. (1989).
Origins and purposes of the freshman seminar. In M. L. Upcraft, J. N. Gardner, & Associates (Eds.), The freshman year experience: Helping students survive and succeed in college (pp. 183–197). San Francisco, CA: Jossey-Bass.
*Greenfield, G. M., Keup, J. R., & Gardner, J. N. (2013). Developing and sustaining successful first-year programs: A guide for practitioners. San Francisco, CA: Jossey-Bass.
Hall, J. D., Harrell, J. R., Cohen, K. W., Miller, V. L., Phelps, P. V., & Cook, J. G. (2015). Preparing postbaccalaureates for entry and success in biomedical PhD programs. CBE Life Science Education, 15(3), 1–13. doi:10.1187/cbe.16-01-0054
Hill, J. R. (1965). Transfer shock: The academic performance of the junior college transfer. Journal of Experimental Education, 33, 201–215.
Hodapp, T., & Woodle, K. S. (2017). A bridge between undergraduate and doctoral degrees. Physics Today, 70(2), 50–56. doi:10.1063/PT.3.3464
Hunter, M. S., & Linder, C. W. (2005). First-year seminars. In M. L. Upcraft, J. N. Gardner, B. O. Barefoot, & Associates (Eds.), Challenging & supporting the first-year student: A handbook for improving the first year of college (pp. 275–291). San Francisco, CA: Jossey-Bass.
Jackson-Smith, D. (2015). The summer was worth it: Exploring the influences of a science, technology, engineering, and mathematics focused summer research program on the success of African American females. Journal of Women and Minorities in Science and Engineering, 21(2), 87–105. doi:10.1615/JWomenMinorScienEng.2015010988
Katz, J. R., Barbosa-Leiker, C., & Benavides-Vaello, S. (2015). Measuring the success of a pipeline program to increase nursing workforce diversity. Journal of Professional Nursing, 32(1), 6–14. doi:10.1016/j.profnurs.2015.05.003
Kingston, P. W. (2001). The unfulfilled promise of cultural capital theory. Sociology of Education, 74, 88–99.
Knaggs, C. M., Sondergeld, T. A., & Schardt, S. (2015).
Overcoming barriers to college enrollment, persistence, and perceptions for urban high school students in a college preparatory program. Journal of Mixed Methods Research, 9(1), 7–30.
Kosobuski, A. W., Whitney, A., Skildum, A., & Prunuske, A. (2017). Development of an interdisciplinary pre-matriculation program designed to promote medical students' self-efficacy. Medical Education Online, 22(1). doi:10.1080/10872981.2017.1272835
Lesik, S. (2015). Evaluating the effectiveness of a mathematics bridge program using propensity scores. Journal of Applied Research in Higher Education, 7(2), 331–345. doi:10.1108/JARHE-01-2014-0010
Luthar, S. S., & Latendresse, S. J. (2005). Children of the affluent: Challenges to well-being. Current Directions in Psychological Science, 14(1), 49–53.
Mathematica Policy Research, Inc. (1997). The national evaluation of Upward Bound: Short-term impacts of Upward Bound. Washington, DC: U.S. Department of Education.
Maxwell, D. M. (2015). A mixed method study of the effectiveness of Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP) federal incentive program in Southern Mississippi public secondary schools. (Unpublished doctoral dissertation). The University of Southern Mississippi, Hattiesburg. Retrieved from aquila.usm.edu/dissertations/51
McLure, G. T., & Child, R. L. (1998). Upward Bound students compared to other college-bound students: Profiles of nonacademic characteristics and academic achievement. Journal of Negro Education, 67(4), 346–363.
Mitchell, C. E., Alozie, N. M., & Wathington, H. D. (2015). Investigating the potential of community college developmental summer bridge programs in facilitating student adjustment to four-year institutions. Community College Journal of Research and Practice, 39(4), 366–387.
Patton, L. D., Renn, K. A., Guido, F. M., & Quaye, S. J. (2016). Student development in college (3rd ed.). San Francisco, CA: Jossey-Bass.
Rahm, J., & Moore, J. C. (2015). A case study of long-term engagement and identity-in-practice: Insights into the STEM pathways of four underrepresented youths. Journal of Research in Science Teaching, 53(5), 768–801.
Seftor, N. S., Mamun, A., & Schirm, A. (2009). The impacts of regular Upward Bound on postsecondary outcomes 7–9 years after scheduled high school graduation. Princeton, NJ: Mathematica Policy Research, Inc.
*Seymour, E., & Hewitt, N. M. (1997). Talking about leaving: Why undergraduates leave the sciences. Boulder, CO: Westview Press.
Snipes, J., Huang, C-W., Jaquet, K., & Finkelstein, N. (2015). The effects of the Elevate Math Summer Program on math achievement and algebra readiness (REL 2015–096). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory West.
Soria, K., Clark, M., Lingren Clark, B. M., & Koch, L. C. (2013).
Investigating the academic and social benefits of extended new student orientations for first-year students. Journal of College Orientation and Transition, 20(2), 33–45.
*Steele, C. M. (1997). A threat in the air: How stereotypes shape intellectual identity and performance. American Psychologist, 51(6), 613–629.
Stolle-McAllister, K. (2011). The case for summer bridge: Building social and cultural capital for talented black STEM students. Science Educator, 20(2), 12–22.
Thomas, K. S. (2014). The effectiveness of select Upward Bound Programs in meeting the needs of 21st century learners in preparation for college readiness. (Unpublished doctoral dissertation). Atlanta University & Clark Atlanta University, Atlanta, GA. Retrieved from digitalcommons.auctr.edu/cgi/viewcontent.cgi?article=1029&context=cauetds
*Tinto, V. (1993). Leaving college: Rethinking the causes and cures of student attrition (2nd ed.). Chicago, IL: The University of Chicago Press.
*Tinto, V. (2012). Completing college: Rethinking institutional action. Chicago, IL: The University of Chicago Press.
Townsend, B. K., & Wilson, K. (2006). "A hand hold for a little bit": Factors facilitating the success of community college transfer students to a large research university. Journal of College Student Development, 47(4), 439–456.
*Upcraft, M. L., Gardner, J. N., Barefoot, B. O., & Associates. (2005). Challenging & supporting the first-year student: A handbook for improving the first year of college. San Francisco, CA: Jossey-Bass.
*Ward, L., Siegel, M. J., & Davenport, Z. (2012). First generation college students: Understanding and improving the experience from recruitment to commencement. San Francisco, CA: Jossey-Bass.
Wathington, H. D., Barnett, E., Weissman, E., Teres, J., Pretlow, J., & Nakanishi, A. (2011). Getting ready for college: An implementation and early impacts study of eight Texas developmental summer bridge programs. New York, NY: National Center for Postsecondary Research.
What Works Clearinghouse, U.S. Department of Education, Institute of Education Sciences. (2017, February). Transition to college intervention report: Dual enrollment programs. Retrieved from whatworks.ed.gov
17 Program Management

Karen S. Agee, University of Northern Iowa
Russ Hodges and Amarilis M. Castillo, Texas State University
Every postsecondary reading and study strategy program is unique and must be managed differently to accomplish a unique mission, yet every program should strive for the same high quality of management. More than 45 years ago, learning support center pioneer Frank Christ insisted that a learning center—indeed, any postsecondary program—needed to be organized and managed with attention to outcomes (Christ, 1997). Since 1986, the Council for the Advancement of Standards in Higher Education (CAS), a consortium of 42 professional associations representing over 115,000 professionals in higher education in the U.S. and Canada, has established clear standards for postsecondary program management. CAS standards specify not only an attention to outcomes and goals but also the creation of “a vision for programs and services,” “teams, coalitions, and alliances,” and “risk management plan[s] for the organization” by program leaders who “must model ethical behavior” and institutional citizenship (Council for the Advancement of Standards, 2018, p. 11). In the most recent edition of its General Standards, CAS specifies the planning, management, and advancement requirements and expectations of program leaders in the areas of strategic planning, management and supervision, and program advancement. (See Appendix for the text of these standards.) The CAS standards and guidelines written specifically for learning assistance programs are available via links to CAS on the websites of the College Reading and Learning Association (CRLA), National Association for Developmental Education (NADE), and National College Learning Center Association (NCLCA). 
They add to the General Standards the additional guideline that learning assistance programs—the CAS functional area encompassing learning assistance and developmental education—are often “organized as units in the academic affairs or student affairs division” as determined by the “mission and goals …, the needs and demographics of their clients, and their institutional role” (Council for the Advancement of Standards, 2016, p. 10). Although the validity of standards developed by consensus (e.g., CAS standards, NADE Guides, CRLA tutor- and mentor-training certification standards, and regional accreditation requirements) is difficult to test, the CAS standards have benefited from professional review and verification for more than 30 years. To ensure that all students benefit from programs and services, CAS regularly revises standards from the perspectives of a range of communities of practice, including lesbian, gay, bisexual, transgender, queer, and questioning (LGBTQ) and international communities, and underrepresented populations. To achieve these high standards, every college reading and learning strategy program faces a unique set of challenges, regardless of its size or structure. Programs are manifestations of institutional
policies and, as such, must attempt to achieve their institution’s mission for student success. For instance, adjunct-taught courses, student-staffed mentoring programs, and grant-funded TRIO Student Support Services (SSS) offerings will necessarily articulate different program missions, budget different human and technological resources, and inform different stakeholders; therefore, the organization and management of programs must be inspired by reason rather than habit or whimsy. Postsecondary institutions in the U.S. have developed both traditional and novel organizational structures to provide reading and learning strategies instruction and create learning environments for their students. The traditional approach is to offer instruction in academic courses: Faculty create reading and literacy courses and student success offerings in their areas of expertise, and faculty determine what students need to learn. In recent decades, these offerings have been augmented with paired, adjunct, and linked courses, and first-year and transfer courses in which management may be shared with learning professionals, such as learning center personnel, to overcome the limitations of standard course curricula. To broaden student learning options beyond academic courses, institutions structure program offerings in tutoring, mentoring, and coaching programs; peer cooperative learning programs; TRIO federal grant programs; and learning communities. These programs are often managed outside academic departments but must also demonstrate efficiency, effectiveness, and other hallmarks of well-managed programs specified in the CAS standards (see Appendix). This chapter articulates and critiques the various ways in which programs are currently organized and managed, then offers recommendations for the efficient and effective management of these programs.
Academic Courses

Administrators and educational professionals created developmental courses in reading, writing, math, and learning strategies because postsecondary faculty worry that their students are unprepared or underprepared for college (ACT, 2016), and equity gaps remain despite federal legislation (Malin, Bragg, & Hackmann, 2017). The problem has been blamed on K-12 instruction (National Assessment of Educational Progress, 2016) and on students themselves (Banks, 2005) but is perennial. As Maxwell (1997) long ago observed, “It seems that every generation of faculty members discovers that students cannot read as well as professors expect” (p. 213). Read more about this in Chapter 1. For decades, reading and study-skills courses have been mandated for non-native speakers of English as well as for those who were “misprepared” for the reading and learning demands of college study (Hardin, 1998, p. 15). Some students need to update their reading and learning strategies for expectations at the college level (Miller & Atkinson, 2001), especially at more selective institutions, even if they earned high grades in high school (Moore, 2006). Developmental courses have helped students catch up with better-prepared peers (Allen, DeLauro, Perry, & Carman, 2017; Boatman & Long, 2018). Shields and O’Dwyer (2017) recently concluded that developmental courses, though not always successful, have “the potential to give underprepared students a second chance at college success” (p. 104). Program managers have an obligation to help courses achieve that potential, despite organizational challenges. Staffing of developmental courses, for example, has been a challenge for many institutions. More than half of the developmental education faculty in the U.S.
are adjuncts; programs that hire adjunct faculty to teach developmental courses can experience salary savings but also face challenges related to student access to faculty and faculty disengagement from departmental and institutional goals (Datray, Saxon, & Martirosyan, 2014), and thus they may have difficulty achieving the program outcomes of the CAS standards.
College Reading and Literacy Courses

Getting into college usually requires a reading test—a standardized measure of achievement or aptitude for higher learning. Reading is seen as the key to student success in college (Feller, 2006)
and poor reading ability as the “kiss of death” for a college education (Adelman, 1998). At present, a significant gap is reported between the reading skills students need in college and the skills they possess (ACT, 2016). Slightly over half of the postsecondary institutions in the U.S. offer reading courses to bridge this gap. According to Chen (2016), 28.1 percent of students entering public two-year institutions and 10.8 percent of students entering public four-year institutions took remedial reading while enrolled between 2003 and 2009. Read more about this in Chapter 19. Several states are considering or have passed legislation limiting or prohibiting traditional forms of developmental education, with multicourse prerequisite sequencing extending over multiple semesters; these states are redesigning instruction using corequisite courses combined with integrated and contextualized plans (National Conference of State Legislatures, 2017). These legislative attempts to accelerate student progress through developmental education coursework have merged with developmental educators’ efforts to improve rather than eliminate developmental courses in higher education. CAS standards require program administrators to “incorporate data and information in decision making” (Council for the Advancement of Standards, 2018, p. 11), so managing such courses requires faculty attention to the kinds of reading expectations facing students in higher education to assist them in transitioning into college courses. Well-managed literacy courses should align their goals with student-success and institutional objectives. For example, instructors of developmental reading courses should be knowledgeable of and responsive to faculty expectations in general education courses; dialogue between these faculties can inform practice (Armstrong & Stahl, 2017).
Integrated Reading and Writing (IRW) courses must integrate authentic college reading and critical literacy pedagogy as well as ongoing growth-centered assessment (Hayes & Williams, 2016), and college literacy courses should incorporate critical and culturally relevant pedagogy to scaffold students’ critical literacy and academic competence (Kelly & Brower, 2017). In addition to expertise in these areas, faculty may need development training to teach their culturally and linguistically diverse classes (Haan, Gallagher, & Varandani, 2017) and adult students (Hawley & Chiang, 2017) to ensure that students feel comfortable, accepted, and understood for who they are and the knowledge they contribute to the program. Also, choosing a reading textbook is one aspect of course management that requires close scrutiny of available options (Roth, 2017). Read more in Chapter 4. The CAS standards task program leaders with understanding the technological and digital tools appropriate to the strategic goals so as to integrate these into programs and services. By including technology in the program’s vision and mission statements, which determine short- and long-term planning, programs can reward the efforts of faculty utilizing digital tools (Martirosyan, Kennon, Saxon, Edmonson, & Skidmore, 2017; Rodrigo & Romberger, 2017). These tools can also help meet diverse students’ learning needs (Kelly & Brower, 2017; Rodrigue, Soule, Fanguy, & Kleen, 2016; Wilson, Dondlinger, Parsons, & Niu, 2018). To encourage an organizational environment that is inclusive and provides opportunities for student learning, development, and engagement per CAS standards, literacy and IRW courses require careful attention to staffing issues.
Programs must hire faculty with knowledge of the needs of first-year students and expertise in both reading and writing or must provide the necessary professional development for faculty who have expert knowledge in just one aspect of literacy (Goen-Salter, 2008; Hayes & Williams, 2016; Marsh, 2015).
Administration of Developmental Reading Courses

Developmental reading courses may be offered in an academic (literacy) department, a centralized developmental department with highly coordinated programs, or a learning center. However, there is disagreement over which approach works best (Schwartz & Jenkins, 2007). Parsad and Lewis (2003) found that institutions located developmental reading education most
often in a traditional academic department (57 percent) and less often in a separate developmental division or department (28 percent) or a learning center (13 percent). There may be good academic and pedagogical reasons for a campus’s reading courses to be offered by English faculty: housing literacy courses in an academic department rather than in a developmental unit or learning center is consonant with trends in the literacy field, characterized by the transformation of the International Reading Association into the International Literacy Association in 2015 and of its Journal of Reading into the Journal of Adolescent and Adult Literacy. A decentralized approach can more easily streamline coordination and alignment between the developmental and credit-bearing courses, and provides more opportunity for communication among departmental faculty. By contrast, establishing a separate centralized department facilitates the coordination of all developmental courses and services, and allows for enhanced program autonomy, funding, and tracking of students’ progress (Schwartz & Jenkins, 2007). However, this approach can imply the existence of “developmental students,” as though they were qualitatively distinguishable from other students. Since over half of all students enrolled in developmental education courses are enrolled in only one content area (Chen, 2016), a centralized program may also administratively disconnect students from their own programs of study as well as from other students (Arendale, 2010). Boylan (2002) argued that developmental courses are most successful when centralized in one coherent department, with a clear statement of mission and objectives for the entire department, in an institution that considers developmental education to be an institutional priority.
Centralizing all developmental courses into a developmental education division, department, or college generally puts faculty of all these reading, writing, math, study strategies, and sometimes science courses in close, productive contact with one another. Although research, such as Calcagno’s study (2007) of the enrollment and success rates of students in developmental education in Florida community colleges, found “some weak evidence that stand-alone developmental education departments are less effective for reading than programs that integrate developmental education into the relevant departments” (p. 76), more recently, Boylan and Saxon (2012) posited that a high level of coordination among developmental education activities was positively related to student success but that it is not imperative to adhere to a centralized structure to ensure communication and collaboration among developmental education professionals. A third organizational option for developmental courses is locating them in a comprehensive learning assistance center as one of the offerings provided by the center’s professionals. At Mt. San Antonio College in California (2018), where the Learning Assistance Center provides developmental courses, tutorial services, and a learning lab, this structure encourages a collegial and cohesive developmental faculty, with shared purpose and collaborative style. Such a learning center makes evaluation of program effectiveness easier than it is in separate departments and provides courses to all students without stigmatizing the students who take them. Regardless of the administrative location, institutions must value and commit to excellence in developmental education by ensuring ample resource allocations and assigning a director to coordinate developmental education programs and services with regular and systematic communication among all stakeholders (Boylan & Saxon, 2012). 
Other management issues include ongoing alertness to the changing needs of populations served, including the creation of alternatives to developmental coursework; appropriate staffing of courses; ongoing professional development for faculty; improvement of connections among assessment, advising, and placement; and integration of developmental education classroom instruction and learning support from learning laboratories (Boylan & Saxon, 2012). Additional ideas for the effective management of developmental education programs are found in the NADE Self-Evaluation Guides (Clark-Thayer & Cole, 2009) in addition to the CAS standards.
Managing Placement Into Developmental Reading Courses

The admission ticket to developmental reading courses tends to be a low score on a standardized reading test. According to one study, five nationally available standardized tests tend to be used to determine student need for developmental reading: the ACT and SAT admissions tests and the ACCUPLACER Reading Comprehension, ASSET Reading Skills, and ACT Compass Reading Placement tests (Fields & Parsad, 2012). The Compass was phased out in late 2016. Only 13 percent of colleges used other measures for placement in remedial English (Fields & Parsad, 2012). However, relying on standardized examinations alone for placement is controversial. If an institution offers two or more reading courses and establishes minimum scores for each level, placement testing becomes a gatekeeper (Horn & Asmussen, 2014), admitting some students to their academic programs and progress toward a degree and putting up detours, speed bumps, and roadblocks for others. The National Center for Fair and Open Testing, also known as FairTest, an organization advocating against the use of standardized tests for evaluating student performance, cautions that race, class, and gender biases give white, affluent, and male test takers an unfair edge on standardized high-stakes exams, such as the SAT and ACT. In fact, over 900 colleges and universities no longer require SAT or ACT scores for admission because of possible test bias (FairTest, 2017). See Goudas (2017) and Chapters 19 and 20 in this volume for additional critiques of these tests and alternative assessments. More research is also needed to ensure accurate and culturally sensitive placement in courses.
The retrospective comments of students (Perin, Grant, Raufman, & Kalamkarian, 2017), including multilingual English language learners (Ferris, Evans, & Kurzer, 2017), could help place students in appropriate courses, although researchers have not explicitly weighed resource demands against the benefits of accurate placement. A recent study in Virginia (Xu & Dadgar, 2018) indicated that placement in the lowest-level developmental math coursework may be especially onerous on black students. Causal studies in other U.S. states could clarify this observation. More about this topic can be found in Chapter 19.
Critical (College-Level) Reading Courses

In addition to “deficit model” reading courses developed for students presumed deficient in knowledge or skill, institutions offer “cognitive-based model” courses (using terminology of Simpson, Stahl, & Francis, 2004) to support students’ strategic learning behaviors in the regular curriculum and to explore the content-area or disciplinary literacy knowledge they may need (Porter, 2018). Critical reading and thinking courses at the college level are developed as part of the regular curriculum or as a special boon to students, offering a competitive advantage or easier learning. The 70-year-old Harvard Course in Reading and Study Strategies—the longest-running course at Harvard University—now offers college-level reading and learning strategies in 10 contact hours over 2 weeks (Harvard, 2018) and provides not reading and study skills but reading and study strategies (Perry, 1959). Another example of a reading and learning course offered at the university level is found at the University of California, Berkeley, where for 25 years, Student Learning Center staff have managed two-credit college-level courses for academic success and strategic learning (Berkeley Student Learning Center, 2017).
Student Success Courses

Postsecondary institutions have offered courses to teach the rituals of college study since the 1920s (Maxwell, 1997). In the 1940s, Robinson (1946) indicated that over 100 colleges had remedial reading and how-to-study programs, many first created to assist probationary students.
These courses promoted students’ increased reading ability, greater organization, use of educational resources, more satisfactory adjustment, and higher grades. Robinson claimed his own how-to-study course was useful to all students, since all had inefficiencies, but “brighter” students benefited most (p. 1). That premise has not been thoroughly tested, but Robinson’s Effective Study textbook and Survey Q3R method of studying proved to be quite influential. Since then, numerous student success courses have been developed. A matrix created by Cole, Babcock, Goetz, and Weinstein (1997), still salient today, categorizes student success courses from lower-level skills and topics to courses steeped in educational psychology theories requiring application of learning strategies. Orientation courses provide students with a comprehensive overview of the institution and its resources. Navigation courses teach students how and when to use a variety of resources and facilities. Academic and Personal Development courses facilitate students’ transition from high school into the college environment by focusing on adjustment to college life and learning. Learning-to-Learn courses provide study-skills instruction and help students comprehend and retain academic material, often introducing some theory as well. Critical Thinking courses promote independent thought and decision-making processes. Learning Framework courses, through self-discovery and analysis, facilitate students’ development of perspectives about themselves as learners so they can monitor and regulate their own learning. The learning framework course curriculum is deeply rooted in theories from educational psychology (Cole et al., 1997). Of special interest here are first-year seminars (FYS), representative of the academic and personal development courses referred to earlier, and learning framework courses. Read more about this topic in Chapter 16. First-year seminars.
First-Year Seminars (FYS)—also referred to as first-year experience courses or freshman seminars—are intended to scaffold students successfully through their first year of college. Deemed a high-impact educational practice by the Association of American Colleges and Universities (Kuh, 2008), FYS are often taught in tandem with an institution’s annual common intellectual theme (e.g., global warming, civility, sustainability, civic responsibility) and with related readings, films, speakers, fine arts, and symposia, all intended to spur students’ discussion, critical inquiry, and campus engagement. FYS were the impetus for First-Year Experience (FYE), a phrase coined by John Gardner, a leader of the movement (Koch & Gardner, 2014). FYE programs can include early warning systems to alert students of their academic progress within the first few weeks of the semester, extended orientation programs promoting campus culture and acclimation, and learning communities fostering intentional peer connections (Rockey & Congleton, 2016) and cultivating student transition, acclimation, and integration. FYS organized by various program content objectives and curricula, such as extended orientation, academic seminars with uniform content, academic seminars with variable content, introduction to discipline or professional seminars, and basic study skills (John N. Gardner Institute, n.d.; U.S. Department of Education, 2016), ideally promote “critical inquiry, frequent writing, information literacy, collaborative learning, and other skills that develop students’ intellectual and practical competencies” (Kuh, 2008, p. 9). Management of such seminars and of similar seminars created for transfer students and other students in transition may be interdisciplinary, located in the major discipline, or college-wide.
Whether institutions mandate participation or create seminars as options for special populations (e.g., residential students, honor students, seniors, those on probation, or those transferring from community college to university) can complicate staffing. In addition to a history of and rationale for FYS, the National Resource Center for the First-Year Experience and Students in Transition provides rich resources for managers of such programs, promoting research, policy, and best practices by offering institutes, conferences, online courses, research publications, electronic mailing lists, and awards (National Resource Center, n.d.). Coordinators typically manage FYS and transfer seminar programs in collaboration with interdepartmental teams of faculty members, partnerships of academic and student affairs personnel,
and institutional researchers in the design, instruction, and evaluative components of the course (Gardner, Barefoot, & Swing, 2001; Mayo, 2013; Permzadian & Credé, 2016). Program design and evaluation considerations include the structure and content of the formal curriculum, connections to the mission of the institution, student learning outcomes, voluntary or mandatory participation requirements, selection and training of instructors, pedagogy, support for diverse student populations, and connections to academic support programs, such as advising and tutoring (Gardner et al., 2001). Learning framework courses. Students’ appropriate use of learning strategies is critical for college success (Mackenzie, 2009; Tuckman & Kennedy, 2011). Learning framework courses (LFC)—also referred to as learning strategy courses—provide students with instruction in both the theoretical underpinnings of strategic learning and the application of learning strategies. Texas State University and University of Texas at Austin were the first to create early versions of LFC in the 1970s (Hodges & Agee, 2009), grounded in information-processing models and emerging cognitive psychology theories that supported the belief that cognitive and metacognitive processes could be controlled within an academic learning context (Weinstein, Husman, & Dierking, 2000). Course content is supported by Weinstein’s conceptual model of strategic learning, which delineates strategic learning into four components: skill, will, self-regulation, and the academic environment (Weinstein et al., 2012; also see Chapter 14). The combination of theoretical frameworks and skills-based lessons is a critical aspect of LFC (Hodges, Sellers, & Dochen, 2012). Although LFC are offered throughout the U.S., Texas leads in number; approximately 90 percent of two-year institutions in Texas offer multiple sections of the course, usually in departments of psychology or education.
Over 75 percent of Texas four-year institutions also offer LFC, most within educational psychology departments (Acee & Hodges, 2017). Many two-year institutions require all first-year students to enroll in the course, while four-year institutions more typically offer LFC to special populations (e.g., summer bridge students, conditionally admitted students, those experiencing academic difficulty or deemed at risk of failure). Although most instructors have academic backgrounds in education, psychology, or related disciplines, coordinators provide managerial oversight and professional development for instructors and arrange course sections. Increasingly, online and hybrid versions of LFC are emerging as student populations increase in number and diversity. These formats can provide more flexibility for student scheduling and increased interactions with instructors (Tuckman & Kennedy, 2011). Various types of student success courses are increasingly common, especially in community colleges, but some resistance to offering these courses remains because of resources needed for training, hiring, or shifting faculty, obtaining classroom space, and creating curricula. Other critics believe that institutions’ limited resources should not be spent on basic study-skills courses and question whether such courses warrant academic credit (Fain, 2012). Current research on the effects of student success courses is sparse; most designs lack published evaluations and external measures of success (Tuckman & Kennedy, 2011). Permzadian and Credé (2016) found that published program evaluations have generally used behavior and results criteria—specifically first-year GPA and one-year retention rate—with mixed results; their own meta-analytic investigation found small average effects on both.
What Works Clearinghouse (WWC), a research division of the Institute of Education Sciences, identified 273 studies that investigated the effects of FYE courses, only 97 of which were eligible for review against WWC group design standards. No randomized studies and only four quasi-experimental studies met WWC group design standards without reservation (see Clouse, 2012; Jamelske, 2009; Shoemaker, 1995; Wilkerson, 2008). From these four studies, representing 12,091 first-year students at four institutions, FYE courses were found to have potentially positive effects on credit accumulation, degree attainment, and general academic achievement (U.S. Department of Education, 2016).
Paired, Adjunct, and Linked Courses

Reading and learning courses may be offered in conjunction with liberal arts core courses to maximize student success. For instance, a reading strategies course may be paired with an introductory psychology course that provides content reading material for the paired course. Both reading and learning strategies courses may be linked to an introductory biology course, with all students in the reading and learning strategies courses also enrolled with the same biology instructor. Whether paired, linked, or offered in tandem, adjunct courses focus their reading and study strategies on the content of the liberal arts core course and assign grades based on students’ learning of reading and learning strategies. First-year writing and mathematics courses seem to be more often linked with content courses than are reading or learning strategies courses, although linking a learning framework course to a mathematics course is becoming more common. A model course curriculum developed by The Charles A. Dana Center (2017) provides a plan for such linkage. A recent reform within developmental education is the corequisite curricular model—originally called the Accelerated Learning Program (ALP)—developed at the Community College of Baltimore County (Boylan, Calderwood, & Bonham, 2017). Institutions enroll students in a developmental and college-level course in the same subject simultaneously. For instance, developmental reading and first-year English are taught in tandem. Students also receive targeted support to help them master the content for each course. The corequisite design allows institutions to offer the developmental course without credit while still generating fees; students move more quickly through their developmental course sequence and earn college credit toward graduation (Complete College America, n.d.).
However, corequisite programs may not be appropriate for all students (Daugherty, Gomez, Gehlhaus Carew, Mendoza-Graf, & Miller, 2018), and effective management and organization of corequisite models require very careful planning: specialized training for instructors, time for them to meet, plan, and collaborate effectively, and close attention to the life and learning needs of low-income, first-generation, and minority students (Boylan et al., 2017).
Tutoring, Mentoring, and Coaching Programs

Many academic departments and learning centers offer academic support by either student peers (Latino & Unite, 2012) or professionals in face-to-face and digital venues. Coaching, a form of academic mentoring, generally matches professional staff or successful students with students new to the institution for encouragement and guidance. Offering peer tutoring—employing students to enhance student learning—in many settings has demonstrated efficient use of resources (Topping, 1996) and effectiveness (Cooper, 2010; Reinheimer & McKenzie, 2011), with benefits for the students served, the students providing service, and the programs hosting their work (Newton & Ender, 2010). However, all peer-provided services pose management problems of recruitment, training, scheduling, documentation, institutional credibility, and turnover, especially in two-year institutions. Technological solutions are being explored to organize scheduling and data management, but studies of their relative efficiency are needed. The Handbook for Training Peer Tutors and Mentors (Agee & Hodges, 2012)—in addition to 76 training modules on a wide variety of topics—provides strategies for managing these kinds of programs, with modules on topics such as hiring, training, and evaluating staff and seeking external funding. Most modules are practitioner-suggested best practices rather than research-supported strategies.
Tutoring, Mentoring, and Coaching by Faculty and Staff

“Tutors” in English and early American colleges were faculty, who monitored students’ moral as well as intellectual efforts. Faculty tutors at Harvard early in the 20th century also provided
individual assistance to students wanting to achieve distinctive “honors” in end-of-program exams (Brubacher & Rudy, 1997). According to Maxwell (1997), when U.S. colleges and universities expanded access in the 1960s, many also created tutorial services. Professional tutors continue to provide services to students in developmental education programs, sometimes in conjunction with learning labs, as at Lincoln University, a Historically Black College and University (HBCU) in Pennsylvania (Fullmer, 2012). Learning center managers can invite faculty to meet with students in the center to increase collaboration and maximize resource efficiency. Faculty and professional staff may also provide services indirectly by designing learning programs and new technologies for learning; faculty-developed intelligent tutoring programs have been compared favorably to academic tutoring by humans (VanLehn, 2011). The Association for the Tutoring Profession (ATP, 2017) provides professional development opportunities for professional tutors and can guide coordinators of these tutors toward appropriate management options. Professional staff and faculty traditionally serve as academic mentors and coaches for both undergraduate and graduate students. One goal has been to increase students’ understanding of their own agency (Griffin, Eury, & Gaffney, 2015). Newer models of mentoring and coaching are developing; for example, noninstitutional coaches have worked with undergraduate students (Bettinger & Baker, 2011), and learning professionals may provide executive function coaching for students with attention deficit hyperactivity disorder (Parker & Boutelle, 2009). Developmental educators have been urged to learn and use techniques of academic coaching in academic courses (Webberman, 2011). For all such programs, institutional management practices for faculty, staff, and outside contractors should be followed.
Peer Tutoring

Modern peer tutoring is often offered free of charge as one of the services in a learning support center and indeed may be its primary function. Some centers are dedicated to the tutoring of science, math, or writing, while others offer general tutoring in a range of subjects to undergraduates and graduate students. Face-to-face tutoring has demonstrated some positive short-term outcomes in a variety of settings (Coladarci, Willett, & Allen, 2013; Colver & Fry, 2016; Hendriksen, Yang, Love, & Hall, 2005; Reinheimer & McKenzie, 2011; Xu, Hartman, Uribe, & Mencke, 2001). Online tutoring also can provide academic support, foster community, and reduce failure (Sax, 2002). To outsource the management requirements of tutoring, some institutions contract out their online tutoring to commercial vendors (e.g., Smarthinking, Thinkingstorm, Tutor.com) or collaborate with other institutions through Tutor Matching Service to meet management and scheduling needs. Such programs should be held to the same standards of management as campus-based services. Because tutors require training to meet the needs of the diverse students at their institutions, in 1989 the CRLA created International Tutor Training Program Certification (ITTPC), which certifies training programs for tutors in postsecondary settings and operates as a community of practice for program supervisors. For all three levels of ITTPC certification, programs must document that their hours and modes of tutor training, training topics, tutoring experience requirements, and criteria for tutor selection and evaluation are up to standard (CRLA, 2018b). CRLA has also developed tutor learning outcomes and assessments that are consonant with the ITTPC certification requirements (Schotka, Bennet-Bealer, Sheets, Stedje-Larsen, & Van Loon, 2014), and these assessments can guide tutor trainers’ decisions about training, supervision, and job enrichment.
Peer Mentoring and Academic Coaching

Two additional modes of providing academic support to students individually or in small groups are mentoring and coaching. Managers of these programs tend to distinguish them
Karen S. Agee, Russ Hodges, and Amarilis M. Castillo
from tutoring by their focus not on specific academic course concerns of the week but on time management and general learning strategies. Though multiple models of mentoring make research findings difficult to generalize (Budge, 2000; Crisp & Cruz, 2009; Jacobi, 1991), peer mentoring has been called a resource-efficient approach for intervention with students in academic crisis (Sorrentino, 2007), and despite some risks (Colvin & Ashman, 2010), most well-designed studies show positive effects on academic success and retention in a variety of forms and settings (Crisp & Cruz, 2009). Peer mentoring of first-year students by more experienced students is sometimes found in first-year seminar programs (Padgett & Keup, 2012) and often found in compensatory education programs, especially TRIO SSS programs, in which peer mentors are employed to augment services provided by professionals at four-year institutions across the U.S. Since 1998, CRLA’s International Mentor Training Program Certification (IMTPC) has achieved some consensus on standards and guidelines for the minimum skills and training that mentors need. The expectation is that trained mentors will provide better service to students, and the mentor program will thereby gain greater credibility. Programs approved by IMTPC are expected to follow a code of ethics and encouraged to provide recognition and positive reinforcement to their mentors. Certification is available at three levels of experience and competence to provide increasing rewards and campus recognition for mentors (College Reading and Learning Association, 2018a), and many mentor training options are available to professionals managing these programs (Agee & Hodges, 2012). Peer academic coaching (borrowed from mentoring and coaching models for training in business contexts) has developed to support students in medical and professional programs. 
For undergraduate students, peer coaching is widely employed to supply and model learning strategies (Cuseo, 2010) and provide academic support to specific student populations, such as students with learning disabilities (Zwart & Kallemeyn, 2001). Peer coaching is thus an adaptable model for supporting students’ diverse needs and circumstances, with management concerns similar to those of tutor and mentor programs.
Peer Cooperative Learning Programs

More than 40 years ago, Goldschmid and Goldschmid (1976) provided a comprehensive perspective on the many and varied models of peer teaching. Whitman (1988) later divided peer teaching into near-peer and co-peer categories to explain the various roles of students helping students, including teaching assistants, tutors, counselors, partnerships, and work groups. New formulations of peer cooperative learning continue to be developed: for example, “reciprocal peer coaching” groups have been devised to encourage self-regulation and provide ongoing formative assessment to students in a first-year, high-risk course (Asghar, 2010). Management challenges for these programs often include frequent recruiting of student peer leaders, ongoing training, scheduling, and close collaboration between faculty and student services professionals. Standards for managing these kinds of programs are detailed in the NADE Self-Evaluation Guides (Clark-Thayer & Cole, 2009). Peer cooperative learning was defined by Arendale (2004) as a subset of collaborative learning often sharing principles of purpose, process, and structure. In his thorough review, Arendale identified distinctive research-supported peer cooperative learning programs in postsecondary settings that promote both course content and learning strategies. Some are “embedded” within the academic course: Emerging Scholars Programs (ESPs), Video-based Supplemental Instruction (VSI), and Peer-led Team Learning (PLTL). Others are “adjunct” to the course and led by someone other than the course instructor: Supplemental Instruction (SI), Accelerated Learning Groups, and Structured Learning Assistance (SLA). Four of these programs are widely employed.
Supplemental Instruction

Supplemental Instruction (now known as SI) is the most widely adopted postsecondary cooperative learning program in the world. Because SI identifies not high-risk students but high-risk courses—courses with high rates of Ds, Fs, and withdrawals (DFW)—this approach to academic success avoids the stigma of remediation. All students in a course are encouraged to attend, and SI’s goals are to improve grades, retention, and graduation rates. SI—developed in 1972 by Deanna Martin to reduce attrition of talented minority students enrolled in medicine, pharmacy, and dentistry at the University of Missouri-Kansas City—was certified in 1981 by the U.S. Department of Education as an exemplary program and thus became eligible for funds from the National Diffusion Network to train educators in the use of SI (Arendale, 2002; Widmar, 1994). The International Center for Supplemental Instruction currently offers annual international conferences, publications and research studies, coordinator and staff training, newsletters, an e-mail discussion group, and a variety of resources for program managers (University of Missouri-Kansas City, 2018).
Structured Learning Assistance

SLA adds to the SI approach both mandatory student attendance and a strong faculty development component. SLA workshops offer students a safety net for targeted high-risk courses, typically STEM courses. First developed in 1994 at Ferris State University, SLA features mandatory 2.5–3 weekly hours of directed study and practice in a group setting. SLA workshops are led by trained peer facilitators who attend course lectures and collaborate with course instructors. The program also serves to give course instructors regular feedback on their teaching. In 2000, Ferris State’s SLA program received a Theodore M. Hesburgh Award Certificate of Excellence from TIAA-CREF; in 2001, Ferris State received a U.S. Department of Education Fund for the Improvement of Post-Secondary Education (FIPSE) grant to help three partner universities and one two-year college to develop their own SLA programs (Ferris State University, 2018). Since then, more institutions have utilized the program for academic success and other student goals; a nursing program has used SLA to boost students’ National Council Licensure Examination (NCLEX) scores (Morton, 2006). Development of a community of practice can guide campus personnel who hire and train the peer facilitators and create collaborative relationships with faculty.
Emerging Scholars Program

From a year’s observations and interviews of calculus students at the University of California, Berkeley (UCB), Philip Uri Treisman discovered that African American students were more likely to study alone for the faculty-recommended eight hours weekly; Chinese American students were much more likely to combine social and study time, spending about 14 hours weekly not only sharing calculus knowledge but also assisting each other with difficult problems and consulting TAs together (Fullilove & Treisman, 1990). Based on the final grades, Treisman posited that students’ use of peer collaborative learning was the critical element in their mastery of calculus and developed UCB’s Mathematics Workshop Program (Fullilove & Treisman, 1990). Since then, over 100 institutions have utilized Treisman’s model, now often known as the Emerging Scholars Program (ESP) (Arendale, 2004), and the large number of practitioners can serve as a resource for program managers. Managing ESP programs requires coordinating a number of initiatives and collaborating with multiple programs and services. Most postsecondary ESP programs establish “honor” communities, usually for first-year minority students. The cohorts are academically oriented, utilizing peer
support, extensive orientation, ongoing academic advising, and ongoing adjunct instructional sessions that promote cognitive and metacognitive learning strategies. ESP continues to contribute to student success, especially in STEM programs (e.g., Epperson, Peterson, & Houser, 2015).
Peer-led Team Learning

PLTL engages teams of six to eight students in group sessions led by well-trained peer leaders, students who have done well in the course. Group enrollment is required in STEM courses offering PLTL. Peer leaders provide guidance, structure, and encouragement but not answers. PLTL programs have demonstrated success with a number of student populations, such as minority students (Snyder, Sloane, Dunk, & Wiles, 2016). PLTL programs can be administratively centralized (Alberte, Cruz, Rodgriguez, & Pitzer, 2012) or decentralized to multiple departments. Originally developed at the City University of New York in the 1990s, PLTL grew quickly, established a national office, and produced standardized print curriculum materials and workbooks. Support from the National Science Foundation has assisted more than 100 postsecondary institutions to adopt this model, and PLTL programs are widely encouraged for STEM course faculty (Wilson & Varma-Nelson, 2016). The PLTL national website offers numerous resources for postsecondary institutions employing this program (Center for Peer-led Team Learning, 2018).
TRIO Federal Grant Programs

Compensatory education programs evolved out of President Johnson’s 1960s Great Society legislative agenda. Thirty-eight years before his election as vice president, Johnson taught in an economically disadvantaged school in Cotulla, Texas. He believed that equitable educational opportunities could compensate for past injustices and counteract potentially negative effects of poverty. The Economic Opportunity Act of 1964, Higher Education Act of 1965, and 1968 amendment to the Higher Education Act of 1965 created the original trio of programs; now there are eight TRIO programs (U.S. Department of Education, 2011). These compensatory programs continue to serve students from disadvantaged backgrounds, displaced workers, veterans, and students with disabilities—populations of students that traditionally face many disadvantages. Students first in their generation to attend college are especially at risk and need support to compensate for cultural and social factors, such as racism, poverty, lack of familial support, and previous inadequate opportunities for literacy development (Garcia, 2015). TRIO programs remain a pathway for ensuring college preparedness and access for all students and have been successful in increasing both the higher education attendance rates and educational attainment of students (Pitre & Pitre, 2009). Two-thirds of students in TRIO programs are required to be from families with incomes at 150 percent or less of the federal poverty level and in which neither parent graduated from college (U.S. Department of Education, 2014).
To support students prior to enrollment in college, Educational Opportunity Centers, Talent Search, Upward Bound, Veterans Upward Bound, and Upward Bound Math and Science all seek to increase the number of eligible students who complete high school and enroll in or continue in postsecondary education through academic, career, and financial counseling and other forms of support, such as tutoring and assistance with college applications and financial aid options. At the postsecondary level, the goal of Student Support Services (SSS) is to increase the college retention and graduation rates of its eligible students by providing information on financial aid options, scholarships, and financial literacy; support through academic counseling, tutoring, and advising; and assistance when transferring from two-year to four-year programs or applying to graduate and professional schools. The Ronald E. McNair Post-Baccalaureate Achievement Program seeks to increase enrollment in and attainment of doctoral degrees by eligible undergraduate students through early
involvement in research and scholarship opportunities, seminars, summer internships, and other forms of academic support (U.S. Department of Education, 2014). Management of TRIO programs is complicated by lack of certainty that funding will continue. TRIO grants are awarded and renewed on a five-year cycle of competition. Recipients are institutions of higher education, public and private agencies, and community-based organizations or combinations of such institutions, all with missions to serve disadvantaged youth and adults. TRIO program directors and staff work in collaboration with targeted schools and community agencies to identify, recruit, and select eligible participants and must also plan, develop, and implement services as specified in the grant. In 2013–2014, 2,791 TRIO grants were awarded and served 758,325 eligible students (U.S. Department of Education, 2014). To assist program managers, CAS standards for TRIO and other educational opportunity programs have been developed to supplement the general standards and the learning assistance programs standards and guidelines. Arendale (2017) has created the EOA National Best Practices Center, a rich resource for program managers to share best practices. In addition, professional development is offered through Training Programs for Federal TRIO Program Staff—one of the eight TRIO grant programs. Read more about TRIO in Chapter 16.
Learning Communities

Learning communities, another high-impact educational practice (Kuh, 2008), create cohorts of students in two or more linked courses and often house students together on campus. Because student involvement in an institution’s unique culture is associated with retention and academic success (Tinto, 1994), learning communities are created to introduce first-year students to academia, and some have been adapted to meet the special needs of less-prepared students (Tinto, 1998). Although learning communities were not developed primarily to provide developmental education, Laufgraben and Shapiro (2004) reported that they characteristically serve as settings for academic support programs. A benefit of learning in communities and sharing the same adjunct courses is that students can learn appropriate reading and learning strategies not as “decontextualized skills” but as “recontextualized abilities” (Malnarich, 2003, p. 27). The explosion of campus learning communities since the late 1980s owes much to the leadership of the Washington Center for Undergraduate Education at Evergreen State College. With funding assistance from FIPSE and the Pew Charitable Trusts, the Washington Center has created a network of learning communities supported by workshops, resources, publications, and a residential summer institute for faculty teams from institutions developing learning communities (Washington Center, 2018). The Center’s Journal of Learning Communities Research, inaugurated in 2006, is now the online Learning Communities Research and Practice. More than 300 institutions are listed in the online learning communities directory. Roueche and Roueche (1999) listed “development of learning communities in the [developmental] program and collegewide” as an effective practice for making developmental education work (p. 34).
Boylan (2002) also listed learning communities among the best practices for improving instruction—but cautioned that they are labor-, training-, and collaboration-intensive for faculty and may not be suitable for all students. Learning communities organized around a theme may share not only knowledge but also knowing by involving students in the development and sharing of course content (Tinto, 1998, p. 4), but only if faculty attend to and intend such purposes. The student affairs aspects of managing learning community programs are as complex as the academic aspects. The advice of skilled practitioners in Benjamin’s well-edited handbook (2015) should be helpful to supervisors of these programs. Other sources describe how learning communities may integrate the curriculum with the co-curriculum, for instance by locating the community in residence halls or residence houses (Lenning & Ebbers, 1999). Some employ peer collaborative learning models to capitalize on developing community (Halley, Heiserman,
Felix, & Eshleman, 2013). Evaluation studies of some programs seek outcomes in both academic learning and student development (Pike, 1999).
Recommendations for Program Management and Further Research

Most of this final section is devoted to recommendations for improved program management. Throughout this chapter, the authors identified areas for more research on program management. Regrettably, much of the professional literature is descriptive only. One reason for this imbalance may be that many individuals working in the field lack evaluation skills. (See more on that issue later in this section.) Another reason may be institutional discouragement of management review in favor of less-evaluative institution-wide reviews. Following are our recommendations for improvement of program management. Over 20 million students are currently enrolled in the institutions of our unique system of postsecondary education in the U.S. While educational access waxes and wanes, innovative and creative models are ever needed to meet the needs of such a diverse array of students. The challenge, to paraphrase the CAS standards, is for professionals in well-managed programs to consider the needs of their students and communities when they establish their programs’ goals and objectives and the learning and program outcomes they seek to accomplish—and then to assess and update as needed. Another challenge is for reading and learning professionals to become research savvy and engage in continuing professional development. University and college faculty generally do seek out professional development opportunities to increase and assess their teaching effectiveness (Cranton, 2016; O’Meara, Rivera, Kuvaeva, & Corrigan, 2017). However, one of the main challenges for college reading and literacy professionals, as argued by Paulson and Armstrong (2014), is that “the type, quality, and amount of training required of those who provide college literacy instruction does not adequately reflect the needs of students who are already in the midst of such a crucial transition” (p. 520).
A third challenge is that programs and services must evolve to accommodate the learning needs of diverse students. Much may be known about learning strategies of particular use to students with learning disabilities, for instance, and by utilizing research-based inclusive teaching strategies and learner supports, instructors can create a more accessible and successful learning environment for these students (Orr & Hammig, 2009; Tinto, 2012). Without deep research, however, it is difficult to develop and apply empirically validated practices for students with disabilities (Madaus et al., 2016). Thus, the offerings of professional organizations (conferences, special interest groups, state and regional organizations, numerous online resources, and peer-reviewed professional journals) serve a key role in continuous training for professionals in the field. Much can be learned from the offerings of professional organizations in postsecondary learning, higher education, and student development in addition to discipline-specific professional organizations, such as the International Literacy Association. The CRLA, the NADE, and other member associations of the Council of Learning Assistance and Developmental Education Associations (CLADEA) also foster productive discourse among professionals in the field. Nevertheless, professional organizations must monitor their offerings for currency and relevance. (The 2014 review by Bauer and Theado is one example of an inquiry into the articles in one journal.) Hodges and Agee (2009) expressed optimism that a wide variety of student center and learning center programs are now firmly established and widely emulated: “Reading and learning are now seen not as handout-ready skills but as strategic processes situated in time and space, processes that require learners to choose thoughtfully among approaches depending on purpose and circumstances” (p. 371). That optimism was premature.
If all developmental educators and learning center personnel were operating on best practices, then reading and learning instruction would be informed by metacognition (Gourgey, 1999) rather than expectations of skill and drill. Whether programs are organized around academic courses or other models, students now should learn in
context. The programs described in this chapter should support students constructing their own strategies and understandings (Biddlecomb, 2005). There is no evidence, however, that consensus has been achieved on what constitutes an excellent program or that all program personnel actually hold themselves to high standards. The kinds of programs described in this chapter tend to manifest care about important themes in higher education, such as universal design, multicultural education, and the core curriculum (as urged by Barajas & Higbee, 2003). Some make intentional connections with middle school and high school education and with retention, graduation, and lifelong learning. Well-managed reading and learning programs help institutions to accomplish their missions and achieve both institutional and their students’ goals. Although negative press in the last two decades has faulted developmental education programs for accomplishing too little while using too many resources, in fact intentionally developed reading and learning programs model higher education’s very purpose. Providing basic skills instruction in writing and mathematics—especially in a remedial paradigm—has been the target of much criticism. When student outcomes have not met expectations in terms of student persistence, retention, and certificate and degree completion, institutions explore other models. Because the U.S. is undertaking far-reaching reforms to widen college access and improve college readiness, bringing current exploratory efforts to scale will not be easy (Boylan et al., 2017). It is difficult to predict whether current program models will be redeployed in new ways or whether entirely new programs will be needed for some settings. Explorations of new models—contextualized, pathways, accelerated, fast-track, emporium, counseling, and early support models, to name a few—are already underway and being assessed (Daugherty et al., 2018).
In Leaving College, Tinto (1994) named only two programs for retaining, engaging, and supporting students academically: learning communities and SI. Clearly, states and institutions seeking to improve the quality of teaching and learning and to integrate their developmental and college-level courses with academic support and other student services have other successful models to explore. The wide variety of programs outlined in this chapter ensures that foundations for future models are already in place. It will be the responsibility of program managers to assess and report on the effectiveness and efficiency of the models they have chosen.
Appendix

CAS General Standards on Program Planning, Management, and Advancement

In its 2018 General Standards document, the Council for the Advancement of Standards in Higher Education (CAS) outlined the planning, management, and advancement activities required of program leaders. These General Standards apply to all higher education functional areas for which the Council has established standards, including the CAS Learning Assistance Program standards and guidelines for developmental education and learning assistance programs in and beyond the U.S.

6.2 Management
The functional area managers must
* be empowered to demonstrate effective management
* plan, allocate, and monitor the use of fiscal, physical, human, intellectual, and technological resources
* develop plans for scholarship, leadership, and service to the institution and the profession
* engage diverse perspectives from within and outside the unit to inform decision making.
6.3 Supervision
The functional area supervisors must
* incorporate institutional policies and procedures in the development of strategies for recruitment, selection, professional development, supervision, performance planning, succession planning, evaluation, recognition, and reward of personnel
* consult with institutional HR personnel to access and receive education and training that influence successful performance of personnel
* provide feedback on personnel performance
* identify and resolve workplace conflict
* follow institutional policies for addressing complaints
* provide reports and activity updates to management
* work with personnel to develop plans for scholarship, leadership, and service to the profession and institution
* provide supervision and support so that personnel may complete assigned tasks
6.4 Strategic Planning

The functional area leaders, managers, and supervisors must facilitate ongoing strategic planning processes that
* facilitate continuous development, implementation, assessment, and evaluation of program effectiveness and goal attainment congruent with institutional mission and ongoing planning efforts
* support ongoing assessment activities that improve student learning, development, and success
* utilize philosophies, principles, and values that guide the work of the functional area
* promote environments that provide opportunities for student learning, development, and success
* develop, adapt, and improve programs and services in response to the needs of changing environments, populations served, and evolving institutional priorities
* engage many diverse constituents and perspectives from within and outside the unit to inform the development and implementation of the planning process
* result in a vision and mission that drive short- and long-term planning
* set goals and objectives based on the needs of the populations served, intended student learning and development outcomes, and program outcomes (Council for the Advancement of Standards, 2018, pp. 11–12).
References and Suggested Readings

Acee, T. W., & Hodges, R. (2017). [Learning framework courses in Texas]. Unpublished raw data.
ACT. (2016). The condition of college and career readiness 2016. Retrieved from www.act.org/content/dam/act/unsecured/documents/2016-CCCR-InfoGraphic.pdf
Adelman, C. (1998). The kiss of death? An alternative view of college remediation. National CrossTalk (Summer). National Center for Public Policy and Higher Education. Retrieved from www.highereducation.org/crosstalk/ct0798/voices0798-adelman.shtml
Agee, K., & Hodges, R. (Eds.). (2012). Handbook for training peer tutors and mentors. Mason, OH: Cengage Learning.
Alberte, J., Cruz, A., Rodgriguez, N., & Pitzer, T. (2012). Hub n’ spokes: A model for centralized organization of PLTL at Florida International University. Conference Proceedings of the Peer-led Team Learning International Society Inaugural Conference, Brooklyn, NY. Retrieved from pltlis.org/wp-content/uploads/2012%20Proceedings/Alberte-2-2012.docx
Allen, N. J., DeLauro, K. A., Perry, J. K., & Carman, C. A. (2017). Does literacy skill level predict performance in community college courses: A replication and extension. Community College Journal of Research and Practice, 41(3), 203–216. doi:10.1080/10668926.2016.1179606
Arendale, D. (2002). History of Supplemental Instruction (SI): Mainstreaming of developmental education. In D. B. Lundell & J. L. Higbee (Eds.), Histories of developmental education (pp. 15–27). Minneapolis, MN: Center for Research on Developmental Education and Urban Literacy, General College, University of Minnesota.
*Arendale, D. (2004). Pathways of persistence: A review of postsecondary peer cooperative learning programs. In I. M. Duranczyk, J. L. Higbee, & D. B. Lundell (Eds.), Best practices for access and retention in higher education (pp. 27–40). Minneapolis, MN: Center for Research on Developmental Education and Urban Literacy, General College, University of Minnesota.
*Arendale, D. R. (2010). Access at the crossroads: Learning assistance in higher education (ASHE Higher Education Report No. 35[6]). San Francisco, CA: Jossey-Bass.
Arendale, D. R. (Ed.). (2017). 2017 EOA national best practices center directory (3rd ed.). Retrieved from www.besteducationpractices.org/
Armstrong, S. L., & Stahl, N. A. (2017). Communication across the silos and borders: The culture of reading in a community college. Journal of College Reading and Learning, 47(2), 99–122. doi:10.1080/10790195.2017.1286955
Asghar, A. (2010). Reciprocal peer coaching and its use as a formative assessment strategy for first-year students. Assessment & Evaluation in Higher Education, 35, 403–417. doi:10.1080/02602930902862834
Association for the Tutoring Profession (ATP). (2017). ATP certification levels and requirements. Retrieved from www.myatp.org/atp-certification-levels-and-requirements/
Banks, J. (2005). African American college students’ perceptions of their high school literacy preparation. Journal of College Reading and Learning, 35(2), 22–37.
Barajas, H. L., & Higbee, J. L. (2003). Where do we go from here? Universal design as a model for multicultural education. In J. L. Higbee (Ed.), Curriculum transformation and disability: Implementing universal design in higher education (pp. 285–290). Minneapolis, MN: Center for Research on Developmental Education and Urban Literacy, General College, University of Minnesota.
Bauer, L., & Theado, C. K. (2014). Examining the “social turn” in postsecondary literacy research and instruction: A retrospective view of JCRL scholarship, 2005–2013. Journal of College Reading and Learning, 45(1), 67–84.
Benjamin, M. (Ed.). (2015). Learning communities from start to finish. San Francisco, CA: Jossey-Bass.
Berkeley Student Learning Center. (2017). Courses for academic success and strategic learning. Retrieved from slc.berkeley.edu/academic-success-and-strategic-learning-resources
Bettinger, E. P., & Baker, R. (2011). The effects of student coaching in college: An evaluation of a randomized experiment in student mentoring (Unpublished manuscript). Stanford University, Palo Alto, CA. Retrieved from ed.stanford.edu/sites/default/files/bettinger_baker_030711
Biddlecomb, B. D. (2005). Uniting mathematical modeling and statistics: Data analysis in the college classroom. The Learning Assistance Review, 10(2), 41–51.
Boatman, A., & Long, B. T. (2018). Does remediation work for all students? How the effects of postsecondary remedial and developmental courses vary by level of academic preparation. Educational Evaluation and Policy Analysis, 40(1), 29–58. doi:10.3102/0162373717715708
Boylan, H. R. (2002). What works: Research-based best practices in developmental education. Boone, NC: Continuous Quality Improvement Network with the National Center for Developmental Education.
Boylan, H. R., Calderwood, B. J., & Bonham, B. S. (2017). College completion: Focus on the finish line (White Paper). Boone, NC: National Center for Developmental Education. Retrieved from the Appalachian State University website: ncde.appstate.edu/news/college-completion-focus-finish-line-boylancalderwood-and-bonham-march-1-2017
*Boylan, H. R., & Saxon, P. D. (2012). Obtaining excellence in developmental education: Research-based recommendations for administrators. National Center for Developmental Education. Boone, NC: DevEd Press.
Brubacher, J. S., & Rudy, W. (1997). Higher education in transition: A history of American colleges and universities (4th ed.). New Brunswick, NJ: Transaction.
Budge, S. (2000). Peer mentoring in postsecondary education: Implications for research and practice. Journal of College Reading and Learning, 31(1), 71–85.
Calcagno, J. C. (2007). Evaluating the impact of developmental education in community colleges: A quasi-experimental regression-discontinuity design (Unpublished doctoral dissertation). Columbia University.
The Center for Peer-led Team Learning. (2018). Peer-led Team Learning. Retrieved from sites.google.com/view/pltl
Karen S. Agee, Russ Hodges, and Amarilis M. Castillo
The Charles A. Dana Center. (2017). Frameworks for mathematics and collegiate learning course. Retrieved from the University of Texas website: www.utdanacenter.org/ Chen, X. (2016). Remedial coursetaking at U.S. public 2- and 4-year institutions: Scope, experiences, and outcomes (NCES 2016–405). U.S. Department of Education. Washington, DC: National Center for Education Statistics. Retrieved from nces.ed.gov/pubs2016/2016405.pdf Christ, F. L. (1997). Using MBO to create, develop, improve, and sustain learning assistance programs. In S. Mioduski & G. Enright (Eds.), Proceedings of the 17th and 18th Annual Institutes for Learning Assistance Professionals: 1996 and 1997 (pp. 43–51). Tucson, AZ: University of Arizona. Retrieved from www.lsche. net/?page_id=1062 *Clark-Thayer, S., & Putnam Cole, L. (Eds.). (2009). NADE self-evaluation guides: Best practice in academic support programs (2nd ed.). Clearwater, FL: H&H. Clouse, W. A. (2012). The effects of non-compulsory freshman seminar and core curriculum completion ratios on post-secondary persistence and baccalaureate degree attainment (Doctoral dissertation). Retrieved from ProQuest Dissertations and Theses database. (UMI No. 3523633) Coladarci, T., Willett, M. B., & Allen, D. (2013). Tutor program participation: Effects on GPA and retention to the second year. The Learning Assistance Review, 18(2), 79–96. Cole, R. P., Babcock, C., Goetz, E. T., & Weinstein, C. E. (1997, October). An in-depth look at academic success courses. Paper presented at the meeting of the College Reading and Learning Association, Sacramento, CA. College Reading and Learning Association (CRLA). (2018a). CRLA International Mentor Training Program Certification. Retrieved from crla.net/index.php/certifications/imtpc-international-mentor-training-program College Reading and Learning Association (CRLA). (2018b). CRLA International Tutor Training Program Certification. 
Retrieved from crla.net/index.php/certifications/ittpc-international-tutor-training-program Colver, M., & Fry, T. (2016). Evidence to support peer tutoring programs at the undergraduate level. Journal of College Reading and Learning, 46(1), 16–41. Colvin, J. W., & Ashman, M. (2010). Roles, risk, and benefits of peer mentoring relationships in higher education. Mentoring & Tutoring, 18(2), 121–134. Complete College America. (n.d.). Transform remediation: The co-requisite course model. Retrieved from www.completecollege.org/docs/CCA%20Co-Req%20Model%20-%20Transform%20Remediation%20for%20Chicago%20final(1).pdf Cooper, E. (2010). Tutoring center effectiveness: The effect of drop-in tutoring. Journal of College Reading and Learning, 40(2), 21–34. *Council for the Advancement of Standards in Higher Education. (2018). CAS general standards. Advance online publication. Retrieved from cas.edu/generalstandards *Council for the Advancement of Standards in Higher Education. (2016). Learning assistance programs: CAS standards and guidelines. Ft. Collins, CO: Author. Cranton, P. (2016). Continuing professional education for teachers and university and college faculty. New Directions for Adult and Continuing Education, 2016(151), 43–52. doi:10.1002/ace.20194 Crisp, G., & Cruz, I. (2009). Mentoring college students: A critical review of the literature between 1990 and 2007. Research in Higher Education, 50(6), 525–545. doi:10.1007/s11162-009-9130-2 Cuseo, J. (2010). Peer leadership: Definition, description, and classification. E-Source for College Transitions, 7(5), 3–5. Datray, J. L., Saxon, D. P., & Martirosyan, N. M. (2014). Adjunct faculty in developmental education: Best practices, challenges, and recommendations. Community College Enterprise, 20(1), 36–49. Daugherty, L., Gomez, C. J., Gehlhaus Carew, D., Mendoza-Graf, A., & Miller, T. (2018). Designing and implementing corequisite models of developmental education: Findings from Texas community colleges. 
Santa Monica, CA: RAND Corporation. Retrieved from rand.org/pubs/research_reports/RR2337.html Epperson, J. A. M., Peterson, L., & Houser, F. J. (2015). Intervention in Calculus: Back-mapping performance differences to tasks in the Emerging Scholars Program. Proceedings of the 2015 ASEE Gulf-Southwest Annual Conference, American Society for Engineering Education. Retrieved from engineering.utsa.edu/~ aseegsw2015/papers/ASEE-GSW_2015_submission_121.pdf Fain, P. (2012, February 12). Success begets success. [Web log post]. Retrieved from insidehighered.com/ news/2012/02/21/student-success-courses-catch-slowly-community-colleges FairTest: The National Center for Fair and Open Testing. (2017). 950+ accredited colleges and universities that do not use ACT/SAT scores to admit substantial numbers of students into bachelor-degree programs. Retrieved from www.fairtest.org/university/optional Feller, B. (2006). Study: Reading key to college success. Retrieved from www.boston.com/news/ education/k_12/articles/2006/03/01/study_reading_key_to_college_success?mode=PF
Program Management
Ferris, D. R., Evans, K., & Kurzer, K. (2017). Placement of multilingual writers: Is there a role for student voices? Assessing Writing, 32, 1–11. doi:10.1016/j.asw.2016.10.001 Ferris State University. (2018). History of SLA. Retrieved from www.ferris.edu/HTMLS/academics/sla/ history/homepage.htm Fields, R., & Parsad, B. (2012). Tests and cut scores used for student placement in postsecondary education: Fall 2011. National Assessment Governing Board, Washington, DC. Retrieved from www.nagb.org/content/nagb/ assets/documents/commission/researchandresources/test-and-cut-scores-used-for-student-placementin-postsecondary-education-fall-2011.pdf *Fullilove, R. E., & Treisman, P. U. (1990). Mathematics achievement among African American undergraduates at the University of California, Berkeley: An evaluation of the mathematics workshop program. The Journal of Negro Education, 59, 463–478. Fullmer, P. (2012). Assessment of tutoring laboratories in a learning assistance center. Journal of College Reading and Learning, 42(2), 67–89. Garcia, A. A. (2015). Fostering the success of learners through support programs: Student perceptions on the role of TRIO Student Support Services, from the voices of active and non-active TRIO eligible participants (Unpublished doctoral dissertation). University of Texas at San Antonio: San Antonio, TX. *Gardner, J. N., Barefoot, B. O., & Swing, R. L. (2001). Guidelines for evaluating the first-year experience at four-year colleges (2nd ed.). The National Resource Center for the First-Year Experience and Students in Transition. Retrieved from the University of South Carolina website: sc.edu/fye/research/assessment_ resources/pdf/Guidelines_for_Evaluating.pdf Goen-Salter, S. (2008). Critiquing the need to eliminate remediation: Lessons from San Francisco State. Journal of Basic Writing, 27(2), 81–105. Goldschmid, B., & Goldschmid, M. L. (1976). Peer teaching in higher education: A review. Higher Education, 5, 9–33. Goudas, A. M. (2017, April). 
Multiple measures for college placement: Good theory, poor implementation. Community College Data. Retrieved from communitycollegedata.com/articles/multiple-measures-for-college-placement/ Gourgey, A. F. (1999). Teaching reading from a metacognitive perspective: Theory and classroom experiences. Journal of College Reading and Learning, 30(1), 85–93. Griffin, K. A., Eury, J. L., & Gaffney, M. E. (2015). Digging deeper: Exploring the relationship between mentoring, developmental interactions, and student agency. New Directions for Higher Education, 2015(171), 13–22. doi:10.1002/he.20138 Haan, J. E., Gallagher, C. E., & Varandani, L. (2017). Working with linguistically diverse classes across the disciplines: Faculty beliefs. Journal of the Scholarship of Teaching and Learning, 17(1), 37–51. doi:10.14434/josotl.v17i1.20008 Halley, J., Heiserman, C., Felix, V., & Eshleman, A. (2013). Students teaching students: A method for collaborative learning. Learning Communities Research and Practice, 1(3), Article 7. Retrieved from washingtoncenter.evergreen.edu/lcrpjournal/vol1/iss3/7 Hardin, C. J. (1998). Who belongs in college: A second look. In J. L. Higbee & P. L. Dwinell (Eds.), Developmental education: Preparing successful college students (pp. 15–24). Columbia, SC: National Resource Center for the First-Year Experience & Students in Transition. Harvard. (2018). Reading course. Retrieved from bsc.harvard.edu/content-type/course Hawley, J. D., & Chiang, S-C. (2017). Does developmental education help? Findings from the academic performance of adult undergraduate students in community colleges. Community College Journal of Research and Practice, 41(7), 387–404. doi:10.1080/10668926.2016.1194237 Hayes, S. M., & Williams, J. L. (2016). ACLT 052: Academic literacy—An integrated, accelerated model for developmental reading and writing. NADE Digest, 9(1), 13–22. Hendriksen, S. I., Yang, L., Love, B., & Hall, M. C. (2005). 
Assessing academic support: The effects of tutoring on student learning outcomes. Journal of College Reading and Learning, 29(2), 101–122. *Hodges, R., & Agee, K. (2009). Program management. In R. F. Flippo & D. C. Caverly (Eds.), Handbook of college reading and study strategy research (pp. 351–378). New York, NY: Routledge. Hodges, R., Sellers, D., & Dochen, C. W. (2012). Implementing a learning framework course. In R. Hodges, M. L. Simpson, & N. A. Stahl (Eds.), Teaching study strategies in developmental education: Readings on theory, research and best practice (pp. 314–325). Boston, MA: Bedford/St. Martin’s. Horn, A. S., & Asmussen, J. G. (2014). The traditional approach to developmental education: Background and effectiveness (Research Brief). Retrieved from Midwestern Higher Education Compact website: www.mhec.org/sites/mhec.org/files/2014nov_traditional_approach_dev_ed_background_effectiveness.pdf Jacobi, M. (1991). Mentoring and undergraduate academic success: A literature review. Review of Educational Research, 61, 505–532.
Jamelske, E. (2009). Measuring the impact of a university first-year experience program on student GPA and retention. Higher Education, 57(3), 373–391. John N. Gardner Institute. (n.d.). The first-year seminar: A central component of higher education retention efforts. Retrieved from jngi.org/wordpress/wp-content/uploads/2016/06/The-First-Year-Seminar-Template_ Betsy-Barefoot.pdf Kelly, C., & Brower, C. (2017). Making meaning through media: Scaffolding academic and critical media literacy with texts about schooling. Journal of Adolescent & Adult Literacy, 60(6), 655–666. doi:10.1002/jaal.614 Koch, A. K., & Gardner, J. N. (2014). A history of the first-year experience in the United States during the twentieth and twenty-first centuries: Past practices, current approaches, and future directions. The Saudi Journal of Higher Education, 11, 11–44. Retrieved from wiu.edu/first_year_experience/instructors_and_ faculty/students/History%20of%20the%20FYE%20Article_Koch%20and%20Gardner.pdf *Kuh, G. D. (2008). High-impact educational practices: What they are, who has access to them, and why they matter. Washington, DC: Association of American Colleges and Universities. Retrieved from provost.tufts.edu/ celt/files/High-Impact-Ed-Practices1.pdf Latino, J. A., & Unite, C. M. (2012). Providing academic support through peer education. In J. R. Keup (Ed.), Peer leadership in higher education (pp. 31–43). New Directions for Higher Education, no. 157. doi:10.1002/he.20004 Laufgraben, J. L., & Shapiro, N. S. (2004). Sustaining and improving learning communities. San Francisco, CA: Jossey-Bass. Lenning, O. T., & Ebbers, L. H. (1999). The powerful potential of learning communities: Improving education for the future. ASHE-ERIC Higher Education Report, 26(6). Washington, DC: The George Washington University, Graduate School of Education and Human Development. Mackenzie, A. H. (2009). Preparing high school students for college science classes. The American Biology Teacher, 71(1), 6–7. 
Madaus, J. W., Gelbar, N., Dukes, L. L., III, Lalor, A. R., Lombardi, A., Kowitt, J., & Faggella-Luby, M. N. (2016). Literature on postsecondary disability services: A call for research guidelines. Journal of Diversity in Higher Education. Advance online publication. Retrieved from dx.doi.org/10.1037/dhe0000045 Malin, J. R., Bragg, D. D., & Hackmann, D. G. (2017). College and career readiness and the Every Student Succeeds Act. Educational Administration Quarterly, 53(5), 809–838. doi:10.1177/0013161X17714845 Malnarich, G. (2003). The pedagogy of possibilities: Developmental education, college-level studies, and learning communities. National Learning Communities Project Monograph Series. Olympia, WA: The Evergreen State College, Washington Center for Improving the Quality of Undergraduate Education, in cooperation with the American Association for Higher Education. Marsh, B. (2015). Reading-writing integration in developmental and first-year composition. National Council of Teachers of English, 43(1), 58–70. Martirosyan, N. M., Kennon, J. L., Saxon, D. P., Edmonson, S. L., & Skidmore, S. T. (2017). Instructional technology practices in developmental education in Texas. Journal of College Reading and Learning, 47(1), 3–25. doi:10.1080/10790195.2016.1218806 *Maxwell, M. (1997). Improving student learning skills: A new edition. Clearwater, FL: H&H. Mayo, T. (2013). First-year course requirements and retention for community colleges. Community College Journal of Research and Practice, 37(10), 764–768. Miller, S. D., & Atkinson, T. S. (2001). Cognitive and motivational effects of seeking academic assistance. Journal of Educational Research, 94, 323–334. Moore, R. (2006). Do high school behaviors set up developmental education students for failure? The Learning Assistance Review, 11(2), 19–32. Morton, A. M. (2006). Improving NCLEX scores with Structured Learning Assistance. Nurse Educator, 31(4), 163–165. Mt. San Antonio College. (2018). LAC reading classes. 
Retrieved from www.mtsac.edu/lac/lac_reading.html National Assessment of Educational Progress (2016). The nation’s report card. Retrieved from nationsreportcard. gov/ National Conference of State Legislatures (2017). Hot topics in higher education: Reforming remedial education. Retrieved from www.ncsl.org/research/education/improving-college-completion-reforming-remedial.aspx National Resource Center for the First-Year Experience and Students in Transition. (n.d.). Upcoming events. Retrieved from the University of South Carolina website: www.sc.edu/fye/ Newton, F. B., & Ender, S. C. (2010). Students helping students: A guide for peer educators on college campuses (2nd ed.). San Francisco, CA: Jossey-Bass. O'Meara, K. A., Rivera, M., Kuvaeva, A., & Corrigan, K. (2017). Faculty learning matters: Organizational conditions and contexts that shape faculty learning. Innovative Higher Education, 42(4), 355–376. doi:10.1007/s10755-017-9389-8
Orr, A. C., & Hammig, S. B. (2009). Inclusive postsecondary strategies for teaching students with learning disabilities: A review of the literature. Learning Disability Quarterly, 32, 181–196. Padgett, R. D., & Keup, J. R. (2012). 2009 national survey of first-year seminars: Ongoing efforts to support students in transition. Research Reports on College Transitions no. 2. Columbia, SC: National Resource Center for the First-Year Experience and Students in Transition, University of South Carolina. Parker, D. R., & Boutelle, K. (2009). Executive function coaching for college students with learning disabilities and ADHD: A new approach for fostering self-determination. Learning Disabilities Research & Practice, 24, 204–215. doi:10.1111/j.1540-5826.2009.00294.x Parsad, B., & Lewis, L. (2003). Remedial education at degree-granting postsecondary institutions in fall 2000 (NCES 2004–2010). Washington, DC: National Center for Educational Statistics, Institute for Educational Science, U.S. Department of Education. Retrieved from nces.ed.gov/pubs2004/2004010.pdf (NCES 2004–2010). Paulson, E. J., & Armstrong, S. L. (2014). Postsecondary literacy: Coherence in theory, terminology, and teacher preparation. In S. L. Armstrong, N. A. Stahl, & H. R. Boylan (Eds.), Teaching developmental reading: Historical, theoretical, and practical background readings (2nd ed., pp. 509–527). Boston, MA: Bedford/St. Martin’s. Perin, D., Grant, G., Raufman, J., & Kalamkarian, H. S. (2017). Learning from student retrospective reports: Implications for the college developmental classroom. Journal of College Reading and Learning, 47(2), 77–98. doi:10.1080/10790195.2017.1286956 Permzadian, V., & Credé, M. (2016). Do first-year seminars improve college grades and retention? A quantitative review of their overall effectiveness and an examination of moderators and effectiveness. Review of Educational Research, 86, 277–316. doi:10.3102/0034654315584955 Perry, W. G., Jr. (1959). 
Students’ use and misuse of reading skills: A report to a faculty. Harvard Educational Review, 29(3), 19–25. Pike, G. R. (1999). The effects of residential learning communities and traditional residential living arrangements on educational gains during the first year of college. Journal of College Student Development, 40, 269–284. Pitre, C. C., & Pitre, P. (2009). Increasing underrepresented high school students’ college transition and achievements: TRIO Educational Opportunity Programs. NASSP Bulletin, 93(2), 96–110. doi:10.1177/0192636509340691 Porter, H. D. (2018). Constructing an understanding of undergraduate disciplinary reading: An analysis of contemporary scholarship. Journal of College Reading and Learning, 48(1), 25–46. doi:10.1080/10790195.2017.1362970 Reinheimer, D., & McKenzie, K. (2011). The impact of tutoring on the academic success of undeclared students. Journal of College Reading and Learning, 41(2), 22–36. *Robinson, F. P. (1946). Effective study. New York, NY: Harper & Brothers. Rockey, M., & Congleton, R. (2016). Exploring the role of the first-year experiences in enhancing equity & outcomes. Insights on Equity and Outcomes, Office of Community College Research and Leadership. Retrieved from occrl.illinois.edu/docs/librariesprovider4/ptr/first-year-experience.pdf Rodrigo, R., & Romberger, J. (2017). Managing digital technologies in writing programs: Writing program technologists and invisible service. Computers & Composition, 44, 67–82. Rodrigue, S., Soule, L., Fanguy, R., & Kleen, B. (2016). University student experiences and expectations in regard to technology. Journal of Higher Education Theory & Practice, 16(2), 59–70. Roth, D. (2017). Morphemic analysis as imagined by developmental reading textbooks: A content analysis of a textbook corpus. Journal of College Reading and Learning, 47(1), 26–44. doi:10.1080/10790195.2016.1218807 Roueche, J. E., & Roueche, S. D. (1999). High stakes, high performance: Making remedial education work. 
Washington, DC: Community College Press. Sax, B. (2002). Brief report: New roles for tutors in an online classroom. Journal of College Reading and Learning, 33(1), 62–67. *Schotka, R., Bennet-Bealer, N., Sheets, R., Stedje-Larsen, L., & Van Loon, P. (2014). Standards, outcomes, and possible assessments for ITTPC certification. Retrieved from crla.net/index.php/certifications/ ittpc-international-tutor-training-program Schwartz, W., & Jenkins, D. (2007). Promising practices for community college developmental education: A discussion for the Connecticut Community College System. Community College Resource Center. Retrieved from careerladdersproject.org/docs/Promising%20Practices%20for%20CC%20Dev%20Ed.pdf Shields, K. A., & O’Dwyer, L. M. (2017). Remedial education and completing college: Exploring differences by credential and institutional level. The Journal of Higher Education, 88(1), 85–109. doi:10.1080/00 221546.2016.1243943 Shoemaker, J. S. (1995, April). Evaluating the effectiveness of extended orientation for new, undecided freshmen. Paper presented at the meeting of the American Educational Research Association, San Francisco, CA. Retrieved from files.eric.ed.gov/fulltext/ED384303.pdf
Simpson, M. L., Stahl, N. A., & Francis, M. A. (2004). Reading and learning strategies: Recommendations for the 21st century. Journal of Developmental Education, 28(2), 2–4, 6, 8, 10–12, 14–15, 32–33. Snyder, J. J., Sloane, J. D., Dunk, R. D. P., & Wiles, J. R. (2016). Peer-Led Team Learning helps minority students succeed. PLOS Biology, 14(3), [e1002398]. doi:10.1371/journal.pbio.1002398 Sorrentino, D. M. (2007). The SEEK mentoring program: An application of the goal-setting theory. Journal of College Student Retention, 8(2), 241–250. *Tinto, V. (1994). Leaving college: Rethinking the causes and cures of student attrition (2nd ed.). Chicago, IL: University of Chicago Press. Tinto, V. (1998). Learning communities and the reconstruction of remedial education in higher education. Retrieved from www.doso.wayne.edu/SASS/Tinto%20Articles/Learning%20Communities%20&%20 Remedial%20Education.pdf Tinto, V. (2012). Completing college: Rethinking institutional action. Chicago, IL: University of Chicago Press. Topping, K. J. (1996). The effectiveness of peer tutoring in further and higher education: A typology and review of the literature. Higher Education, 32(3), 321–345. Tuckman, B. W., & Kennedy, G. J. (2011). Teaching learning strategies to increase success of first-term college students. The Journal of Experimental Education, 79, 478–504. doi:10.1080/00220973.2010.512318 University of Missouri-Kansas City. (2018). The International Center for Supplemental Instruction. Retrieved from info.umkc.edu/si/ U.S. Department of Education (2011). History of federal TRIO program. Retrieved from www2.ed.gov/about/ offices/list/ope/trio/triohistory.html U.S. Department of Education. (2014). 50th anniversary federal TRIO programs fact sheet. Retrieved from www2.ed.gov/about/offices/list/ope/trio/trio50anniv-factsheet.pdf *U.S. Department of Education, Institute of Educational Sciences, What Works Clearinghouse. (2016, July). 
Supporting postsecondary success intervention report: First year experience courses. Retrieved from ies.ed.gov/ ncee/wwc/Docs/InterventionReports/wwc_firstyear_102116.pdf VanLehn, K. (2011). The relative effectiveness of human tutoring, intelligent tutoring systems, and other tutoring systems. Educational Psychologist, 46(4), 197–221. doi:10.1080/00461520.2011.611369 Washington Center at The Evergreen State College. (2018). The national resource center for learning communities. Retrieved from wacenter.evergreen.edu/about-the-washington-center Webberman, A. L. (2011). Academic coaching to promote student success: An interview with Carol Carter. Journal of Developmental Education, 35(2), 18–20. *Weinstein, C. E., Acee, T. W., Jung, J., Krause, J. M., Dacy, B. S., & Leach, J. K. (2012). Strategic learning: Helping students become more active participants in their learning. In K. Agee & R. Hodges (Eds.), Handbook for training peer tutors and mentors (pp. 30–34). Mason, OH: Cengage Learning. Weinstein, C. E., Husman, J., Dierking, D. R. (2000). Self-regulation interventions with a focus on learning strategies. In M. Boekaerts, P. Pintrich, & M. Zeidner (Eds.), The handbook of self-regulation (pp. 728–747). San Diego, CA: Academic Press. Whitman, N. A. (1988). Peer teaching: To teach is to learn twice (ASHE Higher Education Report No. 4). Washington, DC: Association for the Study of Higher Education. Widmar, G. E. (1994). Supplemental Instruction: From small beginnings to a national program. In D. C. Martin & D. R. Arendale (Eds.), Supplemental Instruction: Increasing achievement and retention (pp. 3–10). San Francisco, CA: Jossey-Bass. Wilkerson, S. L. (2008). An empirical analysis of factors that influence the first year to second year retention of students at one large, Hispanic Serving Institution (HSI) (Doctoral dissertation). Retrieved from ProQuest Dissertations and Theses database. (UMI No. 3333787) Wilson, D. A., Dondlinger, M. J., Parsons, J. L., & Niu, X. 
(2018). Exploratory analysis of a blended- learning course redesign for developmental writers. Community College Journal of Research and Practice, 42(1), 32–48. doi:10.1080/10668926.2016.1264898 Wilson, S. B., & Varma-Nelson, P. (2016). Small groups, significant impact: A review of Peer-led Team Learning research with implications for STEM education researchers and faculty. Journal of Chemical Education, 93, 1686–1702. doi:10.1021/acs.jchemed.5b00862 Xu, D., & Dadgar, M. (2018). How effective are community college remedial math courses for students with the lowest math skills? Community College Review, 46(1), 62–81. doi:10.1177/0091552117743789 Xu, Y., Hartman, S., Uribe, G., & Mencke, R. (2001). The effects of peer tutoring on undergraduate students’ final examination scores in mathematics. Journal of College Reading and Learning, 32(1), 22–31. Zwart, L. M., & Kallemeyn, L. M. (2001). Peer-based coaching for college students with ADHD and learning disabilities. Journal of Postsecondary Education and Disability, 15, 1–15.
18 Program Assessment

Jan Norton, University of Iowa
Karen S. Agee, University of Northern Iowa
The assessment of academic programs and student learning outcomes (SLOs) to evaluate effectiveness and inform strategic planning is a critical function of postsecondary institutions in all six U.S. accrediting regions (Middaugh, 2010). Learning assistance professionals and college literacy educators have an ethical responsibility to know, and to report to others, what their work is accomplishing. It is insufficient to rely on studies and assessments conducted in other contexts and at other institutions: Every program must plan its own ongoing assessment using multiple methods and outcomes (Council for the Advancement of Standards, 2016; Norton & Agee, 2014). Formative assessment should improve instruction, and summative assessment should inform program evaluation (Boylan & Bonham, 2009). This chapter discusses the development of assessment plans for reading and study strategy programs in light of recent scholarship in the field. After examining the terminology and types of data used in program assessment and evaluation, the chapter reviews current quantitative and qualitative research on reading and study strategies programs. The research reflects a diversity of students and a variety of delivery methods, including courses, academic support, and online options. The authors conclude with a discussion of continuing research issues and the importance of using assessment results for program improvement.
The Language of Assessment

Assessment language has changed over the years and will likely continue to evolve. Chen and Mathies (2016) examined the terms assessment and evaluation and the contexts in which they are used and differentiated:

Generally speaking, assessment is learner centered and process oriented, which aims to identify areas where teaching and learning can improve, whereas evaluation is judgmental and arrives at a valuation of performance. It should be noted, however, that both terms have been used interchangeably in a wide range of contexts and it is sometimes very difficult to differentiate the meaning of assessment and evaluation. (p. 85)

In this chapter, assessment refers to empirical measures of student knowledge, skills, and attitudes by such processes as tests, surveys, focus groups, and case studies. Evaluation refers to the analysis
and interpretation of those measures, including judgments of positive or negative impacts (see Astin & Antonio, 2012). Both processes are critical in order to distinguish between the concepts measured and the meanings of those measurements. Read more about student assessment in Chapters 19 and 20 of this volume.
Quantitative vs. Qualitative Assessment

There are two basic types of data used in higher education assessment and evaluation: quantitative and qualitative (Trochim, 2006). Quantitative assessments in education typically include student grades. Quantitative studies may also use measurements unrelated to course instruction, such as standardized tests or other tools that measure knowledge or skills. Qualitative assessments usually reflect students’ attitudes and experiences. Satisfaction surveys and focus group responses, for instance, supply needed feedback about a program’s appeal and perceived effectiveness. It is worth noting that positive opinions can exist where positive quantitative measures do not, and students may have negative opinions about programs that nevertheless benefit them academically.

Professional standards identify program elements widely accepted as effective practices. Criterion-referenced evaluations usually require both quantitative and qualitative data. Criteria used to judge the relative success or failure of a program are often found within established missions and program requirements generated at the system or state level; national or international guidelines for best practices may also be incorporated. For an external perspective, an evaluator or team can review institutional data according to accepted standards.

A broad range of research and evaluation models has been used for institutional and program evaluation, some more appropriately than others. Boylan and Bonham (2009) provided a thorough review of these models. Nevertheless, assessment models specifically for literacy and learning assistance programs have been sparsely researched (Norton & Agee, 2014). 
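A criterion-referenced check of the kind described above can be sketched in a few lines of code. The function, data, and benchmark values below are invented for illustration and are not drawn from any published standard; the point is only that quantitative and qualitative evidence are reported side by side against explicit criteria rather than collapsed into a single score.

```python
# Hypothetical sketch of a criterion-referenced program check combining a
# quantitative measure (course grades) with a qualitative one (Likert-scale
# satisfaction ratings). All names, data, and benchmarks are invented.
from statistics import mean

def meets_criteria(grades, ratings, grade_benchmark=2.0, rating_benchmark=3.5):
    """Report each criterion separately so that quantitative and
    qualitative evidence can be judged on their own terms."""
    return {
        "mean_grade": mean(grades),
        "grade_criterion_met": mean(grades) >= grade_benchmark,
        "mean_rating": mean(ratings),
        "rating_criterion_met": mean(ratings) >= rating_benchmark,
    }

# Invented example: GPA-style grades and 1-5 satisfaction ratings.
result = meets_criteria(grades=[2.3, 3.1, 2.8, 1.9], ratings=[4, 5, 3, 4])
```

Keeping the two criteria separate mirrors the caution above: a program can satisfy the opinion benchmark while missing the achievement benchmark, or the reverse.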
Some programs may be pressured to use standardized exams to assess student learning, but such tests are only indirect measures, may discourage students, and may also differentiate by ethnicity and gender (Woodcock, Hernandez, Estrada, & Schultz, 2012); better-suited assessments should therefore be considered.
Current Issues in Reading Assessment A variety of measures and study designs have been used to assess both program and student literacy outcomes. Many studies have attempted to approximate the rigor of experimental designs with the small-to-moderate number of students in their programs. Other studies explore questions for which qualitative measures are more appropriate.
Quantitative Approaches

Jalilifar and Alipour (2007) randomly assigned postsecondary English language learners to three standardized test conditions. One analysis in this study employed t-tests to compare group mean scores on a sample reading comprehension test, finding that students instructed in the detection and analysis of metadiscourse markers performed substantially better than students without instruction and practice. As this study shows, comparison of group means can be useful even with small numbers of students.

Similarly, Ebner and Ehri (2016) measured college students’ ability to focus while engaging in self-regulated online vocabulary learning. Students were randomly assigned to three groups: structured think-aloud with a coach, structured think-aloud without a coach, or structured think-to-oneself without a coach. Students in all three conditions showed similar gains in word knowledge,
Program Assessment
as measured by two-way analyses of variance (ANOVAs for test-point and test-point-by-condition effects), demonstrating the effectiveness of all three strategies for online vocabulary study.

Perin, Hare Bork, Peverly, and Mason (2013) studied the effects of contextualized interventions on summarization skills in two quasi-experimental studies on two campuses. Multiple venues, large numbers of student subjects (322 and 246), and regression with an analysis of covariance (ANCOVA) controlling for pretest scores lend credence to the findings, as does recognition of this study as part of a series of focused investigations.

The effects of reading for different purposes were tested in an experimental study: Linderholm and Wilde (2010) randomly assigned students to two purpose conditions (reading for entertainment or study). The researchers conducted a multivariate analysis of variance (MANOVA) using reading purpose on four dependent variables, discovering that, although students reading for “study” had higher confidence in their comprehension, they in fact scored no higher than those reading for “entertainment.”

When random assignment of students to different instructional conditions is impossible, nonexperimental designs can be productive. Culver (2016) examined compliance with course reading requirements in multiple psychology courses using the Metacognitive Reading Strategies Questionnaire, surveys of reading compliance, four quizzes, and four weekly reading guides. Using repeated-measures ANOVA, Culver discovered that both the reading guide and quizzes boosted reading compliance, but only the reading guide “increased both reading compliance and metacognitive strategies” (Culver, 2016, p. 51). Studies of students in multiple sections of a course as samples of convenience are most persuasive if other variables are controlled for or eliminated.
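The independent-samples comparisons of group means described above can be sketched in a few lines of code. The following is a minimal illustration using invented scores, not data from any study cited here; Welch’s unequal-variances form of the t-test is shown because it does not assume the two groups have equal variances.

```python
# Hypothetical example: comparing mean reading comprehension scores of two
# independent groups (e.g., instructed vs. uninstructed). All data below
# are invented for illustration only.
from statistics import mean, variance
import math

def welch_t(sample_a, sample_b):
    """Welch's independent-samples t statistic and approximate degrees of freedom."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances (n - 1)
    se2_a, se2_b = va / na, vb / nb
    t = (mean(sample_a) - mean(sample_b)) / math.sqrt(se2_a + se2_b)
    # Welch–Satterthwaite approximation of degrees of freedom
    df = (se2_a + se2_b) ** 2 / (se2_a ** 2 / (na - 1) + se2_b ** 2 / (nb - 1))
    return t, df

# Invented test scores for two small class sections
instructed = [78, 85, 81, 90, 74, 88, 83]
uninstructed = [70, 76, 68, 81, 72, 65, 74]

t, df = welch_t(instructed, uninstructed)
print(f"t = {t:.2f}, df = {df:.1f}")  # a large |t| suggests a group difference
```

In practice, the researcher would then look up (or have statistical software compute) the p value for the resulting t at the computed degrees of freedom.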
Part of a study reported by Burgess, Price, and Caverly (2012) compared students’ scores on a departmental reading comprehension final exam using an independent samples t-test. Students in developmental reading course sections taught in the multiple-user virtual environment (MUVE) earned higher final test scores than students taught in a non-MUVE section.

Scoring rubrics are used both as formative assessment to inform instructors’ teaching and as summative assessment at semester’s end. Leist, Woolwine, and Bays (2012) described a study undertaken to test a critical thinking rubric meant to assess reading achievement for students in all sections of a reading intervention course. Reading intervention course sections were linked to general education courses, and the critical thinking protocol was designed for congruence with institutional and state reading competency initiatives. There was no control group, so several repeated-measures ANOVAs compared pretest scores with each of the five consecutive rubric scores and a posttest score. The researchers found significant development of students’ reading achievement, as measured by the rubrics in all three general education courses, between the pretest rubric and later rubrics as well as the posttests, demonstrating the positive instructional value of the rubric activity.

An important issue in postsecondary reading development is selection—or institutional development—of appropriate reading assessments (Flippo & Schumm, 2009; see also Chapters 19 and 20 in this handbook). Now that ACT’s Compass tests are no longer offered, and course placement tests are scrutinized for validity, programs must assess the effectiveness of their placements and explore alternatives (Barnett & Reddy, 2017), keeping in mind that each institution needs to conduct its own study of factors potentially biasing those placements (Aguinis, Culpepper, & Pierce, 2016).
Behrman and Street (2005) administered a test of prior knowledge of content, a content-specific reading comprehension test, and a content-general reading comprehension test to students on the first day of a challenging anatomy course. Using multiple regression to predict final course grades, the researchers determined that the content-specific reading comprehension test predicted end-of-course grades. They concluded that continuing to administer content-general reading comprehension tests for placement in general education courses at their institution could inappropriately place many students in remedial rather than college-level courses.
Jan Norton and Karen S. Agee
Qualitative Approaches

For some questions about program effectiveness and reading behaviors, qualitative assessments, such as surveys and interviews, are appropriate. To explore the outcomes of a campus common book program, Daugherty and Hayes (2012) conducted online surveys of students. Similarly, to determine whether their institution’s common book program was achieving its goals, Ferguson, Brown, and Piper (2014) surveyed first-year students and conducted interviews with instructors. However, if researchers survey very small numbers of students about their learning experiences, such as the three students in Pacello’s (2014) study, generalizing to other students and contexts will be inappropriate.

Other studies have utilized multiple qualitative measures to assess outcomes. Jones, Brookbank, and McDonough (2008–2009), for instance, described methods of analyzing observations using “triangulation, internal member checks, two-tiered coding, constant comparative analysis, and ongoing memoing” (p. 14) to document learning of both graduate-student literacy tutors and their adult tutees. Though demanding much time and effort of participant-researchers, this approach was appropriate because one purpose of the study was to involve graduate students in cyclical processes of planning and assessment of differentiated instruction.

Another issue in literacy research is the effect of beliefs and attitudes on reading achievement. Students’ attitudes toward their reading experiences can be surveyed in several ways. Banks (2005) conducted phenomenological interviews with first-year students to elicit and analyze perceptions of their high school preparation and literacy practices. By contrast, Isakson, Isakson, Plummer, and Chapman (2016) described the Isakson Survey of Academic Reading Attitudes (ISARA), a self-report measure that quantifies attitudes toward academic reading.
The researchers recommend administering the ISARA early and late in a course to measure the development of students’ attitudes along three constructs (reading behavior, self-efficacy, and value), or using the ISARA as one measure of college reading program success.

Instructors use in-course assessments to learn more about their students’ reading behaviors and modify instruction accordingly. Ideally, studies of student attitudes constitute action research resulting in change. For instance, Helms and Turner Helms (2010) studied the use of instructor-devised reading guides (“note launchers”) to improve students’ active reading of their math texts. Results were used to improve instruction by restructuring the note launchers in response to students’ survey feedback.
Current Issues in Study Strategy Assessment

Researchers continue to study the effectiveness of varied study strategies; these include such practices as student note-taking during lectures, textbook reading and marking, memorization techniques, and group review sessions. There are many challenges to such studies, especially given the tendency of students to favor some strategies over others. Students may also alter their study strategy use according to the course or instructor.
Quantitative Approaches

Among the recent publications that should prompt further research is a comprehensive review examining 10 study strategies and their application (Dunlosky, Rawson, Marsh, Nathan, & Willingham, 2013). The authors rated the effectiveness of each strategy on a three-level scale of usefulness: practice testing and distributed practice (high utility); interleaved practice, elaborative interrogation, and self-explanation (moderate utility); and summarization, highlighting and underlining, keyword mnemonic, imagery use for text learning, and rereading (low utility). Participants
included students of varied ages, learning circumstances, implementations, and teaching methods, among other variables.

Hesser and Gregory (2016) compared test question scores of chemistry students who were assigned to instructional support sessions (based on their lower SAT scores) with the test question scores of students not assigned to support sessions. By the middle of the course, the support session students were scoring as well as the academically prepared students and, by the end of the course, were outscoring the prepared students on the common test questions.

A learning strategies effectiveness study examining over 1,400 students in South Korea and the U.S. found statistically significant correlations between certain learning strategies and students’ grade point averages (Lee, Lee, Makara, Fishman, & Teasley, 2017). The correlations differed, though, by country. In South Korea, strategies related to motivation, time management, task orientation, and cognition were correlated with student GPA; among the U.S. students, only motivation and task orientation correlated with GPA. Additional studies examining strategy effectiveness and academic impacts are clearly needed, and older lists of strategies considered essential for success should be reexamined, especially given emerging educational technologies and increasing student diversity.

Richardson, Robnolt, and Rhodes (2010) reviewed research on study skills from 1900 to 2010. In addition to considering the sciences of metacognition and motivation, the study examined assessments used within study strategies research. One assessment often studied is the Learning and Study Strategies Inventory (LASSI).
Now in its third edition (Weinstein, Palmer, & Acee, 2016), the LASSI includes 60 items (formerly 80) covering 10 subscales: anxiety, attitude, concentration, information processing, motivation, selecting main ideas, self-testing, test strategies, time management, and using academic resources. The LASSI continues to be prominent in study strategies research. Dill et al. (2014) used the LASSI as a pre- and posttest of students in a support program for at-risk students; “results revealed a statistically significant increase in the posttest percentile for every scale” (p. 30) along with a clear correlation between reductions in anxiety and removal from suspension. Griffin, MacKewn, Moser, and VanVuren (2012) found that several LASSI subscales correlate positively with students’ GPAs. Hoops, Yu, Backscheider Burridge, and Wolters (2015) compared GPAs of students who had taken a student success course with those of a matched sample; the researchers also compared pre- and postcourse LASSI scores of the success course students as measures of self-regulated learning. Mireles, Offer, Ward, and Dochen (2011) used the LASSI to research student academic behaviors in the specific contexts of college algebra and developmental math courses.

In addition to the LASSI, the Study Behaviors Inventory (SBI) provides an alternative self-report survey. The SBI’s 46 items review students’ preparation and academic confidence. Yang and Bliss (2014) recently reexamined the SBI to assess correlations between its survey items and student academic success.

The Motivated Strategies for Learning Questionnaire (MSLQ; Pintrich & De Groot, 1990) has 81 items in 15 scales. The MSLQ examines factors such as test anxiety, motivation, critical thinking, and self-efficacy. Researchers have used the MSLQ to examine student motivation and learning strategies among student populations and within various course contexts, including online courses (Broadbent, 2017; Duncan & McKeachie, 2005).
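Correlational findings like those above (e.g., LASSI subscale scores correlating with GPA) rest on the Pearson product-moment coefficient, which is simple to compute directly. The subscale scores and GPAs below are invented for illustration only; they do not come from any of the inventories or studies cited.

```python
# Minimal sketch of a Pearson correlation between a hypothetical
# motivation subscale score and GPA. All data are invented.
from statistics import mean
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of two equal-length lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented subscale scores and corresponding GPAs for eight students
motivation = [22, 35, 28, 40, 31, 25, 38, 30]
gpa = [2.1, 3.4, 2.8, 3.8, 3.0, 2.5, 3.5, 2.9]

r = pearson_r(motivation, gpa)
print(f"r = {r:.2f}")  # values near +1 indicate a strong positive association
```

A researcher would still need to test the coefficient for statistical significance given the sample size before interpreting it.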
Another assessment tool is the Academic Success Inventory for College Students (ASICS; Prevatt et al., 2011). However, only one subscale is directly related to learning and study strategies: general academic skills, “a combination of effort expended, study skill and self-organizational strategies” (p. 2), with 12 items and instructions for students to focus on only one difficult or challenging course.

Researchers are continuing to develop and study additional assessment tools. In Brazil, Boruchovitch and Santos (2015) reviewed the MSLQ and LASSI, then generated a new assessment of 35 items to further define and research student motivation and learning strategies. Holliday and Said (2015) conducted a combined quantitative and qualitative study of student stress in specific
learning conditions. In biology and chemistry study groups, the researchers administered surveys to record students’ satisfaction with the study sessions and trained students to measure their pulse rates twice per session, yielding pre- and postlesson psychophysiological data that indicated successful reduction in stress for particular lessons.
Qualitative Approaches

Additional research related to college students’ study strategies does not rely on standardized assessments such as the LASSI and MSLQ. Litchfield and Dempsey (2015), for example, argued for more creative and “authentic” measures. Though they supported relatively quantitative rubrics developed for specific assignments, evaluating the extent to which expectations are met can be highly qualitative. Irwin and Hepplestone (2012) used the context of writing and marking student essays to examine the potential impacts of student-selected assessment formats: “The use of flexibility in assessment formats supports core agendas in higher education such as accessibility and promoting autonomous learners, and has been called for by the literature in these fields” (p. 782).

Blasiman, Dunlosky, and Rawson (2017) developed a survey for students to self-report their minutes of study time as well as their intended and actual use of 10 different study strategies: reading notes, using flashcards, copying notes, practice testing, highlighting notes, creating examples, summarizing, rereading text, outlining, and highlighting text. The researchers surveyed students starting in the first week of classes and followed up with interviews six times throughout the semester. Students most frequently reported reading notes and the text, yet indicated that reading, copying, and highlighting notes were most effective.

Toms (2016) conducted a qualitative study of college freshmen in a math course, interviewing each of the eight student participants four times over the semester. Results indicated that activities created or recommended by the faculty proved to be most effective for the students’ learning.
Continuing Issues

Analyzing a range of assessment studies can yield some conclusions about useful approaches as well as avoidable traps. Numerous texts, websites, and commercial vendors provide strategies to guide assessment and evaluation research in higher education. For reading and study strategies professionals undertaking program and student outcomes assessment, the authors offer six considerations for research.
Matching Assessment to Instruction

Assessment measures should be aligned not only with the outcomes under study but also with the types of instruction provided. Banta and Palomba (2015) noted that “performance-based assessments are expected to be indistinguishable from the goals of instruction” (p. 96). Brown, Bull, and Pendlebury (2013) discussed the importance of connecting assessment tools to the processes of teaching and student learning. Assessment scores should be representative of the students’ capabilities and match the learning objectives of the course.
Scheduling Assessments

Thorough assessment of students’ reading and study strategies examines a range of abilities and the endurance of the skills taught. Ideally, students retain what they have learned well enough to be assessed at any time. While the most notable learning impacts may be reflected in a grade or survey nearest to the time of
instruction, Banta and Palomba (2015) offered practical advice not “to implement every method immediately or even every year. A comprehensive assessment plan should have a schedule for implementing each data-gathering method at least once over a period of three to five years” (p. 21). The goal of assessment planning is to allow a program or institution to distribute its efforts and capture a wide range of quantitative and qualitative assessments during each evaluation or accreditation cycle.
Getting Assistance With Assessment and Evaluation

As these research studies have shown, reading and study strategies programs clearly need access to student academic and demographic information (e.g., gender, ethnicity, age, first-generation status) to conduct effective program evaluation. Sources of reliable data may include institutional research or assessment offices and registrar, admissions, or financial aid professionals. Most commercial software for tracking students’ use of tutoring and other support services (e.g., TutorTrac, AccuTrack, WCOnline) can upload student data as part of the technology’s attendance functions.

Programs that lack (or cannot access) staff dedicated to assessment and evaluation may find help from other colleagues (Kramer, Knuesel, & Jones, 2012). Faculty may be interested in collaborating on assessment and evaluation activities (Guetterman & Mitchell, 2016). A graduating senior, graduate student, or intern majoring in statistics or educational research can perhaps receive academic credit for assisting. Colleagues at other institutions may be interested in assisting with and participating in a program evaluation process. In all cases, professional ethics as well as established laws must protect the privacy of students and their academic information.
Interpreting Statistics

In quantitative studies, evaluators—especially reading and study strategies personnel—typically use basic statistics, such as comparisons of means, t-tests, correlations, and ANOVA, to determine whether assessment results effectively demonstrate program quality. Sometimes, however, deeper mathematical exploration is legitimately needed. Researchers in Belgium conducted an impressive longitudinal study of 425 students’ changing learning strategies from high school to college (Coertjens et al., 2013). The researchers applied multiple statistical analyses to the same data set to determine the most effective method for showing growth over time. Such flexibility is warranted not for the sake of reporting misleading results but rather to acknowledge the complexity of learning and the challenge of measuring student learning.
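As a concrete illustration of one of the basic statistics named above, a one-way ANOVA partitions the variability in scores into a between-groups component and a within-groups component and compares the two as an F ratio. The sketch below uses invented comprehension scores for three hypothetical instructional conditions; it is not drawn from any study cited in this chapter.

```python
# Minimal one-way ANOVA sketch with invented data for three hypothetical
# instructional conditions, illustrating the between/within-groups F ratio.
from statistics import mean

def one_way_anova_f(groups):
    """Return the F statistic for a one-way ANOVA across the given groups."""
    all_scores = [x for g in groups for x in g]
    grand = mean(all_scores)
    k, n = len(groups), len(all_scores)
    # Between-groups sum of squares: group means vs. the grand mean
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # Within-groups sum of squares: scores vs. their own group mean
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Invented comprehension scores under three hypothetical conditions
conditions = [[70, 72, 68, 71], [75, 78, 74, 77], [83, 80, 85, 82]]
f = one_way_anova_f(conditions)
print(f"F = {f:.1f}")  # a large F suggests the group means differ
```

The F value would then be compared against the F distribution with (k − 1, n − k) degrees of freedom to judge significance, which is the step statistical software performs automatically.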
Examining the Impact of Student Motivation

Whether reading and study strategies instruction is voluntary is an important factor in assessment. Some measure of motivation, whether direct or indirect, is necessary so that student motivation does not become a confounding variable. Reviewing data from a commonly provided service, Supplemental Instruction (SI), Szal and Kennelly (2017) pointed out that students who already have high GPAs are more likely to attend SI sessions, thus complicating any analysis of attenders’ vs. nonattenders’ grade data. Dawson, van der Meer, Skalicky, and Cowley (2014) noted that too few SI research studies control for motivation.
Finding Long-Term Evidence of Program Effectiveness

Too few program assessments examine possible learning impacts beyond the course or academic term in which the learning occurred. Unless students demonstrate strategic reading and study skills in later courses, it is difficult to claim that they have learned. Two of the studies referenced
in this chapter did note that LASSI scores can correlate with overall student GPA. Also, Kuh, O’Donnell, and Reed (2013) discussed high-impact practices (HIPs) they claimed could improve student grades and retention, such as “frequent, timely, and constructive feedback” and “significant investment of time and effort by students over an extended period of time” (p. 10).
Assessing and Evaluating for Program Improvement

As Arendale (2010) pointed out, each assessment process brings unique challenges. Because programs have different missions and objectives, they explore different student learning outcomes (SLOs). In some cases, decreasing scores (assessments) can nevertheless indicate improvement (evaluation). For example, an end-of-course LASSI may indicate a decline in students’ self-assessments of their skills due to improved self-awareness, strategy development, and sophistication in applying a broader variety of learning strategies since the pretest.

Standards and guidelines for program assessment have been established by learning assistance and developmental education associations (see Norton & Agee, 2014) and other higher education organizations and agencies. The nine principles for assessing student achievement published by the American Association for Higher Education in 1992 and now endorsed by six organizations and seven accrediting commissions (National Institute for Learning Outcomes Assessment, 2013) are highly recommended for meaningful and productive assessment. In addition, the new Valid Assessment of Learning in Undergraduate Education (VALUE) initiative focuses on direct evidence of student learning (McConnell & Rhodes, 2017). Though developed to support institutional assessment, the VALUE rubrics may influence program assessment as well, by demonstrating the superiority of direct assessment of student learning over standardized testing that is “divorced from the curriculum” (p. 3).

Program assessment is a necessary and ongoing process. With its efforts and challenges come significant rewards and worthwhile program improvements.
After undertaking the multiyear National Association for Developmental Education certification process (now NADE accreditation), Greci (2016) noted that benefits included increased knowledge of her program and its institutional context, greater confidence, assistance from NADE certifiers, empowerment through the structured review process, and revised objectives—that is, positive change. Fullmer (2012) used the Council for the Advancement of Standards’ learning assistance program standards to assess the effectiveness of her program and then conducted a productive analysis of strengths and weaknesses to develop an action plan for ongoing change and improvement of the program. Indeed, the purpose of program assessment is to improve programs and enhance student learning. Program reviews such as these clearly reflect the power of assessment and evaluation to drive positive changes for student learning.
References

Aguinis, H., Culpepper, S. A., & Pierce, C. A. (2016). Differential prediction generalization in college admissions testing. Journal of Educational Psychology. Retrieved from dx.doi.org/10.1037/edu0000104
Arendale, D. R. (2010). Access at the crossroads: Learning assistance in higher education. San Francisco, CA: Jossey-Bass.
Astin, A. W., & Antonio, A. L. (2012). Assessment for excellence: The philosophy and practice of assessment and evaluation in higher education (2nd ed.). Lanham, MD: Rowman & Littlefield.
Banks, J. (2005). African American college students’ perceptions of their high school literacy preparation. Journal of College Reading and Learning, 35(2), 22–37.
*Banta, T. W., & Palomba, C. A. (2015). Assessment essentials: Planning, implementing, and improving assessment in higher education (2nd ed.). Hoboken, NJ: Wiley.
Barnett, E. A., & Reddy, V. (2017). College placement strategies: Evolving considerations and practices. Center for the Analysis of Postsecondary Readiness. Retrieved from ccrc.tc.columbia.edu/media/k2/attachments/college-placement-strategies-evolving-considerations-practices.pdf
Behrman, E. H., & Street, C. (2005). The validity of using a content-specific reading comprehension test for college placement. Journal of College Reading and Learning, 35(2), 5–21.
Blasiman, R. N., Dunlosky, J., & Rawson, K. A. (2017). The what, how much, and when of study strategies: Comparing intended versus actual study behavior. Memory, 25(6), 1–9. doi:10.1080/09658211.2016.1221974
Boruchovitch, E., & Santos, A. A. A. d. (2015). Psychometric studies of the Learning Strategies Scale for University Students. Paidéia, 25(60), 19–27. doi:10.1590/1982-43272560201504
*Boylan, H. R., & Bonham, B. S. (2009). Program evaluation. In R. F. Flippo & D. C. Caverly (Eds.), Handbook of college reading and study strategy research (2nd ed., pp. 379–407). New York, NY: Routledge.
Broadbent, J. (2017). Comparing online and blended learner's self-regulated learning strategies and academic performance. The Internet and Higher Education, 33, 24–32. doi:10.1016/j.iheduc.2017.01.004
*Brown, G. A., Bull, J., & Pendlebury, M. (2013). Assessing student learning in higher education. New York, NY: Routledge.
Burgess, M. L., Price, D. P., & Caverly, D. C. (2012). Digital literacies in multiuser virtual environments among college-level developmental readers. Journal of College Reading and Learning, 43(1), 13–30.
Chen, P. D., & Mathies, C. (2016). Assessment, evaluation, and research. New Directions for Higher Education, 2016(175), 85–92. doi:10.1002/he.20202
Coertjens, L., van Daal, T., Donche, V., De Maeyer, S., Vanthournout, G., & Van Petegem, P. (2013). Analysing change in learning strategies over time: A comparison of three statistical techniques. Studies in Educational Evaluation, 39(1), 49–55.
*Council for the Advancement of Standards in Higher Education (CAS). (2016). Learning assistance programs: CAS standards and guidelines. Washington, DC: Author.
Culver, T. F. (2016). Increasing reading compliance and metacognitive strategies in border students. Journal of College Reading and Learning, 46(1), 42–61. doi:10.1080/10790195.2015.1075447
Daugherty, T. K., & Hayes, M. W. (2012). Social and academic correlates of reading a common book. The Learning Assistance Review, 17(2), 33–41.
Dawson, P., van der Meer, J., Skalicky, J., & Cowley, K. (2014). On the effectiveness of Supplemental Instruction: A systematic review of Supplemental Instruction and peer-assisted study sessions literature between 2001 and 2010. Review of Educational Research, 84(4), 609–639. doi:10.3102/0034654314540007
Dill, A. L., Justice, C. A., Minchew, S. S., Moran, L. M., Wang, C., & Weed, C. B. (2014). The use of the LASSI (the Learning and Study Strategies Inventory) to predict and evaluate the study habits and academic performance of students in a learning assistance program. Journal of College Reading and Learning, 45(1), 20–34. doi:10.1080/10790195.2014.906263
Duncan, T. G., & McKeachie, W. J. (2005). The making of the Motivated Strategies for Learning Questionnaire. Educational Psychologist, 40(2), 117–128.
*Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving students’ learning with effective learning techniques: Promising directions from cognitive and educational psychology. Psychological Science in the Public Interest, 14(1), 4–58. doi:10.1177/1529100612453266
Ebner, R. J., & Ehri, L. C. (2016). Teaching students how to self-regulate their online vocabulary learning by using a structured think-to-yourself procedure. Journal of College Reading and Learning, 46(1), 62–73. doi:10.1080/10790195.2015.1075448
Ferguson, K., Brown, N., & Piper, L. (2014). “How much can one book do?” Exploring perceptions of a common book program for first-year university students. Journal of College Reading and Learning, 44(2), 164–199. doi:10.1080/10790195.2014.906267
Flippo, R. F., & Schumm, J. S. (2009). Reading tests. In R. F. Flippo & D. C. Caverly (Eds.), Handbook of college reading and study strategy research (2nd ed., pp. 408–464). New York, NY: Routledge.
Fullmer, P. (2012). Assessment of tutoring laboratories in a learning assistance center. Journal of College Reading and Learning, 42(2), 67–89.
Greci, D. (2016). Benefits of the NADE certification process: Self-knowledge, informed choices and programmatic strength. NADE Digest, 9(1), 2–12. Retrieved from files.eric.ed.gov/fulltext/EJ1097585.pdf
Griffin, R., MacKewn, A., Moser, E., & VanVuren, K. (2012). Do learning and study skills affect academic performance? An empirical investigation. Contemporary Issues in Education Research, 5(2), 109–116.
Guetterman, T. C., & Mitchell, N. (2016). The role of leadership and culture in creating meaningful assessment: A mixed methods case study. Innovative Higher Education, 41(1), 43–57. doi:10.1007/s10755-015-9330-y
Helms, J. W., & Turner Helms, K. (2010). Note launchers: Promoting active reading of mathematics textbooks. Journal of College Reading and Learning, 41(1), 109–119.
Hesser, T. L., & Gregory, J. L. (2016). Instructional support sessions in chemistry: Alternative to remediation. Journal of Developmental Education, 39(3), 22–28.
Holliday, T. L., & Said, S. H. (2015). Psychophysiological measures of learning comfort: Study groups’ learning styles and pulse changes. The Learning Assistance Review, 20(2), 93–106. (Originally published 2008).
Hoops, L. D., Yu, S. L., Backscheider Burridge, A., & Wolters, C. A. (2015). Impact of a student success course on undergraduate academic outcomes. Journal of College Reading and Learning, 45(2), 123–146. doi:10.1080/10790195.2015.1032041
*Irwin, B., & Hepplestone, S. (2012). Examining increased flexibility in assessment formats. Assessment & Evaluation in Higher Education, 37(7), 773–785.
Isakson, R. L., Isakson, M. B., Plummer, K. J., & Chapman, S. B. (2016). Development and validation of the Isakson Survey of Academic Reading Attitudes (ISARA). Journal of College Reading and Learning, 46(2), 113–138. doi:10.1080/10790195.2016.1141667
Jalilifar, A., & Alipour, M. (2007). How explicit instruction makes a difference: Metadiscourse markers and EFL learners’ reading comprehension skill. Journal of College Reading and Learning, 38(1), 35–52.
Jones, J. A., Brookbank, B., & McDonough, J. (2008–2009). Meeting the literacy needs of adult learners through a community-university partnership. Journal of College Literacy & Learning, 35, 12–18.
Kramer, P. I., Knuesel, R., & Jones, K. M. (2012). Creating a cadre of assessment gurus (at your institution). Assessment Update, 24(4), 5–6, 11–12. Retrieved from digitalcommons.csbsju.edu/cgi/viewcontent.cgi?article=1017&context=oarca_pubs
Kuh, G. D., O’Donnell, K., & Reed, S. (2013). Ensuring quality and taking high-impact practices to scale. Washington, DC: Association of American Colleges and Universities.
Lee, H., Lee, J., Makara, K., Fishman, B., & Teasley, S. (2017). A cross-cultural comparison of college students’ learning strategies for academic achievement between South Korea and the USA. Studies in Higher Education, 42(1), 169–183. doi:10.1080/03075079.2015.1045473
Leist, C. W., Woolwine, M. A., & Bays, C. L. (2012). The effects of using a critical thinking scoring rubric to assess undergraduate students’ reading skills. Journal of College Reading and Learning, 43(1), 31–58.
Linderholm, T., & Wilde, A. (2010). College students’ beliefs about comprehension when reading for different purposes. Journal of College Reading and Learning, 40(2), 7–19.
Litchfield, B. C., & Dempsey, J. V. (2015). Authentic assessment of knowledge, skills, and attitudes. New Directions for Teaching and Learning, 2015(142), 65–80. doi:10.1002/tl.20130
*McConnell, K. D., & Rhodes, T. L. (2017). On solid ground: VALUE report 2017. Association of American Colleges & Universities. Retrieved from www.aacu.org/OnSolidGroundVALUE
*Middaugh, M. F. (2010). Planning and assessment in higher education: Demonstrating institutional effectiveness. San Francisco, CA: Jossey-Bass.
Mireles, S. V., Offer, J., Ward, D. P., & Dochen, C. W. (2011). Incorporating study strategies in developmental mathematics/college algebra. Journal of Developmental Education, 34(3), 12–14, 16, 18–19, 40–41.
*National Institute for Learning Outcomes Assessment. (2013). Principles for effective assessment of student achievement. Retrieved from www.learningoutcomesassessment.org/documents/EndorsedAssessmentPrinciples_SUP.pdf
*Norton, J., & Agee, K. S. (2014). Assessment of learning assistance programs: Supporting professionals in the field. White paper commissioned by the College Reading and Learning Association. Retrieved from www.crla.net/index.php/publications/crla-white-papers
Pacello, J. (2014). Integrating metacognition into a developmental reading and writing course to promote skill transfer: An examination of student perceptions and experiences. Journal of College Reading and Learning, 44(2), 119–140.
Perin, D., Hare Bork, R., Peverly, S. T., & Mason, L. H. (2013). A contextualized curricular supplement for developmental reading and writing. Journal of College Reading and Learning, 43(2), 8–38.
Pintrich, P. R., & De Groot, E. V. (1990). Motivational and self-regulated learning components of classroom academic performance. Journal of Educational Psychology, 82, 33–40.
Prevatt, F., Li, H., Welles, T., Festa-Dreher, D., Yelland, S., & Lee, J. (2011). The Academic Success Inventory for College Students: Scale development and practical implications for use with students. Journal of College Admission, Spring, 26–31. Retrieved from files.eric.ed.gov/fulltext/EJ926821.pdf
Richardson, J., Robnolt, V., & Rhodes, J. (2010). A history of study skills: Not hot, but not forgotten. Reading Improvement, 47(2), 111–123.
Szal, R. J., & Kennelly, K. R. (2017). The effects of Supplemental Instruction on student grades in a blended learning context. Developments in Business Simulation and Experiential Learning, 44, 230–236.
Toms, M. (2016). A qualitative analysis of the self-regulated learning of first-semester college students. Journal of the First-Year Experience & Students in Transition, 28(1), 71–87.
324
Program Assessment
*Trochim, W. M. (2006). The research methods knowledge base (2nd ed). Retrieved from www.socialresearchmethods.net/kb/datatype.php Weinstein, C. E., Palmer, D. R., & Acee, T. W. (2016). Learning and Study Strategies Inventory (3rd ed.). Clearwater, FL: H & H Publishing. Woodcock, A., Hernandez, P. R., Estrada, M., & Schultz, P. W. (2012). The consequences of chronic stereotype threat: Domain disidentification and abandonment. Journal of Personality and Social Psychology, 103(4), 635–646. doi:10.1037/a0029120 Yang, Y., & Bliss, L. B. (2014). A Q factor analysis of college undergraduate students’ study behaviours. Educational Research and Evaluation, 20(6), 433–453. doi:10.1080/13803611.2014.971817
325
19 Student Assessment
Tina Kafka, San Diego State University
Introduction

Thorndike (1917) first described the challenges of decoding the minds of readers in the act of reading. The goal of capturing the mind-at-work on text has driven the development of reading comprehension assessment: (a) short answer (early 20th century), (b) fill in the bubble (1930s), (c) the essay (after World War II), and (d) oral response in a discussion (used throughout, but primarily in portfolio assessment) (Sarroub & Pearson, 1998). Each of these assessment techniques, however, is at best indirect, representing "the residue of the comprehension process, rather than the process itself" (p. 98).
Purposes of Assessment

Reading assessment may be inexact, but it nonetheless serves multiple purposes. This chapter will examine those purposes. First, the role of assessment for college placement will be explored. All postsecondary education involves sorting (Hughes & Scott-Clayton, 2011). Public four-year and elite colleges generally sort students prior to admission with aptitude tests – either the SAT or ACT – Advanced Placement scores, courses completed, grades, and extracurricular activities; open-access, two-year colleges admit most students without regard to their performance on standardized tests. Once students enroll, most community colleges administer placement tests to determine proficiency in basic skills, usually reading, writing, and math. "Prepared" students enroll directly into college-level courses. If assessments indicate deficits in one or more basic skills, "unprepared" students are usually required to complete remediation before entering college-level courses. Those basic skills classes – known interchangeably as developmental or remedial classes – do not count toward a degree or transfer. In this way, colleges aim to protect both access and standards (Perin, 2006). The chapter will next examine assessment to inform instruction. Several specific classroom assessment techniques (CATs) will be described that help instructors design instruction responsive to the needs of the students in their classrooms. The next section will expand on summative assessment for grading purposes, including assessments that entail essays and research projects, as well as a discussion of the Nelson-Denny Reading Test, a norm-referenced instrument often employed in research studies. The chapter will conclude with a discussion of assessment that fosters self-reflection and metacognition.
If the goal of reading instruction is the development of competent, independent, strategic readers, assessment that promotes metacognition and thereby moves students in that direction is optimal.
Placement Measures

In 2012, the National Assessment Governing Board reported that 87 percent of colleges nationwide rely exclusively on standardized tests to place students into remedial English (Fields & Parsad, 2012). At the time, Accuplacer, published by the College Board, and Compass, published by ACT, were commonly used for that purpose. Some colleges administer the College Tests for English Placement (CTEP) or the Assessment of Skills for Successful Entry and Transfer (ASSET), also published by ACT. Some faculty develop their own assessments (Grubb & Gabriner, 2013). Others mix and match. One issue that arises across the country, and even within states and districts, is the lack of uniform cutoff scores to determine placement. Studies reported in 2012 that community colleges generally set a higher bar than do four-year colleges (Fain, 2012), a result that surprised many. There are arguments both for and against the standardization of assessment and placement policies – particularly within states. Proponents argue that common benchmarks of academic proficiency are key to maintaining high standards and facilitating transfer between colleges. Critics claim that standardization of placement policy undermines institutional autonomy (Cohen & Brawer, 2008; Grubb & Gabriner, 2013; Hughes & Scott-Clayton, 2011). Though uncertainty surrounds placement policies and practices, most leaders concur that some policy is necessary (Hughes & Scott-Clayton, 2011). Many colleges have begun providing better information to students about the high-stakes nature of placement assessments and encouraging them to review concepts and practice test-taking skills before they take the tests. Summer "boot camps," bridge programs, orientations, and online practice tests benefit students who need only to brush up on rusty skills (Bailey, Jaggars, & Jenkins, 2015; Hughes & Scott-Clayton, 2011). See Bridge Programs (Chapter 16) in this handbook to learn more about these approaches.
Multiple Measures Assessment

The conversation about assessment and placement changed dramatically in 2016 when ACT retired the Compass exam. Colleges scrambled to replace the Compass with an alternative – in many cases, they merely switched to Accuplacer – but the demise of Compass accelerated moves for a comprehensive reevaluation of math, reading, and writing assessment and placement overall. ACT's own research contributed to the growing understanding that a single measure might be insufficient to predict a student's ability to succeed in college-level courses (Adams, 2015). Multiple measures assessment has evoked a great deal of interest. Measures may include ACT/SAT scores, high school grades, class rank, information about high school classes completed, home language, and time since high school graduation, as well as noncognitive assessments, such as the Learning and Study Strategies Inventory (LASSI) (Saxon, Levine-Brown, & Boylan, 2008). Bailey and colleagues underscore the growing acknowledgment that "in general, a student's high school GPA is a much stronger predictor of success in college than ACT or SAT scores" (Bailey et al., 2015, p. 131).
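The logic of a multiple measures placement policy can be illustrated with a short sketch. The Python function below is purely hypothetical: the thresholds, the specific measures considered, and the decision rule are invented for illustration and are not drawn from any actual college's policy or from the studies cited above. It simply shows how a decision rule might combine high school GPA (the stronger predictor, per Bailey et al., 2015) with a test score and time since graduation, rather than relying on a single cutoff.

```python
# Hypothetical multiple-measures placement rule (illustrative only).
# Thresholds and measures are invented, not taken from any real policy.

def place_student(hs_gpa, test_score, years_since_hs):
    """Return a placement decision using several measures instead of one cutoff."""
    # A strong high school GPA places the student directly into
    # college-level work, regardless of the standardized test score.
    if hs_gpa >= 3.0:
        return "college-level"
    # Otherwise consult the test score, relaxing the cutoff for recent
    # graduates whose skills are less likely to have faded.
    cutoff = 75 if years_since_hs <= 2 else 80
    if test_score >= cutoff:
        return "college-level"
    return "developmental"

# A modest GPA with a passing test score, and vice versa, can each qualify.
print(place_student(3.4, 60, 0))   # strong GPA outweighs a weak test score
print(place_student(2.5, 78, 1))   # recent graduate clears the lower cutoff
print(place_student(2.5, 78, 5))   # same score misses the stricter cutoff
```

Under a single-measure policy, the first two students would likely have been placed into developmental coursework on the basis of their test scores alone; the sketch shows how additional measures can change that outcome.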
Corequisite Remediation

Corequisite models, in which students enroll directly in college-level English (and math) despite poor performance on assessments, are also being investigated. A report by Complete College America (CCA) details efforts to replace traditional remediation with corequisite remediation in Colorado, Georgia, Indiana, Tennessee, and West Virginia (Complete College America, 2016). Students who score below the cutoff on the traditional placement tests enroll directly in college-level courses while simultaneously receiving extra academic support in areas of need. In 2012, CCA
referred to remediation as “higher education’s bridge to nowhere” (p. 2). Four years later, CCA characterized corequisite remediation as “a bridge that is spanning the divide between hope and attainment” (Complete College America, 2016).
Acceleration

At some colleges, accelerated reading and writing courses provide another option for students whose initial assessment scores indicate need for support. Students at Chabot College in the East Bay region of California have two developmental alternatives: an accelerated four-credit integrated reading and writing course or a two-semester sequence in which they earn two credits each semester. Both courses are "open access" (Chabot College, 2010). The courses are carefully aligned with the initial college-level English course. Chabot initiated this option in the 1990s; the program has since evolved into the California Acceleration Project and includes nearly two-thirds of California's 113 community colleges (Edgecombe, Jaggars, Xu, & Barragan, 2014; Hern & Snell, 2013).
Fairness of College Entrance/Placement Tests

As more states require high schools to administer college entrance exams to juniors and seniors, focus on the role of standardized tests has intensified. When it was introduced in 1926, the SAT, published by the College Board, was viewed as a way to level the playing field by minimizing the importance of social origins to college access (Neiman, 2016). A new version of the SAT was implemented in 2016, still aimed at that irregular playing field. Though the nature of the playing field has changed drastically in the last century, its bumpiness persists. High school graduation rates are unprecedented; increasing numbers of underserved students, including English language learners, students from low socioeconomic groups, minority students, and students with disabilities, aspire to earn college degrees. Pedagogy and policy scramble to meet the needs of these changing demographics. The new SAT is one effort; it is considered more straightforward and accessible, its content a better reflection of the skills that students learn in school and require for college and career success (Neiman, 2016). ACT has also moved forward. The newest version of the ACT offers test-taking accommodations to students with limited English proficiency. Those accommodations include extra time and the choice to have instructions read aloud in a student's native language or to use an approved bilingual glossary (Gewertz, 2016). The move aligned ACT with the Every Student Succeeds Act (ESSA), signed into law by President Obama in 2015. ESSA requires that all students, including English language learners, be assessed in ways that "intentionally reduce barriers and improve flexibility in how students receive information or demonstrate knowledge" (U.S. Department of Education, 2016, p. 2). It also specifies that states must offer appropriate accommodations for English learners and students with disabilities (U.S. Department of Education, 2016).
Despite accommodations, the college entrance exam remains a barrier for many disadvantaged students. The tests require them to read critically for over an hour, a challenge for less accomplished readers, such as English language learners. Many students skip the optional essay. Test preparation services are expensive or impractical for many students. Students who lack adult guidance might not voluntarily engage in test practice sessions. To mitigate these issues, the College Board partnered with the Khan Academy in 2015 to offer official SAT preparation at no cost to students. The Khan Academy website features a series of diagnostic tests designed by Khan to assess skill levels corresponding to each section of the SAT. The site then directs students to specific videos to remediate skills or fill gaps (Thomsen, 2015).
Test preparation is one form of "shadow education" that advantaged students often take for granted. Shadow education includes educational activities such as tutoring, extra classes, and formal test preparation that take place outside the regular school day (Buchmann, Condron, & Roscigno, 2010, p. 435). An analysis by Buchmann and colleagues (2010) confirmed what many intuit: Students from higher socioeconomic groups are much more likely to use shadow educational activities than are students from lower socioeconomic groups, which has important implications for entrance test performance and selective college enrollment (Buchmann et al., 2010). Lani Guinier, a law professor at Harvard University, takes a broader view. Her 2015 book The Tyranny of the Meritocracy: Democratizing Higher Education in America contrasts "democratic merit" – the benefit to society of educating a diverse group of people – with "testocratic merit" – the assumption that test scores are the best evidence of an applicant's worth. She discounts the tendency to place responsibility for differential test scores on factors such as race, ethnicity, and socioeconomic status; instead, she pins blame on the tests themselves, which, she says, magnify the advantages of relative wealth. Tests would better serve society, she claims, if they assessed an individual's potential to contribute (Jaschik, 2015). She points to the Posse Foundation, begun a decade ago with a $1.9 million grant from the Andrew Mellon Foundation. The Posse Foundation developed an assessment tool known as the Bial-Dale Adaptability Index (BDI) designed to identify students who excel in social interaction and the ability to work in groups and become active members of their campuses. The BDI evaluates students in groups of 10 or 12 at a time through exercises such as working together to duplicate a robot built from LEGO bricks by sharing collective ideas.
The BDI was used to score hundreds of senior high school students in New York City who had applied for college through the Posse program. The BDI predicted persistence, the ability to access resources, and contributions to a campus community. Furthermore, after controlling for SAT scores, students with high BDI scores had higher GPAs and were more likely to graduate in four years (Jaschik, 2015).
Exit Strategies

Exit Criteria

The tension between access and standards described by Perin (2006) is exemplified by the lack of uniform policies within states and districts to determine readiness to exit developmental course work. Some college reading programs rely on standardized tests both to place students into remedial reading and to determine their readiness to exit (Stahl, Simpson, & Hayes, 1992). Cohen and Brawer (2008) underscore the difficulty of setting fixed exit criteria for courses and programs that have no set entry requirements. Many college faculty favor setting standards to ensure that students enter college-level courses with college-level proficiency. From an institutional perspective, remedial instruction should enable students to succeed in standard courses. Sawyer and Schiel (2000) point out that notions about "what constitutes standard, lower level, and higher level courses vary from institution to institution" (p. 5). Grubb and Gabriner (2013) describe the difficulty of setting exit criteria without coherent oversight of the entire system. The criterion used by most colleges is course grade, yet grading standards are far from consistent (Grubb & Gabriner, 2013). In some colleges, the same standardized instrument used for placement is readministered to determine readiness to exit. This raises the specter of the age-old struggle over whether or not "to teach to the test" (Sawyer & Schiel, 2000, p. 38). The drive for accountability is accompanied by the risk that "students get better scores but are not better readers" (Sarroub & Pearson, 1998, p. 101).
Built-In Exit

Students at Chattahoochee Technical College in Tennessee, like students at many colleges across the country, remediate reading deficiencies in a learning lab on campus. They work through a series of computer-adaptive modules designed to target vocabulary, reading comprehension, study skills, critical reading skills, and critical thinking skills. Exit tests are built into the modules. Passing score requirements vary, with associate's degree programs requiring the highest scores; those for diploma and certificate programs are lower (Chattahoochee Technical College, 2017).
The Role of Technology in Testing

At the onset of the third decade of the 21st century, the role of technology in the testing landscape is expanding. One key question remains unanswered: Do computer-based exams measure skills and knowledge as accurately as traditional paper-based tests do (Herold, 2016)? The question was highlighted in 2016 with news that millions of high school students who took the Partnership for Assessment of Readiness for College and Careers (PARCC) computer-based exam scored worse than those who took the same test with paper and pencil. PARCC is one of two federally funded consortia tasked with developing tests aligned with the Common Core State Standards Initiative, originally launched in 2009; the Smarter Balanced Assessment Consortium (SBAC) is the second. Discrepancies were reported in Illinois, Rhode Island, and the Baltimore County School District in Maryland and were most pronounced on the English Language Arts and upper-grades math exams. In Rhode Island, for example, 42.5 percent of students who took the PARCC exam on paper scored at the proficient level, compared with 34 percent of those who took the exam on the computer (Herold, 2016). The infrastructure to support administration of computer-based exams is inconsistent across states and districts. Mounting evidence suggests that, at present, computer-based exams are likely assessing a student's expertise in navigating the digital interface as much as or more than they are testing academic knowledge (Herold, 2016). The testing landscape, however, is changing.
The Nation's Report Card

Every two years since 1969, the National Assessment of Educational Progress (NAEP) has been administered in key subjects, such as reading and math, to samples of 4th-, 8th-, and 12th-grade students across the country. The results are released as The Nation's Report Card and are used by researchers, parents, policymakers, and school districts to improve education (The Nation's Report Card, n.d.-a). In 2014, the NAEP administered for the first time an Assessment of Technology and Engineering Literacy (TEL) to 21,500 eighth graders at 840 schools across the nation. Questions assessed knowledge and skill in understanding technological principles, solving technology- and engineering-related problems, and using technology to communicate and collaborate. The TEL marked a departure from other NAEP designs since it was completely computer-based and included interactive scenario-based tasks. NAEP assessments are administered uniformly across the nation to ensure that they serve as a common metric for all states and selected urban districts. NAEP began administering digitally based assessments in math, reading, and writing in 2017; additional subjects will be added in 2018 and 2019. The TEL project represents "a catalyst and laboratory for the future: leading the way for the critical transition of NAEP's complete portfolio from paper-based assessments to digital-based assessments" (The Nation's Report Card, n.d.-b). The issues surrounding assessment and placement have assumed new urgency as the third decade of the century draws near. Scrutiny is directed at the long-standing tradition of placing students into remedial courses or sequences based on the results of a single assessment or cutoff
score on a standardized test. Many institutions are investigating multiple measures assessment that accounts for students' educational background, motivation, and other noncognitive factors. Technology is poised to assume an increasing presence in every aspect of the college experience – from enrollment to placement to completion – yet the outcomes of its predominance in the educational landscape are still nebulous. Technology might finally tame the playing field, or it could exacerbate its asymmetry. The jury is still out. The next section of the chapter will discuss assessment that informs instruction. In addition to a brief history, some specific classroom assessment tools will be described.
Formative Assessment

Shulman (1986) described pedagogical content knowledge as knowledge needed specifically for teaching. It includes two essential elements: an understanding of the misunderstandings that complicate learning for students and a toolbox of pedagogical strategies to repair those misunderstandings. First, pedagogical content knowledge must be grounded in a firm understanding of what students know – or think they know – and how they operationalize that knowledge. In primary and some secondary school contexts, informal reading inventories are administered to individual students to assess areas of weakness and strength. The process yields valuable information but is time-intensive and impractical within the structure of most college classes, despite McShane's (2005) assertion that "Assessment is especially important in working with adult readers because the learners in any classroom vary greatly in their reading skills" (p. 23). In fact, assessment and placement policies at the college level generally result in classes filled with students whose individual needs vary widely. Even two sections of the same course taught by the same instructor are shaped by the complex mix of individual student background variables. Angelo and Cross (1993) maintain that each class assumes "its own collective personality, its own chemistry" as a result of the mix of socioeconomic class, linguistic and cultural backgrounds, attitudes and values, and previous knowledge (p. 5). Formative assessment enables instructors to address those collective needs.
Classroom Assessment Techniques

Theories of learning shifted in the late 1980s and early 1990s to account for new understanding, grounded in sociocultural theory, that learners are affected by multiple domains, including home, school, and community. That shift in thinking about learning was reflected in changes in thinking about assessment of learning (Sarroub & Pearson, 1998). Performance assessments and portfolios gained favor with their emphasis on a "personal orientation toward evidence of growth and learning rather than the more categorical skills-based approach of previous years" (p. 102). Those assessments, however, were unwieldy and took time from teaching; in addition, instructors found it burdensome to grade portfolios and performance assessments. Thus, assessments that fulfilled multiple purposes simultaneously gained favor. Formative assessment provides information about what students are learning while they are in the process of learning it (Gewertz, 2015a). Formative assessments are process-oriented as opposed to product-oriented; they focus on the strategies, approaches, and processes that students use as they read and write. They are identified by their group orientation, their ongoing nature, and their utility in providing critical information to guide instruction (Flippo, 2011; Simpson, Stahl, & Francis, 2004). Richard Stiggins, past director of test development at ACT and president of the Assessment Training Institute in Portland, Oregon from 1992 to 2010, underscores the noncognitive benefits of formative assessment: "Good formative assessment keeps students believing that success is within reach if they keep trying" (quoted in Gewertz, 2015b).
Table 19.1 CATs for the College Classroom

Minute Paper (Prep: Low)
Description: Ask students to respond to two questions: What was the most important thing you learned in class? How can you apply what you learned?
Analysis: Review responses and note useful comments. Share noteworthy issues raised at the next class meeting.

One-Sentence Summary (Prep: Medium)
Description: Challenges students to answer the questions "Who does what to whom, when, where, how, and why?" (WDWWWWHW) in one grammatically correct sentence.
Analysis: Have students draw slash marks between the focus elements of the sentence. Evaluate each component with a 0 (inadequate), a check (adequate), or a + (good). Tally responses; look for patterns of strength.

Word Journal (Prep: Medium to High)
Description: Students summarize a short text in a single word, then write a paragraph explaining the choice of word. Works well with primary texts.
Analysis: Keep track of words used by multiple students. Categorize by types of explanations. Share three or four different approaches with the class.

Reading Rating Sheets (Prep: Low)
Description: Write out four or five yes/no or multiple-choice questions, followed by one or two short answer questions, to assess student response to and perceived value of assigned reading. Adapt as needed.
Analysis: Tally responses and look for patterns in short comments. Group into meaningful clusters. Provide feedback, focusing most on responses that rate the value of the reading.

Source: Adapted from Angelo and Cross (1993).
Formative assessment is quick and actionable, provides insight into student understanding and the effectiveness of instructional practices, and helps students reflect on their goals (Andrade & Cizek, 2010). Table 19.1 provides a sample of the 50 CATs described by Angelo and Cross (1993).
The Ticket to Retention

Divoll, Browning, and Vesey (2012) developed a CAT that fosters retention of key concepts. The instructor presents students with three to five focus questions at the beginning of class. As the instructor lectures, students jot down notes that address the focus questions. Students then compare notes with two peers (one at a time). At the end of class, students respond in writing to the original questions (Divoll et al., 2012).
Grading Formative Assessment

Opinions differ about the value, or even wisdom, of grading formative assessments. Critics argue that formative assessment should maintain low stakes to ensure transparency. Proponents insist that students apply themselves more assiduously when grades are involved, thus ensuring more accurate results (Gewertz, 2015a).
The Feedback Loop

The effectiveness of formative assessments hinges most of all on the "feedback loop" established in the classroom by the instructor (Angelo & Cross, 1993, p. 6). Instructors analyze the results of the assessment, provide feedback to the students, base the next teaching steps on the results – repair a misunderstanding, reteach in a different way, or proceed – then use another CAT, or repeat the same one, to recheck understanding. Angelo and Cross (1993) emphasize that this approach, once integrated into everyday classroom activities, connects faculty to students and teaching to learning in powerful ways.
Summative Assessment

The overriding purpose of summative assessment is to certify a level of achievement at a point in time – the end of a curricular unit or semester, or annually at the same time each year (Boud & Falchikov, 2006). Summative assessments may be standardized, with norms developed from large groups of similar students, or they may be objective tests developed by one teacher. The questions may be selected from question pools that accompany reading textbooks or embedded in online platforms (Education Week, 2015). Grubb (1999) underscores the interdependence of instruction and assessment. Advocates of skills and drills tend to rely on multiple-choice exams. Those who favor constructivist or meaning-centered instruction devise alternative assessments. Either way, instructional style tends to determine assessment choices (Grubb, 1999). Flippo (2014) points out that although formative assessments are used to inform teaching, they can also be used for summative purposes (i.e., to provide summary information about students' knowledge and performance):

Although there is sometimes a tendency to think of formative assessment in a positive way and summative assessment in a negative way, neither judgment is accurate. Teachers must be able to use and understand both formative and summative assessment in order to work within the parameters of schools and schooling… [while keeping] the purpose of the assessments in mind. (pp. 11–12)

Authentic assessments may also be used for summative purposes. The term was popularized in the 1990s by Wiggins (1990), who described it as a full array of tasks that

mirror the priorities and challenges found in the best instructional activities: conducting research; writing, revising and discussing papers; providing an engaging oral analysis of a recent political event; collaborating with others on a debate, etc. (p. 1)

Authentic assessments include presentations, research projects, and oral arguments.
Flippo (2014) describes authentic assessment as assessment of many and various literacy abilities in contexts that closely match the actual situations in which those abilities are used, and, likewise, authentic purposes as "real purpose[s] for reading and study, as opposed to contrived purposes and assignments" (pp. 207–208, 320). Performance assessment is one type of authentic assessment. Performance assessment typically involves application of reading strategies to disciplinary tasks. Detailed rubrics describe the criteria used to evaluate the work, descriptions of work that satisfy the various criteria, and the scores (e.g., not apparent, developing, proficient, exemplary) that will be assigned to each level of performance. The open-ended format of performance assessment responses makes them amenable to various approaches to problem-solving (Afflerbach, 2012).
Simulations

In the mid-1970s, Bartholomae and Petrosky (1986) designed a Basic Reading and Writing course for students at the University of Pittsburgh who were "unprepared for the textual demands of a college education" (p. 4). The final exam comprised an in-class essay in response to a question regarding significant ideas in a chapter from Margaret Mead's autobiography. The exam was the culmination of a full semester of reading and writing about the process of "coming of age in America" (p. 85). Stahl et al. (1992) proposed a similar assessment that entailed distributing an introductory chapter from a sociology text on Monday with the assignment to prepare for an objective and essay exam on the material by week's end. The students' study materials – notes, outlines, concept maps, etc. – were collected and scored on a rubric as part of the final grade. Instructor participants in the California Acceleration Project assess academic literacy in a similar way. Inquiry questions frame semester-long themes, such as "Food Justice" and "What Makes People Do What They Do?" (California Acceleration Project, 2016). Students read challenging texts and engage in ongoing projects and class discussions throughout the semester. The final exams reflect the type of reading, writing, and thinking that typifies college courses (Hern & Snell, 2013). Simulations and essay exams provide crucial insights into academic literacy, but they are not always practical at institutions dominated by large classes and part-time faculty.
Nelson-Denny Reading Test

The Nelson-Denny Reading Test (NDRT) lies at the opposite extreme. The NDRT is a norm-referenced reading comprehension test with two alternate forms, G and H. It consists of two parts: a 15-minute vocabulary test and a 7-passage, 20-minute comprehension test that provides raw scores, scale scores, grade-equivalent scores, and national percentile ranks. It functions primarily as a screening instrument for high school and college-level students (Barksdale-Ladd & Rose, 1997; Brown, Fishco, & Hanna, 1993; Caverly, Nicholson, & Radcliffe, 2004; Lavonier, 2016; Perin & Bork, 2010; Perin, Raufman, & Kakamkarian, 2015). The test has been employed in many research studies to determine the effectiveness of a reading strategy under investigation. Read more about these tests and others in Reading Tests (Chapter 20) in this handbook.
Assessment That Fosters Learning

Assessment fulfills many functions at a college institution. Assessment determines placement, informs instruction, and measures achievement. Boud and Falchikov (2006) aptly point out,

The raison d’être of a higher education is that it provides a foundation on which a lifetime of learning in work and other social settings can be built. Whatever else it achieves, it must equip students to learn beyond the academy once the infrastructure of teachers, courses, and formal assessment is no longer available.
(p. 399)

Assessment that fosters learning and promotes metacognition can function as the means to that end. Vacca and Vacca (1994) proposed that deep processing of complex material requires two types of knowledge. Metacognitive knowledge, or “the ability to think about or control your own learning” (p. 47), is one type. Metacognitive knowledge, in turn, includes self-knowledge and task knowledge. Self-knowledge is the knowledge students hold about themselves as learners. Task knowledge is knowledge about available skills and strategies. Formative assessment also involves self-regulation, or “the ability to monitor and regulate comprehension” (Vacca & Vacca, 1994, p. 47).
Student Assessment
Self-Assessment

Afflerbach (2014) contends that self-assessment – a form of metacognition – lies at the heart of independence. He points out too that the “ante has been raised” as scrutiny of outcomes has intensified (p. 30). Boud and Falchikov (2006) underscore the importance of preparing students to render complex judgments about their own work and that of others and to make decisions in life beyond school. Moreover, self-assessment transfers control of learning to the student. Mahlberg (2015) reported significantly higher use of self-regulated learning practices among students who participated in self-assessment. Behaviors included preparing for class, setting goals, and modifying study strategies, including reading, to increase learning. Mahlberg contends that “activating metacognitive strategies by expecting students to reflect on their own performance is emerging as a low/no cost strategy to increase retention” (p. 780). Chapter 12, “Strategic Study-Reading,” of this handbook outlines several strategies designed to encourage self-regulation. Mulcahy-Ernt and Caverly (2009) note,

The shift in focus is from a pedantic stance to one that fosters students as agents engaged with the tasks, materials, and discussions. The implication of this pedagogical shift is to foster the student’s own planning, decision-making, reflection, and evaluation of effective strategies.
(p. 191)

The strategies described include many based on a now classic framework that made its debut 70 years ago – SQ3R: Survey the topic, turn headings into Questions, Read to answer the questions, Recite to recall the main points and answers to the questions, and Review the main points (Mulcahy-Ernt & Caverly, 2009; Robinson, 1946). SQ4R added a step – wRite (Smith, 1961).
A similar concept frames Read, Organize, Write, Actively Read, Correct Predictions (ROWAC, Roe, Stoodt-Hill, & Burns, 2007); Survey, Read, Underline, Notate (S-RUN, Bailey, 1988); and Survey, Read, Underline, Notate, Review (S-RUN-R, van Blerkom & Mulcahy-Ernt, 2005). Each strategy described reflects academic tasks students encounter in college and strengthens self-regulation as students become agents in their own learning (Flippo & Schumm, 2000).
Assessing Metacognition

Mokhtari and Reichard (2002) developed the Metacognitive Awareness of Reading Strategies Inventory (MARSI) to measure adolescent and adult readers’ metacognitive awareness and perceived use of reading strategies while reading academic materials. The instrument lists 30 strategies across three areas that readers use to make sense of text: global reading strategies, reading-support strategies, and problem-solving strategies. Students rate the frequency with which they use the strategies on a five-point scale from 1 (Never) to 5 (Always). The instrument was administered to five sections of Chabot’s accelerated Reading, Reasoning, and Writing course (English 102) at the beginning and end of the Fall 2009 semester. Instruction that semester emphasized the “Reading Support” strategies that students reported using least. Strong gains in use of those strategies were noted at semester’s end (Chabot College, 2010).
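The arithmetic behind an inventory of this kind is simple subscale averaging: each student’s ratings are grouped by subscale and the mean is computed. The short Python sketch below illustrates the idea; the item groupings and the high/medium/low usage cutoffs are invented for illustration and are not the published MARSI scoring key.

```python
# Sketch of scoring a MARSI-style inventory: ratings (1 = Never ... 5 = Always)
# are averaged within each subscale. Item groupings below are hypothetical.
from statistics import mean

SUBSCALES = {
    "global": [1, 3, 4, 7, 10],             # global reading strategies (sample items)
    "support": [2, 5, 8, 12, 15],           # reading-support strategies (sample items)
    "problem_solving": [6, 9, 11, 13, 14],  # problem-solving strategies (sample items)
}

def score_inventory(ratings):
    """ratings: dict mapping item number -> rating on the 1-5 scale."""
    return {
        subscale: round(mean(ratings[i] for i in items), 2)
        for subscale, items in SUBSCALES.items()
    }

def usage_level(avg):
    # Assumed banding convention for a 5-point scale, for illustration only.
    if avg >= 3.5:
        return "high"
    if avg >= 2.5:
        return "medium"
    return "low"

# One student's ratings for items 1-15:
ratings = dict(zip(range(1, 16), [4, 2, 5, 4, 1, 3, 5, 2, 4, 3, 3, 1, 4, 2, 2]))
scores = score_inventory(ratings)
# e.g. scores["support"] is low here, flagging reading-support strategies as
# the least-used area -- the kind of result that guided instruction at Chabot.
```

A pretest/posttest comparison, as in the Chabot study, would simply run this scoring twice and compare subscale means.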
Assessing Reading Attitude

There is growing interest in the role of noncognitive factors in learning, such as motivation and persistence – or grit. While many instruments exist to measure those elements in general, until recently no instrument had been developed to measure reading attitude specifically. To fill that gap, Isakson, Isakson, Plummer, and Chapman (2016) developed the Isakson Survey of Academic Reading Attitudes (ISARA), a 20-item instrument to measure academic reading attitude on three subscales: (1) students’ perceived academic reading behaviors, (2) expectations for success with academic reading tasks, and (3) the value placed on academic reading. One preliminary study indicated that the ISARA detected changes in attitude from pretest to posttest. Few would disagree, the authors maintain, that “good attitudes are needed for choosing to read, sustaining effort in reading, and for learning at a deep level” (p. 129).
Areas for Further Research

As multiple measures assessment gains traction nationwide, it will be important to track changes in rates of success over time as measured by college completion and transfer. Free online SAT practice courses, such as those offered by the Khan Academy, should be studied more fully; early results are promising. Another key question that merits further investigation is the effect of computer-based assessments on test results. Few would dispute that the original intention behind developmental education in both math and reading was the promotion of equity; one unforeseen consequence was the erection of barriers that many students never overcome. The replacement of long sequences with corequisite and accelerated models is promising. Ongoing studies are necessary to determine their effectiveness, both in promoting college completion and in providing students with the means to navigate critically the multiple texts that confront them in college. Effective college classroom assessment has been largely overlooked in recent research. The development of effective and practical classroom assessments that reflect today’s multiple literacies and foster self-reflection and metacognition is long overdue. The ISARA is a beginning, one ripe for further development.
Implications for Practitioners

Reading instructors confront a complex task. Students in a stand-alone reading class are usually not there by choice. Therefore, the first and arguably most daunting challenge for any reading instructor is to build confidence and to engage students in the effort to improve. Engagement is essential for progress, and effective assessment is part of that. Good assessment is enlightening. It provides crucial information to instructors and students alike: Instructors come to understand how students have processed the information conveyed in class; students learn how far they have progressed and what areas still need attention. The goal of reading instructors – in fact, all instructors – is to participate in the development of independent learners prepared to think critically both about their own work and that of others. Mahlberg’s (2015) assertion that self-assessment transfers control of learning to the student, along with Afflerbach’s (2014) contention that self-assessment lies at the heart of independence, underscores the obligation on the part of practitioners to fully integrate assessment into course curricula. Well executed, assessment functions as a potent tool in the pedagogical tool kit.
Conclusion

Student assessment of reading has progressed since Thorndike (1917) bemoaned its shortcomings a century ago. In fact, neuroscience has begun to open a window onto the brain at work on a book, offering the closest look so far into the elusive “phenomenological act of comprehension” (Sarroub & Pearson, 1998, p. 98). By combining functional magnetic resonance imaging (fMRI) – a brain scan – with fixation-related potentials (FRPs) – measures of brain activity time-locked to eye movements – neuroscientists are beginning to understand how words are represented in the brain
(Nikos-Rose, 2016). Yet even as scientists pinpoint the neural processes of text comprehension, primary responsibility for interpreting that data and fostering strategic reading still resides with the reading instructor. At least for now.
References and Suggested Reading

Adams, C. (2015, June 18). ACT phases out Compass placement tests. Education Week. Retrieved from blogs.edweek.org/edweek/college_bound/2015/06/act_phases_out_compass_placement_tests.html
Afflerbach, P. (2012). Understanding and using reading assessment K-12. Newark, DE: International Reading Association.
*Afflerbach, P. (2014). Self-assessment and reading success. Reading Today, November/December, 30–33.
Andrade, H. L., & Cizek, G. J. (Eds.). (2010). Handbook of formative assessment. New York, NY: Routledge.
*Angelo, T. A., & Cross, K. P. (1993). Classroom assessment techniques: A handbook for college teachers. San Francisco, CA: Jossey-Bass.
Bailey, N. (1988). S-RUN: Beyond SQ3R. Journal of Reading, 32(2), 170–171.
*Bailey, T., Jaggars, S. S., & Jenkins, D. (2015). Redesigning America’s community colleges. Cambridge, MA: Harvard University Press.
Barksdale-Ladd, M. A., & Rose, M. C. (1997). Qualitative assessment in developmental reading. Journal of College Reading and Learning, 28(1), 34–55. doi:10.1080/10790195.1997.10850052
*Bartholomae, D., & Petrosky, A. (1986). Facts, artifacts, and counterfacts. Upper Montclair, NJ: Boynton/Cook Publishers.
Boud, D., & Falchikov, N. (2006). Aligning assessment with long-term learning. Assessment & Evaluation in Higher Education, 31(4), 399–413.
Brown, J. A., Fischco, V. V., & Hanna, G. (1993). Nelson-Denny reading test: Manual for scoring and interpretation, forms G & H. Rolling Meadows, IL: Riverside.
Buchmann, C., Condron, D. J., & Roscigno, V. J. (2010). Shadow education, American style: Test preparation, the SAT, and college enrollment. Social Forces, 89(2), 435–462.
*California Acceleration Project. (2016). Inspiration for accelerated, thematic reading and writing courses: Themes and texts from past members of CAP’s community of practice. Paper presented at the CAP Summer Institute, Southern California.
Caverly, D. C., Nicholson, S. A., & Radcliffe, R. (2004). The effectiveness of strategic reading instruction for college developmental readers. Journal of College Reading & Learning, 35(1), 25–49.
Chabot College. (2010). Jumpstart: Assessment of student’s reading practices in Chabot’s developmental English classes. Center for Teaching and Learning. Retrieved from www.chabotcollege.edu/learningconnection/ctl/FIGs/jumpstart/readingassmt.asp
Chattahoochee Technical College. (2017). Learning support: A navigational guide. Retrieved from www.chattahoocheetech.edu/learning-support/
Cohen, A. M., & Brawer, F. B. (2008). The American community college. San Francisco, CA: John Wiley & Sons, Inc.
Complete College America. (2012). Remediation: Higher education’s bridge to nowhere. Indianapolis, IN: Author.
*Complete College America. (2016). Corequisite remediation: Spanning the completion divide. Indianapolis, IN: Author.
Divoll, K. A., Browning, S. T., & Vesey, W. M. (2012). The ticket to retention: A classroom assessment technique designed to improve student learning. The Journal of Effective Teaching (online journal), 12(2), 45–64.
Edgecombe, N. D., Jaggars, S., Xu, D., & Barragan, M. (2014). Accelerating the integrated instruction of developmental reading and writing at Chabot college. CCRC Working Paper No. 71. New York, NY: Community College Research Center. Retrieved from ccrc.tc.columbia.edu/publications/accelerating-integrated-instruction-at-chabot.html
Education Week. (2015, November 9). Types of assessments: A head-to-head comparison. Understanding Formative Assessment: A Special Report. Retrieved from www.edweek.org/ew/section/multimedia/types-of-assessments-a-head-to-head-comparison.html
Fain, P. (2012, December 12). Placement tests still rule. Inside Higher Ed. Retrieved from www.insidehighered.com/news/2012/12/21/colleges-rely-heavily-popular-remedial-placement-tests
Fields, R., & Parsad, B. (2012). Tests and cut scores used for student placement in postsecondary education: Fall 2011. Washington, DC: National Assessment Governing Board.
Flippo, R. F. (2011). Transcending the divide: Where college and secondary reading and study research coincide. Journal of Adolescent and Adult Literacy, 54(6), 396–401. doi:10.1598/JAAL.54.6.1
Flippo, R. F. (2014). Assessing readers: Qualitative diagnosis and instruction (2nd ed.). New York, NY: Routledge; and Newark, DE: International Reading Association.
*Flippo, R. F., & Schumm, J. S. (2000). Reading tests. In R. F. Flippo & D. C. Caverly (Eds.), Handbook of college reading and study strategies (pp. 403–472). Mahwah, NJ: Erlbaum.
Gewertz, C. (2015a, November 11). Is formative assessment ‘just good teaching’ or something more specific? Education Week. Retrieved from www.edweek.org/ew/articles/2015/11/11/searching-for-clarity-on-formative-assessment.html
Gewertz, C. (2015b, November 11). Q&A: Misconceptions about formative assessment. Education Week. Retrieved from www.edweek.org/ew/articles/2015/11/11/qa-misconceptions-about-formative-assessment.html
Gewertz, C. (2016, November 29). ACT to offer test supports for English learners. Education Week. Retrieved from www.edweek.org/ew/articles/2016/11/30/act-to-offer-test-supports-for-english-learners.html?qs=SAT+and+aCT+test+preparation
Grubb, W. N. (1999). Honored but invisible. New York, NY: Routledge.
Grubb, W. N., & Gabriner, R. (2013). Basic skills education in community colleges. New York, NY: Routledge.
*Hern, K., & Snell, M. (2013). Toward a vision of accelerated curriculum and pedagogy: High challenge, high support classrooms for underprepared students. Oakland, CA: LearningWorks.
Herold, B. (2016, February 23). Comparing paper and computer testing: Seven key research studies. Education Week. Retrieved from www.edweek.org/ew/articles/2016/02/23/comparing-paper-and-computer-testing-7-key.html?qs=technology+based+reading+assessment
*Hughes, K. L., & Scott-Clayton, J. S. (2011). Assessing developmental assessment in community colleges. Community College Review, 39(4), 327–351.
Isakson, R. L., Isakson, M. B., Plummer, K. J., & Chapman, S. B. (2016). Development and validation of the Isakson Survey of Academic Reading Attitudes (ISARA). Journal of College Reading and Learning, 46(2), 113–138. doi:10.1080/10790195.2016.1141667
Jaschik, S. (2015). The tyranny of the meritocracy. Retrieved from Inside Higher Ed website: www.insidehighered.com/news/2015/02/03/qa-lani-guinier-about-her-new-book-college-admissions
*Lavonier, N. (2016). Evaluation of the effectiveness of remedial reading courses at community colleges. Community College Journal of Research and Practice, 40(6), 523–533. doi:10.1080/10668926.2015.1080200
*Mahlberg, J. (2015). Formative self-assessment college classes improves self-regulation and retention in first/second year community college students. Community College Journal of Research and Practice, 39(8), 772–783.
McShane, S. (2005). Applying research in reading instruction for adults. Washington, DC: National Institute for Literacy.
Mokhtari, K., & Reichard, C. A. (2002). Assessing students’ metacognitive awareness of reading strategies. Journal of Educational Psychology, 94(2), 249–259.
*Mulcahy-Ernt, P. I., & Caverly, D. C. (2009). Strategic study-reading. In R. F. Flippo & D. C. Caverly (Eds.), Handbook of college reading and study strategy research (2nd ed.). New York, NY: Routledge.
Neiman, G. (2016, March 16). The new SAT won’t close the achievement gap. Education Week. Retrieved from www.edweek.org/ew/articles/2016/03/16/the-new-sat-doesnt-mitigate-the-achievement.html?qs=new+SAT+won't+close+achievement+gap
Nikos-Rose, K. (2016, April 6). Brain reading: Brain scans show activity during normal reading. EurekAlert! Retrieved from neurosciencenews.com/fmri-reading-neuroimaging-3987/
Perin, D. (2006). Can community colleges protect both access and standards? The problem of remediation. Teachers College Record, 108(3), 339–373.
Perin, D., & Bork, R. H. (2010). A contextualized reading-writing intervention for community college students (CCRC Brief No. 44). Retrieved from ccrc.tc.columbia.edu/publications/contextualized-intervention-developmental-reading.html
*Perin, D., Raufman, J., & Kalamkarian, H. S. (2015). Developmental reading and English assessment in a researcher-practitioner partnership. New York, NY: Community College Research Center, Teachers College, Columbia University.
Robinson, F. P. (1946). Effective study (2nd ed.). New York, NY: Harper & Row.
Roe, B. D., Stoodt-Hill, B. D., & Burns, P. C. (2007). Secondary school literacy instruction: The content areas. Boston, MA: Houghton-Mifflin.
Sarroub, L., & Pearson, P. D. (1998). Two steps forward, three steps back: The stormy history of reading comprehension assessment. The Clearing House, 72(2), 97–105.
Sawyer, R., & Schiel, J. (2000). Posttesting students to assess the effectiveness of remedial instruction in college. Paper presented at the Annual Meeting of the National Council on Measurement in Education, New Orleans, LA.
Saxon, D. P., Levine-Brown, P., & Boylan, H. (2008). Affective assessment for developmental students, part 1. Research in Developmental Education, 22(1), 1–4.
Shulman, L. S. (1986). Those who understand: Knowledge growth in teaching. Educational Researcher, 15, 4–14.
Simpson, M. L., Stahl, N., & Francis, M. A. (2004). Reading and learning strategies: Recommendations for the 21st century. Journal of Developmental Education, 28(2), 2–4, 6, 8, 10–12, 14–15, 32.
Smith, D. E. P. (1961). Learning to learn. New York, NY: Harcourt Brace Jovanovich.
Stahl, N., Simpson, M. L., & Hayes, C. G. (1992). Ten recommendations from research for teaching high-risk college students. Journal of Developmental Education, 16(1), 2–10.
The Nation’s Report Card. (n.d.-a). About the nation’s report card. Washington, DC: National Assessment of Educational Progress. Retrieved from www.nationsreportcard.gov/about.aspx
The Nation’s Report Card. (n.d.-b). A new generation of NAEP assessments. Washington, DC: National Assessment of Educational Progress. Retrieved from www.nationsreportcard.gov/tel_2014/
Thomsen, J. (2015, June 2). New website for a new test. Retrieved from Inside Higher Ed website: www.insidehighered.com/news/2015/06/02/college-board-and-khan-academy-team-ease-access-new-sat
Thorndike, E. (1917, June). Reading as reasoning: A study of mistakes in paragraph reading. Journal of Educational Psychology, 8(6), 323–332.
U.S. Department of Education. (2016). Every Student Succeeds Act (ESSA) [Press release]. Retrieved from www.ed.gov/essa?src=rn
Vacca, R., & Vacca, J. (1994). Content area reading: Literacy and learning across the curriculum (5th ed.). New York, NY: HarperCollins.
van Blerkom, D. L., & Mulcahy-Ernt, P. I. (2005). College reading and study strategies. Belmont, CA: Thomson Wadsworth.
Wiggins, G. (1990). The case for authentic assessment. Practical Assessment, Research and Evaluation, 2(2). Retrieved from www.pareonline.net/Articles.htm
20 Reading Tests

Rona F. Flippo, University of Massachusetts Boston
Sonya L. Armstrong, Texas State University
Jeanne Shay Schumm, University of Miami
Much recent educational scholarship has focused on concerns about assessment across educational levels and disciplines. Specific to the postsecondary level, debates over the growing use of standardized commercial tests, questions about the best measurements of college-readiness, and concerns about current course placement instruments all have had direct implications for college reading and learning. However, despite the prevalence of these conversations in the professional literature, assessment too often remains a mysterious, annoying, or even dreaded task rather than an integrated part of the work in the field. On the other hand, it is becoming increasingly common for assessment to be far removed from practitioners, especially as central testing centers or other administrative units take over decision-making for all areas of a college. Either way, there is a need for continued exploration of assessment best practices for college reading and learning professionals. Toward that goal, in this chapter, we aim to demystify some aspects related to college reading tests. In order to initiate this discussion, though, we must first examine the current context for the primary uses of college reading tests.
The Current Context for College Reading Tests

“College and career readiness” has become the focal point of major educational reform efforts over the past several years. Perhaps most notably, the Common Core State Standards Initiative (CCSSI) emphasizes issues of college-readiness as its goal, clarifying standards beginning with what is considered “college ready” and then systematically backward-benchmarking each grade level (Barnett & Fay, 2013; Common Core State Standards Initiative [CCSSI], 2018; Holschuh, 2014; King, 2011). It should be noted that not all states and territories have adopted the CCSSI (and, indeed, some that previously adopted the Standards have since rescinded participation) (Achieve, 2013; CCSSI, 2018). Regardless, the conversations surrounding “college and career readiness” highlighted by the CCSSI are widespread, even in non-CCSSI states, such as Indiana and Texas (Indiana Commission for Higher Education, 2014; Texas Higher Education Coordinating Board, 2009). Specifically, the goal of students exiting high school ready for the literacy expectations of college-level courses has been a common call to action and, in fact, is exemplified throughout the
English/Language Arts Standards (CCSSI, 2018). Nonetheless, at present, increasing numbers of beginning college students each year are identified as not ready for college-level literacy expectations and are placed into one or more developmental reading courses prior to their beginning actual college-level courses (Belfield & Crosta, 2012; Hughes & Scott-Clayton, 2011; Kena et al., 2014; Quint, Jaggars, Byndloss, & Magazinnik, 2013). These numbers are unlikely to change in the near future—at least as related to recent college-readiness reform efforts—especially given that it will likely take years for the CCSSI (and other state reform efforts) curricular redesign to have its full impact on incoming college students. In addition, the upward trend of nontraditional students returning for postsecondary work has continued (Kena et al., 2014). To further complicate the goal of “college and career readiness,” despite numerous attempts to define the construct (Achieve, 2013; Conforti, 2013; Conley, 2007, 2008, 2012; Education Commission of the States [ECS], 2012; Mishkind, 2014), the reality is that no universally accepted definition of college-text ready exists (Armstrong & Stahl, 2017; Armstrong, Stahl, & Kantner, 2016; Hodara, Jaggars, & Karp, 2012; Holschuh, 2014; National Center on Education and the Economy [NCEE], 2013). This combination of factors—the continued need for literacy support for beginning college students and the lack of a common understanding of what constitutes “college and career readiness” for reading—presents an interesting problem for professionals in college reading and learning, particularly as this relates to assessment. With this current context in mind, the purpose of this chapter is to provide an overview of the current state of college reading tests, with a particular focus on the most common usage: placement assessment. 
We begin with a discussion of purposes for reading tests, followed by an overview of different types of reading tests as well as current conversations related to the topic. Our goal in doing so is to engage college reading and learning professionals in the kind of deliberation that is needed to make informed decisions about reading test selection. After all, as Bean, Readence, and Dunkerly-Bean (2017) have noted, “The end product of assessment in the content classroom, or any classroom for that matter, should be instructional decision making” (p. 96). As with previous versions of this chapter, we conclude with a compendium of reviews of select commercial tests for use as a reference for college reading professionals.
Purposes and Philosophies for College Reading Tests

There are multiple possible purposes for the use of college reading tests, and, in fact, the previous editions of this chapter (Flippo, Hanes, & Cashen, 1991; Flippo & Schumm, 2000, 2009) provided detailed discussions of some of the traditional uses of such tests at the postsecondary level. Three major purposes cited for testing college students’ reading are conducting an initial screening, making a diagnostic assessment, and evaluating the effectiveness of the instruction resulting from the assessment. Simpson and Nist (1992) draw heavily from the work of Cross and Paris (1987) in conceptualizing these three assessment purposes as follows: “sorting, diagnosing, and evaluating” (Simpson & Nist, 1992, p. 452). Currently, the primary purpose of reading tests at the postsecondary level is the first one: sorting; thus, this will be the major focus of this chapter. Generally speaking, sorting has two underlying purposes: (a) to determine if a student has need for reading assistance and (b) to determine placement in the appropriate reading course or in special content courses or sections. The second purpose, diagnosing, is far less frequently adopted in postsecondary settings at present, though it has a long history within the field of college reading, particularly in campus reading labs in the 1940s, 1950s, and 1960s (Maxwell, 1966; Spache, 1953; Triggs, 1943; see also Nist & Hynd, 1985). Finally, the third purpose of the use of reading tests is course and reading program evaluation (Maxwell, 1971). Evaluation of the instruction’s effectiveness has two functions: (a) to assess individual student progress and determine appropriate changes in instructional approach, and (b) to
assess or document the overall success of the college reading program and identify needed changes (for a larger conversation on program assessment purposes, see Chapter 18). An articulated purpose should drive all decisions regarding assessment and especially test selection. However, the three aforementioned overarching purposes are but a starting point. Indeed, much work is needed en route to test selection, and this process begins with a philosophical/conceptual model. Essentially, college reading professionals must begin with an understanding of what reading is—a model of reading—in order to avoid the mismatch that occurs for “college reading teachers who often find that their classroom instruction and the tests used to measure the effectiveness of that instruction actually have little to do with each other” (Wood, 1988, p. 224). Further, according to McKenna and Stahl (2009), “all reading assessment is based on a model”—for instance, a deficit model, contextual model, stage model, etc. (pp. 2–3)—even if this model is a tacit one. One very basic way to begin the process of articulating a model is to respond to a very simple, but often overlooked, question: “What is reading?” (see Farrall, 2012). In short, we argue that those charged with making decisions about test instruments need to define reading and be explicit about their model of reading first and foremost before beginning the process of informed test selection. As well, professionals need to recognize the implicit (or explicit) model of reading underlying the universe of potential instruments to ensure a solid match.
To this end, in the present chapter—and in a previous edition of this chapter (Flippo & Schumm, 2009)—we explicitly call attention to this in the review of test instruments by analyzing the “Overall Theory of Reading.” Simpson and Nist (1992) suggest as an essential characteristic of a comprehensive assessment model that “a match exists between the philosophical base, the short- and long-term goals of the reading program, and the assessment instruments used” (p. 453). As Wood (1988) has noted, if we hold a more complex, multifaceted view of reading, then we have to question whether a test exists that will allow us to “observe or completely understand what is happening when people read” (p. 225). This is where major mismatches occur, particularly in programs that are more holistic in their approach to teaching reading but are assessing discrete skills. College reading and learning professionals must be intentional in critiquing test options in light of stated reading models (Stahl & Armstrong, 2018). For one, test publishers may or may not have appropriate expert pedagogy and sound content knowledge. Also, historically, reading tests were largely atheoretical, with no grounding in a consensus of what was to be measured. This fact, however, has not deterred test publishers from going about the business of measuring reading, despite the lack of a theoretical basis for defining it (Flippo & Schumm, 2009, p. 414). The assumptions behind standardized tests are threefold: that these tests provide a standardized sample of an individual’s behavior; that by observing this sample, valid inferences can be made about that individual; and that appraising certain aspects of reading gives insight into overall reading achievement. However, as Kingston (1955) pointed out, if professionals cannot agree on the essential skills involved, they cannot agree on the measurement of reading.
Thus, the debate about reading skills, what should be tested, and how to test is likely to continue, and this presents a particularly difficult problem for the most common usage of reading tests at the postsecondary level: placement assessment.
The Placement Problem

Most postsecondary institutions, even those with open admissions policies, require incoming students to take tests, including reading tests, as part of the admissions and/or placement process (Bailey, Jaggars, & Jenkins, 2015; Fields & Parsad, 2012). Recently, though, much attention has been focused on the purpose and goals of placement testing. One question that has been repeatedly asked over the last 10 or so years has been an iteration of the following question, posed by researchers at the Community College Research Center (CCRC): “Do the assessments currently
in use sufficiently predict student outcomes? Even more importantly, does the use of these assessments seem to improve student outcomes?” (Hughes & Scott-Clayton, 2011, p. 2). And several researchers have provided responses to this question. For example, other CCRC-affiliated researchers have concluded that “Placement tests do not yield strong predictions of how students will perform in college” (Belfield & Crosta, 2012). Fulton (2012), a policy analyst for the Getting Past Go Project of the Education Commission of the States, noted that present research on this topic has found that “it is not the instruments that are the problem, but how they are being used” (p. 3). Morante (2012) prompts professionals in the field to consider the purpose of placement, noting that it is a way to determine students’ achievement levels in order to best match them to appropriate instruction. However, this is easier said than done as appropriate instruction encompasses not only the known (to field professionals) entity of developmental reading but also the largely unknown areas beyond developmental-level instruction. To that end, Wood (1988) has reminded us that “reading instructors need to be able to describe what college students are expected to read in college so that real college reading can be compared with that required by reading tests” (p. 228). Many within the field have similarly called for such an understanding of the purpose of reading tests. For instance, Flippo and Schumm (2009) argued that “college reading tests should assess readers’ ability to deal with the academic demands of college-level coursework” (p. 409). Nevertheless, what is “college-level,” and how does one know? As was noted earlier in this chapter, there is no single, universal, or even widely accepted definition of “college-text ready” (Armstrong et al., 2016; Stahl & Armstrong, 2018).
Decision-Making for Test Selection It is important to note that specific test instruments have yet to be discussed in this chapter. This is intentional: it highlights the idea that purpose must precede test selection, as an instrument itself cannot impose a purpose. Decisions about test selection may occur at state, institutional, departmental, program, or individual class levels. To choose the best test for a given circumstance, individuals or groups of individuals (e.g., state committees, faculty committees) must be informed about assessment in general and about reading tests in particular. Ultimately, the decision of which instrument or battery of instruments to use must be made on the basis of the particular situation, the underlying model or theory of reading, the purpose and goals of assessment, and any requirements for accountability to the college or the state. In the absence of predetermined standards, individuals or groups making the decision must define the reading theory and related competencies relevant for student success in their college courses and determine the reading needs of the population they serve before selecting the most appropriate tests for their program. If an appropriate test is not available, the state or institution should develop its own test rather than use an evaluation instrument that is poorly suited to the program’s purposes, theory of reading, and student population. This, of course, is a topic for a completely separate chapter-length discussion, but we will address it briefly in the next section, which provides an overview of different types of college reading tests.
Types of College Reading Tests This section provides an overview of the array of college reading tests presented in five different categorizations: survey and diagnostic tests, formal and informal tests, norm-referenced and criterion-referenced tests, group and individual tests, and homegrown and commercial tests (see also Flippo, 2014; McKenna & Stahl, 2009; and other reading diagnosis texts for similar categorizations of reading tests).
Flippo, Armstrong, and Schumm
Survey and Diagnostic Tests College reading tests can be categorized in many different ways, but perhaps the most general breakdown has to do with the overall purpose of the test: the survey test (or screening instrument), which is meant to provide information about students’ general level of reading proficiency, and the diagnostic test, which is meant to provide more in-depth information about students’ specific reading strengths and weaknesses (Kingston, 1955). Both types of tests have historically had a place in college reading programs, though more recently, survey tests are the predominant focus. Survey tests allow colleges to identify students in need of reading assistance services, but they allow only a quick and general estimate of a student’s reading proficiency. In part because they currently tend to be given on a computer, with immediate results that do not require an assessment expert or a reading expert for interpretation purposes, these types of tests have become the most commonly used on college campuses. Flippo (1980a, 1980b, 1980c) contends that once students have been identified as needing reading or learning skills assistance, in-depth diagnosis is needed to determine the proper course of instruction. Information gleaned from diagnostic tests is detailed and complex, and thus must be analyzed and interpreted by a knowledgeable reading specialist and combined with other student assessment data. Diagnostic testing examines the various components of reading in greater depth and determines the relative strengths and weaknesses of the reader in performing these skills or strategies. This type of assessment provides an individual performance profile or a descriptive account of the reader’s level of proficiency in reading that should facilitate instructional planning based on individual needs. However, diagnostic tests are used only very infrequently on most campuses today. Grubb et al. 
(2011) found in their research that faculty’s “most common complaints about assessment tests are that they are placement tests, not diagnostic tests […] and therefore generate no information for the instructor on what skills to emphasize” (pp. 6–7). Of course, even with the best diagnostic tests available, the results will not yield all the answers; however, diagnostic assessment of reading can provide some immediately useful insights.
Formal and Informal Tests Within the two broad categories of reading tests, survey and diagnostic, many different types of tests exist. One way of classifying tests is as formal or informal, a distinction that gets at “how rigidly they are administered and interpreted” (McKenna & Stahl, 2009, p. 25), and to what extent “there is only one correct or acceptable answer to questions” (Flippo, 2014, p. 322). Formal tests are usually standardized instruments with formalized and prescribed instructions for administration and scoring. Formal tests tend to be commercially prepared and usually provide the examiner with a means of comparing the students tested via a norm group representative of students’ particular level of education and other related factors. This comparison, of course, is valid only to the extent to which the characteristics of the students being tested resemble those of the norm group. Informal tests, by contrast, tend to be instructor-developed, administered, and scored. Because of the large number of students who participate in college reading programs, most programs limit their testing to standardized group instruments. However, carefully designed informal reading assessments appropriate for college students can provide more diagnostic information and probably more useful information than any of the formal group tests currently available. Informal reading inventories (IRIs) are one example of a published informal instrument; IRIs are particularly useful in pinpointing a reader’s areas of strength and needed improvement. However,
using individual IRIs with college students presents two problems: (a) They are time-consuming to administer and analyze, and (b) few commercially available individual IRIs are appropriate for college populations. Although time-consuming, an individual IRI may be appropriate for students exhibiting unusual or conflicting results on other assessments, or for students indicating a preference for more in-depth assessment. The college reading professional has to decide when it makes sense to administer an individual IRI; certainly, few reading authorities would deny the power of the IRI as a diagnostic tool. It may well be that the level of qualitative analysis one can get from an IRI is worth the time it takes.
Norm-Referenced and Criterion-Referenced Tests The next way of categorizing college reading tests has to do with the interpretation of results. A student’s score is interpreted either in comparison to a norming group’s scores (norm-referenced) or in comparison to a predetermined criterion that serves as a benchmark (criterion-referenced). Norm-referencing describes a process of comparing a student’s test scores to a similar group of students’ test scores (these can be local, state, or even national samples, depending on the development of the test). Thus, such tests rank students relative to the norming group. Although most standardized tests (especially commercially prepared ones) are norm-referenced, it should be understood that not all standardized tests are norm-referenced. Standardized test scores that are not norm-referenced are typically criterion-referenced. Tests—especially informal tests—that do not provide an outside norm group for comparison purposes are usually criterion-referenced; that is, the test maker has defined a certain passing score or level of acceptability against which individual students’ responses are measured. Students’ scores are compared to the criterion (rather than to the scores of an outside norm group of students), often to determine mastery of a specified skill. Flippo (2002) defines criterion-referenced scores as “test results that have been determined by comparing an individual’s raw score to a predetermined passing score for the test or subtest being taken” (p. 615). It should be noted that the questions used on a standardized test could actually be the same ones used on a criterion-referenced test. The way test results are interpreted and reported (criterion-referenced or norm-referenced) has nothing to do with the actual test questions themselves (Flippo, 2014).
Group and Individual Tests Another way of classifying reading tests is as group or individual tests. These terms refer to whether the test is designed for administration to a group of students during a testing session or to one or two students at a time. Traditionally and currently, college reading programs have relied primarily on group tests because they are perceived to be more time efficient given the large number of students to be tested. Often, survey tests are designed and administered as group tests, especially through computer administration. Diagnostic tests can be designed either for group administration or for individual testing, and most diagnostic tests can be used both ways. Similarly, formal, informal, norm-referenced, and criterion-referenced tests can all be designed specifically for either group or individual administration but sometimes can be used both ways.
Homegrown and Commercial Tests A final way of classifying tests is based on the origin of their development, as either homegrown or commercially prepared tests. This distinction refers to whether a test has been developed locally by an instructor, department, or college, or whether a commercial publisher has prepared it.
When appropriate commercially prepared tests are available that match both purpose and philosophy, colleges usually opt to use one of those rather than going to the expense and trouble of developing their own. After evaluating commercial tests, however, college reading and learning professionals may decide that none of these materials adequately assess the reading skills and strategies they want to measure, or that they are not well aligned to the underlying conception of reading. In these cases, homegrown tests can be developed. However, if a college elects to develop its own test, it should do so with the expertise of a testing and measurement specialist. Further, such tests must undergo rigorous review and field testing by the content and reading faculty involved. Clearly, this is an expensive, time-intensive undertaking; thus, colleges must be prepared to commit the necessary resources to this effort. It is not within the scope of this chapter to describe the process of developing a college reading test; however, colleges wishing to develop their own tests should enlist as much expertise and use as much care as possible.
Matching the Test to the Purpose As previously noted, the purpose of testing should determine the type or types of tests selected or developed. Therefore, a key question to ask when the issue of testing arises is “What purpose are we trying to accomplish with this testing?” Although this question may sound trivial at first, the answer is not as simple as it seems. Guthrie and Lissitz (1985) emphasized that educators must be clear about what decisions are to be made from their assessments, and then form an appropriate match with the types of assessments they select. If the required function of testing is gross screening to determine whether students could benefit from reading assistance, testers might select an appropriate instrument from the available commercially prepared standardized group survey tests. If the screening is to be used for placement into a particular course or intervention, the college might develop instruments to measure students’ ability to read actual materials required by the relevant courses (see also Simpson, Stahl, & Francis, 2004). If the purpose of testing is individualized diagnostic assessment, an appropriate instrument might be selected from the commercially prepared individual tests, or one could be developed by the instructor or the college. Finally, if the purpose of testing is to evaluate the effectiveness of the instruction or the program, an appropriate combination of formal and informal instruments may be selected. Guthrie and Lissitz (1985) indicated that although formal, standardized reading tests are helpful for placement or classification of students as well as for program accountability purposes, they do not provide the fine-grained diagnostic information vital to making informed decisions about instruction. Consequently, using such tests “as a basis for instruction decisions raises the hazard of ignoring the causes of successful learning, which is the only basis for enhancing it” (p. 29). 
On the other hand, the authors note that although informal assessments such as teacher judgments and observations are helpful to the instructional process, they are not necessarily satisfactory for accountability or classification purposes. Despite initiatives to reform assessment, formal, norm-referenced, commercially prepared, group-administered survey tests are still the primary tools used, whether by choice or by default, for most developmental reading programs. As Ryan and Miyasaka (1995) explained, “reports about the demise of traditional approaches to assessment are premature” (p. 9). Given the current climate of accountability and impact of high-stakes testing (Kohn, 2000; Thomas, 2005; Zacher Pandya, 2011), this is an understatement. If anything, several concerns related to assessment practices that have long been a part of the conversations in K-12 contexts have slowly, finally reached the postsecondary level, as will be discussed in a later section.
Psychometric Properties In an early compendium called “A Guide to Postsecondary Reading Tests,” similar to the one provided here, Gordon (1983) observed the following: Many postsecondary reading personnel are familiar with the considerations of validity, reliability, and comparability of the norming population to the students to be tested. But familiarity has not meant, for most of us, the sophistication necessary to critically read a technical manual—which, by its very nature as an advertisement, will present test properties in a favorable light. (p. 45) We agree and further recognize that the work of choosing tests is, with increasing frequency, removed from the work of college reading and learning professionals as testing centers and other administrative units have growing control over decision-making in this area. For both of these reasons, we provide detailed explanations of four key considerations related to reading tests’ psychometric properties: normative considerations, reliability considerations, validity considerations, and readability considerations.
Normative Considerations Most of the tests reviewed in the compendium at the end of this chapter are norm-referenced. This means that norms, or patterns typical of a particular group, are employed to make comparisons among students. Comparisons are often made using percentiles, stanines, or grade equivalents. Test publishers usually report the procedures used to norm their instrument in the test manual. This information includes a description of the norming group in terms of age, sex, educational level, socioeconomic status (SES), race, geographical setting, and size. Without this information, it is impossible to determine whether the test results are applicable to the populations to be tested (Abedi, 2001, 2006; McMillan, 2001; Peters, 1977; Thurlow, Elliott, & Ysseldyke, 2003). For example, if all students to be tested were beginning college students from low socioeconomic rural populations, the use of a test that was normed with high school students from an upper middle-class urban area would have to be questioned. Even if the norm group and the group tested are comparable, the normative data must be current. For example, a group of beginning college students tested in the 1950s will differ from the same population tested in the 21st century. Therefore, both the test and the normative data should be updated continuously. As Gregory (2007) recommended, “norms may become outmoded in just a few years, so periodic renorming of tests should be the rule, not the exception” (p. 77).
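To make the norm-referenced comparisons above concrete, the following sketch shows how a raw score might be converted to a percentile rank against a norm group and then to a stanine using the conventional nine-band percentages (4, 7, 12, 17, 20, 17, 12, 7, 4). This is illustrative only; the function names, score values, and the tidy uniform norm group are our own invention, not drawn from any published test.

```python
def percentile_rank(score, norm_scores):
    """Percent of the norm group scoring below the given raw score."""
    below = sum(1 for s in norm_scores if s < score)
    return 100.0 * below / len(norm_scores)

def stanine(pr):
    """Map a percentile rank onto the standard nine-point (stanine) scale."""
    cuts = [4, 11, 23, 40, 60, 77, 89, 96]  # cumulative-percent boundaries
    for band, cut in enumerate(cuts, start=1):
        if pr < cut:
            return band
    return 9

norms = list(range(1, 101))        # hypothetical norm-group raw scores 1..100
pr = percentile_rank(75, norms)    # 74.0: 74 of the 100 norm scores fall below 75
band = stanine(pr)                 # 6: the 60th-77th percentile band
```

As the section notes, such a conversion is only as meaningful as the norm group is representative of, and current for, the students actually being tested.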
Reliability Considerations Test reliability, usually reported in reliability coefficients, is the “attribute of consistency in measurement” (Gregory, 2007, p. 97). In short, “Reliability is the ‘consistency’ of the test—whether it will produce roughly the same results if administered more than once to the same group in the same period. Reliability checks how dependable the test is” (Richardson, Morgan, & Fleener, 2009, p. 35). If a test is highly reliable, we can assume that test scores are probably accurate measures of students’ performance rather than a fluke or error. One measure of reliability is the coefficient of stability, or a report of test-retest reliability, an indication of stability in performance over time. A test is considered reliable to the extent that students’ test scores would be the same each time they took the test (assuming, of course, that
no learning would take place between test administrations and that the student would remember nothing about the test at the next administration). Most reading tests also report other types of reliability. One of these is the coefficient of equivalence (also called parallel forms or alternate forms reliability). This method is used when a test has two forms; with such instruments, to compute reliability, both forms are given to the same sample group, and then the scores are correlated. This measure is particularly important with alternative forms of an instrument used for pre- and posttesting. Another type of reliability reported for many reading tests is internal consistency reliability, which measures the relationship between items on a test and looks at the consistency of performance on various test items. Internal consistency is usually calculated with the Kuder-Richardson KR-20 or KR-21 formulae or by using split-half reliability. With the split-half method, reliability is computed by dividing the test into two parts and comparing or correlating scores on the parts. Of course, no test can be 100 percent reliable. Variability is inevitable when dealing with human beings. In addition to individual differences among test takers, McMillan (2001) identifies a number of test construction factors that can also affect reliability: the spread of scores and the number, difficulty, and quality of assessment items. Sattler (2001) notes that testing conditions and guessing can also influence test reliability. The higher the reliability coefficient for a test, however, the more confident users can be that the test accurately measures students’ performance. Some experts recommend reliability coefficients of 0.90 or higher for a test that will be used to make decisions about individual students (Gregory, 2007). 
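As one illustration of internal consistency, the KR-20 coefficient mentioned above can be computed directly from a matrix of right/wrong item responses. The sketch below is generic (the toy data are invented); it assumes dichotomously scored items and uses the population variance of total scores.

```python
def kr20(item_matrix):
    """Kuder-Richardson formula 20 for dichotomous (0/1) item responses.
    item_matrix: one list per examinee, one 0/1 entry per item."""
    n = len(item_matrix)           # number of examinees
    k = len(item_matrix[0])        # number of items
    totals = [sum(row) for row in item_matrix]
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n   # population variance
    pq = 0.0
    for j in range(k):
        p = sum(row[j] for row in item_matrix) / n       # proportion correct on item j
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var_t)

# Four examinees, three items: a perfectly ordered (Guttman-like) toy pattern.
responses = [[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]]
r = kr20(responses)   # 0.75 for this toy matrix
```

Real test manuals report this same statistic computed over the full norming sample; the toy value here is only to show the mechanics.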
Testing and measurement authorities advise that one way to determine whether a particular test’s reliability score is acceptable is to measure it against the highest score attained by a similar test. As Brown (1983) explained, “the reliability of any test should be as high as that of the better tests in the same area” (p. 89). However, Brown also indicates that performance measures with reliability values of 0.85–0.90 are common. Peters (1977) noted that 0.80 or higher is a high correlation for equivalent, parallel, or alternative form reliability. Because reliability pertains to error in measurement, another statistic, the standard error of measurement, is salient to this discussion. Gunning (2006) defines standard error of measurement as an “estimate of the difference between the obtained score and what the score would be if the test were perfect” (p. 77). Given that a certain amount of error is inevitable in assessment, the standard error of measurement guards against the notion that test scores are absolute and precise. The smaller the standard error of measurement, the more closely the student’s obtained score approximates the actual score with error taken into account. On a final note, users must remember to analyze what a given test actually measures. Even if a test is highly reliable, if it does not measure the appropriate skills, strategies, abilities, and content knowledge of the students to be tested, it is of no value.
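The standard error of measurement discussed above follows directly from the reliability coefficient via the standard formula SEM = SD × √(1 − r), where SD is the standard deviation of the test scores and r is the reliability. The numbers below are hypothetical, chosen only to show the arithmetic.

```python
import math

def standard_error_of_measurement(sd, reliability):
    """SEM = SD * sqrt(1 - r): the expected spread of obtained scores
    around a test taker's (unobservable) true score."""
    return sd * math.sqrt(1 - reliability)

# With SD = 10 and reliability r = 0.91, SEM = 3.0 score points, so an
# obtained score of 50 suggests a true score of roughly 47-53 about
# two-thirds of the time.
sem = standard_error_of_measurement(sd=10.0, reliability=0.91)
```

Note how the band shrinks as reliability rises (at r = 1.0 the SEM is zero), which is why the section ties score precision so closely to the reliability coefficient.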
Validity Considerations A test is considered valid to the extent that it measures what the test user is trying to measure. If a test measures the skills, strategies, abilities, and content knowledge that the college or program deems important for a given student population’s academic success, it is a valid instrument. If the test also measures extraneous variables, its validity is weakened proportionately. A test cannot be considered valid unless it measures something explicitly relevant both to the population being tested and to the purpose of the testing. Benson (1981) reminded instructors that validity extends beyond test content to include appropriateness of the test’s structure and materials. Item wording, test format, test directions, length and readability of passages, and content materials must all be analyzed to determine their appropriateness for the given population and the purposes of the testing.
Test developers and publishers use different terminology to describe the validity of their tests. This terminology actually describes different types of validity. Type of validity is usually a function of the way the test publisher determined that the test was valid for a given purpose. It is important for test users to know something about the different types of validity to understand the terminology reported in test manuals. Often test publishers report only one or two types of validity for their tests. One type of validity cannot be generalized to another. However, with a stated purpose for testing, an understanding of what is being tested, and a sense of the population’s needs, one can usually determine the validity of a test even with limited information. Of course, as Peters (1977) pointed out, reading instructors should demand appropriate validity documentation in test manuals. If an instrument does not provide validity information, it should not be used; purchasing such a test only perpetuates the assumption made by some test publishers that this information is of little importance to professionals. Morante (2012) as well as Armstrong (2000) discuss validity-by-instructor, an often-used approach, particularly with placement testing. In this model, the testing office and the instructors communicate to determine whether any students appear to have been misplaced.
Types of Validity According to Brown (1983), the numerous types of validity generally fall into three main classes: criterion-related validity, content validity, and construct validity. The basic research question for the criterion-related validity measure is “How well do scores on the test predict performance on the criterion?” (Brown, 1983, p. 69). An index of this predictive accuracy, called the “validity coefficient,” measures the validity of the particular test. What is of ultimate interest is the individual’s performance on the criterion variable. The test score is important only as a predictor of that variable, not as a sample or representation of behavior or ability. An example of criterion-related validity is use of the SAT to predict college grade point average (GPA). Concurrent validity and predictive validity are two types of criterion-related validity often noted in test manuals. Concurrent validity refers to the correlation between test scores and a criterion measure obtained at the same (or nearly the same) time; therefore, it measures how well test scores predict immediate performance on a criterion variable. Predictive validity examines the correlation of test scores with a criterion measure obtained at a later point in time. The most frequently used method of establishing criterion-related validity is to correlate test scores with criterion scores. A validity coefficient is a correlation coefficient; the higher the correlation, the more accurately the test scores predict scores on the criterion task. Thus, if the choice is between two tests that are equally acceptable for a given population and purpose, and one test has a validity coefficient of 0.70 while the other has a validity coefficient of 0.80, the test user should choose the latter. According to Peters (1977), a test should have a validity coefficient of 0.80 or above to be considered valid. Any coefficient below this level, he says, should be considered questionable. 
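A validity coefficient of the kind just described is simply a Pearson correlation between test scores and the criterion measure. The sketch below computes one from scratch; the placement scores and later GPAs are invented for illustration, not real data.

```python
def pearson_r(x, y):
    """Validity coefficient: Pearson correlation between test scores (x)
    and a criterion measure such as later GPA (y)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

scores = [55, 62, 70, 78, 85]        # hypothetical placement-test scores
gpas = [2.1, 2.4, 2.9, 3.2, 3.6]     # GPAs earned a year later (predictive validity)
r = pearson_r(scores, gpas)          # above 0.99 for this tidy, monotone sample
```

Real validity coefficients for placement tests run far lower than this contrived example, which is precisely the concern the CCRC researchers cited earlier in the chapter raise.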
The basic question researched for content validity is “How would the individual perform in the universe of situations of which the test is but a sample?” (Brown, 1983, p. 69). The content validity of a test is evaluated on the basis of the adequacy of the item sampling. Because no quantitative index of sample adequacy is available, evaluation is a subjective process. In evaluating this type of validity, the test score operates as a sample of ability. An example of content validity is use of an exam that samples the content of a course to measure performance in that course. Face validity is often confused with content validity. A test has face validity when the items seem to measure what the test is supposed to measure. Face validity is determined by a somewhat
superficial examination that considers only obvious relevance, whereas content validity entails thorough examination by a qualified judge who considers subtle as well as obvious aspects of relevance. Another issue related to content validity is the degree to which items generate gender or cultural bias. To control for such bias, test constructors typically engage expert reviewers (both internal and external) to review for item bias. In addition, a statistical procedure, differential item functioning (DIF), can be used to detect items that are potentially unfair to a particular gender or ethnic group. Adapted from statistical procedures used in the field of medicine (Mantel & Haenszel, 1959), the procedure was further refined for educational testing by Educational Testing Service in 1986 (Holland & Thayer, 1986). The basic question researched for construct validity is “What trait does the test measure?” (Brown, 1983, p. 69). Construct validity is determined by accumulating evidence regarding the relationship between the test and the trait it is designed to measure. Such evidence may be accumulated in various ways, including studies of content and criterion-related validity. As with content validity, no quantitative index of the construct validity of a test exists. An example of construct validity is the development of a test to define a trait, such as intelligence. Congruent validity, convergent validity, and discriminant validity are all types of construct validity that are cited in test manuals. Congruent validity is the most straightforward method of determining that a certain construct is being measured. Congruence is established when test scores on a newly constructed instrument correlate with test scores on other instruments measuring a similar trait or construct. 
Convergent and discriminant validities are established by determining the correlation between test scores and behavior indicators that are aligned theoretically with the trait (convergent validity) or that distinguish it from opposing traits (discriminant validity). For example, we would expect that scores on verbal ability tests would correlate highly with observed performance on tasks that require verbal skills. On the other hand, we would expect a lower correlation between scores on manual ability tests and verbal behaviors because these traits, in theory, are distinct. Ideally, if convergent validity is reported, discriminant validity is also reported. Haladyna (2002) identifies validity as “the most important component in testing” (p. 41). Any test will have many different validities, and responsible test publishers spend time and energy in establishing and reporting validity data. It must be remembered that validity is always established for a particular use of a test in a particular situation (Haladyna, 2002). We urge professionals who are reviewing tests for possible use in their college reading programs to consider always the particular situation and how the test will be used. For more in-depth information about issues related to reliability and validity, the reader is referred to the Standards for Educational and Psychological Testing (AERA, 2014).
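Returning briefly to the Mantel-Haenszel DIF procedure mentioned under content validity: it compares, within each ability stratum (usually a total-score band), the odds of answering an item correctly for the reference and focal groups. Below is a minimal sketch of the common odds ratio only; the counts are invented, and operational DIF analyses add a chi-square test and, at ETS, an effect-size classification.

```python
def mh_odds_ratio(strata):
    """Mantel-Haenszel common odds ratio for one test item.
    Each stratum (an ability band) is a tuple (a, b, c, d):
      a = reference-group correct, b = reference-group incorrect,
      c = focal-group correct,     d = focal-group incorrect.
    A ratio near 1.0 suggests no DIF; values far from 1.0 flag the item."""
    num = den = 0.0
    for a, b, c, d in strata:
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    return num / den

# Equal odds of success in every band: no evidence of DIF.
no_dif = mh_odds_ratio([(20, 10, 10, 5), (30, 10, 15, 5)])   # 1.0
# Reference group favored at matched ability: item flagged for review.
flagged = mh_odds_ratio([(30, 10, 10, 10)])                  # 3.0
```

The key design point, visible in the stratification, is that DIF compares examinees of similar overall ability, so a mere group difference in average score does not by itself flag an item.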
Passage Dependency Although not usually mentioned in testing and measurement texts as an aspect of validity, the passage dependency of a test should be considered by reading test users. According to the more traditional testing and measurement perspective, if students can answer test items by recalling prior knowledge or applying logic without having to read and understand the passage, the test items are passage-independent, and the validity of the results should be questioned. Reading instructors who adhere to this perspective would not want students to be able to answer test questions by drawing on past experience or information. That would defeat the instructors’ purpose in conducting a reading assessment. They would argue that if test items are well constructed, students should have to read and understand the test passages to correctly answer questions on those passages.
One approach that has been used to address the issue of passage dependency is the cloze procedure. Cloze assessment is a method of determining a student’s ability to read specific text materials by filling in words that have been deleted from the text. Reading tests using the traditional model—a brief paragraph followed by multiple-choice questions—appear to be less passage-dependent, as answers to questions are sometimes available from the examinees’ background knowledge or reasoning ability. Professionals can best determine the passage dependency of a reading test by conducting their own studies of the reading materials. In these studies, the same test questions are administered to two groups of examinees: One group takes the test in the conventional manner with the reading passages present, while the other group attempts to answer the items without reading the passages. In contrast to the more traditional test perspective, some reading researchers consider it desirable to allow prior knowledge to affect reading assessment (Johnston, 1984; Simpson, 1982). In addition, Flippo, Becker, and Wark (2009) noted the importance of logic as one of the cognitive skills necessary for the test-taking success of college students. Test users must decide for themselves the importance of prior knowledge, logic, and passage dependency as each relates to the measurement of reading comprehension. We recommend that practitioners learn as much as possible about any test they plan to use so they can more accurately analyze their results and better understand all the concomitant variables. College students’ use of logic or prior knowledge while taking tests may provide practitioners with insights into students’ ability to handle textual readings.
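The cloze procedure described above is mechanical enough to sketch: delete every nth word (every fifth is a common convention) and keep an answer key for exact-replacement scoring. The function below is a simplified illustration of our own; real cloze construction typically leaves the first and last sentences intact, which this sketch does not.

```python
def make_cloze(text, n=5, start=2):
    """Blank every nth word beginning at word index `start`; return the
    cloze passage and the answer key for exact-replacement scoring."""
    words = text.split()
    answer_key = []
    for i in range(start, len(words), n):
        answer_key.append(words[i])
        words[i] = "_____"
    return " ".join(words), answer_key

# Toy passage with every third word blanked, starting at the second word:
passage, key = make_cloze("a b c d e f g h i j", n=3, start=1)
# passage -> "a _____ c d _____ f g _____ i j"; key -> ["b", "e", "h"]
```

Exact-replacement scores are conventionally interpreted against criteria such as Bormuth’s (roughly 44–57 percent correct for the instructional level), though appropriate cut points for college populations remain a matter of judgment.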
Readability Considerations Many test publishers use traditional readability formulae (e.g., Dale & Chall, 1948; Fry, 1977) to compute the approximate readability of test passages included in the measure. According to Stahl, Henk, and Eilers (1995), “Readability is defined as the relative ease or difficulty a reader experiences in attempting to understand the concepts presented by an author in written text” (p. 106). Traditionally, readability has been estimated through mathematical formulae that measure syntactic or semantic text aspects, such as number of words per sentence or number of syllables per word. These indices produce a score that approximates the grade level a reader would need to have achieved in order to comprehend the text. In general, the assumption underlying these indices is that words with fewer syllables and sentences with fewer words are more readable, so a reader at a lower grade level would be able to comprehend them. Typically, a range of readability levels is included within each level of a test (Hewitt & Homan, 2004). This information can be potentially useful when selecting an appropriate test level for the student body in question. Although readability levels are useful, we want to point out the limitations of these formulae. Traditional readability formulae, including the more current industry standard, Lexiles, consider only sentence and word length, with the assumption that the longer the sentence and the longer the words in the sentence (or the more syllables per word), the more difficult the passage is for the reader. However, we believe that other factors, such as the inclusion of text-considerate or text-friendly features (e.g., headings/subheadings, words in boldface type, margin notes), also contribute to the comprehensibility of text (Armbruster & Anderson, 1988; Schumm, Haager, & Leavell, 1991; Singer & Donlan, 1989). These factors are less tangible and therefore more difficult to quantify and measure. 
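Length-based formulae of the kind just critiqued reduce to a few counts. The Flesch-Kincaid grade-level formula is one widely used example (offered here for illustration; it is not necessarily the formula any particular test publisher uses). Because syllable counting is itself approximate, the sketch takes the counts as inputs rather than parsing text.

```python
def flesch_kincaid_grade(total_words, total_sentences, total_syllables):
    """Flesch-Kincaid grade level: a length-based readability estimate
    built only from words-per-sentence and syllables-per-word."""
    return (0.39 * (total_words / total_sentences)
            + 11.8 * (total_syllables / total_words)
            - 15.59)

# A 100-word passage in 5 sentences with 150 syllables comes out at
# roughly grade 9.9.
grade = flesch_kincaid_grade(100, 5, 150)
```

Notice that nothing in the formula registers headings, boldface cues, concept load, or reader background, which is exactly the limitation the paragraph above raises.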
Those reviewing tests and materials for any purpose should consider several readability factors in addition to sentence and word length: (a) the complexity of the concepts covered by the material, (b) students’ interest in the content, (c) students’ past life experience with the content, (d) students’ cognitive experience with the content, and (e) students’ linguistic experience with the syntax of the material.
351
Flippo, Armstrong, and Schumm
Current Conversations Surrounding College Reading Tests

Although the more recent scholarship within the field of college reading and learning has not focused significantly on reading assessment, much has been written lately about overall postsecondary assessment practices, especially those specific to developmental education. A review of the relevant literature (both specific to college reading tests and to broader postsecondary assessment practices) has revealed several important conversations that have implications for assessment practices in college reading and learning. Interestingly, a concurrent review of the historical literature in the field revealed that these conversations are not at all new. For that reason, in the sections that follow we introduce six topics that have not only become hot topics of late but have also been part of the dialogue in the field over the years. In addition, we present a topic new to the literature with the advent of the college-readiness reform movement, and an additional topic that we have identified as being sorely lacking in the past and present conversations surrounding college reading tests.
The High-Stakes Nature of Tests

Often, regardless of the educational context, when we hear the term "assessment," it appears alongside the term "high stakes" (Afflerbach, 2005; Burdman, 2012). This is particularly true at the college level, where certain types of assessment instruments influence decisions about students' course requirements: whether they are eligible to take a course or whether they need prerequisite courses (e.g., Noble, Schiel, & Sawyer, 2003). This issue becomes problematic, of course, when these assessments are being used improperly or for a purpose that does not align with the test properties (Grubb et al., 2011). This can take several forms. Take, for instance, the issue of content validity, or whether the test items actually reflect the curriculum; such a mismatch in purpose can lead to a complete disconnect between assessment and instruction (Burdman, 2012; Scott-Clayton, 2012). Another such issue is a potential mismatch previously addressed in this chapter, that of underlying views of reading. Continuing the discussion posed earlier in this chapter regarding concerns about the placement process, a key recent concern is the problem of misplacement, which takes two forms: Students are either overplaced or underplaced (Barnett & Reddy, 2017). Students who are overplaced are placed into college-level courses when they should have been assigned to developmental coursework. When students are underplaced, it means that those "who are college-ready but assigned to remediation incur unnecessary extra tuition and time costs and progress more slowly toward completion" (Rodriguez, Bowden, Belfield, & Scott-Clayton, 2014, p. 1). One problem with this argument, of course, is that it assumes a definition of college-ready actually exists and, given the scope of the work, that this definition is widespread.
No such definition exists, often even at the same institution (Armstrong et al., 2016; Hodara et al., 2012; NCEE, 2013). Read more about this topic in the Student Assessment chapter of this handbook.
Mandatory and Voluntary or Self-Placement

Mandatory placement means that students are required to take the course or course path into which they place. Traditionally, the best-practices literature has recommended mandatory testing and placement (Boylan, 2002; Hadden, 2000). Because this is the typical model in most developmental education programs, many people are not aware that other models exist. Some argue that mandatory placement flies in the face of the open-access mission of most community colleges; therefore, some models give students a degree of choice in the decision, and some make placement entirely voluntary.
Reading Tests
This latter model is often referred to as the right-to-fail model (Morante, 1989); however, there are some models that are a bit more complex. Self-placement, for example, includes the guided or directed self-placement models where students work with advisers to choose the most appropriate course (Bedore & Rossen-Knill, 2004; Burdman, 2012; Felder, Finney, & Kirst, 2007; Kenner, 2016; Noble, Schiel, & Sawyer, 2003; Reynolds, 2003; Royer & Gilles, 1998, 2003; Simonson & Steadman, 1980; Troka, 2016). The right-to-fail perspective was most prevalent in the 1970s and 1980s; however, it does seem to be making a new appearance in the literature, with one of the most notable being a statewide voluntary-placement initiative occurring in Florida during 2014–2015 (Smith, 2015; Troka, 2016).
Problems with Single-Measure Protocols

Unfortunately, despite a long history of scholarship within the field of college reading calling for multiple-measure placement protocols (e.g., Greenbaum & Angus, 2018; Grubb et al., 2011; Maxwell, 1997; Morante, 2012; Shelor & Bradley, 1999; Simpson & Nist, 1992; Wood, 1988, 1989), inclusion of affective components (e.g., Levine-Brown, Bonham, Saxon, & Boylan, 2008; Saxon, Levine-Brown, & Boylan, 2008), and the need for more and better reading tests for this population (Flippo & Schumm, 2009; Gordon & Flippo, 1983), the trend toward a single-measure protocol has persisted. This practice presents a host of limitations (Belfield & Crosta, 2012; Burdman, 2012; Fulton, 2012; Hodara et al., 2012; Scott-Clayton, 2012), and it has led to an era in which only a few such commercial reading instruments have been relied on in the field. However, as one of the top two most commonly used instruments (COMPASS Reading Comprehension, including ACT's ASSET Student Success System) has recently been phased out (e.g., Fain, 2015), college reading professionals are left with fewer commercial options for a formal, computer-adaptive survey test for placement purposes. This is especially a problem given the continued emphasis—whether because of institutional staffing or resource constraints or administrative push—on the use of a single measure (Rodriguez et al., 2014). The reality is that the use of a multiple-measures protocol is not common; in fact, as of 2011, only 13 percent of institutions were using anything other than just a reading measure to place students (Fields & Parsad, 2012). As of 2012, only four states had policies mandating the use of multiple measures (Fields & Parsad, 2012).
Several key figures within the field of developmental education have argued for a more comprehensive and whole-learner approach to assessment for “sorting” purposes, especially one that extends the placement process with advising/counseling and focused support (e.g., Boylan, 2009; Levine-Brown et al., 2008; Saxon et al., 2008; Saxon & Morante, 2014; Simpson & Nist, 1992); however, it has been noted that cost continues to hinder what is or can be done with placement assessment, and that investing more up front is what is needed (Boylan, 2009; Rodriguez et al., 2014). In its position statement titled High-Stakes Testing in PreK-12 Education, the American Educational Research Association (AERA) cautions against making instructional decisions based on a single test and encourages the use of multiple measures (AERA, 2000). Beyond professional organizations, at least one state has legislated the use of multiple measures; the nation’s largest higher education system, the California Community College System, is required by legislation to employ multiple measures for placement assessment (AB 705, 2017). We likewise contend that standardized reading tests should be only a part of a comprehensive reading assessment plan (Simpson, 2004); however, in many states and individual institutions, a single test continues to be the way that admissions, placement, and/or completion decisions are made. So why are college reading professionals not using multiple measures? As Rodriguez and colleagues (2014) point out, it is expensive and time-consuming.
Affective Issues

Related to the previous issue is the nature of the assessments used, whether strictly "cognitive" or "noncognitive." Most professionals within the field of developmental education identify pedagogically as learner-centered and thus try to focus on the whole learner. So it should come as no surprise that calls from within the field have emphasized the need to attend to affective or noncognitive influences in assessment for years (e.g., Bliss, 2001; Maxwell, 1979; Roueche & Kirk, 1973). According to Saxon et al. (2008), "as much as 25 percent of student performance is determined by affective characteristics" (p. 1; see also Bloom, 1976). Especially given what we know and believe about equity in developmental education, the issue becomes clear: "traditional assessment methods overlook the challenges many students face in gaining access to college, thus perpetuating the cycle of inequality" (Ramsey, 2008, p. 12).
New Curricular Structures

Of late, a return to a focus on integrated reading and writing (IRW) has been occurring in the college literature. This practice, first developed for the postsecondary level as described by Bartholomae and Petrosky (1986), has become increasingly prevalent, typically in response to institutional or state mandates for acceleration (see Chapter 9 for a complete discussion of IRW). This move has highlighted a considerable gap in the available instruments: no test currently exists that purports to assess reading and writing as integrated processes. A similar situation applies to the movement back toward contextualized reading courses within the disciplines, as no domain-specific reading assessments exist to support placement into contextualized curricular structures.
System/State Mandates on Placement Testing

One major issue specific to the current literature has to do with questions surrounding state (or system-wide) mandates on placement testing, which have direct implications for college reading. The extent to which reading tests are used varies from institution to institution and from state to state (Perin, 2006; Scott-Clayton, 2012), though there have been trends toward state-level mandates on placement testing (Ewell, Boeke, & Zis, 2008). In fact, as of 2012, 13 states had policies mandating a common placement assessment, including common cut scores (Fulton, 2012; Saxon & Morante, 2014). One specific topic in this discussion has to do with the inclusion of college-readiness assessments. The CCSSI, in addition to the development of back-mapped standards from a theoretical construct of "college ready," has also resulted in the implementation of two government-funded consortia charged with the development of computer-based assessments: the Partnership for Assessment of Readiness for College and Careers (PARCC) and the Smarter Balanced Assessment Consortium (SBAC). State-level conversations have recently focused on whether to include these measures of college-readiness—in part or in whole—in placement assessment protocols. Texas, for example, already does this through state legislation mandating the Texas Success Initiative Assessment (TSIA) to determine placement (Texas Higher Education Coordinating Board, 2017).
Biases in Testing

One conversation that has been limited in the literature related to college reading tests surrounds issues of bias in testing. Although the topic of discriminatory bias within instruments has been raised frequently in the PK-12 assessment literature (e.g., Kohn, 2000), in workplace-related settings (e.g., Haney & Hurtado, 1982), and even in college entrance exams (Freedle, 2003; Santelices & Wilson, 2010), it simply is not widespread in the college reading tests literature. One example found was Utterback (1998), who reviewed a number of "Discriminatory Issues" with college placement tests, including socioeconomic, racial, and cultural impacts. More information about this topic is included in the Student Assessment chapter in this handbook.
Evaluating Commercial Group Tests

Given these—and other—critical conversations surrounding reading tests reported in the literature, we see a threefold need in the field:

• new, revised, and updated reading tests for college students
• more and better professional development on assessment for college reading and learning professionals
• better understanding of the tests that are currently available.

Although it is clearly not within the scope of this chapter to meet the first need, we do make recommendations to help college reading professionals make informed choices from among the more recent commercially available group tests. To accomplish this task, the compendium that follows contains a selective review of commercially available group tests—specific to reading—that were published or revised in 2005 or later (with one exception, Nelson-Denny, which has not been updated since 1993). These tests include those that are commonly used for college reading programs and other reading tests that seem to be applicable to college populations. The purpose of this test review is to help college reading and learning professionals make informed decisions regarding appropriate test instruments for their specific program purposes and populations. Although we reiterate that many scholars, researchers, and practitioners alike consider the currently available commercial tests to be weak, inappropriate, or limited, we also acknowledge that college reading professionals still need to assess large numbers of students. Until better commercial tests are available, professionals must either choose from the currently available instruments or revise or design their own assessments with the support and assistance of appropriate college personnel or test developers. We begin with a brief overview of currently available informal tests.
Although it is clearly not within the scope of this chapter to meet the first need, we do make recommendations to help college reading professionals make informed choices from among the more recent commercially available group tests. To accomplish this task, the compendium that follows contains a selective review of commercially available group tests—specific to reading—that were published or revised in 2005 or later (with one exception, Nelson-Denny, which has not been updated since 1993). These tests include those that are commonly used for college reading programs and other reading tests that seem to be applicable to college populations. The purpose of this test review is to help college reading and learning professionals make informed decisions regarding appropriate test instruments for their specific program purposes and populations. Although we reiterate that many scholars, researchers, and practitioners alike consider the currently available commercial tests to be weak, inappropriate, or limited, we also acknowledge that college reading professionals still need to assess large numbers of students. Until better commercial tests are available, professionals must either choose from the currently available instruments or revise or design their own assessments with the support and assistance of appropriate college personnel or test developers. We begin with a brief overview of currently available informal tests.
Informal Tests

Specific examples of informal tests could focus on assessing course content, as with essays, quizzes, or portfolios, or they might serve a diagnostic function in assessing proficiency, as with IRIs (see Johns, 1993, for a reference guide) or miscue analyses (see Goodman, Watson, & Burke, 2005, for a reference guide). At present, there are four commercially prepared IRIs that contain passages and scoring information through the 12th-grade level and may therefore be adapted for use with college students:

• the Bader Reading and Language Inventory, 7th Edition (Bader & Pearce, 2013)
• Informal Reading Inventory: Preprimer to Twelfth Grade, 8th Edition (Burns & Roe, 2011)
• the Basic Reading Inventory: Pre-Primer Through Grade Twelve and Early Literacy Assessments, 12th Edition (Johns & Elish-Piper, 2017)
• the Stieglitz Informal Reading Inventory, 3rd Edition (Stieglitz, 2002).
Those interested in using a commercially available IRI should consult Flippo (2014), Paris and Carpenter (2003), and Schumm (2006) for a listing of frequently asked questions about IRIs; for selection assistance as well, see Flippo, Holland, McCarthy, and Swinning (2009). In addition, Pikulski and Shanahan (1982) provide a critical analysis of the IRI and other informal assessment measures.
In addition to IRIs, other informal measures that tap reading attitudes, habits, and reading-related study skills are those that can be included in an assessment portfolio. Some standardized tests (e.g., the Stanford Diagnostic Reading Test) offer such measures as supplemental material. Other measures are available commercially, such as the Learning and Study Strategies Inventory (LASSI; Weinstein, Palmer, & Acee, 2016) and the Inventory of Classroom Style and Skills (INCLASS; Miles & Grummon, 1999). In addition, some informal instruments are publicly available on the web, including the Metacognitive Awareness of Reading Strategies Inventory (MARSI; Mokhtari & Reichard, 2002). The results of informal tests, of course, are valid only to the extent to which the criteria reflect the reading skills and strategies students actually need to accomplish their own or their institutional goals.
Formal Tests

If formal, norm-referenced, group-administered, commercially prepared reading tests are to be used, they should be selected and used wisely. A number of reading tests that traditionally have been used with college students in the past are not included in this review. Also, we did not include several measures included in the previous reviews because they are currently out of print or being phased out by the publisher (for a review of most of these tests, please see earlier editions of this chapter, as indicated in the following sections, as well as Blanton, Farr, & Tuinman, 1972):

• Nelson Denny Reading Test, Forms A and B (Nelson, Denny, & Brown, 1960), Forms C and D (Brown, Nelson, & Denny, 1973), Forms E and F (Brown, Bennett, & Hanna, 1980)
• the McGraw-Hill Basic Skills Reading Test (Raygor, 1970)
• the Davis Reading Test (Davis & Davis, 1961)
• the Diagnostic Reading Tests (Triggs, 1947)
• the Cooperative English Tests-Reading (Derrick, Harris, & Walker, 1960)
• the Sequential Tests of Educational Progress: Reading (Educational Testing Service, 1969)
• the Gates-MacGinitie Reading Tests, Level F (MacGinitie & MacGinitie, 1989).
The present chapter contains an updated version of three previous reviews of commercially available college-level reading tests (see Flippo et al., 1991; Flippo & Schumm, 2000, 2009). The reader is referred to Flippo et al. (1991) for reviews of the following:

• California Achievement Test, Level 5 (CTB/McGraw-Hill, 1970)
• California Achievement Test, Levels 19, 20 (CTB/McGraw-Hill, 1985)
• Degrees of Reading Power (College Board, 1983)
• Gates-MacGinitie Reading Tests, Level F (MacGinitie, 1978)
• Iowa Silent Reading Tests, Levels 2 & 3 (Farr, 1973)
• Minnesota Reading Assessment (Raygor, 1980)
• Nelson-Denny Reading Test, Forms C and D (Brown et al., 1973)
• Reading Progress Scale, College Version (Carver, 1975)
• Stanford Diagnostic Reading Test, Blue Level (Karlsen, Madden, & Gardner, 1976, 1984)
Similarly, the reader is referred to Flippo and Schumm (2000) for reviews of the following, not included in the present review:

• Gates-MacGinitie Reading Tests, Level F (1989)
• Nelson-Denny Reading Test, Forms E and F (1980)
And, finally, the reader is referred to Flippo and Schumm (2009) for reviews of the following, not included in the present review because they have been discontinued, retired, or refocused on K-12 learners only:

• ASSET Student Success System (1993)
• California Achievement Test, 5th Ed. (CAT/5) (1992)
• Computerized Adaptive Placement Assessment and Support System (COMPASS) (2006)
• Stanford Diagnostic Reading Test (SDRT) (1995)
• TerraNova Performance Assessments: The Second Edition (CAT/6) (2005).
College reading and learning practitioners using one of the dated instruments listed earlier should consider the more up-to-date tests reviewed in the compendium of the current chapter. Although we acknowledge that no one commercial instrument is likely to be without flaws, we suggest that the newer instruments are probably more appropriate than their dated counterparts, if for no other reason than the outmoded language, topics, foci, and norming of the latter. Finally, due to their limited use as confined to a single state, also not included in the current review are tests such as California's College Tests for English Placement (CTEP), Florida's Postsecondary Education Readiness Test (PERT), or the TSIA for Texas, all of which are basic skills tests. As in previous editions of this chapter, each of the tests listed has been reviewed to provide the following information:

• Name and author(s) of the test
• Publisher of test
• Type of test
• Use(s) of the test
• Skills or strategies tested
• Population recommended
• Overall theory of reading
• Readability and source of passages
• Format of test/test parts
• Testing time
• Forms
• Levels
• Types of scores
• Norm group(s)
• Computer applications
• Meeting the needs of special populations
• Date of test publication or last major revision
• Reliability
• Validity
• Scoring options
• Cost
• Weaknesses of the test
• Strengths of the test.
Careful analysis of this information should help practitioners select the most appropriate tests for their given situations and populations. It should be clear that we do not endorse any of these tests in particular, especially not in isolation or as a single measure of a student's achievement. Our purpose is simply to present practitioners with detailed information regarding the commercially available choices to enable them to make the most informed decisions possible.
Conclusion

In three previous analyses, we reviewed reading tests commonly used in college reading programs (Flippo et al., 1991; Flippo & Schumm, 2000, 2009). Those analyses concluded that (a) no one available test was sufficient for the needs of all programs and all student populations, (b) few reading tests are normed on beginning undergraduate college students, (c) none of the standardized tests used extended passages of the type college students are required to read in their textbooks, and (d) most standardized tests are survey instruments. These conclusions resurfaced in the present review as well. We next present implications for practice and recommendations for future research that follow from these conclusions. The chapter closes with the compendium of reading tests reviewed.
Implications for Practice

"The word assessment can be considered a buzzword, a bad word, or even a fighting word" (Armstrong, Stahl, & Boylan, 2014, p. 445). Indeed, for those affiliated with college reading and learning, it can even bring up feelings of powerlessness. Often, this is because they are not included in the planning of assessment for their courses and programs, or because they feel voiceless without a background in this discourse and formal training in this area. This all serves as a rationale for our primary implication for practice: a call for additional professional development or training in the area of college reading assessment. Extending this overarching implication, seven related implications also emerge for college reading and learning professionals:

1 Be clear. An articulated theoretical model of reading as well as a carefully considered statement of purpose mark a critical starting point for determining alignment to a particular test or tests.
2 Be involved. Decision-making regarding which tests are used, how many, for what purposes, and how the results are interpreted and applied is happening at the institutional, system-wide, and—with increasing frequency—state levels. These conversations, though they often do not, should involve experts in college reading.
3 Be critical. An awareness of the strengths and limitations of the measures they use is essential to ensure appropriate administration and interpretation. This is particularly true for diverse populations and students with disabilities.
4 Be informed. Awareness of the technical aspects of measurement, the ethical use of instruments, and the use of technology in assessment is warranted.
5 Be comprehensive. Multiple reading measures, including diagnostic tests or other (e.g., affective) measures where appropriate, should be considered to provide comprehensive assessment protocols.
6 Be self-directed. College reading programs should compile their own data and develop their own local norms.
7 Be part of the dialogue. Individual institutions, programs, and professionals can contribute to the ongoing dialogue—and the field's evolving understanding of assessment best practices—by sharing information on tests and test protocols, including evaluations, especially of assessment practices that are questionable or ineffective.
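The call to develop local norms is, at its core, a small data exercise. As a hedged illustration only (the cohort scores are invented, and the mid-rank percentile convention is one common choice among several, not a prescribed method), a program could convert raw scores from a local cohort into percentile ranks like so:

```python
def local_percentile_ranks(scores):
    """Map each raw score to its percentile rank within a local sample.

    Percentile rank here = percentage of scores strictly below, plus
    half the percentage exactly equal (a common mid-rank convention).
    """
    n = len(scores)
    ranks = {}
    for s in sorted(set(scores)):
        below = sum(1 for x in scores if x < s)
        equal = sum(1 for x in scores if x == s)
        ranks[s] = 100 * (below + 0.5 * equal) / n
    return ranks

# Hypothetical raw scores from one local entering cohort.
fall_cohort = [12, 15, 15, 18, 20, 22, 22, 22, 25, 30]
norms = local_percentile_ranks(fall_cohort)
```

A real norming effort would of course involve far larger samples, disaggregation by program and population, and periodic re-norming.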
Recommendations for Future Research

Perhaps the most disconcerting conclusion of this review is not only a dearth of new measurement instruments but also a continued decline over the years in the number of options for college reading tests, concurrent with an overwhelming tendency for the vast majority of institutions to adopt one of only two instruments (ACCUPLACER and, before its departure, COMPASS). Equally problematic is the dearth of literature reviewing and evaluating reading tests and test protocols, particularly in the area of college reading. Data-based articles in premier peer-reviewed reading journals (e.g., Reading Research Quarterly, Journal of Literacy Research, Literacy Research and Instruction) focusing on college-level reading are practically nonexistent. Data-based articles in journals frequented by college reading professionals (e.g., Journal of Adolescent and Adult Literacy, Journal of College Reading and Learning, Journal of College Literacy and Learning, and Journal of Developmental Education) focusing on assessment topics are also rare lately. There exists, at present perhaps more than ever before, a dire need for research on college reading tests from within the field. Especially given the plethora of recent research—done by researchers outside the field of college reading and learning—finding that current placement practices are problematic, it is critical for experts within the field to investigate potential solutions. The consequences of the testing of reading at the college level for public policy, state funding, and the futures of students are high. Further, not only is more research in this area needed; more funding for high-quality research in this area is equally necessary. Although endless possible topics exist, we close with three time-sensitive and important topic areas, posed as broad questions, for future research from experts within the field:

1 How should college-readiness for text be measured? Especially given the current emphasis on placement testing (Belfield & Crosta, 2012; Hodara et al., 2012; Hughes & Scott-Clayton, 2011; Scott-Clayton, 2012), this question is of utmost urgency. With no universal understanding of college ready emerging (and many would argue that such an understanding is not possible), the question arises as to whether standardized tests can yield any information about students' levels of readiness for college text. This is especially concerning because the instruments currently in use do not measure the types of reading required of most college students. Therefore, the results of these tests provide only partial information concerning students' capacity to handle college reading assignments.
2 What are the possibilities with local, homegrown reading tests? To complicate matters, "reading" for placement purposes is in no way a singular construct, as local understandings, definitions, and enactments of reading through particular curricula all matter to that overall construct. In the past, more placement assessment was homegrown, but, of course, considerable time and resources are required for such work. Some have called for institutions to define "reading" at the local level to determine what is or is not college text ready, which would necessarily entail evaluation of reading programming and its alignment—or misalignment—to the next-level courses (Armstrong, Stahl, & Kantner, 2015a, 2015b; Armstrong et al., 2016; Armstrong & Stahl, 2017).
3 Which affective issues matter to reading testing at the postsecondary level? Big-tent reading research has identified affective characteristics as correlates of reading for years (e.g., Guthrie & Wigfield, 2000; Guthrie, Klauda, & Ho, 2013; Henk, Marinak, & Melnik, 2012; O'Brien & Dillon, 2008). Several key figures within the field of developmental education have argued for a more comprehensive and whole-learner approach to assessment (particularly placement testing), especially one that extends the placement process with advising/counseling and focused support (e.g., Boylan, 2009; Levine-Brown et al., 2008; Saxon et al., 2008; Saxon & Morante, 2014; Simpson & Nist, 1992). However, because such a comprehensive approach is undertaken so rarely (despite a long-standing call for such in the field), and because such approaches appear in published studies even more rarely, there is still much to be learned in the field.

In sum, the literature related to reading tests for college students raises a number of important issues. It is important for college reading and learning professionals to understand these issues and
what the literature says about them in order to clarify their assessment needs and criteria for selecting the most appropriate tests for their programs. In any case, it is imperative that reading professionals become aware of the strengths, limitations, and appropriate use of the tests they employ. The implications for students' futures can be dire (Thomas, 2005). Further, the current push in higher education in general is toward a more data-driven approach to curriculum—and funding—decisions. Given recent studies and reports that have questioned the efficacy of developmental education programming, it has become increasingly important for professionals in that field to provide an evidence base to support curricular decisions and to justify funding allocations. Of course, reading programming is no stranger to this type of push, as Martha Maxwell's (1997) comment about budget cuts and the need to have "evidence of your program's value to students" (p. 307) illustrates. More recently, the push for program prioritization and outcomes-based funding has led to the same realization: that there is a very real need to justify—to others—college reading programs. In short, reading tests, especially in their current primary usage for placement, are high stakes for fiscal, policy, and legislative reasons (Afflerbach, 2005). However, for most college reading and learning professionals, this is a new expectation, one that requires them to learn about assessment protocols on their own.
References and Suggested Readings AB 705, California (2017). Seymour-Campbell Student Success Act of 2012. Retrieved from leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201720180AB705 Abedi, J. (2001, Summer). Assessment and accommodations for English language learners. CRESST Policy Brief 4. Abedi, J. (2006). Psychometric issues in the ELL assessment and special education eligibility. Teachers College Record, 108(11), 282–303. Achieve (2013). Closing the expectations gap 2013 annual report on the alignment of state k–12 policies and practice with the demands of college and careers. Mountain View, CA: Author. Afflerbach, P. (2005). National Reading Conference policy brief: High stakes testing and reading assessment. Journal of Literacy Research, 37(2), 151–162. *American Educational Research Association. (2000). AERA position statements: High-stakes testing in PreK-12 education. Retrieved from www.aera.net/policyandprograms/?id=378 *American Educational Research Association. (2014). Standards for educational and psychological testing. Washington, DC: American Educational Research Association. Armbruster, B. B., & Anderson, T. H. (1988). On selecting “considerate” content area textbooks. Remedial and Special Education, 9(1), 47–52. Armstrong, S. L., & Stahl, N. A. (2017). Communication across the silos and borders: The culture of reading in a community college. Journal of College Reading and Learning, 47(2), 99–122. Armstrong, S. L., Stahl, N. A., & Boylan, H. R. (2014). Teaching developmental reading: Historical, theoretical, and practical background readings. Boston, MA: Bedford/St. Martin’s. Armstrong, S. L., Stahl, N. A., & Kantner, M. J. (2015a). Investigating academic literacy expectations: A curriculum audit model for college text readiness. Journal of Developmental Education, 38(2), 2–4, 6, 8–9, 12–13, 23. Armstrong, S. L., Stahl, N. A., & Kantner, M. J. (2015b). What constitutes ‘college-ready’ for reading? 
An investigation of academic text readiness at one community college (Center for the Interdisciplinary Study of Language and Literacy [CISLL] Technical Report No. 1). Retrieved from www.niu.edu/cisll/_pdf/reports/TechnicalReport1.pdf
Armstrong, S. L., Stahl, N. A., & Kantner, M. J. (2016). Building better bridges: Understanding academic text readiness at one community college. Community College Journal of Research and Practice, 40(11), 1–24.
Armstrong, W. B. (2000). The association among student success in courses, placement test scores, student background data, and instructor grading practices. Community College Journal of Research and Practice, 24(8), 681–695.
Bader, L. A., & Pearce, D. L. (2013). Bader reading & language inventory (7th ed.). Upper Saddle River, NJ: Pearson.
Bailey, T. R., Jaggars, S. S., & Jenkins, D. (2015). Redesigning America's community colleges: A clearer path to student success. Cambridge, MA: Harvard University Press.
Reading Tests
Barnett, E. A., & Fay, M. P. (2013). The Common Core State Standards: Implications for community colleges and student preparedness for college (NCPR Working Paper). New York, NY: Columbia University, National Center for Postsecondary Research.
Barnett, E. A., & Reddy, V. (2017). College placement strategies: Evolving considerations and practices (CAPR Working Paper). New York, NY: Columbia University, Teachers College, Center for the Analysis of Postsecondary Readiness.
Bartholomae, D., & Petrosky, A. (Eds.) (1986). Facts, artifacts and counterfacts: Theory and method for a reading and writing course. Portsmouth, NH: Heinemann.
Bean, T. W., Readence, J. E., & Dunkerly-Bean, J. (2017). Content area literacy: An integrated approach (11th ed.). Dubuque, IA: Kendall/Hunt.
Bedore, P., & Rossen-Knill, D. (2004). Informed self-placement: Is a choice offered a choice received? Writing Program Administration, 28(1–2), 55–78.
Belfield, C. R., & Crosta, P. M. (2012). Predicting success in college: The importance of placement tests and high school transcripts (CCRC Working Paper No. 42). New York, NY: Columbia University, Teachers College, Community College Research Center.
Benson, J. (1981). A redefinition of content validity. Educational and Psychological Measurement, 41(3), 793–802.
*Blanton, W., Farr, R., & Tuinman, J. J. (Eds.). (1972). Reading tests for the secondary grades: A review and evaluation. Newark, DE: International Reading Association.
Bliss, L. B. (2001). Assessment in developmental education. In V. L. Farmer & W. A. Barham (Eds.), Selected models of developmental education programs in higher education (pp. 359–386). Lanham, MD: University Press of America.
Bloom, B. S. (1976). Human characteristics and school learning. New York, NY: McGraw-Hill.
Bormuth, J. R. (1966). Readability: A new approach. Reading Research Quarterly, 1(3), 79–132.
Boylan, H. R. (2002). What works: Research-based best practices in developmental education.
Boone, NC: Continuous Quality Improvement Network/National Center for Developmental Education.
Boylan, H. R. (2009). Targeted intervention for developmental education students (T.I.D.E.S.). Journal of Developmental Education, 32(3), 14–23.
Brown, F. G. (1983). Principles of educational and psychological testing (3rd ed.). New York, NY: Holt, Rinehart, & Winston.
Brown, J. I., Bennett, J. M., & Hanna, G. S. (1980). Nelson-Denny Reading Test (Forms E & F). Boston, MA: Houghton Mifflin.
Brown, J. I., Fishco, V. V., & Hanna, G. S. (1993). Nelson-Denny Reading Test (Forms G & H). Boston, MA: Houghton Mifflin.
Brown, J. I., Nelson, M. J., & Denny, E. C. (1973). Nelson-Denny Reading Test (Forms C & D). Boston, MA: Houghton Mifflin.
Burdman, P. (2012). Where to begin? The evolving role of placement exams for students starting college. Washington, DC: Achieving the Dream.
Burns, P. C., & Roe, B. D. (2011). Informal reading inventory: Preprimer to twelfth grade (8th ed.). Boston, MA: Wadsworth.
Carver, R. P. (1975). Reading progress scale, college version. Kansas City, MO: Revrac.
College Board. (1983). Degrees of reading power. New York, NY: Author.
College Board. (1993). ACCUPLACER. New York, NY: Author.
College Board. (2017). Next-Generation ACCUPLACER test specifications. Retrieved from accuplacer.collegeboard.org/sites/default/files/next-generation-test-specifications-manual.pdf
*Common Core State Standards Initiative (2018). Common Core State Standards for English language arts & literacy in history/social studies, science, and technical subjects. Washington, DC: Council of Chief State School Officers and the National Governors Association Center for Best Practices.
Conforti, P. A. (2013, May). What is college and career readiness? A summary of state definitions. New York, NY: Pearson Education, Inc. Retrieved from researchnetwork.pearson.com/wp-content/uploads/TMRSRIN_Bulletin_22CRCDefinitions_051313.pdf
Conley, D. (2007). Redefining college readiness.
Eugene, OR: Educational Policy Improvement Center.
Conley, D. T. (2008). Rethinking college readiness. New Directions for Higher Education, 144, 3–13.
Conley, D. T. (2012). A complete definition of college and career readiness. Retrieved from Educational Policy Improvement Center (EPIC) website www.epiconline.org/publications/documents/College and Career Readiness Definition.pdf
Cross, D. R., & Paris, S. G. (1987). Assessment of reading comprehension: Matching test purposes and test properties. Educational Psychologist, 22(3), 313–332.
CTB/McGraw-Hill. (1970). California Achievement Test (CAT/5). Level 5. Monterey, CA: Author.
Flippo, Armstrong, and Schumm
CTB/McGraw-Hill. (1985). California Achievement Test (CAT/5). Levels 19, 20. Monterey, CA: Author.
CTB/McGraw-Hill. (1992). California Achievement Test (CAT/5). Levels 20 and 21/22. Monterey, CA: Author.
CTB/McGraw-Hill. (2005). Terranova Performance Assessments (CAT/6). Levels 19/20, 21/22 (2nd ed.). Monterey, CA: Author.
Dale, E., & Chall, J. S. (1948). A formula for predicting readability. Educational Research Bulletin, 27(1), 11–20.
Dale, E., & O'Rourke, J. (1981). The living word vocabulary. Chicago, IL: World Book-Childcraft.
Davis, F. B., & Davis, C. C. (1961). Davis Reading Test. New York, NY: Psychological Corp.
Davis, T., Kaiser, R., & Boone, T. (1987). Speediness of the Academic Assessment Placement Program (AAPP) reading comprehension test. Nashville, TN: Board of Regents. (ERIC Document 299 264).
Derrick, C., Harris, D. P., & Walker, B. (1960). Cooperative English tests-reading. Princeton, NJ: Educational Testing Service.
Educational Testing Service. (1969). Sequential tests of educational progress: Reading. Princeton, NJ: Author.
Education Commission of the States. (2012). Education Commission of the States: Annual report 2012. Denver, CO: Author. Retrieved from www.ecs.org/ec-content/uploads/AnnualReport2012web.pdf
Ewell, P. T., Boeke, M., & Zis, S. (2008). State policies on student transitions: Results of a fifty-state inventory. Boulder, CO: National Center for Higher Education Management Systems (NCHEMS).
Fain, P. (2015, June). Finding a new compass. Inside Higher Ed. Retrieved from www.insidehighered.com/news/2015/06/18/act-drops-popular-compass-placement-test-acknowledging-its-predictive-limits
Farr, R. (Coord. Ed.). (1973). Iowa Silent Reading Tests (Levels 2 & 3). Cleveland, OH: Psychological Corp.
Farrall, M. L. (2012). Reading assessment: Linking language, literacy, and cognition. Hoboken, NJ: John Wiley & Sons.
Felder, J. E., Finney, J. E., & Kirst, M. W. (2007). Informed self-placement at American River College: A case study.
National Center Report #07-2. San Jose, CA: National Center for Public Policy and Higher Education.
Fields, R., & Parsad, B. (2012). Tests and cut scores used for student placement in postsecondary education: Fall 2011. Retrieved from National Assessment Governing Board at www.nagb.org/content/nagb/assets/documents/commission/researchandresources/test-and-cut-scores-used-for-student-placement-in-postsecondary-education-fall-2011.pdf
Flippo, R. F. (1980a). Comparison of college students' reading gains in a developmental reading program using general and specific levels of diagnosis. Dissertation Abstracts International, 30, 3186A–3187A. (University Microfilms No. 70–2200).
*Flippo, R. F. (1980b). Diagnosis and prescription of college students in developmental reading programs: A review of literature. Reading Improvement, 17(4), 278–285.
*Flippo, R. F. (1980c). The need for comparison studies of college students' reading gains in developmental reading programs using general and specific levels of diagnosis. In M. L. Kamil & A. J. Moe (Eds.), Perspectives on reading research and instruction (pp. 259–263). Washington, DC: National Reading Conference. (ERIC Document 184 061).
Flippo, R. F. (2002). Standardized testing. In B. J. Guzzetti (Ed.), Literacy in America: An encyclopedia of history, theory, and practice (Vol. 2, pp. 615–617). Santa Barbara, CA: ABC-CLIO.
*Flippo, R. F. (2014). Assessing readers: Qualitative diagnosis and instruction (2nd ed.). New York, NY: Routledge, and Newark, DE: International Reading Association.
*Flippo, R. F., & Schumm, J. S. (2000). Reading tests. In R. F. Flippo & D. C. Caverly (Eds.), Handbook of college reading and study strategy research (pp. 403–472). Mahwah, NJ: Lawrence Erlbaum Associates.
*Flippo, R. F., & Schumm, J. S. (2009). Reading tests. In R. F. Flippo & D. C. Caverly (Eds.), Handbook of college reading and study strategies (2nd ed., pp. 408–464). New York, NY: Routledge.
Flippo, R. F., Becker, M. J., & Wark, D. M.
(2009). Test taking. In R. F. Flippo & D. C. Caverly (Eds.), Handbook of college reading and study strategy research (pp. 249–286). Mahwah, NJ: Lawrence Erlbaum Associates.
*Flippo, R. F., Hanes, M. L., & Cashen, C. J. (1991). Reading tests. In R. F. Flippo & D. C. Caverly (Eds.), College reading & study strategy programs (pp. 118–210). Newark, DE: International Reading Association.
*Flippo, R. F., Holland, D., McCarthy, M., & Swinning, E. (2009). Asking the right questions: How to select an informal reading inventory. The Reading Teacher, 63(1), 79–83.
Freedle, R. O. (2003). Correcting the SAT's ethnic and social bias: A method for reestimating SAT scores. Harvard Educational Review, 73(1), 1–43.
Fry, E. B. (1977). Fry's readability graph: Clarifications, validity, and extension to Level 17. Journal of Reading, 21(3), 242–252.
Fulton, M. (2012). Using state policies to ensure effective assessment and placement in remedial education. Denver, CO: Education Commission of the States.
Goodman, Y. M., Watson, D. J., & Burke, C. L. (2005). Reading miscue inventory: From evaluation to instruction (2nd ed.). Katonah, NY: Richard C. Owen Publishers, Inc.
Gordon, B. (1983). A guide to postsecondary reading tests. Reading World, 23(1), 45–53.
*Gordon, B., & Flippo, R. F. (1983). An update on college reading improvement programs in the southeastern United States. Journal of Reading, 27(2), 155–165.
Greenbaum, J., & Angus, K. B. (2018). Rights of postsecondary readers and learners. Journal of College Reading and Learning, 48(2), 138–141.
Gregory, R. J. (2007). Psychological testing: History, principles, and applications (5th ed.). Boston, MA: Pearson.
Grubb, W. N., Boner, E., Frankel, K., Parker, L., Patterson, D., Gabriner, R., … Wilson, S. (2011). Assessment and alignment: The dynamic aspect of developmental education (Basic Skills Instruction in California Community Colleges, Number 7). Stanford, CA: Stanford University, Policy Analysis for California Education. Retrieved from www.stanford.edu/group/pace/PUBLICATIONS/WORKINGPAPERS/2012_WP_GRUBB_NO7.pdf
Gunning, T. G. (2006). Assessing and correcting reading and writing difficulties (3rd ed.). Boston, MA: Pearson.
Guthrie, J. T., & Lissitz, R. W. (1985, Summer). A framework for assessment-based decision making in reading education. Educational Measurement: Issues and Practice, 4(2), 26–30.
Guthrie, J. T., & Wigfield, A. (2000). Engagement and motivation in reading. In M. L. Kamil, P. B. Mosenthal, P. D. Pearson, & R. Barr (Eds.), Handbook of reading research: Vol. III (pp. 403–422). Mahwah, NJ: Lawrence Erlbaum Associates.
Guthrie, J. T., Wigfield, A., & Klauda, S. L. (2012). Adolescents' engagement in academic literacy (Report No. 7). Retrieved from corilearning.com/research-publications
Hadden, C. (2000). The ironies of mandatory placement. Community College Journal of Research and Practice, 24(10), 823–838.
Haladyna, T. M. (2002). Essentials of standardized achievement testing: Validity and accountability. Boston, MA: Allyn & Bacon.
Haney, C., & Hurtado, A. (1994).
The jurisprudence of race and meritocracy: Standardized testing and "race-neutral" racism in the workplace. Law and Human Behavior, 18(3), 223–248.
Harris, A. J., & Jacobson, M. D. (1973). The Harris-Jacobson primary readability formula. Paper presented at the annual convention of the International Reading Association, Bethesda, MD.
Henk, W. A., Marinak, B. A., & Melnick, S. A. (2012). Measuring the reader self-perceptions of adolescents: Introducing the RSPS2. Journal of Adolescent & Adult Literacy, 56(4), 299–308.
Hewitt, M. A., & Homan, S. P. (2004). Readability level of standardized test items and student performance: The forgotten validity variable. Reading Research and Instruction, 43(2), 1–16.
Hodara, M., Jaggars, S. S., & Karp, M. M. (2012). Improving developmental education assessment and placement: Lessons from community colleges across the country (CCRC Working Paper No. 51). New York, NY: Columbia University, Teachers College, Community College Research Center.
Holland, P. W., & Thayer, D. T. (1986). Differential item performance and the Mantel-Haenszel procedure (Technical Report No. 86–69). Princeton, NJ: Educational Testing Service.
*Holschuh, J. P. (2014). The Common Core goes to college: The potential for disciplinary literacy approaches in developmental literacy classes. Journal of College Reading and Learning, 45(1), 85–95.
Hughes, K. L., & Scott-Clayton, J. (2011). Assessing developmental assessment in community colleges (CCRC Working Paper No. 19). New York, NY: Columbia University, Teachers College, Community College Research Center.
Indiana Board of Higher Education. (2014). Indiana college readiness report for 2014. Retrieved from www.in.gov/che/2489.htm
Johns, J. (1993). Informal reading inventories: An annotated reference guide. DeKalb, IL: Communitech International Incorporated.
Johns, J., & Elish-Piper, L. (2017). Basic reading inventory: Kindergarten through grade twelve and early literacy assessments (12th ed.).
Dubuque, IA: Kendall/Hunt.
*Johnston, P. H. (1984). Prior knowledge and reading comprehension test bias. Reading Research Quarterly, 19(2), 219–239.
Karlsen, B., & Gardner, E. R. (1995). Stanford Diagnostic Reading Test, Blue Level (4th ed.). San Antonio, TX: Psychological Corp.
Karlsen, B., Madden, R., & Gardner, E. R. (1976). Stanford Diagnostic Reading Test, Blue Level (2nd ed.). New York, NY: Psychological Corp.
Karlsen, B., Madden, R., & Gardner, E. R. (1984). Stanford Diagnostic Reading Test, Blue Level (3rd ed.). New York, NY: Psychological Corp.
Kena, G., Aud, S., Johnson, F., Wang, X., Zhang, J., Rathbun, A., Wilkinson-Flicker, S., & Kristapovich, P. (2014). The condition of education 2014 (NCES 2014–083). Washington, DC: U.S. Department of Education, National Center for Education Statistics. Retrieved from http://nces.ed.gov/pubsearch
Kenner, K. (2016). Student rationale for self-placement into first-year composition: Decision making and directed self-placement. Teaching English in the Two-Year College, 43(3), 274–289.
King, J. E. (2011). Implementing the Common Core State Standards: An action agenda for higher education. Retrieved from www.acenet.edu/news-room/Documents/Implementing-the-Common-Core-State-Standards2011.pdf
*Kingston, A. J. (1955). Cautions regarding the standardized test. In O. J. Causey & A. J. Kingston (Eds.), Phases of college and adult reading (pp. 100–107). Milwaukee, WI: National Reading Conference.
Kohn, A. (2000). The case against standardized testing: Raising the scores, ruining the schools. Portsmouth, NH: Heinemann.
Levine-Brown, P., Bonham, B. S., Saxon, D. P., & Boylan, H. R. (2008). Affective assessment for developmental students, part 2. Research in Developmental Education, 22(2), 1–4.
MacGinitie, W. H. (1978). Gates-MacGinitie Reading Tests, Level F (2nd ed.). Boston, MA: Houghton Mifflin.
MacGinitie, W. H., & MacGinitie, R. K. (1989). Gates-MacGinitie Reading Tests, Level F (3rd ed.). Boston, MA: Houghton Mifflin.
MacGinitie, W. H., MacGinitie, R. K., Maria, K., Dreyer, L. G., & Hughes, K. E. (2002). Gates-MacGinitie Reading Tests, Forms S & T (4th ed.). Boston, MA: Houghton Mifflin.
Mantel, N., & Haenszel, W. (1959). Statistical aspects of the analysis of data from retrospective studies of disease. Journal of the National Cancer Institute, 22(4), 719–748.
*Maxwell, M. (1979). Improving student learning: A comprehensive guide to successful practices and programs for increasing the performance of underprepared students. San Francisco, CA: Jossey-Bass Publishers.
Maxwell, M. (1997). Improving student learning skills: A new edition. Clearwater, FL: H&H Publishing.
Maxwell, M. J. (1966). Training college reading specialists. Journal of Reading, 9, 147–155.
Maxwell, M. J. (1971). Evaluating college reading and study skills programs.
Journal of Reading, 15(3), 214–221.
McKenna, M. C., & Stahl, K. A. D. (2009). General concepts of assessment. In M. C. McKenna & K. A. D. Stahl (Eds.), Assessment for reading instruction (2nd ed., pp. 24–40). New York, NY: Guilford Press.
McMillan, J. H. (2001). Classroom assessment: Principles and practice for effective instruction (2nd ed.). Boston, MA: Allyn & Bacon.
Miles, C., & Grummon, P. (1999). INCLASS: Inventory of classroom style and skills. New York, NY: Pearson.
Mishkind, A. (2014). Definitions of college and career readiness: An analysis by state. College and Career Readiness and Success Center. Washington, DC: American Institutes for Research.
Mokhtari, K., & Reichard, C. (2002). Assessing students' metacognitive awareness of reading strategies. Journal of Educational Psychology, 94(2), 249–259.
Morante, E. (1989). Selecting tests and placing students. Journal of Developmental Education, 13(2), 2–4, 6.
Morante, E. A. (2012). Editorial: What do placement tests measure? Journal of Developmental Education, 35(3), 28.
National Center on Education and the Economy. (2013). What does it really mean to be college and work ready? A study of the English literacy and mathematics required for success in the first year of community college. Washington, DC: Author.
Nelson, M. J., Denny, E. C., & Brown, J. I. (1960). Nelson-Denny Reading Test (Forms A & B). Boston, MA: Houghton Mifflin.
Nist, S. L., & Hynd, C. R. (1985). The college reading laboratory: An old story with a new twist. Journal of Reading, 28(4), 305–309.
Noble, J. P., Schiel, J. L., & Sawyer, R. L. (2003). Assessment and college course placement: Matching students with appropriate instruction. In J. E. Wall & G. R. Walz (Eds.), Measuring up: Assessment issues for teachers, counselors, and administrators (pp. 297–311). Greensboro, NC: CAPS Press.
O'Brien, D. G., & Dillon, D. R. (2008). The role of motivation in engaged reading of adolescents. In K. Hinchman & H.
Sheridan-Thomas (Eds.), Best practices in adolescent literacy instruction (pp. 78–96). New York, NY: Guilford.
*Paris, S. G., & Carpenter, R. D. (2003). FAQs about IRIs. The Reading Teacher, 56, 579–581.
Perin, D. (2006). Can community colleges protect both access and standards? The problem of remediation. Teachers College Record, 108, 339–373.
Peters, C. W. (1977). Diagnosis of reading problems. In W. Otto, N. Peters, & C. W. Peters (Eds.), Reading problems: A multidisciplinary perspective (pp. 151–188). Reading, MA: Addison-Wesley.
*Pikulski, J. J., & Shanahan, T. (Eds.). (1982). Approaches to the informal evaluation of reading. Newark, DE: International Reading Association.
Quint, J. C., Jaggars, S. S., Byndloss, D. C., & Magazinnik, A. (2013). Bringing developmental education to scale: Lessons from the developmental education initiative. New York, NY: MDRC.
Ramsey, J. (2008). Noncognitive assessment and college success: The case of the Gates Millennium Scholars. Washington, DC: Institute for Higher Education Policy (IHEP).
Raygor, A. L. (1970). McGraw-Hill basic skills reading test. New York, NY: McGraw-Hill.
Raygor, A. L. (1977). The Raygor readability estimate: A quick and easy way to determine difficulty. In P. D. Pearson (Ed.), Reading: Theory, research, and practice (26th yearbook of the National Reading Conference). Clemson, SC: National Reading Conference.
Raygor, A. L. (1980). Minnesota reading assessment. Rehoboth, MA: Twin Oaks. Athens, GA: American Reading Forum. (ERIC Document 198 485).
Reynolds, E. J. (2003). The role of self-efficacy in writing and directed self-placement. In D. J. Royer & R. Gilles (Eds.), Directed self-placement: Principles and practices (pp. 73–103). Cresskill, NJ: Hampton Press, Inc.
Richardson, J. S., Morgan, R. F., & Fleener, C. E. (2009). Reading to learn in the content areas (7th ed.). Belmont, CA: Thomson Wadsworth.
Rodriguez, O., Bowden, B., Belfield, C., & Scott-Clayton, J. (2014). Remedial placement testing in community colleges: What resources are required, and what does it cost? (CCRC Working Paper No. 73). New York, NY: Columbia University, Teachers College, Community College Research Center.
Roueche, J. E., & Kirk, R. W. (1973). Catching up: Remedial education. San Francisco, CA: Jossey-Bass Publishers.
Royer, D. J., & Gilles, R. (1998). Directed self-placement: An attitude of orientation. College Composition and Communication, 50(1), 54–70.
Royer, D. J., & Gilles, R. (2003). Directed self-placement: Principles and practices. Cresskill, NJ: Hampton Press.
Ryan, J. M., & Miyasaka, J. (1995). Current practices in testing and assessment: What is driving the change? NASSP Bulletin, 79(573), 1–10.
Santelices, M. V., & Wilson, M. (2010). Unfair treatment? The case of Freedle, the SAT, and the standardization approach to differential item functioning.
Harvard Educational Review, 80(1), 106–133.
Sattler, J. M. (2001). Assessment of children (4th ed.). La Mesa, CA: Jerome Sattler.
Saxon, D. P., & Morante, E. A. (2014). Effective student assessment and placement: Challenges and recommendations. Journal of Developmental Education, 37(3), 24–31.
Saxon, D. P., Levine-Brown, P., & Boylan, H. (2008). Affective assessment for developmental students, part 1. Research in Developmental Education, 22(1), 1–4.
*Schumm, J. S. (Ed.). (2006). Reading assessment and instruction for all learners. New York, NY: Guilford.
*Schumm, J. S., Haager, D., & Leavell, A. G. (1991). Considerate and inconsiderate text instruction in postsecondary developmental reading textbooks: A content analysis. Reading Research and Instruction, 30(4), 42–51.
*Scott-Clayton, J. (2012). Do high-stakes placement exams predict college success? (CCRC Working Paper No. 41). New York, NY: Columbia University, Teachers College, Community College Research Center.
Shelor, M. D., & Bradley, J. M. (1999). Case studies in support of multiple criteria for developmental reading placement. Journal of College Reading and Learning, 30(1), 17–33. doi:10.1080/10790195.1999.10850083
Simonson, C. A., & Steadman, C. J. (1980). Using testing and self assessment for better advising and placement. Proceedings of the Annual Conference of the Western College Reading Association, 13(1), 57–64.
Simpson, M. L. (1982). A diagnostic model for use with college students. Journal of Reading, 26(2), 137–143.
*Simpson, M. L., & Nist, S. L. (1992). Toward defining a comprehensive assessment model for college reading. Journal of Reading, 35(6), 452–458.
Simpson, M. L., Stahl, N. A., & Francis, M. A. (2004). Reading and learning strategies: Recommendations for the 21st century. Journal of Developmental Education, 28(2), 2–15, 32.
Singer, H., & Donlan, D. (1989). Reading and learning from text (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.
Smith, A. (2015). When you're not ready.
Inside Higher Ed. Retrieved from www.insidehighered.com/news/2015/06/25/floridas-remedial-law-leads-decreasing-pass-rates-math-and-english
Spache, G. (1953). A new readability formula for primary-grade reading materials. Elementary School Journal, 53, 410–413.
Stahl, N. A., & Armstrong, S. L. (2018). Re-claiming, re-inventing, and re-reforming a field: The future of college reading. Journal of College Reading and Learning, 48(1), 47–66.
Stahl, N. A., Henk, W. A., & Eilers, U. (1995). Revisiting the readability of state drivers' manuals. Transportation Quarterly, 49(1), 105–116.
Stahl, N. A., Simpson, M. L., & Hayes, C. G. (1992). If only we had known: Ten recommendations from research for teaching high-risk college students. Journal of Developmental Education, 16(1), 2–11.
Stieglitz, E. L. (2002). The Stieglitz informal reading inventory: Assessing reading behaviors from emergent to advanced levels (3rd ed.). Boston, MA: Pearson.
Texas Higher Education Coordinating Board. (2009). Texas college and career readiness standards. Retrieved from www.thecb.state.tx.us/reports/PDF/1513.PDF
Texas Higher Education Coordinating Board. (2017). Developmental Education/TSI. Retrieved from www.thecb.state.tx.us/index.cfm?objectid=84A33C61-9922-285A-918F9403E122804F
Thomas, R. M. (2005). High-stakes testing: Coping with collateral damage. Mahwah, NJ: Lawrence Erlbaum Associates.
Thurlow, M. L., Elliott, J. L., & Ysseldyke, J. E. (2003). Testing students with disabilities: Practical strategies for complying with district and state requirements (2nd ed.). Thousand Oaks, CA: Corwin.
Triggs, F. O. (1943). Remedial reading: The diagnosis and correction of reading difficulties at the college level. Minneapolis, MN: University of Minnesota Press.
Triggs, F. O. (1947). Diagnostic reading tests: A history of their construction and validation. New York, NY: Committee on Diagnostic Tests.
Troka, T. M. (2016). Understanding directed self-placement as it relates to student persistence and success (Unpublished doctoral dissertation). DeKalb: Northern Illinois University.
Utterback, J. (1998). Closing the door: A critical review of forced academic placement. Journal of College Reading and Learning, 29(1), 48–56. doi:10.1080/10790195.1998.10850069
Weinstein, C. E., Palmer, D. P., & Acee, T. W. (2016). Learning and study strategies inventory (3rd ed.). Clearwater, FL: H & H Publishing.
*Wood, K. (1989). Reading tests and reading assessment. Journal of Developmental Education, 13, 14–19.
*Wood, N. V. (1988). Standardized reading tests and the postsecondary reading curriculum. Journal of Reading, 32(3), 224–230.
Wright, G. L. (1973). An experimental study comparing the differential effectiveness of three developmental reading treatments upon the rate, vocabulary, and comprehension skills of white and black college students. Dissertation Abstracts International, 34, 5811A. (University Microfilm No. 74 6257).
Zacher, P. J.
(2011). Overtested: How high-stakes accountability fails English language learners. New York, NY: Teachers College Press.
Compendium
Commercially Available Reading Tests Reviewed
ACCUPLACER (Next-Generation ACCUPLACER)
Test/Author(s)
ACCUPLACER Next-Generation Reading
The College Board
Type of Test
Computer adaptive, screening, norm-referenced, standardized, with a diagnostic component.
Use(s) of Test
1 To determine a student's readiness for placement into college-entry or developmental-level courses.
2 To monitor a student's course progress and make recommendations post-intervention. (The publisher notes that the instrument is NOT designed for making admissions decisions.)
Skills/Strategies Tested
1 Information and ideas
2 Rhetoric
3 Synthesis
4 Vocabulary
Population Recommended
Students entering community colleges, four-year colleges, and technical schools.
Overall Theory of Reading
The Next-Generation ACCUPLACER reading test focuses on four discrete knowledge/skill areas: information and ideas, rhetoric, synthesis, and vocabulary. According to the test publisher, the instrument assesses a student's "ability to derive meaning from a range of prose texts and to determine the meaning of words and phrases in short and extended contexts" (College Board, 2017, p. 1). The assessment uses a multiple-choice format with text passages representing a range of content areas, rhetorical modes, and levels of complexity. The combination of the focus on discrete areas of reading, the emphasis on "determining meaning," and the multiple-choice format suggests a basic skills philosophy or a simple view of reading perspective. However, given the attempts to provide a range of text types (including excerpts and whole texts) and the inference-based questions that go beyond simple retrieval of information, Next-Generation ACCUPLACER offers a step toward a more holistic perspective of reading.
Readability/Sources of Passages
The initial source of items for the "classic" ACCUPLACER was the New Jersey College Basic Skills Test. Curricular experts conducted an extensive review of items and developed additional items as necessary. In addition to an internal review, external expert reviews and extensive field tests were conducted. Both reading comprehension and sentence skills subtests include passages from a wide range of content areas including social sciences, natural and physical sciences, human relations and practical affairs, and the arts. The test pool includes both authentic texts and commissioned texts, and represents a range of text complexities from basic levels of complexity (defined as the 4th–5th grade complexity band) to highly complex (defined as the lower-division undergraduate complexity band), based on both quantitative and qualitative text complexity measures, subject-matter experts' feedback, and test data on student performance. According to the publisher, many of the passages reflect a college and career readiness level (College Board, 2017).
Format of Test/Test Parts
This adaptive instrument is available in both web-based (Next-Generation ACCUPLACER) and paper and pencil (ACCUPLACER COMPANION) versions. Reading comprehension for the Next-Generation ACCUPLACER consists of passages of varying lengths, with a mix of multiple-choice questions that are either discrete, stand-alone questions or combined as part of a set for a common passage. Question types include identifying main ideas, direct statements, inferences, applications, and sentence relationships. For the web version, students answer a total of 20 questions (40 questions for the COMPANION).
Testing Time
Untimed.
Forms
One form (either web based or hard copy), with an extensive item pool plus an optional Diagnostic Reading Comprehension Test.
Levels
One level is available, but the test is computer-adaptive.
Types of Scores
ACCUPLACER scaled score (Next-Generation scaled scores range from 200 to 300) as well as the Conditional Standard Error of Measurement (CSEM).
Norm Group(s)
The national reference group for ACCUPLACER is based on students whose institutions have implemented the instrument. The reference group included in the technical manual represented data collected in 1994. Thirty-eight percent of the group were underrepresented students, with 15 percent being African American and 14 percent Latinx. The number of students was not reported, but the number of institutions totaled 87.
Date Published or Last Revised
1993 and ongoing for the "classic" test; 2016 for Next-Generation ACCUPLACER.
Reliability
A 139-page technical manual is available online. Test-retest reliability coefficients for the reading subtest range from 0.76 to 0.90 and for the sentence skills subtest from 0.73 to 0.83. Reliability of classification for cut-scores was 0.91 for reading comprehension and 0.94 for sentence skills.
Validity The technical manual cites the rigorous internal and external content reviews as the source of content validity. An examination of multiple concurrent validity studies indicated coefficients of 0.60 or higher. In addition, reviews were also conducted to assure lack of gender and ethnic bias. Similarly, differential item functioning analyses indicated that items function adequately for ethnic and gender groups included in the norming sample. Several predictive validity studies are reported in the manual that include ACCUPLACER and GPA data, with coefficients ranging from 0.41 to 0.84. However, the manual includes a discussion of the limitations of such correlations given the absence of student variables (e.g., attendance, motivation) in such analyses.
Scoring Options Computer scoring.
Computer Applications The Next-Generation ACCUPLACER is entirely web based. A CD-ROM is available for scoring the paper and pencil version.
Accommodations Braille, large-print, paper, and audio versions are available.
Cost Next-Generation ACCUPLACER (web-based units): $1.95–2.30 per student/per test (depending on institutional qualifications for various discount options). 100-unit minimum order.
Nonreusable test booklets: $2.50/test. ACCUPLACER ACCUscore scanning software: $595 per year license fee.
Publisher The College Board, 250 Vesey Street, New York, NY 10281 (www.collegeboard.org)
Weaknesses of the Test 1 The brevity of the passages provides only a snapshot of student performance in reading comprehension. 2 Findings from predictive validity studies vary. Institutions are encouraged to gather institutional data about student placement accuracy.
Strengths of the Test 1 The web-based instrument is easy to use and does not require advanced technical knowledge or software installation. 2 The assessment has the capacity to screen large numbers of students in a short period of time.
Degrees of Reading Power Online Test/Author(s) Degrees of Reading Power (DRP) Online Questar
Type of Test Survey, formal, standardized, criterion-referenced.
Use(s) of Test 1 To measure a student’s comprehension of text passages. 2 To determine the most difficult prose a student can read with instructional assistance and as an independent reader. 3 To monitor progress and growth in the ability to read with comprehension. 4 To screen students for placement into developmental reading courses at the college level.
Skills/Strategies Tested Overall reading comprehension ability, as well as information about students' understanding of key ideas and details, craft and structure, and integration of knowledge and ideas.
Population Recommended Although college students are not included in the norming population, the publisher does recommend the use of the DRP to place college students in developmental reading programs and
to document student progress in reading. In addition, the publisher points out that because the DRP is untimed, it is an appropriate measure for students with disabilities and students who speak English as a second language. Because all information needed to complete cloze passages is included in the passage, culturally dependent prior knowledge that would inhibit performance is less of a factor in this measure.
Overall Theory of Reading First published in 1983, the DRP provides an alternative to traditional standardized testing. The DRP measures a student’s ability to derive meaning from connected prose text. The text is written at different levels of difficulty or readability. According to the test publisher, the purpose of the DRP is to measure the process of reading rather than to measure individual subskills, such as main idea or author purpose. In the DRP Online, for students to answer questions correctly, they must read and comprehend the text pertaining to those items.
Readability/Sources of Passages The readability of passages is measured on a scale ranging from 0 to 100 DRP units, rather than in grade equivalencies. In practice, commonly encountered English text runs from about 30 DRP units at the easy end of the scale to about 85 DRP units at the difficult end. Bormuth’s (1966) mean cloze formula and a set of standardized procedures are used to derive DRP units. The test consists of expository passages organized in ascending order according to passage difficulty.
Format of Test/Test Parts The DRP Online is a modified cloze test. Each passage is a prose selection of progressive difficulty. DRP test items are created by the deletion of seven words in each passage. The student selects the most appropriate word from the options provided for each blank. The DRP Online incorporates the following characteristics: 1 The test passage must be read and understood for students to respond correctly. That is, the sentences containing the blanks will make sense with each of the options when read in isolation. However, when the surrounding text is taken into account, only one response is plausible. 2 Regardless of the difficulty of the passage, all response options are common words. 3 Item difficulty is linked to text difficulty.
Testing Time Untimed. Reports from colleges (both two- and four-year) using the DRP indicate that the majority of those tested complete the test in approximately one hour. (Note: Students are urged to stop when the test no longer is comprehensible; guessing is not encouraged.)
Forms Multiple levels with alternate forms for pre-/posttesting.
Levels Grade-level forms spanning Grade 2 through Adult, with Advanced levels available for middle school and beyond.
Types of Scores Raw score, national percentile, instructional DRP score (indicates the most difficult text students can read and comprehend with instructional support), independent DRP score (indicates the most difficult text students can read and comprehend independently).
Norm Group(s) As in previous versions of the DRP, college students were not included in the norming population (last norming 1999).
Date Published or Last Revised 2013.
Reliability The publisher offers a technical report on national standardization, validity, and reliability. Kuder-Richardson formula 20 reliability coefficients were computed. Of the 72 reliability coefficients computed, 52 were greater than or equal to 0.95. The range of reliability coefficients was 0.93–0.97.
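Kuder-Richardson formula 20, the internal-consistency statistic reported here and for several other tests in this compendium, is computed from dichotomously scored (right/wrong) items. A minimal sketch of the formula on toy data follows; the item matrix is invented for illustration and has nothing to do with actual DRP items.

```python
# KR-20: r = k/(k-1) * (1 - sum(p*q) / var_total), where k is the number
# of items, p is the proportion of examinees answering each item
# correctly, q = 1 - p, and var_total is the (population) variance of
# examinees' total scores.
def kr20(responses):
    """responses: list of per-student item vectors of 0s and 1s."""
    n = len(responses)
    k = len(responses[0])
    p = [sum(r[i] for r in responses) / n for i in range(k)]
    pq = sum(pi * (1 - pi) for pi in p)
    totals = [sum(r) for r in responses]
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / n
    return (k / (k - 1)) * (1 - pq / var)

# Toy matrix: four students, four items of increasing difficulty.
data = [[1, 1, 1, 1],
        [1, 1, 1, 0],
        [1, 1, 0, 0],
        [1, 0, 0, 0]]
print(kr20(data))
```

Values at or above 0.90, like the 0.93–0.97 range reported for the DRP, indicate that the items hang together tightly as a measure of a single ability.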
Validity The publisher suggests that because of the design of the test (i.e., the student who comprehends the prose should be able to answer items correctly), the DRP is unambiguously a measure of ability to read with comprehension. By definition, this is the central validity issue of a reading test.
Scoring Options Computer-based scoring.
Computer Applications 1 The DRP Online is now available as a web-based version of the original paper-and-pencil instrument. 2 After students review their online responses, results are available immediately. Both individual student and group reports can be generated. 3 Score-converting and reporting software are also available. 4 Machine-scorable answer sheets can be processed locally or through the publisher. 5 DRP-Book Link software can be used to match students with books based on their performance on the DRP.
Accommodations The DRP is an untimed test by design. The publisher states that identifying struggling readers and monitoring their progress is a primary function of the measure.
Cost DRP Online: $4.00–5.00 (per student, per administration, based on number of students tested).
Publisher Questar Assessment Inc. 5550 Upper West 147th Street Apple Valley, MN 55124 www.questarai.com/
Weaknesses of the Test 1 College students were not included in the norming population. 2 Does not provide specific information or breakdown of different comprehension-related components.
Strengths of the Test 1 An attempt has been made to develop a state-of-the-art, nonthreatening reading test representing reading as a holistic process. 2 The untimed administration of the test is an asset for students with disabilities and students who are of non-English language background. 3 An attempt to have answers evolve from the context of passages minimizes demands on prior knowledge—a feature particularly important for students of culturally diverse backgrounds. 4 Independent and instructional reading levels can be determined. 5 The publisher suggests that the students’ reading level (in DRP units) can be matched to textbook readability (also in DRP units). DRP-Book Link software can be used to match students with books based on their performance on the DRP.
Gates-MacGinitie Reading Tests, Forms S and T Fourth Edition, 2006 Test/Author(s) Gates-MacGinitie Reading Tests (GMRT-4) (4th Edition); Levels 10/12, AR (Adult Reading) MacGinitie, W. H., MacGinitie, R. K., Maria, K., Dreyer, L. G., & Hughes, K. E.
Type of Test Survey, formal, standardized, norm-referenced.
Use(s) of Test 1 To provide an initial screening of a student’s general reading achievement. 2 To identify students who may benefit from additional reading instruction/programming. 3 To evaluate effectiveness of instructional programs.
Skills/Strategies Tested 1 Vocabulary 2 Comprehension.
Population Recommended Unlike previous editions, GMRT-4 does have norms for a postsecondary population.
Overall Theory of Reading The GMRT-4 is designed to measure weaknesses in reading comprehension (for instance, relying on prior/background knowledge of a passage rather than using text-provided information) and limited vocabulary. Thus, the focus of the scoring is on wrong answers, with the implication that identifying wrong answers provides insight into poor or inappropriate comprehension strategies. Also, because the comprehension and vocabulary sections are treated separately, the perspective of reading seems to be less than holistic in nature.
Readability/Sources of Passages All reading comprehension passages and target vocabulary words are new to this edition. Particular emphasis was placed on selecting items that teachers and former teachers deemed to be representative of typical home and school reading material. Both narrative and non-narrative texts are included. The Dale-Chall (1948) and Fry (1977) readability formulae were used to estimate passage readability (the Spache formula (Spache, 1953) was also used for tests in primary grades). Combined Dale-Chall and Fry estimates for Level 10/12 were 10.2 for Form S and 10.0 for Form T. For Level AR, combined estimates were 8.1 for both forms.
Format of Test/Test Parts The vocabulary test has 45 items. It uses words in brief context, each followed by five single-word choices. The student is to select the word that most nearly matches the test word in meaning. The comprehension test has 48 items, with passages of varying lengths (all are fairly short), followed by completion-type questions with four possible short alternatives, requiring an explicit or implicit understanding of information in the passage. A variety of narrative and expository passages is included.
Testing Time Vocabulary: 20 minutes; Reading Comprehension: 35 minutes.
Forms Two equated forms (S and T).
Levels Multiple levels of the instrument are available ranging from Pre-Reading (PR) through Adult Reading (AR). With the Third Edition of GMRT, many institutions used either Levels 7/9 or 10/12 depending on the needs of the local population. Level AR is new to this edition and was designed for community colleges and adult training programs. It is actually easier than Level 10/12, yet normed with adult populations.
Types of Scores Raw scores, chance-level scores, percentile ranks, normal curve equivalent scores, stanine scores, grade equivalents, Lexile measures.
Norm Group(s) The 1998/1999 norming sample included 65,000 K-12 students and 2,800 community college students. The following were considered in sample selection: geographic region, school district enrollment, and SES. Students receiving special services (e.g., students with disabilities, gifted and talented students, students in Title I reading programs) were included in the sample if they received 50 percent or more of their instruction in the general education classroom. The GMRT was renormed in 2006.
Date Published or Last Revised Forms S and T were first published in 1998/1999 and again in 2006.
Reliability A technical manual (115 pages) details information about standardization, reliability, and validity of GMRT-4. Kuder-Richardson Formula 20 coefficients were calculated for all forms and levels. Coefficient values for all forms and levels were at or above 0.90 for all total tests and for the Vocabulary and Reading Comprehension subtests except for Level AR. Coefficient values for Level AR were at 0.88 or higher. Alternate form reliabilities were calculated for total test and subtest scores. Total test score coefficients were 0.81 or higher. Subtest score coefficients were 0.74 or higher.
Validity To assure content validity, an extensive item development process was implemented. In addition, statistical analyses (using the Mantel-Haenszel measure of differential item functioning) and a content analysis by an expert panel were conducted to assure balanced treatment of test content for underrepresented students and across genders.
Scoring Options Paper-Pencil Version
• Hand-scorable answer sheet test booklets that include answer keys
• Self-scorable answer sheets
• Machine-scorable booklets that can be processed by the publisher
• Scoring software for local scoring (NCS or ScanTron).
Online Version
• Scoring reports generated centrally by the publisher.
Computer Applications 1 Forms S and T are available online. 2 The online versions include prompts to assist students in use of time during the test and indicators of which items have been answered along the way. 3 Norms tables are available on CD-ROM. 4 Score-converting and reporting software are also available. 5 Machine-scorable answer sheets can be processed locally or through the publisher. 6 Central scoring services offer a variety of report options for teachers, administrators, and parents.
Accommodations 1 The publisher recommends that students who have difficulty tracking from the test booklet to a separate answer sheet write their answers directly in the answer booklet. 2 The online versions provide options for extended testing time. Norms for extended testing conditions are not provided.
Cost Reusable test booklets (25): $107.55; Directions for administration: $14.65; Machine-scorable answer sheets (100): $152.50.
Publisher Nelson (formerly Riverside Publishing, a subsidiary of Houghton Mifflin Harcourt) 1120 Birchmount Rd Toronto, ON M1K 5G4 www.nelson.com/assessment/index.html
Weaknesses of the Test 1 The GMRT-4 does include a new AR level, but the norming population was community college students only. Norms for students entering four-year institutions are not provided. 2 As in previous editions, passage length for the comprehension selections is short. However, the authors do address this issue claiming that passage length and administration time must be balanced. They admit that this is a “practical limitation” of the measure.
Strengths of the Test 1 Overall test development and norming procedures were done with care, indicating potential for a quality instrument; the rigor of these procedures speaks to its integrity. 2 The new edition provides a publication: Testing to Teaching: A Classroom Resource for Reading Assessment and Instruction that provides suggestions for additional assessment, instructional strategies, and hints for interpretation of scores. 3 The inclusion of linkage of GMRT-4 scores to Lexile measures can assist instructors in student placement in instructional and recreational reading materials.
Nelson-Denny Reading Test, Forms G and H, 1993 Test/Author(s) Nelson-Denny Reading Test (NDRT), Forms G and H Brown, J. I., Fishco, V. V., & Hanna, G. S.
Type of Test Survey, formal, standardized, norm-referenced.
Use(s) of Test Primary use: initial screening
• To identify students who may need special help in reading
• To identify superior students who could profit from placement in advanced/accelerated classes.
Secondary uses
• Predicting success in college courses
• Diagnosing strengths and weaknesses in vocabulary, comprehension, and reading rate.
Skills/Strategies Tested 1 Vocabulary 2 Comprehension 3 Reading rate.
Population Recommended This test could be used effectively with entering college students for screening purposes. Due to the difficulty of the reading comprehension passages, students reading more than two years below their grade level could become frustrated. This test could also be used for preprofessional and pregraduate students and for students in community reading efficiency courses. For maximum effectiveness, local norms should be developed.
Overall Theory of Reading M. S. Nelson and E. C. Denny developed the original version of this measure in 1929 at Iowa State Teacher’s College. The intent was to create an instrument that could quickly and efficiently assess reading ability. The authors list reading comprehension, vocabulary development, and reading rate as the three most important components of the reading process, noting that they are related, interdependent functions.
Readability/Sources of Passages All passages for Forms G and H of the NDRT were culled from high school and college textbooks (including social science, science, and humanities). As with previous versions, the first
passage is the longest and easiest. The seven passages in each form were gauged for readability using three formulae: Dale-Chall Grade Level, Fry formula, and the Flesch Reading Ease Score. Passages are arranged from easiest to most difficult in terms of readability level and passage content. The technical manual reports that readability levels for all passages were in the upper high school range.
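Of the three formulae named above, the Flesch Reading Ease score has the simplest published form: 206.835 − 1.015 × (words/sentences) − 84.6 × (syllables/words), with higher scores indicating easier text (upper-high-school prose typically scores roughly 50–60). The sketch below illustrates the computation; the vowel-group syllable counter is a crude heuristic of my own, not part of the published formula, which specifies the counts but not a counting procedure.

```python
import re

def count_syllables(word):
    # Rough heuristic: count runs of vowel letters as syllables,
    # with a floor of one syllable per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    # 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / sentences)
            - 84.6 * (syllables / len(words)))

print(flesch_reading_ease("The cat sat on the mat."))
```

Trivial one-syllable sentences like the example score above 100; dense textbook prose with long sentences and polysyllabic vocabulary drives the score down toward the difficult end of the scale.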
Format of Test/Test Parts The general format of the current version of the NDRT is similar to its predecessors due (as the publisher puts it) to its widespread acceptance. In preparation for the current edition, test users were surveyed for recommended improvements to the test. In response to user input and criticisms from test reviews in the literature, the number of vocabulary items and number of comprehension passages were reduced in the current edition to reduce working-time pressures. The vocabulary section gives 80 words in minimum context (e.g., Pseudo feelings are). The comprehension section has seven passages followed by multiple-choice questions with five alternatives. The first passage has eight questions; the rest have five each for a total of 38 questions. For rate, the students read from the first passage of the comprehension section for one minute and then mark the point they have reached when time is called.
Testing Time Regular administration Vocabulary: 15 minutes; Comprehension: 20 minutes; Rate: 1 minute Extended-time administration Vocabulary: 24 minutes; Comprehension: 32 minutes; Rate: omitted.
Forms Two equivalent and parallel forms (G and H). The C/D, E/F, and G/H forms are not all parallel and equivalent, and should not be used interchangeably.
Levels One level is available for Grade 9 through college.
Types of Scores Standard scores, stanines, normal curve equivalents, percentile ranks, grade equivalents, rate equivalents
Norm Group(s) Three samples were selected: one from the high school population, one from the two-year college population, and one from the four-year college and university population. For the college samples, 5,000 students from 39 two-year institutions and 5,000 students from 38 four-year colleges/universities were sampled in September and October 1991 and 1992. For both, three criteria were used to select a representative group: geographic region, size of institution, and type of institution (public or private). Samples include diversity in gender, race, and ethnicity.
Date Published or Last Revised The current form of the Nelson-Denny was tested in 1991 and 1992 and published in 1993. Earlier versions were the Nelson-Denny Forms A/B, C/D, and E/F.
Reliability The publisher offers a 58-page technical report on national standardization, reliability, and validity. Kuder-Richardson formula 20 reliability coefficients were computed. For the vocabulary subtest, coefficients ranged from 0.92 to 0.94, and for the comprehension subtest, they ranged from 0.85 to 0.89. Alternate-form reliability coefficients were 0.89 for vocabulary, 0.81 for comprehension, and 0.68 for rate.
Validity Minimal information about test validity is provided in the technical manual. In developing Forms G and H, a content analysis of current textbooks for developmental reading students was conducted to assure the content validity of key test components: vocabulary, comprehension, and reading rate. In addition, statistical analyses (using the Mantel-Haenszel measure of differential item functioning) and a content analysis by an expert panel were conducted to assure balanced treatment of test content for underrepresented students and across genders.
Scoring Options Machine scoring
• Use the NCS answer sheets
• Set up an institutional scoring system.
Hand scoring
• Answer keys are provided in the manual.
Self-scoring
• Self-scorable answer sheets are available from the publisher.
Computer Applications The NDRT is now available on CD-ROM.
Accommodations The publisher provides extended time guidelines for qualifying students.
Cost Test booklets (25): $77.00; Test manual for scoring and interpretation: $28.00; Self-Scorable Answer Sheet (25): $343.00.
Publisher Houghton Mifflin Harcourt 125 High Street Boston, MA 02110 www.hmhco.com/hmh-assessments/other-clinical-assessments/ndrt
Weaknesses of the Test 1 The rate section remains a problem. Only one minute is allowed for testing reading rate, and no comprehension check is involved. 2 Length of passages continues to be a concern. Passages do not represent the length of typical college reading assignments.
Strengths of the Test 1 The extended-time administration of this edition is an attempt to address reviews of previous editions. The extended-time is designed to accommodate English language learners, students with learning disabilities, and returning adults. 2 The test can be administered in a typical college class period. 3 The passages in the reading comprehension section are an attempt to test students’ ability to read typical textbook material. However, due to the brevity of all passages except the first, reading of more extended text is not measured. 4 Some attention was given in this edition to address concerns about working-time pressures through the reduction of vocabulary items and comprehension passages. 5 The readability level appears to be lower than that of previous editions. 6 Attention to cultural diversity was considered in selecting the norming populations.
Afterword Defining the State of the Art and Describing the Challenges to the State of That Art Hunter R. Boylan national center for developmental education, appalachian state university
In the introduction to his section of this handbook, Eric Paulsen promises that it will provide a framework for the “consideration” of college reading and study strategies instruction. This handbook certainly delivers on that promise. It is comprehensive, scholarly, and timely. It is comprehensive because of its depth and breadth, even including research and commentary on what Stahl and King refer to as “allied fields,” such as learning assistance and developmental education. The handbook includes scholarly discussions on an extremely wide variety of topics associated with college reading, from the history of college reading instruction to theories of reading development to college bridge programs to policies and practices to study strategies to reading assessment instruments. The authors whose work is included here do an excellent job of using a combination of classic and contemporary research to guide their commentary and recommendations. Recognizing, citing, and discussing classic research is one of the strong points of this handbook. The work of college reading and study strategies is founded upon the scholarship and practice of a host of predecessors. Sadly, many of those whose names appear in references and whose research has guided us for decades have passed away. Many others, however, are still making their contributions to the field. This handbook helps to honor those contributions. Those who author the chapters in this handbook do a scholarly job of reviewing, analyzing, and integrating the most current research in the field. Many of them not only explore the scholarship of others but are also among our best thinkers, conducting their own research to advance our knowledge in the field. They are the intellectual leaders who actually create the state of the art in college reading and study strategies. Their work both summarizes and advances the state of the art in the field of college reading and study strategies. 
The resulting chapters reflect the best of what we currently know and can speculate upon for the future. This handbook will, no doubt, become a standard text for students and faculty of college reading and study strategy programs for years to come. This is vitally important because it advocates for the themes we have traditionally valued and continue to support. A primary theme running throughout this handbook is that access to postsecondary education is meaningless unless students who benefit from that access also have the opportunity to be successful. A huge component of that success is the ability to read college-level material and deploy study strategies appropriate for that material. In an age when college completion represents a major agenda item for policy makers and legislators (Bill and Melinda Gates Foundation, 2009; Lumina Foundation, 2013), the importance of reading and study strategies to that agenda should not be underemphasized.
And yet, the current completion agenda does, indeed, underemphasize the importance of college reading instruction. What Parker refers to in Chapter 3 of this handbook as the “politicization of remediation” has resulted in a number of state policy mandates undermining college reading instruction. Based on preliminary positive findings, legislators and state higher education executive offices in many states have decided that such things as corequisite remediation and integrated reading and writing are keys to improving college completion (Jaggars, Hodara, Cho, & Xu, 2015). The former model provides remedial support to students enrolled in college-level classes (Belfield, Jenkins, & Lahr, 2014). The latter combines reading and writing into a single course (Raufman, 2016). Both are viable models, grounded in research, and showing considerable potential for improving student completion of college-level gateway courses. Both, however, contribute to a de-emphasis of college reading instruction. Reading is seldom offered as part of corequisite remediation and, when it is, it often results in the substitution of the corequisite course for the stand-alone remedial reading course. This works well for students at the upper end of the underpreparedness distribution, but it does not help students who lack requisite reading skills and strategies and may need more intensive reading instruction to catch up. The integration of reading and writing also has the frequent effect of eliminating reading courses from the curriculum. In this model, instead of being a stand-alone course, reading is integrated into writing instruction. This makes sense and has a demonstrated record of success (Edgecombe, Jaggars, Xu, & Barragan, 2014). However, the integration of reading and writing into a single course tends to reduce the number of reading courses available to the most underprepared students. Mandated policy in some states also allows more students to opt out of remedial courses.
Given the choice, many students whose test scores or grades might have placed them into remedial reading are able to bypass it. This has dramatically reduced enrollment in college reading classes (Park et al., 2015). The so-called “reform movement” in remediation provides many benefits. But it also contributes to the decline in the availability of college reading instruction. This, in turn, reduces the opportunity for our least prepared students to obtain the reading instruction they may need to be successful. The field of college reading and study strategies has always been a component of educational opportunity. The opportunity for students to participate in college reading instruction may be declining but it is imperative that we continue to advocate for that opportunity. Another theme running throughout this handbook is the importance of research in guiding practice. Many of the chapters included here utilize the research to support successful teaching techniques in such areas as disciplinary reading, vocabulary, and comprehension. They also describe successful approaches to program management and evaluation and address issues of student and linguistic diversity. As Pat Cross once said, “Research without practice is empty but practice without research is blind” (1998). One of the keys to ensuring the future of college reading and study strategies is conducting ongoing research on the effectiveness of various approaches for doing our work. As Ranshaw and Boggs point out in their chapter on student diversity, we are dealing with a vastly different population of students today. They are not only diverse in their ethnicity; they are diverse in their cultures, their physical and mental abilities, and their values and beliefs. Although this is noticeable to anyone who is looking and well validated by research, the implications of this diversity have yet to be addressed adequately, particularly from the research standpoint.
In her chapter on student assessment, Tina Kafka addresses many of the problems associated with student assessment. She also suggests solutions to some of these problems. Conducting research on the outcomes of reading and study strategies courses and programs is an important
responsibility for professionals in the field. Doing so to understand the outcomes of courses and programs on diverse students is especially important because such research enables us to better serve these students who have traditionally been underserved. Furthermore, given the diversity of our students, our research needs to identify what works for which groups and under what circumstances. Much of this we can learn by applying a variety of measures to the assessment of outcomes for students participating in our courses and services. Jan Norton and Karen Agee’s chapter on program assessment is another example of the kind of research that must be continued for the benefit of the reading and study strategies professional community. For our field to continue and prosper, we must be able to prove that what we do makes a difference and that it contributes to student completion. We must also be able to describe the differences we make and articulate those differences to a wide audience. As legislators and policy makers seek to understand what returns they are getting for their investment in postsecondary education and as pressures for accountability mount, it becomes ever more important for us to establish the value of what we do. A third theme in this handbook is that the field of college reading and study strategies is itself diverse. This has both advantages and disadvantages. As Stahl and King make clear in the opening chapter, our history is extensive, and our efforts have many roots. As other chapters in this work point out, our professionals engage with students in many ways. We not only provide basic literacy instruction for adults but teach advanced reading and learning strategies for graduate students. We not only teach courses but provide tutoring and advising and a plethora of other services designed to enhance student learning.
We also promote the development of metacognition and teach students how to improve the ways in which they process and understand the college curriculum (Weinstein et al., 1997). The advantage of this is that we provide a comprehensive menu of interventions, and that comprehensiveness increases the likelihood of success (Boylan, 2002). Too often, however, we do these things in isolation from one another. There are some organizational models where reading instruction, learning assistance, and study strategies development activities are all housed in the same program. Unfortunately, such models are scarce. On most campuses, reading instruction is separate from the learning center, and study strategies instruction is relegated to student success courses housed in an academic department. What is needed is greater collaboration among all units and personnel associated with college reading and study strategies interventions.

As the availability of reading courses in colleges and universities declines, reading instructors will have to find other venues to teach reading. They may be able to serve as consultants to disciplinary content area instructors to ensure that reading skills are emphasized across the curriculum. The same is true of study strategy instructors, who might work with disciplinary content faculty to integrate study strategies concepts into their courses. Greater collaboration with learning centers and tutoring programs may provide opportunities to enhance the availability of reading instruction. The need for collaboration also works in the other direction: learning center personnel could profit from working more closely with reading instructors, and learning center services can and should be built into reading courses as well as disciplinary content courses. The ancient proverb “In unity there is strength” applies just as well today to the field of college reading and study strategies.
We are a diverse field, but we can have the greatest impact by bringing that diversity together in an integrated fashion focused on student success. It is difficult to predict what the future may hold for college reading programs and study strategies courses. Several of the handbook’s authors have discussed the difficulties of establishing college reading as a credible discipline in spite of the field’s copious base of literature and research and in spite of its well-documented contributions to postsecondary success. Yet these difficulties can be overcome.
Hunter R. Boylan
Several things become clear to the astute reader of this handbook and scholar of the field. First, the field of reading and study strategies is historically linked to, and forms much of the underpinning of, postsecondary educational opportunity. Second, there is a considerable amount of solid research in this field, and we should enhance our efforts to infuse our practice with that research. Finally, the diversity of our field is a benefit, but one that must be accompanied by collaboration among all of its diverse elements if the field is to survive and prosper.
References

Belfield, C., Jenkins, D., & Lahr, H. (2014, September). Is co-requisite remediation cost-effective? Early findings from Tennessee (CCRC Research Brief No. 62). New York, NY: Teachers College, Columbia University.
Bill and Melinda Gates Foundation. (2009). New initiative to double the number of low income students in the U.S. who earn a postsecondary degree. Retrieved from www.gatesfoundation.org/Media-Center/Press-Releases/2008/12/New-Initiative-to-Double-the-Number-of-LowIncome-Students-in-the-USWho-Earn-a-Postsecondary-Degree
Boylan, H. (2002). What works: Research-based best practices in developmental education. Boone, NC: Continuous Quality Improvement Network/National Center for Developmental Education.
Cross, K. P. (1998, April). What do we know about students’ learning and how do we know it? Keynote address presented at the American Association of Higher Education, Reno, NV.
Edgecombe, N., Jaggars, S., Xu, D., & Barragan, M. (2014, May). Accelerating the integrated instruction of developmental reading and writing at Chabot College (CCRC Working Paper No. 71). New York, NY: Teachers College, Columbia University.
Jaggars, S., Hodara, M., Cho, S.-W., & Xu, D. (2015). Three accelerated developmental programs: Features, student outcomes, and implications. Community College Review, 43(1), 3–26.
Lumina Foundation. (2013, February). Lumina Foundation strategic plan: 2013 to 2016. Indianapolis, IN: Author. Retrieved from www.luminafoundation.org/files/resources/2013lumina-strategic-plan.pdf
Park, T., Woods, C., Richard, K., Tandberg, D., Hu, S., & Jones, T. (2015). When developmental education is optional, what will students do? A preliminary analysis of survey data on student course enrollment decisions in an environment of increased choice. Innovative Higher Education, 41, 221–236.
Raufman, J. (2016, June). From “additive” to “integrative”: Research on designing and teaching integrated reading and writing. Presented at the Conference on Acceleration in Developmental Education, Denver, CO.
Weinstein, C. E., Hanson, G., Powdrill, L., Roska, L., Dierking, D., & Husman, J. (1997). The design and evaluation of a course in strategic learning.
Author Index
Abidin, M. J. A. 82 Abrams, S. S. 107, 110 Acee, T. W. xiii, xvii, 124, 179, 233 Ackerman, J. M. 146 Adelman, C. 52 Adler, K. xvi Adler, R. M. 186 Afflerbach, P. 126, 196–197, 206, 333, 335–336, 352, 360 Agee, K. xiii, xiv, xvii, 14, 279, 299–300, 302, 306, 315–316, 322, 383 Ahmad, N. 82 Ahrendt, K. M. 4 Aikenhead, G. 94 Alexander, P. A. 31, 33, 118–124 Alfassi, M. 120, 126 Alim, H. S. 62, 68 Alipour, M. 316 Allen, B. M. 113, 260 Alozie, N. M. 287 Alvermann, D. E. 92 Anastasi, A. 254–256 Anderson, E. M. 28, 66–67, 100, 104, 247 Andrews, S. 245 Angelo, T. A. 331, 333 Anmarkrud, O. 201 Anyon, J. 62, 69 Apodaca, R. E. 36, 122 Appatova, V. 179 Appel, M. 260 Applebee, A. N. 143 Araiza, M. 82 Ards, S. 47 Arendale, D. xiii, xvi, 8, 9–10, 15, 53, 279, 322 Armstrong, S. L. xiii, xiv, xvi, 3, 16, 31, 34, 87, 89, 90–95, 107–108, 124, 279, 341 Aronson, J. 260–261 Artz, N. J. 109 Ash, G. E. 99 Ashong, C. 120 Astin, A. W. 43, 53 Atkinson, H. A. 103 Attewell, P. 31, 44, 46, 48, 52, 54 Aultman, L. P. 105, 118, 123
Austin, R. 201 Ausubel, D. P. 105 Azila, N. 252 Babcock, C. 298 Bader, L. A. 355 Bailey, J. L. 4–5, 28, 54, 62, 205 Baker, L. 33, 119–120 Bangeni, B. 119 Bangert-Drowns, R. L. 258 Banks, J. 318 Banta, T. W. 320–321 Barab, S. 32 Barbier M. 120 Barbosa-Leiker, C. 285 Barczyk, C. G. 82, 84 Barnett, E. 284 Barrett, M. 42, 174 Barron, R. F. 106 Barroso L. 285–286 Bartholomae, D. 36, 145, 153, 155, 334 Basham, J. 75 Basu, A. 82 Battle, A. 81 Bauer, E. B. 35, 120 Bauer, L. 15, 18 Baumann, J. F. 99, 102, 104 Bays, C. L. 317 Bean, T. W. i, xiii, xiv, xvi, 68, 87, 90, 93 Bear, D. 101 Beck, I. 99–103, 106, 109, 111 Becker, B. J. 256 Beemyn, G. B. 67 Behrman, E. H. 317 Beilock, S. L. 260 Belfield, C. 44 Bell, D. 66, 199 Benjamin, M. 303 Bennett, J. M. 267 Benson, J. 348 Berger, A. 7, 113 Bergin, D. A. 245 Bergman, L. 95 Berlin, J. A. 144
Berliner, D. C. 126 Bernstein, S. N. 54 Berry, M. S. 286 Best, N. 248 Bettinger, E. P. 52 Biancarosa, G. 105 Bicer, A. 285–286 Bidwell, A. 76 Bishop, C. H. 242 Blachowicz, C. L. Z. 108, 109, 110, 111 Blair, T. R. 124 Blake, W. S. 4 Blanshetyn, V. 248 Blanton, W. 245 Blasiman, R. N. 320 Boedeker, P. 285–286 Boggs, B. J. xiii, 1, 66 Bogost, I. 173 Bojar, K. 145 Bol, L. 93, 234 Bonham, B. S. 10 Bork, R. H. 34 Bork, H. 317 Bosch, T. E. 82 Bosvic, G. M. 250 Botvinick, M. 123 Boud, D. 334, 335 Boutin, G. E. 267 Bowers, P. N. 101 Boyd, R. T. C. 266 Boylan, H. xvi, 3, 8, 10, 14, 15, 16, 28, 35, 51, 296, 303 Bransford, J. D. 206 Bråten, I. 125 Braver, T. 123 Brawer, F. B. 329 Bray, G. B. 34 Breen, M. 122 Breneman, D. W. 53 Bridgeman, B. 243 Brier, E. M. 6, 11 Britt, M. A. 197 Brookbank, B. 318 Brophy, J. E. 123, 124 Brosvic, G. M. 250 Brown, C. 32 Brown, M. 66, 106, 119, 120, 206, 286, 320, 348, 349–350 Browning, S. T. 331 Brozo, W. G. 12, 101–102 Bruner, J. 145 Bruning, R. 33 Bruns, A. 74 Bryer, T. A. 74 Buchmann, C. 254– 257, 329 Buehl, D. 201 Buel, M. M. 126 Bulfin, N. 83 Bulgren, J. 197
Bull, B. S. 257, 320 Bullock, T. L. 9 Bunch, G. C. 222 Burgess, M. L. 317 Burns, P. C. 355 Burridge, A. 319 Burton, A. 101 Bustillos, L. 42 Buswell, G. 3 Butler, G. 249–250 Cahalan, M. 287 Calfee, R. C. 144, 146 Callan, P. M. 44, 47 Camera, L. 76 Campbell, J. 111, 234 Campione, J. C. 206 Canche, M. S. 81 Capraro, R. M. 285–286 Carlston, D. L. 204 Carpenter, R. D. 355 Carpenter, K. 9 Carr, E. M. 106 Carruthers, G. 47 Carson, J. G. 124 Caruso, J. 74 Casazza, M. E. 9, 15, 18 Cassady, J. C. 260 Cassaza, M. E. 45 Castek, J. 206 Castillo, A. xiii, 279 Caverly, D. C. vii, xiii, 31, 32, 34, 76, 81, 106, 158, 179, 206, 317 Cazden, C. 174 Chall, J. S. 5 Chambers, T. G. 120 Chaney, B. 287 Chang, Y. J. 203, 245 Chapman, S. B. 318, 335 Chase, N. D. 124 Chen, I. J. 43, 46, 77, 199, 202, 203, 204, 220, 295, 315 Child, R. L. 284 Ching, C. 75 Chmielewski, T. L. 106 Cho, B. Y. 28, 68, 126, 206 Chowenhill, D. 147 Choy, S. 62 Christ, F. L. vii, 8, 17, 205, 292 Chrystal, L. L. 287 Chung-Herrera, B. G. 254 Cirino-Gerena, G. 246 Clancy, W. J. 32 Clark, M. H. 77–81, 122, 243, 286 Clifford, G. J. 144 Clinedinst, M. 65 Coatoam, S. 122 Coffman, W. E. 255 Cohen, K. W. 288, 329
Coiro, J. 206 Cokley, K. 68, 120 Cole, V. 158, 245, 298 Collins, T. 3, 32, 105, 250 Commander, N. E. 120 Conklin, L. 251 Conley, D. 182–183 Connell, M. 120 Cook, J. G. 205, 250, 288 Cope, B. 192 Copeland, C. T. 28 Corno, L. 194 Cowley, K. 321 Coxhead, A. 104–105 Cranney, G. xvi Crassas, M. E. 126 Crawford, M. K. 92 Cromley, J. G. 98, 100 Cross, K. P. 8, 15, 62, 331, 333 Crosson, A. C. 109 Crosta, P. M. 44 Cruce, T. M. 64 Crump, B. M. 102 Csikszentmihalyi, M. 123 Culver, T. F. 317
Diller, C. 119 Dillingofski, M. S. 36 Dimino, R. K. 158 Dinsmore, D. L. 122 Dochen, C. W. 319 Dole, J. A. 90 Dollinger, S. J. 243 Domina, T. 31, 44 Dominguez, M. 61–62, 69 Donahue, P. 111, 144 Donovan, D. A. 120 Douglas-Gabriel, D. 42 Downing, S. M. 257 Downs, D. 151–152, 158 Doyle, W. R. 126 DuBois, N. 32, 120 Duell, O. K. 125 Duguid, P. 32 Duncan, D. G. 82 Dunkerly-Bean, J. xiii, 68, 87, 90, 93, 95 Dunlosky, J. 119, 197, 202, 320 Dunston, P. J. 106 Dupre, M. 63 Durkin, D. 126 Dvorak, J. 14
Dahl, T. 194 Dale, E. 99 Dalgarno, B. 83 Dalton, B. 105, 110 Dansereau, D. F. 106 Dauenheimer, D. 122 Daugherty, L. 318 Davies, D. A. 251 Davies, M. 105 Davis, A. 80 Davis, C. C. 356 Davis, C. H. F. 81 Davis, E. 181 Davis, F. B. 356 Davis, J. R. 151 De Castell, S. 170, 173 de Kleine, C. xiii, 179, 218 Deil-Amen, R. 81 del Valle, R. 32 Delaney, C. 76 DeMarais, L. 3 DeMers, L. P. 248 Deming, M. P. 143 Deng, Z. 94 Denzin, N. 4, 13, 15, 66 DePoy, E. 63 Deshler, D. D. 197, 253 Dewey, J. 42, 119 Dibattista, D. 264 DiCarlo, S. E. 250 Dickson, M. 145 Diekhoff, G. M. 106 Dihoff, R. E. 250
Eaton, J. E. 54 Ebel, R. 242 Ebner, R. J. 98, 102, 107–113, 316 Eckert, L. S. 31 Edgecombe, N. D. 37, 55 Ehri, L. C. 98, 102, 107–113, 316 Eilers, V. 12, 351 Elias, M. J. 260 Eliot, C. W. 43–45 Elish-Piper, E. 355 Ellis, A. P. J. 243–254 Embse, N. v. d. 260–261 Emerson, N. M. 267 Emig, J. 152 English, J. 258 Epstein, M. L. 250 Ewell, P. T. 47 Fader, D. 145 Fairbanks, M. M. 102 Falk-Ross, F. C. 127 Faller, S. E. 104 Fan, C. Y. 204 Fang, F. 32, 81, 113, 122 Fay, L. 284 Ferguson, K. J. 125, 248 Ferrara, R. A. 206 Ferris, D. R. 217 Filippini, A. 105 Fingeret, L. 33 Finn, M. E. 264 Fiorella, L. 118 Fischer, M. R. 248
Fisher, D. 30, 104, 108–111 Fiske, S. T. 122 Fitzgerald, S. H. 147 Flanigan, K. 105 Flannigan, S. L. 74 Flavell, J. H. 119–120, 228 Flinders, D. 64 Flippo, R. F. i, vii, xi, xiii, xiv, xvi, 179, 199, 200, 201, 247, 266–268, 279, 317, 333, 341–345, 355–357 Flores, N. 65 Flynn, J. 247 Foley, L. M. xiii, 1, 80 Fong, C. J. 124 Forzani, E. 206 Fox, S. 75 Francis, M. A. xiii, 31, 37, 87, 99–101, 107–110 Freebern, G. 119 Freebody, P. 100, 104, 118 Freire, P. 95, 145 Frey, N. 30, 104 Friedman, T. 169 Frierson, H. T. 253, 257 Fullan, M. 89, 93–95 Fulton, M. 42, 50, 343 Gabriner, R. 329 Galazeski, R. C. 261 Gandara, P. 44 Gansemer-Topf, A. 287 Garcia, D. J. 34, 68, 120, 193–194 Garcia-Retamero, R. J. 120 Gardner, J. N. 105, 298 Gaskins, I. 33 Gay, G. 35 Gee, J. P. 30–31, 75, 119, 169–174, 191 Geiger, M. A. 247 Gerda, D. 80–81 Gerlach, G. J. 266 Gewertz, C. 331 Ghazal, S. 120 Ghojogh, A. N. 243 Gianneschi, M. 42, 50 Gibson, S. U. 124 Gillotte-Tropp, H. 36, 146–147, 154–157 Gilson, S. F. 63 Gilyard, K. 219 Gish, H. 120 Glau, G. R. 146 Gleason, B. 157 Glynn, S. M. 123 Gnewuch, M. M. 107 Goen, S. 36, 146, 154–157 Goen-Slater, S. 36, 146–147 Goerss, B. L. 106 Goetz, T. 244, 298 Goldschmid, B. 302 Gonyea, R. M. 64 Goode, J. 75–76
Goodfellow, R. 75 Goodwin, A. P. 101, 287 Gordon, D. K. 258, 347 Gosse, L. 264 Grabe, W. 104 Graesser 119 Graham, A. C. 202 Grant, M. 70 Graves, M. F. 101, 104, 111, 119 Gray, A. 3 Gray, W. S. 17 Green, B. F. 244, 255 Greene, B. 28–29, 47, 51 Greenwood, S. C. 106 Gregory, K. xiii, 87, 93–95, 319 Gress, M. 98 Griffin, T. D. 120 Grisham, D. L. 105–107, 110 Grodsky, E. 257 Groen, J. 62 Grossman, P. L. 42 Grubb, W. N. 147, 329, 343 Guerrero, R. 42 Guinier, L. 329 Gunning, T. G. 348 Guthrie, J. T. 34, 120, 201, 345 Gutierrez, A. P. 120 Guzzetti, B. J. xiii, 1, 82–83 Haarlow, W. N. 53 Hacker, D. J. 119 Hadley, K. 120 Hagen, Å. M. 120 Haggard, M. R. 107, 110 Hagtvet, K. A. 260 Haladyna, T. M. 349–350 Haley, J. 14 Hall, G. 93, 106, 288 Hall-Clark, B. 68 Hampton, S. H. 246 Hanes, M. L. 341 Hanford, E. 42 Hansen, D. 95 Harackiewicz, J. 109 Hardin, V. B. 33 Harl, A. L. 143 Harmon, J. M. 98, 106–107, 110–111 Harrell, J. R. 288 Harris, D. P. 109 Harrison, J. 64 Hartley, J. 194 Hartman, D. K. 17–20 Hasson, R. 260, 261 Hawisher, G. E. 77 Hayati, A. M. 243 Hayes, S. M. 104–105, 157 Hayward, C. 148 Hedrick, W. B. 98 Heller, R. 93
Henk, W. A. 4, 11, 16, 351 Henry, L. A. 206 Herbert, T. J. 63, 202 Hern, K. 148 Herold, C. P. 106 Heron, E. B. 4–5 Hesser, T. L. 319 Hewitt, M. A. 285 Hidi, S. 109 Hiebert, E. H. 104 Higbee, J. L. 61 Hocevar, D. J. 249 Hodapp, T. 288 Hodges, R. xiii, 279, 306 Hofer, B. 34, 124–125 Hoffman, P. R. 28, 34, 105 Holland, P. W. 350 Holland, D. 355 Hollandsworth, J. G. 261 Holleran, T. A. 124 Holliday, M. 118, 201, 319 Holschuh, J. P. xiii, 1, 33–34, 82, 87–91, 98, 105, 106, 109, 118, 121, 124–126, 197–199, 201 Honeyford, M. 31, 90 Hooper, S. 197 Hoops, L. D. 319 Hoppers, C. A. 69 Hord, S. M. 93 Horn, L. G. 43, 51 Horne, A. M. 263 Horrigan, J. D. 75 Houk, P. 92 Howard, B. C. 32, 68, 120 Howell, L. 122 Hsu, H. 74, 76, 105, 113 Huang, D. W. 204 Hubbard, B. P. 33–34, 121 Huck, S. 245 Hudley, A. H. 220–223 Hughes, C. A. 253 Hull, G. 37, 158 Hung, L. 260 Hunt, J. B. 47 Hussain, M. 119 Hussar, W. 62 Hwang, Y. 103 Hyland-Russell, T. 62 Hynd, C. R. 4, 12, 16 Hynd-Shanahan, C. R. 121, 124–125 Isakson, R. L. 318, 335 Israel, S. E. 16 Jackson, J. M. 143 Jacobs, C. 95 Jacobson, M. J. 82 Jaeger, A. J. 120 Jalilifar, A. 316 Jalongo, M. R. 266
James, W. 119 Jamieson-Noel, D. 100 Jang, E. 75 Jarosz, A. F. 120 Järvelä, S. 123 Järvenoja, H. 123 Jehangir, R. R. 63–64 Jenkins, H. 74, 78, 103, 171–172 Jensen, D. 36, 170, 173 Jeong, D. W. 28 Jetton, T. L. 31, 122 Jiang, X. 104 Jing, H. 120 Johns, J. L. 204, 355 Johnsen, T. B. 260 Johnson, A. B. 18, 174 Johnston, P. H. 118, 122 Jones, H. 8–9, 65, 74, 247, 261, 318 Joshi, G. S. 98, 109, 111 Kabilian, M. K. 82 Kaestle, C. F. 144 Kafka, T. xiv, 279 Kalamkarian, H. S. 55 Kalantzis, M. 192 Kame’enui, E. J. 99, 102 Kammen, C. 17 Kantner, M. J. 90, 108, 341 Kaplan, R. M. 264 Kapp, R. 119 Karabel, J. 45 Karpicke, J. D. 249, 268 Katz, J. R. 285 Kaufman, G. 247 Keiffer, M. J. 104 Kelley, J. G. 104 Kellinger, J. J. xiii, 87 Kelly, N. 103 Kennedy, M. L. 205–206 Kerr, M. 120, 193 Kerstiens, J. 7, 14, 20 Kibler, A. K. 222 Kienhues, D. 126 Kiewra, K. A. 120 Kim, J. Y. 126 Kim, Y. H. 244 King, J. R. xiii, 1, 12–19, 28, 89, 146 Kingston, A. J. 13 Kintsch, W. 122 Kinzer, C. K. 206 Kinzie, J. 64 Kirby, J. R. 101 Kirkland, K. 261 Kitsantas, A. 124 Klages, M. A. 78–81 Knepper, P. 51 Knight, A. E. 262 Knobel, M. 74–75 Koch, A. K. 298
Koch, L. C. 286 Koehler, R. A. 246 Koh, L. C. 123, 124 Koranteng, A. 65 Kraska, M. F. 51 Kress, G. 75 Kreutzer, M. A. 120 Kronberger, N. 260 Kubilius, R. 257 Kucer, S. 31, 118, 143 Kucer, S. 30 Kuh, G. H. 64, 322 Kulik, C. C. 258 Kulik, J. A. 258 Kurlaender, M. 54 Kwon, M. 200 Kyvig, D. E. 17 Laanan, F. S. 287 Ladson-Billings, G. 61, 66, 69 Lafond, S. 65 Laine, M. N. 36 Lambert, J. 80 LaMont, E. 63 Lampi, J. P. xiii, 87, 158 Lankshear, C. 74–75 Larson, I. 82 Lave, J. 32 Lavin, H. 31, 44 Lawton, R. xiii, 179 Lazorchak, B. 80 Lea, M. R. 119, 191 Lee, C. D. 83, 174 Lee, N. L. xviii, 279, 281–292 Leedy, P. D. 4–5 Lei, S. A. 68, 113 Leist, C. W. 317 Lenhart, A. 82 Lenz, B. K. 197 Leonard, C. 120 Lepper, M. 172, 174 Lesaux, N. K. 104, 109, 111 Lester, D. 244 Leu, D. J. 206 Leung, D. Y. P. 246 Levey, T. 31, 44 Levin, H. 103 Lewin, K. 149 Lewis, L. 28–29, 51, 295–296 Li, M. 122, 204 Lian, L. 252 Liebert, R. M. 261 Limber, J. E. 199 Lincoln, Y. 66 Linderholm, T. 200, 317 Linnenbrink, E. A. 34, 123–124 Lipson, M. Y. 32, 119, 130 Lissitz, R. W. 345
Liu, K. 202–203 Livingston, G. 75 Ljungdahl, L. 118 Lockhart, T. 157 Lomax, R. 36 Long, B. T. 52–54, 124 LoSchiavo, F. 245 Lowe, A. J. 6–7 Lubliner, S. 104 Luke, A. 118 Lusk, M. 251 Lynch, D. 248 MacArthur, S. 185 Madden, D. A. 9 Magnifico, A. 74 Majors, R. E. 266 Makoelle, T. 69 Mallery, A. L. 9, 13 Mallinson, T. 220–223 Malone, T. 172–174 Maloney, E. A. 260 Mamun, A. 284 Mandeville, T. F. 206 Manzo, A. V. 110 March, P. 118 Markham, E. 120 Marrs, H. 68 Marsh, E. J. 154–156, 197, 250 Marshall, R. 145 Marshall-Wolp, E. 248 Martinez, M. E 119 Martino, N. L. 28, 34 Marzano, R. J. 108, 111 Mason, R. B. 12, 34, 317 Mason-Egan, P. 33 Massey, W. M. 15 Mathies, C. 315 Matson, J. L. 263 Matthews, W. J. 124, 260, 267 Mavis, B. E. 258 Maxwell, M. vii, x, 3–4, 7–8, 11–15, 341, 360 Maxwell-Jolly, J. 44 May, B. 63 Mayer, C. 118, 194–197, 200, 206, 209, 228–229 Maykel, C. 206 Mazer, J. P. 82 Mazur-Stewart, M. 106 Mazzeo, J. 111 McCarthy, D. N. 356 McClain, L. 244 McLure, G. T. 284 McCombs, B. L. 120, 124 McCordick, S. M. 36, 122, 264 McCormick, A. C. 51 McCue, E. 248 McDaniel, M. A. 103, 249 McDermott, K. B. 249
McDonough, J. 318 McGaghie, W. C. 257 McGuire, S. Y. 205 McKenna, L. D. 342 McKeown, M. 99–103, 106, 109–111 McMillan, J. H. 348 McMorris, R. F. 248 McNamara, L. P. 204 McNeil, E. B. 145 McNeal, L. D. 102 Melton, S. 264 Mercer, M. 253 Merisotis, J. P. 31, 47, 53, 54 Messick, S. 254 Meyer, D. K. 123–124 Milia, L. D. 248 Miller, C. F. 103, 288 Millman, J. C. 242–246 Mireles, S. F. 319 Misischia, C. 193 Mitchell, N. 264, 287 Miyasaka, J. 346 Moffett, J. 145 Moje, E. B. 36–37, 90–95 Mokhtari, K. 335 Monaghan, E. J. 16, 17, 20 Moon, G. F. 144 Moore, D. W. 20, 29, 105 Morante, E. A. 51, 343, 349 Morgan, J. 243, 285–286 Morris, L. 261 Morrison, C. 145 Mountain, L. 105 Mraz, M. 202 Mueller, J. 70 Muir-Cochrane, E. 82 Muis, K. R. 119 Mulcahy-Ernt, P. I. xiii, 32, 179, 199 Mullen, J. L. 14 Muraskin, L. D. 287 Murphy, R. E. 82, 122–124, 144 Nagy, W. E. 98, 103–104, 107–111 Nakamura, J. 123 Narang, H. L. 12 Nathan, M. J. 197 Neal, H. N. 104–107, 111 Nelson, N. 144, 146, 262 Newman, M. 31, 63, 107, 124 Ng, C. H. 34, 123 Nichols, W. D. 106–107, 111, 124 Nicholson, S. A. 31, 34, 81, 206 Nieto, S. 123–124 Niit, T. 120 Nilson, L. B. 199 Nist, S. L. 32, 33, 35, 102, 121–124, 342 Nist-Olejnik, S. 197–201 Noddings, N. 64, 92
Norman, N. 261 Norris, J. 34 North, S. 119 Norton, J. xiv, 279 Nussli, N. 82–83 O’Brien, D. G. 92 Ocal, T. 111 O’Connell, M. 257 O’Donnell, A. M. 106, 322 Offer, J. 319 Ogle, D. 111 Oh, I. S. 82–83 Olejnik, S. 102 Orlando, V. P. 14 Ottenheimer, H. 267 Owens, A. M. 123 Ozgungor, S. 120, 201 Pacheco, M. B. 101 Palomba, C. A. 320–321 Paris, S. G. 32, 62, 68, 119–120, 355 Park, T. 119 Parker, T. L. xiii, 1, 28–29, 42–44, 50–54 Parkinson, M. M. 122 Parodi, G. 36 Parr, F. W. 7 Parsad, B. 28, 51, 295–296 Pascarella, E. T. 34 Patrick, J. xiii, 35 Pauk, W. 11, 20 Paul, C. 244 Paulson, E. J. xii–xiii, xvi, 1, 30–35, 82, 89–91, 98 Pawan, F. 31, 90 Pearce, D. L. 355 Pearson, P. D. 120, 143 Peebles, J. 7 Pendlebury, M. 320 Perez, T. 82, 234 Perin, D. xiii, xvi, 34, 55, 76, 179, 317, 329 Perry, W. G. 125, 130 Peters, C. W. 348, 349 Petersen, L. E. 122 Peterson, C. L. 76 Petrosky, A. R. 36, 145, 153–155 Petruzzi, D. C. 260 Petty, W. T. 106 Peverly, S. T. 34, 317 Pheatt, L. 284 Phelps, P. V. 288 Phipps, R. A. 31, 47, 53–54 Pierson, C. T. 34 Pintrich, P. R. 33–34, 119–127, 193–194, 233 Ploeg, H. M. 260 Plummer, K. J. 113, 318, 335 Potts, R. 120 Powers, D. E. 246, 256 Prensky, M. 75, 168
Pressey, M. 28, 33, 103, 119–200 Price, D. 98, 317 Purcell-Gates, V. 36 Putwain, D. W. 261 Pyros, S. W. 106 Pytash, K. E. 15 Quinn, K. B. 36, 93 Radcliffe, R. 31, 34, 206 Radford, A. W. 43 Ramani, A. 123 Randall, S. 113 Rangel, A. 68 Ranhaw, T. S. xiii Rankin, S. R. 66–67 Ransaw, T. S. 1 Ransby, M. J. 120 Ransdell, S. 120 Rao, S. P. 250 Ratliff, C. A. 47 Rauchwarger, A. 257 Raufman, J. 55 Raugh, M. R. 103 Rawlings, H. P. 47 Rawson, K. A. 197, 320 Raygor, A. xvi, 266 Readence, J. E. 68, 105 Redmond, C. 47 Reed, C. 169, 322 Reichard, C. A. 335 Reisman, A. 33–34, 122–124 Rhinehart, P. J. 68 Rhoads, C. 206 Richards, I. A. 145 Richardson, E. 43, 54, 219, 252, 319 Richards-Smith, H. 8–9 Rideout, H. M. 28 Ridgeway, V. G. 106 Ridley, D. S. 194 Rimbey, M. 99 Rios-Aguilar, C. R. 81 Roberts, G. H. 9, 103 Robertson, E. 65 Robinson, F. 3, 144, 203, 297–298 Rodríguez, O. 51 Roe, B. D. 355 Roediger, H. L. 249–250, 268 Rohwer, W. D. 120 Romm, J. 93 Rosa, J. 65 Rose, M. 13–16 Rosenkoetter, J. 244 Rosenshine, B. V. 126 Roth, D. 101, 109, 112 Roueche, J. E. 303 Rouet, J. F. 197, 206 Ruddell, N. 107 Rupley, W. H. 98, 106, 107, 109, 111, 124
Rush, L. 118 Russell, D. 144 Russo, A. 251 Ryan, J. M. 123, 243, 254, 346 Rynearson, K. 120, 193 Said, S. H. 319 Salvatori, M. 146 Samson, G. M. 258, 258–259 Sandora, C. 99, 109 Sarason, S. B. 260, 267 Sarnacki, R. 243 Sartain, H. T. 104 Sattizahn, J. R. 260 Sawyer, R. 329 Saxon, D. P. 51, 52, 296 Scanlon, E. 106 Scherbaum, C. A. 248 Schiel, J. 329 Schirm, A. 284 Schiro, M. S. 89 Schoerning, E. 98 Schommer, M. 120, 125 Schommer-Aikens, M. 125 Schraw, G. 33, 120 Schreiber, J. B. 125 Schultz, K. 158 Schumaker, J. B. 253 Schumm, J. S. xiv, 279, 317, 355, 357 Schwartz, A. E. 106, 248 Schwarzer, R. 260 Schmitt, N. 104 Scott, J. 94, 98, 109, 111 Scott-Clayton, J. 27, 51 Scribner, S. 158 Searfoss, L. xvi Seftor, N. S. 284 Seifert, T. L. 200 Selfe, C. L. 77 Seybert, J. A. 51 Seymour, E. 285 Shaffer, D. W. 174 Shanahan, T. 29–30, 35–36, 90, 98, 104, 113, 122, 127, 143, 148, 153, 156, 193 Shanks, D. R. 120 Shatz, M. 245, 248 Shaughnessy, M. 145 Shearer, B. A. 107 Shen, L. B. 4–5 Sherk, J. K. 110 Sherman, D. C. 47 Shetron, T. H. 106 Shi, H. 64 Shinn, D. 125 Shirky, C. 81 Shoup, R. 64 Shulman, L. S. 149, 152, 331, 343 Sigler, E. 68 Silverman, N. xvi, 9
Sim, S. 252 Simonds, C. S. 82 Simonides 228 Simpson, M. L. xiii, 12, 31–32, 35–37, 87, 99–104, 107–111, 118, 121–122, 151, 297, 342 Simsek, A. 197 Sinatra, G. M. 126 Singer, H. xvi, 13, 18 Skalicky, J. 321 Skinner, B. F. 126 Slakter, M. J. 246 Sleeter, C. 70 Sloane, P. 109 Slocum, T. A. 103 Smetana, L. 107 Smith, N. B. 3–6, 36, 74, 144, 201, 247–248, 260, 264 Smith-Burke, M. T. 13 Snow, C. E. 98, 104, 118–119, 194 Snyder, T. D. 83 Soliday, M. 55, 157 Soria, K. 286 Spache, G. 3, 6 Spann, M. G. 43 Sperling, R. A. 32, 119, 120 Spielberger, C. D. 260 Spivey, N. N. 146 Springer, S. E. 90 Squire, J. R. 172 Stahl, N. A. xiii, 1–4, 11–19, 28–31, 37, 87–94, 99, 100–104, 107–108, 112, 120, 148, 341–342, 351 Stahlberg, D. 122 Staley, R. 32, 120 Steele, C. M. 261, 282 Stefanich, G. P. 29 Sternberg, R. J. 103 Stewart, P. W. 92, 120, 244 Sticht, T. G. 203 Stieglitz, E. L. 355 Stiggins, R. 331 Stokrocki, M. 82–83 Stoll, E. 106 Stoltz, D. F. 51 Straff, W. W. 4–5 Strang, R. 3, 16, 247 Strauss, R. 248 Street, C. 119, 191, 317 Strømsø, H. I. 125 Surber, J. R. 120 Svinicki, M. D. 34, 124 Swanson, H. L. 120 Swinning, E. 356 Swords, R. 223 Szal, R. J. 321 Taasoobshirazi, G. 122 Tagg, T. 217 Tan, A. Y. 252
Tangney, J. P. 34 Taraban, R. 193 Tate, G. 66 Taylor, E. A. 16, 158 Taylor, M. 64, 120, 122 Templeton, S. 101, 105 Thelin, J. R. 45 Therriault, D. J. 200 Thomas, J. D. 120, 284 Thorndike, E. L. 119, 242, 326, 336 Thornton, S. 64 Thurstone, E. L. 7 Tierney, R. 143, 148 Timbrell, N. 206 Tinker, M. 3 Tomlinson, L. M. 9 Toms, M. 320 Tosi, D. J. 267 Toth, C. 257 Townsend, B. K. 101, 104–105, 108, 287 Trainin, G. 120 Tran, K. 68 Triesman, P. U. 303 Trudeau, K. J. 252 Tryon, G. S. 260 Tseng, S. S. 202 Turnblow, K. 44 Turner, J. C. 35, 123–124 Twenge, J. M. xi Twiest, M. M. 266 Upcraft, J. N. 286 Vacca, R. 202, 334 Valeri-Gold, M. 16, 143 van Blerkom, D. L. 199, 205 van der Meer, J. 321 Van Etten, S. 119 Van Gilder, L. L. 14 Van Someren, Q. 78 Vandal, B. 55 Vesey, W. M. 331 Vie, S. 77 Volet, S. 123 Vygotsky, L. 119 Wafula, R. M. 70 Walker, M. M. J. 15, 18 Walsh, S. 110 Walveker, C. C. 12 Wang, S. Y. 74, 76, 81 Ward, L. 319 Wardle, E. 151–152, 158 Warfield, A. 80 Wark, D. 179, 263, 266–267 Warren, S. H. 32, 251 Wasson, B. 252 Wathington, H. D. 286–287 Watts-Taffe S. 111
Weber, E. S. 194 Weininger, E. B. 54 Weinstein, C. E. xiii, 124, 179, 194–197, 200, 209, 228–229, 231–233, 236–237, 298–299 Weis, R. 183 Wellman, L. 54 Wenger, E. 32 Wentzel, K. R. 123 Werner, L. S. 257 Wheeler, R. S. 223 White, W. 8 Whittaker, T. A. 245 Wibrowski, C. R. 124 Wiggins, G. 333 Wilde, A. 317 Wildemouth, B. 260 Wiley, J. 120 Wilkinson, T. J. 255, 258 Willett, M. B. 148 Williams, J. xiii, 87, 157 Williamson, G. L. 29 Willingham, D. T. 98, 197 Wills, T. W. 98–100 Wilson, S. M. 90, 200, 287 Winch, G. 118 Windisch, H. C. 62 Wine, J. 262 Wineburg, S. S. 33–34, 122, 125 Winne, P. H. 100, 120, 130, 232 Winters, F. I. 122 Wittrock, M. C. 118, 228 Wixson, K. K. 32, 119
Wolf, M. 174 Wolfe, C. R. 250 Wolsey, T. D. 107 Wolters, C. A. 119–120, 319 Wood, E. 98, 245 Woodle, K. S. 288 Woolwine, M. A. 317 Worka, R. 113 Wyatt, M. 7, 28, 45 Xie, Q. 245 Xu, D. 90 Yancey, K. B. 77 Yang, L. H. 120, 202, 220 Yates, C. M. 256 Yeh, H. C. 202 Yen, C. J. 234 Yokoi, L. 119 Yood, J. 143 Yosso, T. 66 Yu, S. L. 319 Zavattaro, S. 74 Zeidner, M. 260, 267 Zeruth, J. A. 124 Zhang, C. 257 Zhao, Y. 120 Zimbardo, P. G. 250 Zimmer, J. E. 17 Zimmer, J. W. 249 Zimmerman, B. J. 100, 193–194, 228, 233 Zlotlow, S. F 260
Subject Index
academic coaching 301–302 academic courses, bridge programs 294–300 academic environment, Model of Strategic Learning (MSL) 234 academic literacy learning strategies 197–199; rehearsal strategies 199 academic literacy tasks, digital texts 195–196 academic preparedness; assessing 184–185; defining 182–183; helping students increase 185–187; low literacy skills as risk factor 183–184; Think, Know, Act, and Go 182–183 Academic Success Inventory for College Students (ASICS) 319 acceleration, student assessment 328 Access at the Crossroads: Learning Assistance in Higher Education 10 ACCUPLACER 367–369 ACT, college-readiness benchmarks 43 active role in learning 109 administration of developmental reading courses 295–296 affective influences 34; comprehension 122–123 age, student diversity 62 algorithmic systems 203 ALP (Accelerated Learning Program) 300 American Physical Society (APS) Bridge Program 288 American Reading Instruction 3 annotation 130–131; elaborative strategies 201–208 answers, changing 247–248 anxiety, test taking 259–264 appropriation 78 assessment and evaluation, IRW (Integrated Reading and Writing) 157–158 Association for the Tutoring Profession (ATP) 300 authentic assessments 333 autobiographies 15–17 backward-benchmarking 340 Basic Reading and Writing (BRW) 143–145; San Francisco State model 146–147 basic reading skills, improving 186 beliefs about text, comprehension 124–125 bias, reading tests 354–355
biographies 15–17 bound morphemes 101 bridge programs 281–282; academic courses 294–300; Accelerated Learning Program (ALP) 300; coaching 300–302; Council for Opportunity in Education (COE) 288–289; Education Talent Search (ETS) 283; first-year college experience 286–287; Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP) 283; Health Careers Opportunity Program (HCOP) 288; high school to college 283–286; learning communities 305–306; management 306–307; mentoring 300–302; middle school to high school 283; peer cooperative programs 302–304; research 288–289; tutoring 300–302; two-year college to four-year college 287; undergraduate to graduate/ professional school 287–288 built-in exits 330 Chabot model (IRW) 147–148 Cicero 227 classroom assessment techniques (CATs) 326, 331–332 classroom practice, IRW (Integrated Reading and Writing) 155–157 coaching, test preparation 253–259 coaching programs 300–302 cognitive influences, comprehension 120–121 cognitive justice, facilitating 69 cognitive learning strategies 228–229 cognitive processing models 194–195 cognitive views of reading process 32 college entrance tests 328–329 college reading 6–7, 89–90 college reading and literacy courses 294–295 college remediation 42; current policies 45–47 college-level reading courses 297 college-readiness benchmarks, ACT 43 combinational systems 203 commercial reading tests 345–346, 367–380; evaluating 355–358 Common Core State Standards Initiative (CCSSI) 340 comprehensive motivation 34
comprehension 118, 133; affective influences 122–123; beliefs about text 124–125; cognitive influences 120–121; disciplinary influences 122; epistemological beliefs 125–126; explicit instruction 126–128; future research 133–134; metacognitive influences 119–120; model of domain learning 121; motivation 123–124; sociological influences 118–119; strategies 128–132; theoretical rationale 118–126 Computer Assisted Design (CAD) 92 concept mapping 129–130 content knowledge, reading-writing connection 149–152 contemporary ability to identify as authors 79–80 contemporary reading skills and abilities 78–79 contemporary writing skills and abilities 79 context, vocabulary 109 contextual analysis, vocabulary 102–103 corequisite remediation, student assessment 327–328 Council for the Advancement of Standards in Higher Education (CAS) 289–290, 307–308 Council for Opportunity in Education (COE) 288–289 course development, IRW (Integrated Reading and Writing) 153–154, 154–155 criterion-referenced tests 345 critical (college-level) reading courses 297 critical thinking 78 CSU Executive Order 665, 49–51 Culturally Relevant Pedagogy (CRP) 61 culturally responsive teaching, linguistic diverse students 219–220 curricular structures, reading tests 353 curriculum and instruction knowledge, reading-writing connection 152–158 Degrees of Reading Power (DRP) Online 369–373 demographics, test-wiseness 243–246 developmental education 8–11 developmental education, outcomes 51–52 developmental education; assessment and placement 49–51; future research 57–58; location 54; politicization 47–54; recommendations 56–57; reforms 55 Diagnostic Assessment and Achievement of College Skills 184–185 diagnostic tests 344 dictionary definitions 102 differentiated instruction 317 digital divide 75–76 digital glossing; elaborative strategies 202; rehearsal strategies 199 digital literacies 75–77 digital natives 74 digital storytelling 80 digital texts; academic literacy tasks 195–196; rehearsal strategies 199
diglossia 69–70 E-generation 74 Education Talent Search (ETS) 283 elaborative strategies 131–132, 200–208, 229–230 Elaborative Verbal Rehearsal (EVR) 132 emerging scholar program 303–304 English as a Second Language (ESL) 215–219 English/Language Arts Standards 340 epistemological beliefs, comprehension 125–126 e-portfolios 80 era studies 12–13 Errors and Expectations: A Guide for the Teacher of Basic Writing 145 ethnicity, student diversity 65–66 exit strategies, student assessment 329–331 exit strategies, built-in exits 330 explicit instruction, comprehension 126–128 extended orientation 286 Facts, Artifacts and Counterfacts 145 FairTest 297 federal grant programs (TRIO) 304–305 financial literacy classes 70 first-year college experience bridge program 286–287 First-Year Experience (FYE) program 286 formal reading tests 344–345 formative student assessment 331–333 free morphemes 101 Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP) 283 games; The Legend of Zelda 169; reading games as text 171–172; Scratch 173; The Settlers of Catan 169; SimCity 172; Tellagami 173; Tetris 172; Twine 173; using to teach reading 168–175 Gates-MacGinitie Reading Tests 373–376 gender, student diversity 64 generative note-taking 200
Subject Index
Graduate Record Examination (GRE) 242 group reading tests 345; evaluating 355–358 Health Careers Opportunity Program (HCOP) 288 heuristic systems 203 high school to college bridge programs 283–286 high-risk courses 303 high-risk students 303 historical research 18–20 historical study resources, literacy instruction 3–6 homegrown reading tests 345–346 Hooked on Books: Programs and Proof 145 How to Read a Page 145 hyperlinks 197 identity, student diversity 66–67 impactful practice, IRW (Integrated Reading and Writing) 159–160 Improving Student Learning Skills 7–8 individual reading tests 345 informal reading inventories (IRIs) 344–345 informal reading tests 344–345 informed acceleration, IRW (Integrated Reading and Writing) 159 informed role in learning 109 institutional histories 13–15 instruction and curriculum knowledge 152–158 instructional terrain of college reading 31–35 instrumental hypothesis 104 Integrated Reading and Writing (IRW) 143–144; assessment and evaluation 157–158; Chabot model 147–148; classroom practice 155–157; course development 153–154; course technologies 154–155; historical background 144–148; impactful practice 159–160; implications for practice 158–159; informed acceleration 159; Pittsburgh model 145; recommendations for future research 159–160 intensive instruction, vocabulary 110 International Mentor Training Program Certification (IMTPC) 301–302 Interpretive Biography 15 Isakson Survey of Academic Reading Attitudes (ISARA) 316 Journal of Developmental Education 15–16 key information, isolating 130 keyword studies 103–104 knowledge hypothesis 104 L2 students; challenges 215–219; culturally responsive teaching 219–220; linguistically informed feedback 220–221, 223; nonmainstream varieties of English 217–219; support strategies 219–223 language, student diversity 64–65
Learning and Study Strategies Inventory (LASSI) 236–237, 319 learning assistance 7–8 learning communities 305–306 learning disabilities, college students 184 learning strategies 227, 237; assessments 235–236; cognitive 228–229; history 227–228; Learning and Study Strategies Inventory (LASSI) 236–237; MSL (Model of Strategic Learning) 230–234; strategy instruction 234–235 Legend of Zelda, The 169 LGBTQIA, student diversity 66–67 life history 18 linguistically diverse students 215–216; challenges 215–219; culturally responsive teaching 219–220; future research 223; linguistically informed feedback 220–221; nonmainstream varieties of English 217–219; support strategies 219–223 linguistically informed feedback, linguistically diverse students 220–221, 223 literacy and college career preparedness 184 literacy communities, purposeful studying 191–193 literacy courses 294–295 Literacy Information and Communication System (LINCS) Community of Practice 187 literacy instruction; historical study resources 3–6; historical summaries 6–11; time line 6–11 Lives on the Boundary 16 local history 17–18 low literacy skills as risk factor, academic preparedness 183–184 mandatory reading tests 352–353 mapping 203 mentoring programs 300–302 metacognitive influences, comprehension 119–120 metacognitive processes, prior knowledge 193 metacognitive reading processes 32–34 middle school to high school bridge programs 283 mind-at-work 326 model of domain learning, comprehension 121 Model of Strategic Learning (MSL) 227–234, 237; academic environment 234; self-regulation 233–234; skill 230–232; will 232–233 morphemic analysis 100 Motivated Strategies for Learning Questionnaire (MSLQ) 319 motivation, comprehension 123–124 multifaceted instruction 31; affective 34; beyond heuristics 34–35; cognitive 32; metacognitive 32–34; social 31 multiple document literacies 197 multiple documents, reading online 206
multiple measures assessment 327 multiplicity 125 National Assessment of Educational Progress (NAEP) 330–331 Nation’s Report Card 330–331 nearby history 17–18 negintertextuality 78 negotiation skills 78 Nelson-Denny Reading Test (NDRT) 334, 376–380 New Directions for College Learning Assistance 7 new perspectives, adopting 83 Node Acquisition and Integration Technique (NAIT) 106 nonmainstream varieties of English 217–219 normative conditions, reading tests 347 norm-referenced reading tests 345 oral histories 15–17 organizational histories 13–15 organizing strategies; comprehension 129; mapping 203 Partnership for Assessment of Readiness for College and Careers (PARCC) 354 passage dependency, reading tests 350–351 Pedagogy of the Oppressed 145 peer cooperative programs 302–304 peer mentoring 301–302 peer tutoring 301 peer-led team learning (PLTL) 304 performance assessments 333 Philosophy of Literary Form, The: Studies in Symbolic Action 145 pioneers 15 Pittsburgh model (IRW) 145 placement measures, student assessment 327 placement tests 328–329 policies; developmental education recommendations 56–57; developmental education research 57–58 policymakers 42; past references 45 politicization, developmental education 47–54 practitioners 15 Predict, Locate, Add, and Note (PLAN) 205 Predict, Organize, Rehearse, Practice, Evaluate (PORPE) 205 Pre-Plan, List, Activate, Evaluate (PLAE) 205 Process of Education, The 145 professional associations 3 program assessment 315; evaluating improvement 321–322; interpreting statistics 321; language 315–316; long-term effectiveness 321–322; matching to instruction 320; qualitative 316–317; quantitative 316–317; scheduling 320–321; study strategy 318–320; supplemental instruction 321
program management 293–294, 306–307 psychometric properties, reading tests 347–351 purposeful studying, literacy communities 191–193 qualitative program assessment 316–317 quantitative program assessment 316–317 race, student diversity 65–66 reading, using games to teach 168–175 reading classes, necessity 29 reading comprehension 118, 133; affective influences 122–123; beliefs about text 124–125; cognitive influences 120–121; disciplinary influences 122; epistemological beliefs 125–126; explicit instruction 126–128; future research 133–134; improving 186; metacognitive influences 119–120; model of domain learning 121; motivation 123–124; sociological influences 118–119; strategies 128–132; theoretical rationale 118–126 reading difficulties 183; academic preparedness 183 reading multiple documents online 206 “Reading Paired with History” 37 reading tests 340–358; bias 354–355; commercial 345–346; commercially available 367–380; criterion-referenced 345; curricular structures 353; decision-making for selection 343; evaluating commercial group 355–358; formal 344–345; group 345; high-stakes 351; homegrown 345–346; individual 345; informal 344–345; mandatory and voluntary 352–353; matching to purpose 346; norm-referenced 345; placement 354; psychometric properties 347–351; purposes and philosophies 341–343; self-placement 352–353; single-measure protocols 353; survey and diagnostic 344; system/state mandates 354 reading-writing connection 143; content knowledge 149–152; curriculum and instruction knowledge 152–158; theoretical knowledge 148–149 reforms, developmental education 55 rehearsal strategies 199 relativism 125 reliability conditions, reading tests 347–348 reliability considerations, reading tests 350–351 remediation 42 right-to-fail model 353 Robinson’s Survey Q3R (SQ3R) 11 San Francisco State model (BRW) 146–147 SAT (Scholastic Aptitude Test) 242 scheduling program assessments 320–321 Scratch 173 self-placement reading tests 352–353 self-regulated
learning 193–194; academic literacy learning strategies 197–207; academic literacy tasks 195–197; MSL (Model of Strategic Learning) 233–234 self-schemas 34
Settlers of Catan, The 169 sex, student diversity 64 sexual orientation, student diversity 66–67 SimCity 172 Simonides 227 single-measure protocols, reading tests 353 skill, MSL (Model of Strategic Learning) 230–232 skills, test taking 242–254 Skinner, B.F. 126 Smarter Balanced Assessment Consortium (SBAC) 354 social media 74; advancing agenda 83–84; changing natures of texts 78–79; critical thinking 78; digital divide 75–76; digital literacies 75, 77; digital storytelling 80; e-portfolios 80; social networking sites 81–82; teaching and learning 76–77; virtual worlds 82–83; weblogs 80–81 social networking sites 81–82 social proactive perspective 29–32 sociological influences, comprehension 118–119 spirituality, student diversity 67–68 SQ3R 203–205 state of the art, defining 380–383 statistics, program assessment 321 stereotype threat 282 strategic learning 227, 237; assessments 235–236; cognitive 228–229; history 227–228; LASSI (Learning and Study Strategies Inventory) 236–237; MSL (Model of Strategic Learning) 230–234; strategy instruction 234–235 strategic study-reading; future research 208–209; practice 206–208; purposeful studying 191–193 strategies, comprehension 128–132 structured learning assistance (SLA) 303 student assessment; acceleration 328; college entrance/placement tests 328–329; corequisite remediation 327–328; exit strategies 329–331; formative 331–333; fostering learning 334–336; implications for practice 336–337; multiple measures assessment 327; placement measures 327; purposes 326; research 336; summative 333–334 student demographics, test-wiseness 243–246 student diversity; age 62; disability 62–63; first-generation students 63–64; future research 69–70; gender 64; identity 66–67; implications for practice 68–69; language 64–65; LGBTQIA 66–67; race/ethnicity 65–66; sexual orientation 66–67; spirituality 67–68 student learning outcomes (SLOs) 315 student success courses 297–299 Student Support Services (SSS) 282 student-centered instructional
approaches 35 Study Behaviors Inventory (SBI) 319 study strategies 227; program assessment 318–320 subject-matter knowledge 149–151 summarizing, elaborative strategies 202–203 summative student assessment 333–334
supplemental instruction 303 survey and diagnostic tests 344 synonyms 102 Teaching the Universe of Discourse 145 Tellagami 173 terrain of college reading 27; instructional 31–35; theoretical 29–31 test preparation 241–242; anxiety 259–264; coaching 253–259; practice 265–267 test taking 241–242; 21st century approaches 249–252; changing answers 247–248; recognizing cues 246–247; retesting 249; skills 242–254, 252–253; test anxiety 259–264; test-wiseness 242–254 tests; college entrance 328–329; commercial reading tests 367–380; placement 328–329; reading 340–358; technology 330 Tetris 172 texts, changing nature 78–79 theoretical knowledge 148–149 theoretical rationale, reading comprehension 118–126 theoretical terrain of college reading 29–31 Think, Know, Act, and Go 182–183 topical studies 11–13 transfer shock 287 transmedia navigation 78 TRIO programs 282, 283–286, 304–305; COE (Council for Opportunity in Education) 288–289; SSS (Student Support Services) 294 Tutor Matching Service 301 tutoring programs 300–302 Twine 173 two-year college to four-year college bridge program 287 undergraduate to graduate/professional school bridge program 287–288 underlining/highlighting, rehearsal strategies 199 Upward Bound (UB) program 284 Valid Assessment of Learning in Undergraduate Education (VALUE) 322 validity conditions, reading tests 348–350 virtual worlds 82–83 visual organizers, vocabulary 105–106 vocabulary; academic strategies 104; analyzing extant programs and practices 112; contextual analysis 102–103; development 186; development and instruction 98–100; dictionary definitions 102; emphasizing active and informed role in learning process 109; encouraging students 111–112; instructional recommendations 108–111; intensive instruction 110; keyword studies 103–104; measuring knowledge 99; morphemic analysis 100–101; ongoing feedback to publishers 112; student input on selection 107; student-centered
instructional approaches 106; studies on traditional knowledge approaches 100; synonyms 102; teaching from context 108–109; technology 107–108; useful research 113; visual organizers 105–106 Vocabulary Overview Guide (VOG) 105–106 voluntary reading tests 352–353
weblogs 80–81 writing skills, improving 186–187 writing-reading connection 143; content knowledge 149–152; curriculum and instruction knowledge 152–158; theoretical knowledge 148–149