Researching and Teaching Second Language Writing in the Digital Age Mimi Li
"This informative monograph makes a much-needed contribution to L2 writing scholarship, advancing the field toward a comprehensive understanding of the nature of L2 writing in the digital age. Written in a clear and cogent style, the book provides valuable insights for L2 writing researchers and practitioners across different contexts from around the world."
—Icy Lee, The Chinese University of Hong Kong

"This book is a very welcome and timely addition to the field of L2 writing. It discusses in a clear and accessible manner how rapidly changing technologies have impacted L2 writing practices. It promises to be an invaluable resource for L2 writing scholars wishing to understand and take advantage of the teaching and research opportunities created by these new technologies."
—Neomy Storch, University of Melbourne, Australia

"Li provides a comprehensive and well-organized overview of how writing is changing in the digital age and the implication of these changes for second language teaching and research. This is a valuable resource for both scholars and educators interested in second language writing."
—Mark Warschauer, University of California Irvine, USA
Mimi Li Department of Literature and Languages Texas A&M University–Commerce Commerce, TX, USA
ISBN 978-3-030-87709-5        ISBN 978-3-030-87710-1 (eBook)
https://doi.org/10.1007/978-3-030-87710-1
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Cover illustration: Piranka/Getty Images

This Palgrave Macmillan imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
This book is dedicated to my parents, who instilled virtues in me and ignited my passion for learning and teaching
Acknowledgments
This very first book of mine would not have been possible without the support and help from many people. I owe a great debt of gratitude to my academic advisers, particularly Dr. Wei Zhu who has conscientiously trained me in second language research and enlightened my path ahead in the field of second language writing (SLW). Her continuing guidance, insights, and encouragement fuel my professional growth. I am extremely grateful to my two doctoral students Miriam Akoto and John Gibbons for sharing their tremendous passion for SLW scholarship and working diligently on illustrating previous research in the form of tables for this book. I would like to express my utmost appreciation to the anonymous reviewer for many constructive comments on the earlier version of the book. I also immensely thank Dr. Neomy Storch, Dr. Icy Lee, and Dr. Mark Warschauer for reading this book and writing endorsements. Moreover, I am very much thankful for receiving the AHSS Research Grant to support this book project from my institution Texas A&M University–Commerce (TAMUC). My heartfelt appreciation also goes to my colleagues Dr. Lucy Pickering, Dr. Salvatore Attardo, Dr. Hunter Hayes and others at TAMUC for their faith in me and constant support
in my research efforts. Furthermore, my deepest thanks go to my good friends Dr. Julie Dell-Jones and Dr. Jinrong Li, who have been my brilliant consultants, sounding boards, and cheerleaders over these years. Finally, I am extremely indebted to my husband and soul mate Dr. Fengming Wang for his wisdom, inspiration, and devotion, and two sons Jeremy and Oliver for sparkling up my life’s journey.
Contents

1 Introduction
    The Aims of the Book
    The Structure of the Book
    References
2 New Landscape of L2 Writing and Theoretical Frameworks
    New Landscape of L2 Writing
    Theoretical Frameworks Informing Research
    References
3 Computer-Mediated Teacher Feedback
    Introduction
    Key Texts
    Research Directions
    Teaching Recommendations
    References
4 Computer-Mediated Peer Response
    Introduction
    Key Texts
    Research Directions
    Teaching Recommendations
    References
5 Digital Multimodal Composing
    Introduction
    Key Texts
    Research Directions
    Teaching Recommendations
    References
6 Computer-Mediated Collaborative Writing
    Introduction
    Key Texts
    Research Directions
    Teaching Recommendations
    References
7 Automated Writing Evaluation
    Introduction
    Key Texts
    Research Directions
    Teaching Recommendations
    References
8 Corpus Analysis and Corpus-Based Writing Instruction
    Introduction
    Key Texts
    Research Directions
    Teaching Recommendations
    References
9 Resources
    Books
    Journals
    Professional Conferences and Association SIGs
    Webinars
    Websites and More
10 Conclusion
    Recap of the Content
    Limitations
    Concluding Remarks
Index
About the Author
Dr. Mimi Li is a faculty member in the Department of Literature and Languages at Texas A&M University–Commerce (TAMUC), where she teaches courses in linguistics/applied linguistics and advises graduate students. Dr. Li earned her Ph.D. in Second Language Acquisition/Instructional Technology from the University of South Florida. She worked at Marshall University and then Georgia Southern University prior to her appointment at TAMUC. Dr. Li is an enthusiastic educator who has taught a diverse student population for twenty years. Her research areas focus on second language writing and computer assisted language learning. She has conducted research projects on online collaborative writing, computer-mediated teacher/peer feedback, multimodal composing, game-based vocabulary learning, and multimodal pedagogy in teacher education. Her work has appeared in Journal of Second Language Writing (JSLW), Computer Assisted Language Learning, Language Learning & Technology (LLT), Language Teaching, System, Computers & Education, Computers and Composition (CC), Journal of Computers in Education, and International Journal of Computer-Assisted Language Learning and Teaching (IJCALLT), among
others. She received the JSLW 2016 Best Article Award for the publication on wiki-based collaborative writing. She co-edited the 2017 special issue on “Second language writing in the age of computer mediated communication” and is co-editing the 2022 special issue on “L2 writing assessment in the digital age” for JSLW. Dr. Li serves on the Editorial Boards of LLT, JSLW, CC, and IJCALLT.
List of Figures

Fig. 10.1  Interaction of teacher, students, and technology in digital L2 writing classrooms
Fig. 10.2  Contents covered in this book
List of Tables

Table 3.1  Research matrix of computer-mediated teacher feedback
Table 3.2  Research timeline of computer-mediated teacher feedback
Table 3.3  Representative technologies for computer-mediated teacher feedback
Table 4.1  Research matrix of computer-mediated peer response
Table 4.2  Research timeline of computer-mediated peer response
Table 4.3  Representative technologies for computer-mediated peer response
Table 5.1  Research matrix of digital multimodal composing
Table 5.2  Research timeline of digital multimodal composing
Table 5.3  Representative technologies for digital multimodal composing
Table 6.1  Research matrix of computer-mediated collaborative writing
Table 6.2  Research timeline of computer-mediated collaborative writing
Table 6.3  Representative technologies for computer-mediated collaborative writing
Table 7.1  Research matrix of automated writing evaluation
Table 7.2  Research timeline of automated writing evaluation
Table 7.3  Representative automated writing evaluation systems/tools
Table 8.1  Research matrix of corpus analysis and corpus-based writing instruction
Table 8.2  Research timeline of corpus analysis and corpus-based writing instruction
Table 8.3  Representative corpora and writing analysis tools
1 Introduction
The Aims of the Book

In the digital age, we are increasingly engaged with writing in multimediated, electronic, and collaborative environments. New technologies have revolutionized the ways in which we communicate and construct knowledge, and technological development has added layers of complexity by dramatically changing the landscape of second language (L2) writing (Li & Storch, 2017). As Hyland (2016) posits, computer-mediated communication (CMC) technologies have impacted "the ways we write, the genres we create, the authorial identities we assume, the forms of our finished products, and the ways we engage with readers" (p. 40). New forms of writing tasks continue to emerge in L2 contexts, such as digital multimodal composing (e.g., Hafner, 2013; Jiang et al., 2020; Shin et al., 2020) and computer-mediated collaborative writing (e.g., Kessler & Bikowski, 2010; Li & Kim, 2016; Li & Zhu, 2013; Yim & Warschauer, 2017). Meanwhile, scaffolding and interaction afforded by new technologies bloom in L2 writing classes, including teachers' multimodal feedback (e.g., Cunningham, 2019; Elola & Oskoz, 2016), online peer feedback (e.g., Li & Li, 2017; Liou & Peng, 2009),
automated writing evaluation (e.g., Li et al., 2015; Zhang, 2020), and mediation of corpus tools at different stages of the writing process (e.g., Flowerdew, 2015; Larsen-Walker, 2017).

Despite the rapid technological changes, our knowledge of the nature of writing that takes place in the digital environment and the impact of the environment on writing process, texts, and L2 writing development (involving the transformational roles new technologies may have) is still fairly limited. Researching and Teaching Second Language Writing in the Digital Age, therefore, aims to provide extensive and up-to-date coverage of the current themes in this promising field, and to guide L2 scholars in undertaking new L2 writing research and instructional practice in the digital era. This volume is intended as an accessible and comprehensive guide for L2 researchers and teachers as they pursue research and pedagogical inquiries on writing in the digital age. The book discusses six up-to-date, dynamic areas of technology-mediated L2 writing (see "The Structure of the Book" for details), synthesizes relevant literature, and provides both research and pedagogical insights.

It should be noted at the outset that the term "'second language' is itself ambiguous" (Hyland, 2019, p. 2), and it can involve a wide array of learners with different backgrounds. This book focuses on L2 learners who systematically study a second or foreign language in formal educational settings, and it centers on the instructional use of technology in teaching L2 writing.

Regarding the selection of scholarly work for this book, I conducted a literature search via Google Scholar by inputting key words pertinent to each of the six areas. I do not aim to provide a comprehensive review of the current body of literature; rather, I select articles addressing the six key areas, published in peer-refereed journals with relatively high impact (CiteScore > 2.0) over the recent decade. The selected articles for each area cover a wide range of research themes and educational contexts, and the synthesis of these studies suggests recent trends in research practices in the specific areas, based on which potential avenues are expected to be explored in future research.

This book aims to keep L2 scholars abreast of current developments in L2 writing and technology and to enable them to explore novel issues of their interest. Specifically, one goal of the book is to encourage more researchers to further examine complex emerging issues addressing L2
writing in the digital age. The other goal is to motivate and inspire language practitioners to implement effective writing activities using diverse new computer technologies. With the nexus between research and practice, L2 researchers and instructors are expected to make collective efforts to discover optimal pedagogical practices to best serve our digital student writers.
The Structure of the Book

This book consists of ten chapters. Following the introduction in this chapter, chapter "New Landscape of L2 Writing and Theoretical Frameworks" presents the new landscape of writing pedagogy arising from the affordances of CMC technologies. It also explains theoretical frameworks across multiple disciplines that have informed previous research on L2 writing and technology and that underlie the topics I will discuss in the following chapters. Chapters "Computer-Mediated Teacher Feedback" through "Corpus Analysis and Corpus-Based Writing Instruction" map out six main research areas that have gained momentum in the research domain of L2 writing: (1) computer-mediated teacher feedback; (2) computer-mediated peer response; (3) digital multimodal composing; (4) computer-mediated collaborative writing; (5) automated writing evaluation; and (6) corpus analysis and corpus-based writing instruction. Each chapter is composed of four subsections: introduction, key texts, research directions, and teaching recommendations. After introducing the definition of and rationales behind each research area, each of the six chapters explains the research strands for the specific area and reviews representative research articles, supplemented with illustrative tables. After pinpointing the research gap, each chapter concludes with suggestions for future research and pedagogical practices, including a discussion of state-of-the-art technology tools.

Specifically, chapter "New Landscape of L2 Writing and Theoretical Frameworks" presents a new landscape of L2 writing pedagogy reflected in new writing tasks, new forms of writing feedback/assessment, innovative approaches to teaching writing, and the emergence of digital literacies and new identities. It also takes account
of theoretical frameworks informing previous research across multiple disciplines, including L2 acquisition, sociocultural theory, computer-mediated collaborative learning, L2 writing, and linguistics.

Chapter "Computer-Mediated Teacher Feedback" discusses teacher feedback in CMC contexts, involving different modes (i.e., written, audio, and video) using technology tools (e.g., MS Word and Screencast-O-Matic). Chapter "Computer-Mediated Peer Response" focuses on online peer response and probes how various technologies (e.g., MS Word, Turnitin PeerMark, online chatting tools) afford computer-mediated peer review and what effects online peer review has on writing development. Chapter "Digital Multimodal Composing" illustrates the nature of digital multimodal writing and explores the role of digital multimodal composing in L2 writing instruction. Chapter "Computer-Mediated Collaborative Writing" discusses computer-mediated collaborative writing using technology tools (e.g., wikis and Google Docs) and examines how technologies afford collaborative writing and how computer-based collaborative writing influences learners' writing outcomes in L2.

The next two chapters highlight the major role of technology tools in writing assessment and instruction. Chapter "Automated Writing Evaluation" focuses on automated writing evaluation (AWE) using tools such as Criterion, My Access!, and Grammarly. Chapter "Corpus Analysis and Corpus-Based Writing Instruction" addresses data-driven learning (DDL) of academic writing/research writing using corpus applications (e.g., the concordance tools of AntConc and COCA). Chapter "Resources" catalogues main resources on L2 writing in the digital age, such as books, journals, websites, and webinars, with the aim of enriching L2 practitioners' and researchers' knowledge and motivating them to further explore digital L2 writing practices in their own educational contexts.
Chapter “Conclusion,” the last chapter, summarizes what has been covered in this book, addresses the limitations, and restates the goals of the book.
References

Cunningham, K. J. (2019). Student perceptions and use of technology-mediated text and screencast feedback in ESL writing. Computers and Composition, 52, 222–241.
Elola, I., & Oskoz, A. (2016). Supporting second language writing using multimodal feedback. Foreign Language Annals, 49(1), 58–74.
Flowerdew, L. (2015). Using corpus-based research and online academic corpora to inform writing of the discussion section of a thesis. Journal of English for Academic Purposes, 20, 61–82.
Hafner, C. A. (2013). Digital composition in a second or foreign language. TESOL Quarterly, 47, 830–834. https://doi.org/10.1002/tesq.135
Hyland, K. (2016). Teaching and researching writing (3rd ed.). Routledge.
Hyland, K. (2019). Second language writing (2nd ed.). Cambridge University Press.
Jiang, L., Yang, M., & Yu, S. (2020). Chinese ethnic minority students' investment in English learning empowered by digital multimodal composing. TESOL Quarterly, 54(4), 954–979.
Kessler, G., & Bikowski, D. (2010). Developing collaborative autonomous learning abilities in computer mediated language learning: Attention to meaning among students in wikispace. Computer Assisted Language Learning, 23(1), 41–58.
Larsen-Walker, M. (2017). Can data driven learning address L2 writers' habitual errors with English linking adverbials? System, 69, 26–37.
Li, J., Link, S., & Hegelheimer, V. (2015). Rethinking the role of automated writing evaluation (AWE) feedback in ESL writing instruction. Journal of Second Language Writing, 27, 1–18.
Li, M., & Kim, D. (2016). One wiki, two groups: Dynamic interactions across ESL collaborative writing tasks. Journal of Second Language Writing, 31, 25–42.
Li, M., & Li, J. (2017). Online peer review using Turnitin in first-year writing classes. Computers and Composition, 46, 21–38.
Li, M., & Storch, N. (2017). Second language writing in the age of CMC: Affordances, multimodality, and collaboration. Journal of Second Language Writing, 36, 1–5. https://doi.org/10.1016/j.jslw.2017.05.012
Li, M., & Zhu, W. (2013). Patterns of computer-mediated interaction in small writing groups using wikis. Computer Assisted Language Learning, 26(1), 62–81.
Liou, H. C., & Peng, Z. Y. (2009). Training effects on computer-mediated peer review. System, 37(3), 514–525.
Shin, D. S., Cimasko, T., & Yi, Y. (2020). Development of metalanguage for multimodal composing: A case study of an L2 writer's design of multimedia texts. Journal of Second Language Writing, 47, 100714.
Yim, S., & Warschauer, M. (2017). Web-based collaborative writing in L2 contexts: Methodological insights from text mining. Language Learning & Technology, 21(1), 146–165.
Zhang, Z. V. (2020). Engaging with automated writing evaluation (AWE) feedback on L2 writing: Student perceptions and revisions. Assessing Writing, 43, 100439.
2 New Landscape of L2 Writing and Theoretical Frameworks
New Landscape of L2 Writing

Scholarly interests in L2 writing have evolved considerably over the decades (Manchón, 2017). The advancement of Web 2.0 tools has brought new challenges as well as new opportunities to L2 literacy practices. The extended conceptualization of literacy, often referred to as digital literacies, encompasses the appropriate use of new computer-mediated communication (CMC) tools for writing in the technology-driven world (Thorne & Reinhardt, 2008). Digital literacies entail "a high level of conceptual mastery," including "using language in combination with other semiotic resources for communication, entering into relationships with new kinds of audiences, and constructing new kinds of identities" (Hafner, 2013, p. 830).

The development of digital technologies has led to new forms of writing practices and allowed for scaffolding and collaboration on an unprecedented scale. The new landscape of L2 writing features new writing tasks, new forms of writing feedback/assessment, new approaches to teaching writing, and the emergence of new literacies and new identities. I discuss these features in the following sections.
New Writing Tasks

Tasks are fundamental in L2 writing classrooms as they determine the learning experiences and writing activities that learners engage in (Hyland, 2019). With the proliferation of CMC tools and social software, the writing field has witnessed radical changes in the notions of audience, authorship, and writing per se. Writing can be reconceptualized as multimodal composing, which enables writers to deploy multiple semiotic resources (e.g., linguistic, visual, audio, gestural, spatial) to construct meaning and engage the audience. The advancement of Web 2.0 technologies (e.g., blogs, wikis, and social networking applications) facilitates the emergence of new genres: Web genres. Images and sounds have increasingly replaced texts in web genres, which coincides with the visual turn in writing studies: seeing texts as visual and treating images as texts (Purdy, 2014). Echoing the visual turn, "the way of writing representation has been shifting from purely verbal to visual in a wide range of informative, persuasive, and entertainment texts" (Li & Storch, 2017, p. 1).

Diverse multimodal writing task forms are increasingly implemented in L2 classrooms, such as video projects, digital stories, multimodal presentations, and brochures/posters (Zhang et al., 2021). For example, in Hafner's (2013) study, students in an English for Specific Purposes (ESP) class worked in small groups and created a digital scientific documentary, shared through YouTube. Smith et al. (2017) deployed a curriculum-based project in which secondary school students created voice-embedded PowerPoint presentations. In Yang et al.'s (2020) study, students designed and created multimedia-rich stories using Prezi and Google Drive.

Web genres (e.g., blogs, wikis, and fanfictions) have connected people worldwide (Li & Storch, 2017) and allowed for more interactive participation in creating and sharing information (Warschauer & Grimes, 2007). The new technologies encourage collaboration in multiple writing processes, which may challenge traditional notions of authorship and authority. Specifically, blogs promote interactions between authors and audience but keep strong authorial voices, while Google Docs/wikis blur the distinction between authors and audience and encourage high levels
of author-audience interactivity (Li & Storch, 2017; Vandergriff, 2016). In particular, computer-mediated collaborative writing, as an effective pedagogical practice, has been widely implemented in L2 contexts. L2 learners in pairs/small groups, drawing on technology tools, negotiate meaning and make joint decisions throughout the writing process and produce a single text with shared responsibility and co-ownership (Li, 2018; Storch, 2002). For example, in Mak and Coniam's (2008) study, secondary school students worked in small groups asynchronously on wikis and collaboratively created a school brochure, which was later distributed to their parents. The college students in Cho's (2017) study engaged in a synchronous collaborative writing project using Google Docs and text/voice chats, in which they jointly summarized what had been discussed in their debate club. Such new tasks create a wider audience, promote self-expression, and enhance interaction and community building.
New Forms of Writing Feedback/Assessments

Writing feedback from both teachers and peers has been regarded as an essential practice in L2 writing classes for decades. CMC technologies have been expanding the sites for teachers and peers to provide writing feedback. Feedback via electronic files, discussion forums, and social networking sites is becoming a norm in the digital age (Elola & Oskoz, 2017). Writers can receive more timely and varied modes of feedback from others, which thus influences their writing products.

The electronic teacher/peer feedback takes on two modes: asynchronous CMC and synchronous CMC. Ene and Upton (2018) implemented both asynchronous teacher electronic feedback (using Word doc comments and track changes) and synchronous teacher feedback (using chats) in their EAP classes. Students were found to have significantly higher uptake from Word doc feedback than from chat sessions. Other than the widely used monomodal feedback (i.e., written feedback), multimodal feedback, relying on audio and visual modes (e.g., voice commenting, screencast feedback), has started to capture instructors' and researchers' attention. For instance, Cunningham (2019) utilized both text feedback (using
Word doc) and screencast feedback (using TechSmith's Snagit) in college intensive English classes, and the results showed that students generally preferred screencast feedback to written feedback because of its awareness-raising capacity and clarity.

Along the same line, online peer feedback is facilitated by the development of new technologies. Emerging forms of peer response soar, in contrast to the traditional electronic peer feedback provided via Word docs. To take an example, Li and Li (2017) implemented peer review activities using Turnitin PeerMark in first-year composition classes, and the students commented on the ease and effectiveness of Turnitin PeerMark as a peer review tool due to its distinctive features (e.g., composition marks, commenting bubbles, PeerMark questions).

Moreover, automatic computer-generated feedback, commonly called automated writing evaluation (AWE), has played an important role in writing instruction. AWE systems commonly provide individualized feedback in multiple writing areas (e.g., grammar, usage, mechanics, style) and also generate automatic scores. For instance, in Zhang's (2020) study, EFL college students used a relatively new AWE system named Pigai to revise their writing. Different from some other AWE systems, Pigai offers corrective feedback on collocations as well as positive feedback on good expressions in students' essays. The results revealed that the students used AWE feedback not only to correct language errors but also to expand their linguistic knowledge.
New Approaches to Teaching Writing

Corpora have been developed as resources for writers to access examples of authentic language use in collections of electronic texts (Li et al., 2020). Drawing on corpora, students observe and analyze linguistic conventions well established in their academic disciplines, a practice known as data-driven learning (DDL). Corpus-based instruction, using a learner-centered self-discovery approach, enables L2 learners to explore specific genres by means of corpus analysis in English for academic writing classes (Cortes, 2007). Students enhance
language awareness of a specific academic register reflected in lexicogrammatical features and conventional rhetorical organizations (Biber & Conrad, 2009). To take an example, Cotos et al. (2017) designed the Research Writing Tutor (RWT), containing "an English language corpus annotated to enhance rhetorical input, a concordancer that was searchable for rhetorical functions, and an automated writing evaluation engine that generated rhetorical feedback" (p. 104). They found that technology-mediated corpora fostered L2 novice writers' learning of genre conventions and helped enhance the rhetorical, formal, and procedural aspects of their genre knowledge. Online corpus applications broaden the opportunities for learning the genre of academic/research writing. Also, AWE tools provide timely feedback on multiple writing aspects and create great potential for language and writing development.
Emergence of Digital Literacies and New Identities

In the digital age, literacy goes far beyond the ability to read and write. "Literacies" are redefined as social practices that are fluid, sociocultural, multimodal, and dynamic (Chen, 2013). Digital literacies are not limited to technical competencies; they highlight the agency of practitioners engaging in meaning making through deploying multimedia resources. In L2 learning contexts, students search for and critically evaluate online information, create multimodal texts, remix online texts, and interact with others in the online learning community. It is now known that literacy practice exists in a social context, which "includes shared understandings, ideologies and social identities as well as the social rules that regulate the access and distribution of texts" (Hyland, 2016, p. 36).

Meanwhile, technological innovations have opened up new identities to writers. For instance, science students may play the role of disciplinary experts and teach a wide authentic audience of non-specialists about scientific topics of their interest via video projects (Hafner, 2014). Technologies also enable learners to try out different aspects of their identity, which may particularly benefit self-conscious L2 users, who are more inclined to exhibit their language ideology and express themselves online (Lam, 2000).
Theoretical Frameworks Informing Research

Research on L2 writing and technology has been flourishing in response to the new landscape of L2 writing in the digital age. Multiple theoretical frameworks across disciplines guide the research in new areas of L2 writing. I discuss below six theoretical approaches that undergird one or more of the research areas expounded in the following chapters.

Second Language Acquisition (SLA)

L2 writing and SLA interests have converged for decades. Two important theories that inform L2 writing and learning are cognitive theory and interactionist theory. The skill acquisition models (e.g., Anderson, 1983; DeKeyser, 1997; McLaughlin, 1987) propose that explicit knowledge is developed through the conscious, controlled processing of L2 information, and that learners who are taught explicit knowledge about linguistic form and then practice it can achieve positive learning outcomes. Ellis et al. (2006) later posited that metalinguistic feedback, namely the explanation of what has caused errors, also results in language gains. These cognitive models support the positive roles of teacher feedback, peer feedback, and computer-generated feedback, as cognitive processing of input is helpful for developing explicit and implicit knowledge in L2.

The interactionist theory (e.g., Long, 1996; Schmidt, 1990; Swain, 1995) further emphasizes the role of interaction in SLA. Interactionist scholars argue that exposure to L2 input is not sufficient for L2 learning; learners need to know when their output does not conform to L2 structure and to be pushed to modify it when corrective feedback indicates the problem (Bitchener & Storch, 2016). This approach provides theoretical support for peer feedback and collaborative writing activities in which L2 writers engage in meaning negotiation and discussion to facilitate L2 acquisition and writing development.

In addition, a few constructs in L2 acquisition (e.g., autonomy, investment) have informed studies on digital multimodal composing and automated writing evaluation. For instance, Jiang et al. (2020) drew on the theoretical lens of investment (Norton, 2013) to understand learners'
“historically constructed relationship to the target language across space and time” and identify the intersection of investment with identity and ideology (Jiang et al., 2020, p. 960).
Sociocultural Theory

Another important theoretical approach informing L2 writing research, particularly collaborative writing and writing feedback, is sociocultural theory (Vygotsky, 1978), which maintains that higher forms of learning and cognitive development are social in nature, depending on social and cultural contexts (Lantolf, 2000; Lantolf & Thorne, 2006). Language and social interaction facilitate learning within learners' Zone of Proximal Development (ZPD). Specifically, collaborative writing tasks provide learners with opportunities to elicit and provide scaffolding: the finely tuned assistance leading to L2 learners' language development within their ZPD. Peer interaction creates L2 learning opportunities and leads to social and cognitive gains; it helps student writers move from other-regulation to self-regulation (Villamil & de Guerrero, 2006). Meanwhile, language serves dual functions during the language learning process: a means of communication and a cognitive tool for mediating knowledge co-construction (Antón & DiCamilla, 1998; Swain & Lapkin, 1998). A branch of sociocultural theory, activity theory (Engeström, 1987, 1999; Leont'ev, 1981), also informs research on writing feedback and collaborative writing. Activity theory posits that purposeful human activities are driven by needs or motives. Needs, whether biologically or culturally constructed, become motives when they are directed at a specific object, and motives are then realized in specific goal-directed actions (Lantolf, 2000; Leont'ev, 1981). Activity theory helps elucidate the role of motives/goals in shaping interactional processes and variations in peer interaction (Li & Zhu, 2017; Zhu & Mitchell, 2012).
Based on Vygotsky’s (1978) notion that language learning is a mediated process that involves mediation by artifacts, self, and others in social interactions (Lantolf, 2000), Engeström (1987) later proposed the activity system model (consisting of tools, subject, object, outcome, community,
division of labor, and rules) and explained that individual actions and goals are interconnected with other sociocultural factors. This collective, artifact-mediated, and object-oriented activity system provides a new lens for examining the complex nature of peer–peer and student–teacher interaction in diverse writing task environments. For instance, Li (2021), drawing on the activity system, examined interconnected mediating components in collaborative wiki writing activities, which were found to afford and/or constrain students' participation in online collaborative writing.
Computer-Mediated Collaborative Learning

Research on L2 writing and technology is also largely supported by computer-mediated collaborative learning (Warschauer, 1997), which draws insights from the above-mentioned conceptual frameworks (e.g., the interactionist and sociocultural perspectives). Computer-mediated collaborative learning is also empowered by computer-mediated communication (CMC); thus, CMC is considered an essential component of computer-mediated collaborative learning. The notion of CMC informs all six research areas I will discuss in this book. Warschauer (1997) specifies five features that distinguish CMC from other communication media: text-based and computer-mediated interaction, many-to-many communication, time and space independence, long-distance exchanges, and hypermedia links. CMC encourages both reflection and interaction, and computer-mediated writing unleashes the interactive power of text-based communication. The synchronous chat mode, for example, allows students to engage in prompt interaction while still being able to pause, reflect, and think carefully during interaction. Students tend to be more expressive in this mode, which facilitates the collaborative construction of knowledge. Due to the affordances of many-to-many communication, including contributing at one's own time and place, and the reduction of social context and nonverbal cues, CMC results in more equal participation from L2 learners than face-to-face discussion (Warschauer, 1997). Of note, asynchronous CMC allows
for more in-depth analysis and critical reflection and broadens opportunities for learner–learner and learner–teacher interaction and learning outside the classroom. In addition, long-distance exchanges and hypermedia links bring a wider audience to diverse writing tasks (including multimedia writing tasks) and afford global intercultural projects.
Writing Studies

Three more theoretical frameworks informing L2 writing research come from the discipline of writing studies. First, process writing theory shifts our attention from writing products to the process in which writing takes place (Hayes & Flower, 1980). Writing is viewed as a dynamic, non-linear, and recursive process of meaning making and knowledge transformation rather than a product-oriented activity (Yu & Lee, 2016). The process writing model has informed many effective pedagogical practices in L2 writing classes, such as teacher/peer feedback, automated writing evaluation, collaborative writing, and corpus-assisted writing. The second influential approach is genre analysis. Drawing on the theory of systemic functional linguistics, which explores the relationship between language and its social functions (Halliday & Hasan, 1989), the genre approach considers writing a goal-oriented, staged social process (Martin, 1992). Genre analysis developed out of text linguistics and the description of academic genres, moving from a focus on lexico-grammatical features to rhetorical moves. By setting out the stages and moves, students learn the explicit grammar of linguistic choices that serve specific purposes. Swales (1990) initially defined genre as a class of communicative events undertaken by members of a discourse community who share some set of communicative purposes; he created the CARS model to analyze the introductions of research articles, which has shed much light on genre analysis of research articles and beyond. Based on the increasing awareness of genre and process writing, genre-based instruction has become an important approach in L2 and writing classes (Paltridge, 2014; Tardy, 2019). We also see that corpus-based pedagogy increasingly interacts with genre analysis in EAP/ESP classes.
Moreover, literacy studies have moved writing research away from academic, literary, and paper-based writing to embrace what people actually do in their daily literacy practices (Hyland, 2016). New literacies go beyond the conventional view of literacy as printed and written texts and extend to meaning-making practices using digital technologies. The New London Group (1996) proposed the pedagogy of multiliteracies, which highlights linguistic diversity and multimodal forms of knowledge construction and representation. The renewed literacy pedagogies address "the increasing complexity and inter-relationship of different modes of meaning," including linguistic, visual, aural, spatial, and gestural modes (New London Group, 1996, p. 78). This approach provides novel insights into the design of multimodal writing tasks in language classrooms.
Linguistics/Applied Linguistics

Furthermore, a few branches of linguistics/applied linguistics have illuminated inquiries into L2 interaction and the writing process. One is pragmatics, an approach to analyzing discourse. Although most researchers in pragmatics have focused on conversation and spoken forms, the concepts they use have been applied to written discourse. As Hyland (2016) noted, "pragmatic processes such as speech acts, relevance, cooperation, reference and politeness provide ways to analyse how writers seek to encode their messages for a particular audience" (p. 239). For instance, multiple pragmatic notions (e.g., speech acts, politeness) can help us understand peer interactions in collaborative L2 writing (Li, 2012, 2013). Another approach that sheds light on L2 writing is corpus linguistics, the study of language in use through corpora. Text analysis software, such as concordance programs, has been used to analyze corpora to discover frequency, phraseology, and collocation. At the discourse level, L2 writers explore specific genres using corpus analysis in EAP classes, in which they unpack salient and complex characteristics of particular writing genres (Cortes, 2007). Corpus-based instruction contextualizes corpus data so as to enhance L2 learners' language awareness and knowledge of lexico-grammatical
features as well as of genres characterized by conventional rhetorical organization (Biber & Conrad, 2009). In addition, conversation analysis can be a valuable approach to capturing turn-by-turn interactional moments when students work on L2 online writing tasks, such as the text-based asynchronous multiparty interaction during online collaborative writing reported in Abe (2019). In short, L2 writing in the digital age boasts robust interest and straddles multiple disciplines, and a wide array of areas on L2 writing and technology merit our investigation; these will be discussed in chapter "Computer-Mediated Teacher Feedback" through chapter "Corpus Analysis and Corpus-Based Writing Instruction."
References

Abe, M. (2019). L2 interactional competence in asynchronous multiparty text-based communication: Study of online collaborative writing. Computer Assisted Language Learning, 34(4), 409–433.
Anderson, J. (1983). The architecture of cognition. Harvard University Press.
Antón, M., & DiCamilla, F. (1998). Socio-cognitive functions of L1 collaborative interaction in the L2 classroom. The Canadian Modern Language Review, 54, 314–342.
Biber, D., & Conrad, S. (2009). Register, genre, and style. Cambridge University Press.
Bitchener, J., & Storch, N. (2016). Written corrective feedback for L2 development. Multilingual Matters.
Chen, H. I. (2013). Identity practices of multilingual writers in social networking spaces. Language Learning & Technology, 17(2), 143–170.
Cho, H. (2017). Synchronous web-based collaborative writing: Factors mediating interaction among second-language writers. Journal of Second Language Writing, 36, 31–57.
Cortes, V. (2007). Exploring genre and corpora in the English for academic writing class. The ORTESOL Journal, 25, 8–14.
Cotos, E., Link, S., & Huffman, S. (2017). Effects of DDL technology on genre learning. Language Learning & Technology, 21(3), 104–130.
Cunningham, K. J. (2019). Student perceptions and use of technology-mediated text and screencast feedback in ESL writing. Computers and Composition, 52, 222–241.
DeKeyser, R. M. (1997). Beyond explicit rule learning: Automatizing second language morphosyntax. Studies in Second Language Acquisition, 19, 195–221.
Ellis, R., Loewen, S., & Erlam, R. (2006). Implicit and explicit corrective feedback and the acquisition of L2 grammar. Studies in Second Language Acquisition, 28, 339–368.
Elola, I., & Oskoz, A. (2017). Writing with 21st century social tools in the L2 classroom: New literacies, genres, and writing practices. Journal of Second Language Writing, 36, 52–60.
Ene, E., & Upton, T. A. (2018). Synchronous and asynchronous teacher electronic feedback and learner uptake in ESL composition. Journal of Second Language Writing, 41, 1–13.
Engeström, Y. (1987). Learning by expanding: An activity theoretical approach to developmental research. Orienta-Konsultit.
Engeström, Y. (1999). Activity theory and individual and social transformation. In Y. Engeström, R. Miettinen, & R. Punamäki-Gitai (Eds.), Perspectives on activity theory (pp. 19–38). Cambridge University Press.
Hafner, C. A. (2013). Digital composition in a second or foreign language. TESOL Quarterly, 47(4), 830–834.
Hafner, C. A. (2014). Embedding digital literacies in English language teaching: Students' digital video projects as multimodal ensembles. TESOL Quarterly, 48(4), 655–685.
Halliday, M. A. K., & Hasan, R. (1989). Language, context and text: Aspects of language in a social-semiotic perspective (2nd ed.). Oxford University Press.
Hayes, J. R., & Flower, L. (1980). Identifying the organization of writing processes. In L. W. Gregg & E. R. Steinberg (Eds.), Cognitive processes in writing: An interdisciplinary approach (pp. 3–30). Lawrence Erlbaum.
Hyland, K. (2016). Teaching and researching writing (3rd ed.). Routledge.
Hyland, K. (2019). Second language writing (2nd ed.). Cambridge University Press.
Jiang, L., Yang, M., & Yu, S. (2020). Chinese ethnic minority students' investment in English learning empowered by digital multimodal composing. TESOL Quarterly, 54(4), 954–979.
Lam, W. (2000). L2 literacy and the design of the self: A case study of a teenager writing on the internet. TESOL Quarterly, 34(3), 457–482.
Lantolf, J. P. (2000). Second language learning as a mediated process. Language Teaching, 33, 79–96.
Lantolf, J. P., & Thorne, S. L. (2006). Sociocultural theory and the genesis of second language development. Oxford University Press.
Leont'ev, A. N. (1981). Problems of the development of the mind. Progress.
Li, M. (2012). Politeness strategies in wiki-mediated communication of EFL collaborative writing tasks. IALLT Journal, 42(2), 1–26.
Li, M. (2013). Individual novices and collective experts: Collective scaffolding in wiki-based small group writing. System, 41(3), 752–769.
Li, M. (2018). Computer-mediated collaborative writing in L2 contexts: An analysis of empirical research. Computer Assisted Language Learning, 31(8), 882–904.
Li, M. (2021). Participation and interaction in wiki-based collaborative writing: An activity theory perspective. In P. García Mayo (Ed.), Working collaboratively in second/foreign language learning (pp. 227–248). De Gruyter Mouton.
Li, M., & Li, J. (2017). Online peer review using Turnitin in first-year writing classes. Computers and Composition, 46, 21–38.
Li, M., & Storch, N. (2017). Second language writing in the age of CMC: Affordances, multimodality, and collaboration. Journal of Second Language Writing, 36, 1–5.
Li, M., & Zhu, W. (2017). Explaining dynamic interactions in wiki-based collaborative writing. Language Learning & Technology, 21(2), 96–120.
Li, Z., Dursun, A., & Hegelheimer, V. (2020). Technology and L2 writing. In C. Chapelle & S. Sauro (Eds.), The handbook of technology and second language teaching and learning (pp. 77–92). Wiley Blackwell.
Long, M. H. (1996). The role of linguistic environment in second language acquisition. In W. C. Ritchie & T. K. Bhatia (Eds.), Handbook of second language acquisition (pp. 413–468). Academic Press.
Mak, B., & Coniam, D. (2008). Using wikis to enhance and develop writing skills among secondary school students in Hong Kong. System, 36, 437–455.
Manchón, R. M. (2017). The multifaceted and situated nature of the interaction between language and writing in academic settings: Advancing research agendas. In J. Bitchener, N. Storch, & R. Wette (Eds.), Teaching writing for academic purposes to multilingual students (pp. 183–199). Routledge.
Martin, J. R. (1992). English text: System and structure. John Benjamins.
McLaughlin, B. (1987). Theories of second language learning. Edward Arnold.
New London Group. (1996). A pedagogy of multiliteracies: Designing social futures. Harvard Educational Review, 66(1), 60–92.
Norton, B. (2013). Identity and language learning: Extending the conversation (2nd ed.). Multilingual Matters.
Paltridge, B. (2014). Genre and second language academic writing. Language Teaching, 47(3), 303–318.
Purdy, J. P. (2014). What can design thinking offer writing studies? College Composition and Communication, 65(4), 612–641.
Schmidt, R. (1990). The role of consciousness in second language learning. Applied Linguistics, 11(2), 129–158.
Smith, B., Pacheco, M., & de Almeida, C. (2017). Multimodal code-meshing: Bilingual adolescents' processes composing across modes and languages. Journal of Second Language Writing, 36, 6–22.
Storch, N. (2002). Patterns of interaction in ESL pair work. Language Learning, 52(1), 119–158.
Swain, M. (1995). Three functions of output in second language learning. In G. Cook & B. Seidlhofer (Eds.), Principle and practice in applied linguistics: Studies in honor of H. G. Widdowson (pp. 125–144). Oxford University Press.
Swain, M., & Lapkin, S. (1998). Interaction and second language learning: Two adolescent French immersion students working together. Modern Language Journal, 82, 320–337.
Swales, J. (1990). Genre analysis: English in academic and research settings. Cambridge University Press.
Tardy, C. (2019). Genre-based writing: What every ESL teacher needs to know. University of Michigan Press.
Thorne, S. L., & Reinhardt, J. (2008). "Bridging activities," new media literacies and advanced foreign language proficiency. CALICO Journal, 25(3), 558–572.
Vandergriff, I. (2016). Second-language discourse in the digital world: Linguistic and social practices in and beyond the networked classroom. John Benjamins.
Villamil, O., & de Guerrero, M. (2006). Sociocultural theory: A framework for understanding socio-cognitive dimensions of peer feedback. In K. Hyland & F. Hyland (Eds.), Feedback in second language writing: Contexts and issues. Cambridge University Press.
Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Harvard University Press.
Warschauer, M. (1997). Computer-mediated collaborative learning: Theory and practice. Modern Language Journal, 81(4), 470–481.
Warschauer, M., & Grimes, D. (2007). Audience, authorship, and artifact: The emergent semiotics of web 2.0. Annual Review of Applied Linguistics, 27, 1–23.
Yang, Y. T. C., Chen, Y. C., & Hung, H. T. (2020). Digital storytelling as an interdisciplinary project to improve students' English speaking and creative thinking. Computer Assisted Language Learning.
Yu, S., & Lee, I. (2016). Peer feedback in second language writing (2005–2014). Language Teaching, 49(4), 461–493.
Zhang, M., Akoto, M., & Li, M. (2021, online first). Digital multimodal composing in post-secondary L2 settings: A review of the empirical landscape. Computer Assisted Language Learning.
Zhang, Z. V. (2020). Engaging with automated writing evaluation (AWE) feedback on L2 writing: Student perceptions and revisions. Assessing Writing, 43, 100439.
Zhu, W., & Mitchell, D. (2012). Participation in peer response as activity: An examination of peer response stance from an activity theory perspective. TESOL Quarterly, 46(2), 362–386.
3 Computer-Mediated Teacher Feedback
Introduction

Teacher feedback has been recognized as one of language/writing teachers' most important and challenging tasks for decades (Ferris, 2003; Hyland & Hyland, 2006). The development of computer technologies has provided unprecedented options for L2 teachers to provide feedback on students' writing. Teachers now adopt different modes of feedback, namely written, audio, and video, using tools such as Microsoft (MS) Word, Google Docs, and Screencast-O-Matic. A series of questions capture L2 instructors' and researchers' attention. For instance, how do the different modes influence the nature of teacher feedback? How is synchronous teacher feedback different from asynchronous teacher feedback? How does the effect of written feedback on students' revisions compare with that of screencast video feedback? This chapter discusses the research and instructional practice addressing the above-mentioned questions. It begins with the definition of computer-mediated teacher feedback (CMTF), followed by an explanation of different types of online teacher feedback. It then explains key texts in this domain of research, with important information presented
in illustrative tables. Following a synthesis of the main strands of research, I pinpoint research gaps and suggest specific research directions. This chapter ends with teaching recommendations.
Defining Computer-Mediated Teacher Feedback

Computer-mediated teacher feedback (CMTF) refers to teachers' or tutors' delivery of feedback on writing using technology tools; it is also referred to as electronic teacher feedback. I begin with a brief discussion of teacher writing feedback in general. Teacher feedback, also called teacher commentary, addresses multiple aspects of student writing, including content, rhetorical structure, grammar, vocabulary, style, and mechanics (Ferris, 2014). That is, instructors provide feedback on both global and local issues; the feedback can operate at micro levels (e.g., lexicon) and/or macro levels (e.g., discourse). One important part of teacher writing feedback is written corrective feedback (WCF), which can be direct or indirect. According to Chandler (2003), direct WCF means that the teacher supplies an acceptable form for a student's error (correction), whereas indirect WCF means that the teacher indicates an error using techniques such as underlining or error codes but does not offer a correct language form. Via digital media, teachers provide computer-mediated writing feedback (including WCF) in different ways.
Different Types of Computer-Mediated Teacher Feedback

Considering the immediacy of computer-mediated feedback, electronic teacher feedback falls into two categories: synchronous and asynchronous feedback (Ene & Upton, 2018). The most common CMTF is conducted asynchronously using the comment and track-changes functions in MS Word. Teachers can also provide asynchronous writing feedback via email and discussion forums (Ho & Savignon, 2007). As another alternative, students submit their electronic papers to a course management system (CMS), and teachers provide feedback on the CMS
directly. In terms of feedback delivered synchronously, multiple tools can be adopted, including Google Docs, Zoom, Skype, and MSN Messenger. Most synchronous teacher feedback is provided after students complete their writing drafts, when teachers hold individual conferences online using digital technologies (Ene & Upton, 2018). Teacher WCF during online collaborative writing tasks has also been reported in a few studies (e.g., Kim et al., 2020; Shintani & Aubrey, 2016). Due to the increasing role of multimodality in teaching and learning, CMTF has taken on two new modes, namely audio and video. Such multimodal feedback caters to students with auditory/visual learning styles and meanwhile enhances teachers' social presence, making the feedback personal, supportive, and engaging (Cheng & Li, 2020). In particular, screencast video feedback involves recording the instructor's voice along with the screen, where the written work is seen and commented on, using screen recording software such as Jing and Screencast-O-Matic (Bakla, 2017). The recorded video is usually shared using a web-based tool, allowing students to view the teacher's feedback as many times as desired. Previous research (e.g., Cheng & Li, 2020; Ducate & Arnold, 2012; Elola & Oskoz, 2016) reported that teachers engage in in-depth explanations of global issues such as ideas and organization during screencast video feedback.
Key Texts

To zoom in on research on CMTF in the L2 context, I searched the Google Scholar database using the keywords "teacher feedback," "computer," and "L2," limiting the results to articles published from 2010 to 2020 in journals with a CiteScore higher than 2.0. Twelve articles were subsequently selected for illustration in this section. As I noted in Chapter 1, I do not aim to provide a comprehensive review of the current body of literature, but the selected articles cover a wide range of research themes, and the synthesis of these studies can suggest recent trends in research practices. This section begins with an overview of the reported studies, including context and participants, writing tasks and technology, theoretical framework, methodological
approach, research questions, and validity/reliability strategies (as illustrated in Table 3.1). These articles are then further analyzed in terms of thematic categories to address the investigation foci of the past decade (see Table 3.2).
Overview

As Table 3.1 shows, the empirical studies were mainly conducted in ESL/EFL contexts, with the majority of studies at the tertiary educational level. Of particular note, a few studies addressed underrepresented language learning contexts, such as Korean as a Foreign Language (FL) and German as a FL. The technologies used for teacher feedback included course management systems, MS Word, Google Docs, and screencast tools such as Screencast-O-Matic, Jing, and Snagit. The studies were mainly informed by the socio-cognitive approach (e.g., the noticing, interaction, and output hypotheses) and sociocultural theory. Many of the studies on CMTF utilized mixed-methods or quantitative approaches. To ensure research validity and reliability, researchers adopted multiple techniques, including intercoder/inter-rater reliability (e.g., Cronbach's alpha, Cohen's Kappa), triangulation, and member checks.
53 secondary students in Hong Kong
34 EFL students, tertiary level, Taiwan
12 ESL undergraduate students and 3 teachers from an EAP program, USA
Lee et al. (2013)
Chen (2014)
Ene and Upton (2014)
Study
Context and participant
Methodological approach Quantitative study
Mixed-methods study
Longitudinal mixed methods study
Theoretical framework Not reported
Not reported
Long’s (1996) interaction hypothesis; Schmidt’s (1990) noticing hypothesis; Swain’s (1985, 1995) output hypothesis
Argumentative essays Essay Critiquing System—ECS
Blog-writing projects on the topics of current events
Two summaryresponse writing assignments
Writing task and technology
Table 3.1 Research matrix of computer-mediated teacher feedback
To what extent and how does the use of blended learning ((i.e., immediate ECS feedback + teacher feedback) help learners write better essays with respect to content enrichment and organization of ideas? 1. What types of feedback do teachers (including TAs) offer on blogs? 2. How do students perceive teacher feedback of their blog-mediated writing? Is there a match between the feedback most offered and best received? 1. What kind of teacher feedback do ESL learners in a university context receive on writing assignments that have been submitted and returned electronically? 2. What is the relationship between teacher electronic feedback (TEF) and student uptake?
Research question
(continued)
Triangulation; intercoder reliability
Inter-rater reliability; triangulation
Triangulation
Validity and reliability strategy
3 Computer-Mediated Teacher Feedback
27
Qualitative case study
Quantitative study
Interactionist (Long, 1996); sociocultural theory (Lantolf, 2000; Lantolf & Thorne, 2007)
Socio-cognitive theory of writing (Flower & Hayes, 1980; Grabe & Kaplan, 1996; Kellogg, 1996)
Two expository essays, two argumentative essays, and two narrative essays Screencast-OMatic and Microsoft Word
Three text reconstruction tasks Google Docs
4 undergraduate students from an advanced Spanish FL writing course, USA
68 EFL students, tertiary level, Japan
Shintani and Aubrey (2016)
Methodological approach
Theoretical framework
Writing task and technology
Elola and Oskoz (2016)
Study
Context and participant
Table 3.1 (continued)
1. To what extent does using Word or screencast to provide written or oral feedback influence how the instructor provides feedback to students? 2. To what extent does receiving oral or written feedback using Word or screencast influence students’ revisions? 1. Does focused asynchronous direct written corrective feedback have an effect on grammatical accuracy in a new writing task? 2. Does focused synchronous direct written corrective feedback have an effect on grammatical accuracy in a new writing task? 3. Is there any difference in the effects of direct asynchronous and synchronous written feedback on grammatical accuracy in a new writing task?
Research question
Inter-rater reliability as measured by Cohen’s Kappa
Inter-rater reliability; triangulation
Validity and reliability strategy
28 M. Li
9 tutors and 54 students from Spanish and German FL courses, tertiary level, UK
Harper et al. (2018)
Ene and Upton (2018)
Five teachers’ comments on 375 undergraduate students’ academic writing in Sweden 64 ESL undergraduate students and 3 teachers from an EAP program, USA
Adel (2017)
Study
Context and participant
1. What does teacher electronic feedback (TEF) provided via synchronous and asynchronous computer-mediated communication focus on? 2. How does the effectiveness of TEF differ when it is synchronous versus asynchronous? 3. How do teachers and students perceive TEF? 1. How do tutors use Jing to provide feedback? Are there any different approaches used according to course levels and student competence? 2. What are the students’ and tutors’ perceptions on the use of the video feedback tool? Corpus-based, mixed-methods study
Qualitative study
Long’s (1996) interaction hypothesis; Schmidt’s (1990) noticing Hypothesis; Swain’s (1985, 1995) output hypothesis
Feedback-related theories, e.g., depth-level in feedback comments (Brown & Glover, 2006)
(continued)
Triangulation; member checks
Triangulation
How visible are the writer, the reader, and the current text itself in teacher feedback?
Appropriate assignments chosen for each course level Jing
Not reported
Research question
Corpus-based descriptive quantitative study
Theory of metadiscourse (Hyland, 2000, 2005) Interactive approach
Corpus of teacher feedback from five writing tasks WordSmith Tools; text highlighter MS Word “comment” function Six writing assignments (3 per course) Microsoft Word and a course management system
Validity and reliability strategy
Methodological approach
Theoretical framework
Writing task and technology
3 Computer-Mediated Teacher Feedback
29
12 ESL undergraduate students, USA
3 instructors of ESL writing, tertiary level, USA
Cunningham (2019a)
Cunningham (2019b)
Study
Context and participant
Table 3.1 (continued) Methodological approach Mixed-methods study
Quantitative study
Theoretical framework Schmidt’s (1990) noticing hypothesis; Long’s (1996) interaction hypothesis
The Appraisal framework (Martin & White, 2005, 2015)
Writing task and technology
Four TOEFL practice essays (200–300 words each) Microsoft Word & Snagit
Four major writing assignments Microsoft Word and Snagit
1. How do student perceptions of and preferences for text (MS Word comment) and video (screencast) feedback compare? 2. How do students make use of (apply & interact with) the text and video feedback? 3. How does the time needed to prepare screencast and text feedback compare? 1. How are ENGAGEMENT resources used in text and video feedback? 2. How does this ENGAGEMENT resource use compare across text and video feedback?
Research question
Inter-rater reliability
Inter-rater reliability; triangulation
Validity and reliability strategy
30 M. Li
Study: Bakla (2020)
Context and participant: 33 Turkish EFL students, tertiary level
Methodological approach: Mixed-methods study
Theoretical framework: Long's (1996) interaction hypothesis; Schmidt's (1990) noticing hypothesis; Swain's (1985, 1995) output hypothesis
Writing task and technology: A multi-draft essay writing task and an essay revision task; Google Drive, Kaizena, and a screencasting software
Research questions: 1. Which digital feedback modes (electronically written, audio, screencast) could help the participants perform a higher rate of successful revisions at the microlevel, macrolevel, and global level in a multi-draft essay writing task? 2. Which digital feedback modes (electronically written, audio, screencast) could help the participants perform a higher rate of successful revisions at the microlevel, macrolevel, and global level in revising the essays supplied to them? 3. What are the participants' preferences for the feedback modes (electronically written, audio, screencast), and what factors could account for these preferences? 4. How did the participants engage with each feedback mode and interact with the researcher?
Validity and reliability strategy: Triangulation; inter-rater reliability tested via Cohen's kappa
Study: Kim et al. (2020)
Context and participant: 53 undergraduate students from four sections of an Elementary Korean FL course, USA
Methodological approach: Quasi-experimental study
Theoretical framework: Sociocultural theory (Vygotsky, 1978) and associated constructs such as mediation (Swain & Lapkin, 1998)
Writing task and technology: Six genre-based collaborative writing tasks (2 per unit); digital audio recorder and GoPro digital video camera
Research questions: 1. How successfully can college-level student writers use the SWCF provided? Are there any differences between direct and indirect SWCF in terms of the amount of correct uptake during collaborative writing? 2. Are there any differences between direct and indirect SWCF in terms of students' learning of Korean grammar? 3. How do students feel about the helpfulness of direct and indirect SWCF during collaborative writing?
Validity and reliability strategy: Inter-rater reliability
Table 3.2 Research timeline of computer-mediated teacher feedback

Year: 2013
Reference: Lee, C., Cheung, W., Wong, K., & Lee, F. (2013). Immediate web-based essay critiquing system feedback and teacher follow-up feedback on young second language learners' writings: An experimental study in a Hong Kong secondary school. Computer Assisted Language Learning, 26(1), 39–60.
Themes: C.2, D.1
Annotation: Lee et al. conducted an experimental study to examine the differences between student essays that received immediate essay critiquing system (ECS) feedback plus teacher feedback (treatment group) and essays that received only teacher feedback (control group). A total of 53 Hong Kong secondary students completed two timed argumentative essay writing tasks, which subsequently received either system feedback plus teacher feedback or teacher feedback alone. Their final essays were assessed according to grading rubrics. Results showed that both groups made significant gains, and the treatment group's gains were more remarkable, partly because it included more lower-proficiency students. The treatment group also gave high ratings to both the system and teacher feedback in the post-task interviews and survey. This study suggested the great potential of combined feedback methods for L2 writing instruction in secondary-level educational settings.

Year: 2014
Reference: Chen, W. C. (2014). Actual and preferred teacher feedback on student blog writing. Australasian Journal of Educational Technology, 30(4).
Themes: A.1, C.2
Annotation: This mixed-methods study looked at the types of feedback provided by teachers during a blog-mediated writing project and students' perceptions and receptivity of such feedback. Taiwanese EFL students (n = 34) from two colleges participated in blog-based writing on current-events topics with their e-pals, after which teachers provided comments on the blog posts. Five types of teacher feedback (i.e., task, process, self-regulation, superficial praise, and mediation) were coded and analyzed using a modified version of Hattie and Timperley's (2007) taxonomy. The results indicated a high frequency of the mediation type, in which teachers went beyond commenting on language and organizational problems and focused more on promoting cooperative learning between the groups. Additionally, as a result of the indirect nature of online interaction, there was a low frequency of superficial praise. With regard to students' perceptions and receptivity, although students expressed positive opinions on most of the feedback items, the most preferred feedback type was also the least offered (i.e., self-regulation), which suggested that students valued feedback requiring more in-depth, higher-order reflection as opposed to corrections of superficial grammatical errors.
Year: 2014
Reference: Ene, E., & Upton, T. A. (2014). Learner uptake of teacher electronic feedback in ESL composition. System, 46, 80–95.
Themes: A.1, B.1
Annotation: Ene and Upton's study provided a comprehensive picture of the nature of teachers' asynchronous electronic feedback and its impact on learner uptake. They closely examined the different types of electronic written feedback that ESL students received on their essays in relation to their revisions, in order to determine whether there was a connection between the teacher's feedback and learners' uptake. Based on a detailed analysis of multiple drafts, they found that teachers' electronic feedback was generally direct/explicit, need-based, and systematic. In terms of learner uptake, the results indicated that participants were able to implement the teacher feedback successfully, with a high uptake rate. The results also supported the claim that electronic teacher feedback contributed to a significant increase in learners' awareness of grammatical structures as well as their attention to content and organization.

Year: 2016
Reference: Elola, I., & Oskoz, A. (2016). Supporting second language writing using multimodal feedback. Foreign Language Annals, 49(1), 58–74.
Themes: A.3, C.2
Annotation: This case study examined the effect of multimodal feedback on four Spanish FL students' writing revisions. The multimodal feedback comprised both written and oral feedback, provided through Microsoft Word and Screencast-O-Matic, respectively. After analyzing and comparing the initial and final drafts along with the teacher's comments, Elola and Oskoz found that the use of both technological tools had a positive effect not only on the quantity but also on the quality of teacher feedback. The comments provided via screencast were more detailed in terms of structure, organization, and content, whereas the teacher seemed to focus more on form when Microsoft Word was used. Moreover, with regard to students' perceptions, the analysis of questionnaire responses and interview transcripts showed that students found both modes of feedback helpful in enhancing their L2 writing skills.
Year: 2016
Reference: Shintani, N., & Aubrey, S. (2016). The effectiveness of synchronous and asynchronous written corrective feedback on grammatical accuracy in a computer-mediated environment. The Modern Language Journal, 100(1), 296–319.
Themes: D.1, D.2
Annotation: Shintani and Aubrey's experimental study sought to determine whether the timing of corrective feedback (i.e., synchronous vs. asynchronous) has any effect on L2 grammar acquisition. Japanese EFL students (n = 68) were assigned to three groups: (1) a synchronous CF group (SCF), who received CF as they worked on the writing tasks via Google Docs; (2) an asynchronous CF group (ACF), who received CF only after completing the tasks; and (3) a comparison group, who received no feedback at all. A series of statistical analyses of students' scores on the pretest and the immediate and delayed posttests indicated that both the ACF and SCF groups performed significantly better than the comparison group on the posttests. With regard to the effectiveness of feedback timing, however, the SCF group had a statistically significant advantage over the other two groups. This result further confirms that feedback provided synchronously yielded better results in terms of grammar acquisition.

Year: 2017
Reference: Ädel, A. (2017). Remember that your reader cannot read your mind: Problem/solution-oriented metadiscourse in teacher feedback on student writing. English for Specific Purposes, 45, 54–68.
Themes: A.1
Annotation: Based on the theoretical framework of metadiscourse, Ädel sought to answer how visible the writer, the reader, and the text itself are in computer-mediated teacher feedback. Using a pilot corpus of 40,000 words from 375 texts commented on by five teachers, the researcher investigated the teachers' use of metadiscourse, defined as reflexive expressions pertaining to the discourse, the author, and the audience. The teacher feedback was compared to other types of academic discourse (written and spoken). The results showed that the teacher comments in the reported corpus had an exceptionally high level of metadiscourse, with the student ("you") more visible than the teacher ("I"). Also detected were large quantities of references to the text, with "here" used to indicate trouble spots. The results revealed that the metadiscourse in computer-mediated teacher feedback is problem/solution-focused, serving a metalinguistic function and solving communication problems. The study highlighted the multidimensional nature of teacher feedback.
Year: 2018
Reference: Ene, E., & Upton, T. A. (2018). Synchronous and asynchronous teacher electronic feedback and learner uptake in ESL composition. Journal of Second Language Writing, 41, 1–13.
Themes: A.2, B.1, C.1, C.2, D.2
Annotation: Expanding on their earlier longitudinal study (2014), Ene and Upton conducted this corpus-based study to better understand the use and effectiveness of e-feedback provided synchronously via online chats and asynchronously via Microsoft Word comments. They also examined students' and teachers' perceptions of e-feedback. ESL students (n = 64) from both face-to-face and online EAP courses and their teachers (n = 3) participated in the study. Drawing on both a quantitative analysis of 1125 teacher feedback comments and a qualitative analysis of student survey responses and teacher interview transcripts, Ene and Upton found that the feedback was mainly content-focused and that emphasis was placed on the successful incorporation of the feedback provided. They also found that using the two feedback modes in conjunction was beneficial and helped learners focus on higher-order concerns. Regarding the perception data, both teachers and students expressed positive attitudes toward the different modes of feedback. The students found online chats most beneficial to their L2 learning, whereas the teachers believed that mixing synchronous and asynchronous feedback benefits critical thinking, as it offers them the flexibility to adapt to different learner needs.

Year: 2018
Reference: Harper, F., Green, H., & Fernandez-Toro, M. (2018). Using screencasts in the teaching of modern languages: Investigating the use of Jing® in feedback on written assignments. The Language Learning Journal, 46(3), 277–292.
Themes: A.1, C.1, C.2
Annotation: This study examined the ways in which screencast software (i.e., Jing) was implemented for teacher feedback, as well as students' and tutors' perceptions of the tool. Questionnaires, semi-structured interviews, and feedback on drafts were collected from students (n = 54) and tutors (n = 9) in Spanish and German FL courses. The findings indicated that the tutors used Jing in two main ways: adopting the presentation format (e.g., showing the whole script, an indicative paragraph, or generic recordings) and providing textual feedback (e.g., highlighting grammatical accuracy, range of language, content, and strengths). Jing allowed the tutors not only to provide explicit feedback to students but also to include details and better explain language points. In terms of perceptions, both tutors and students expressed positive views about the use of screencast video feedback. For the students, the individualized screencasts gave them a sense of personal value, and some stated that the feedback was more understandable. The tutors reported that the video feedback allowed them to provide more detailed explanations on a one-on-one level.
Year: 2019a
Reference: Cunningham, K. J. (2019a). Student perceptions and use of technology-mediated text and screencast feedback in ESL writing. Computers and Composition, 52, 222–241.
Themes: A.3, C.2
Annotation: This exploratory study sought to shed light on students' perceptions and use of screencast feedback versus online written feedback. Twelve ESL students were divided into two groups and completed four writing tasks; one group received screencast feedback on two tasks while the other received written feedback, and vice versa. The results suggested that although students reported both feedback types as beneficial, they generally preferred screencast feedback to written feedback for its awareness-raising capacity, clarity, and effectiveness. The study also reported that screencast feedback helped students spend less time on revisions, since screencast comments were more clearly understood than written feedback.

Year: 2019b
Reference: Cunningham, K. J. (2019b). How language choices in feedback change with technology: Engagement in text and screencast feedback on ESL writing. Computers & Education, 135, 91–99.
Themes: A.3
Annotation: Cunningham's study is one of the few that draws attention to the interpersonal dimension of feedback by exploring the ways in which instructors' choices of technology might affect the message and delivery of feedback. Using the "Engagement" index from Martin and White's Appraisal framework, she examined screencast feedback via Snagit and written feedback via Microsoft Word from three ESL instructors over four writing tasks. Following a detailed analysis of all instances of engagement (i.e., instances where other voices/opinions were taken into account in the feedback), she found that screencast video feedback incorporated more engagement resources, thus providing students more room for autonomy and agency to make choices based on the instructor's suggestions. In contrast, written feedback had limited engagement resources, leaving the instructor the sole authority in addressing language problems.
Year: 2020
Reference: Bakla, A. (2020). A mixed-methods study of feedback modes in EFL writing. Language Learning and Technology, 24(1), 107–128.
Themes: B.1, C.2, D.1
Annotation: This mixed-methods study examined the effectiveness of three feedback modes (electronically written, audio, screencast) in relation to L2 students' writing performance at both the microlevel (linguistic) and the macrolevel (discourse). Students' preferences for these modes and the processes involved as they received such feedback were also addressed. Drawing on quantitative and qualitative analyses of a pre-task survey, essay drafts/revisions, semi-structured interviews, and screen recordings, the author found that although there was no significant difference among the three feedback modes, the group that received audio feedback performed better and made the most accurate revisions. However, there was a mismatch between the least preferred feedback mode (i.e., audio) and the ranking of mean scores in students' writing. The author attributed this finding to the multimodality and interactivity that technology affords, and to the researcher's social presence.

Year: 2020
Reference: Kim, Y., Choi, B., Kang, S., Kim, B., & Yun, H. (2020). Comparing the effects of direct and indirect synchronous written corrective feedback: Learning outcomes and students' perceptions. Foreign Language Annals, 53(1), 176–199.
Themes: C.2, D.1
Annotation: This quasi-experimental study, conducted in an elementary-level Korean FL class, compared two types of synchronous written corrective feedback (i.e., direct vs. indirect) in terms of their effect on the amount of correct uptake of target linguistic features. The conditions that might affect students' L2 learning, as well as their perceptions of the helpfulness of the feedback, were also explored. The students worked in pairs to complete two collaborative writing tasks, and each pair received one of three treatment conditions (i.e., direct feedback, indirect feedback, or control). Analyses of pretests, posttests, a reflection survey, and exit interviews revealed that students made more accurate error corrections when direct feedback was provided than under the other conditions. Moreover, despite no statistically significant difference in posttest scores between the indirect and direct feedback types, students in the control group underperformed their counterparts who received feedback. Students' responses to the Likert-scale survey questions indicated that they found synchronous written corrective feedback beneficial to their learning irrespective of the delivery method.
Thematic Categories

To provide a rough timeline of the empirical studies that reflects the foci of investigation, I present below the twelve articles chronologically with respective annotations and research themes (Table 3.2).2 The themes are categorized as follows:

A. Nature of computer-mediated teacher feedback
1. Focus/types/features of CMTF
2. Differences between synchronous written teacher feedback and asynchronous teacher feedback
3. Differences between electronic written comments and screencast video comments

B. Student uptake of teacher feedback (revisions based on teacher feedback)
1. Uptake in asynchronous and/or synchronous CMC environments
2. Comparison of uptake between MS Word feedback and screencast feedback

C. Perceptions
1. Instructors'/tutors' perceptions
2. Students' perceptions

D. Effect of computer-mediated teacher feedback
1. Effect of computer-mediated teacher feedback on language learning and writing development
2. Comparison of the effect of synchronous written feedback and asynchronous written feedback

As Table 3.2 shows, an important strand of research is the nature of CMTF (Category A). Ene and Upton (2014) explored different types of teacher electronic feedback that ESL college students received in EAP classes. Results showed that CMTF, mainly comprised of marginal comments, was "directive, explicit, principled, systematic, and needs-based" (p. 80). The teacher feedback focused mainly on content, followed by discourse organization and grammar. Also, many instances of direct feedback were observed, such as directives to execute specific changes and corrections of grammatical forms. In a later study, Ene and Upton (2018) examined the features of teachers' synchronous electronic feedback on ESL college students' writing drafts, delivered via text chats, in comparison with those of asynchronous feedback delivered via Word comments and track changes. They reported that teachers provided content-focused feedback in both modalities, with supplementary attention to language and writing. Teacher feedback was more focused on content and organization via text chats, whereas many more comments on vocabulary, grammar, and mechanics were evident in Word documents. They also maintained that the combined use of synchronous and asynchronous feedback led to better feedback on higher-order issues (e.g., organization of ideas).

With the increasing awareness of the importance of multimodality in L2 classes, screencast video feedback has captured wider attention from researchers and instructors, who have started to examine traditional teacher electronic feedback in comparison with screencast feedback. For instance, Elola and Oskoz (2016) compared teacher electronic feedback using MS Word with feedback using Screencast-O-Matic; they found that the comments provided via screencast videos were more detailed in terms of content and organization, while Word feedback focused more on language forms. Departing from previous studies that examined the nature of teacher feedback through analyses of types and foci, Cunningham (2019b) used Martin and White's Appraisal framework to examine teachers' "engagement" (i.e., the extent to which other voices were taken into account) as reflected in their comments. The results showed that Snagit screencast video feedback incorporated more engagement resources, thereby providing students more room for autonomy and agency to make choices based on teacher feedback. In contrast, written feedback on Word documents had limited engagement resources, making the instructor the sole authority in addressing language problems. Continued inquiries into different types of CMTF would help instructors make informed decisions when planning to implement it in their instructional contexts.
The second important research strand, still under-explored, is students' uptake of teacher feedback, namely how students incorporate teacher feedback into their revisions (Category B). In Ene and Upton's (2018) study, students were found to have implemented the majority of teacher feedback (whether synchronous or asynchronous) in their final drafts, many successfully and some unsuccessfully. When the two modalities were compared, successful uptake was significantly higher for Word document feedback than for chat sessions. Most recently, Bakla (2020) compared EFL students' uptake of teacher feedback across three feedback modes (i.e., written comments on Google Drive, the audio add-on Kaizena on Google Drive, and screencast feedback) and found that audio feedback led to students' most accurate revisions, despite no statistically significant difference among the modes. These initial findings can motivate instructors to test different modes of teacher feedback themselves and find out which works most effectively.

Moreover, much research has investigated students' perceptions of CMTF (Category C). Cunningham (2019a), for instance, examined ESL students' perceptions of screencast feedback and online written feedback. The results suggested that although students considered both feedback types beneficial, they generally preferred screencast feedback to written feedback for its awareness-raising capacity, clarity, and effectiveness. The students reported that screencast feedback reduced the time they spent on revisions, since screencast comments were more clearly expressed than written feedback. Given our scarce knowledge about teachers' perspectives, instructors' perceptions of different modes of teacher feedback deserve more investigation.

Another important strand of research is the effect of CMTF on L2 grammar learning and writing development (Category D). Recently, Kim et al. (2020) examined the effect of two types of synchronous written corrective feedback (WCF), namely direct and indirect, when students of Korean as a foreign language worked on collaborative writing tasks. The results showed that the groups receiving direct and indirect teacher WCF had better writing outcomes than the groups that did not receive WCF, despite the lack of a statistically significant difference in posttest scores. In another study, Shintani and Aubrey (2016) compared the effects of synchronous and asynchronous WCF on text reconstruction tasks that Japanese EFL students completed via Google Docs. The results showed that both the synchronous and asynchronous groups performed better than the control group (which received no teacher feedback) on both the immediate and delayed posttests. The synchronous WCF group, in particular, yielded significantly better results in terms of grammar acquisition.
Research Directions

The research review has shown that studies on CMTF have predominantly focused on tertiary-level educational settings. Research conducted in other learning settings (e.g., underrepresented language learning contexts and non-tertiary L2 levels) would be very welcome given the scarcity of available research on those populations. In terms of research foci, asynchronous teacher feedback, such as screencast feedback and text-based asynchronous feedback, constitutes the main topic of research. Synchronous teacher feedback is largely under-explored, although a few studies address the role of text-based synchronous corrective feedback in writing processes. In particular, teacher writing conferences using technology tools (e.g., Zoom, Google Classroom, Skype), as a new form of multimodal feedback, should receive more attention. The current body of literature reports few findings on using these emerging tools for multimodal feedback, and the use of such tools is expected to continue and increase in the post-pandemic era. Methodologically, ethnographic case studies can be undertaken to examine how different types of CMTF, or WCF in particular, can be implemented systematically over time to facilitate students' learning (Lee, 2017).

The research agenda in this domain is rich, and multiple specific inquiries deserve our attention. Regarding the nature of CMTF (Category A), we can pose this question: how do technology tools influence the nature of teacher feedback? Various tools (e.g., MS Word, Google Docs, Screencast-O-Matic, Jing, Snagit) have been used for teacher feedback, but little research has probed how the different tools afford teacher response and influence the types, foci, and language use of teacher feedback. Also, with the increasing awareness of multimodality, synchronous multimodal feedback using technology tools (e.g., Zoom, Google Classroom, and Skype) will become a trend (see Monteiro, 2014 as an example). How synchronous multimodal feedback differs from screencast video feedback delivered asynchronously, or from face-to-face teacher writing conferences, is a research question likely to attract future researchers. A related inquiry concerns the interactions and dynamics in teacher-student/tutor-tutee computer-mediated writing conferences. Regarding the features of CMTF in particular, recent research (e.g., Amundrud, 2015; Ene & Upton, 2021; Mahboob, 2015) has started to take an innovative approach by treating teacher feedback as a specific genre. For instance, Ene and Upton (2021) drew on move analysis (Swales, 1990) and corpus analysis (Biber et al., 2007) to examine the rhetorical structures of online teacher chats from a database collected from institutional EAP writing courses. Future research can follow this route and conduct discourse/move analyses of electronic teacher feedback with the aim of identifying the rhetorical structures and linguistic characteristics of highly effective feedback. Moreover, students' uptake of different types of CMTF (Category B) and students' perspectives vis-à-vis teachers' perspectives (Category C) deserve further investigation.

Concerning the effect of CMTF (Category D), future studies need to further investigate how different types of CMTF impact students' short-term and long-term language learning. Informed by the most recent studies (Sherafati et al., 2020; Tian & Zhou, 2020; Zhang & Hyland, 2018), future inquiries can delve into L2 learners' engagement with CMTF in comparison with other forms of feedback (e.g., computer-mediated peer response and automated writing evaluation) and compare the effects of different types of feedback on students' writing development. In terms of the text-based synchronous WCF discussed in previous research (e.g., Kim et al., 2020), we are encouraged to further probe its actual role. One pressing question is how synchronous WCF might support L2 learners' writing processes and outcomes rather than interfere with their writing flow. All the above-mentioned research tasks would largely inform our pedagogical practices of incorporating CMTF into L2 classes.
Teaching Recommendations

As discussed earlier, teacher writing feedback can be delivered via different modes and using various technology tools. In this section, I discuss pedagogical implications in terms of modes of teacher feedback and technology selection, based on the research synthesis. Before discussing these two aspects in detail, I want to stress what constitutes effective teacher feedback. As Mahboob (2015, p. 360) stated, teachers should provide "a purposeful, goal-oriented, and staged text" in response to carefully selected problems in students' texts. With regard to students' language issues, rather than persistently providing comprehensive WCF, which is time-consuming and results in lower-quality correction (Truscott, 2001), teachers are encouraged to give focused WCF or to adopt a middle position combining both, taking a principled approach that considers the number of error types, the nature of tasks, and students' needs (Ferris, 2011; Lee, 2017). Teachers also need to aim at accomplishing the feedback task collaboratively and intentionally, encouraging students to become active agents in charge of their own learning (Ene & Upton, 2018; Lee, 2014).
Modes of Teacher Feedback

Three main modes of teacher feedback can be implemented in L2 classes, namely text, audio, and video. The most common text-based teacher feedback is provided via MS Word. Specifically, teachers can offer direct corrective feedback using the Track Changes function, provide indirect feedback using color coding, or leave global feedback (e.g., on purpose, rhetorical structure, and organization) and local feedback (e.g., on vocabulary and grammar) using comment bubbles. Teachers can also consider providing audio feedback, either on its own or in combination with text comments. Audio feedback has been found to add a personal touch, thus motivating students to incorporate teacher feedback into revisions. In particular, previous research (Bakla, 2020) reported that the audio from instructors enabled students to practice listening and speaking skills. When text feedback and audio feedback are combined, the audio can focus more on global areas, in which teachers commend students for what they did well and summarize what can be improved. Moreover, video feedback has been capturing instructors' and researchers' increasing attention due to the continuous advancement of technology tools. Video feedback utilizes multimodal features (i.e., text, audio, and video) and makes teacher comments clearer, more personalized, and more engaging. Video feedback has been found to be most helpful for online/distance learning classes, in which students learn individually at their own pace and usually do not have many opportunities to interact with instructors (Cheng & Li, 2020).
Selection of Technology Tools

Various technology tools can be used by teachers to provide writing feedback. If asynchronous feedback meets students' needs, teachers can provide text-based feedback on MS Word documents, use Audacity to offer audio feedback, or utilize Camtasia or Screencast-O-Matic to provide video feedback. If synchronous feedback is necessary, teachers can use Google Docs to provide text-based feedback, or adopt Zoom or Skype to hold online conferences and provide multimodal feedback, similar to face-to-face conferences. Table 3.3 shows some technology tools that can be considered for CMTF, with their respective website links and tool descriptions. Teachers should select an appropriate tool based on the features of the writing assignments, the nature of the L2 class, and students' needs and access. CMTF is becoming a norm in L2 classes in the digital age, and with continuing technological advances, the various modes of CMTF are expected to be delivered ever more effectively.
Table 3.3 Representative technologies for computer-mediated teacher feedback

Camtasia (https://www.techsmith.com/video-editor.html): All-in-one screen recorder and video editor by TechSmith. Function: asynchronous teacher video feedback.

Screencast O-Matic (https://screencast-o-matic.com/): Free and easy-to-use screen recorder and video editor. Function: asynchronous teacher video feedback.

TechSmith (formerly Jing) (https://www.techsmith.com/jing-tool.html): Free screenshot and screen recording tool. Function: asynchronous teacher video feedback.

Snagit (https://www.techsmith.com/screen-capture.html): Screen capture and video recorder by TechSmith. Function: asynchronous teacher video feedback.

Tegrity Campus (https://www.mheducation.com/highered/connect/lecture-capture.html): Free video capture tool with the free Tegrity Mobile app, by McGraw-Hill. Function: asynchronous teacher video feedback.

Audacity (https://www.audacityteam.org/): Free open-source audio software; multi-track audio editor and recorder. Function: asynchronous teacher voice feedback.

Kaizena (https://www.kaizena.com/): Free Google Docs add-on for feedback. Function: asynchronous text and voice teacher feedback; rating students' writing on individual skills or through a self-designed rubric.

Turnitin GradeMark (https://www.turnitin.com/): Paper markup and rubric tool; can be embedded in a CMS (e.g., Blackboard, Canvas, D2L). Function: asynchronous text-based teacher feedback (markup tools, rubrics, and proofing tools), with an option of voice comments.

Zoom (https://zoom.us/): Cloud video conferencing tool. Function: synchronous teacher video feedback.

Skype (https://www.skype.com/en/): Free video conferencing tool. Function: synchronous teacher video feedback.
3 Computer-Mediated Teacher Feedback
Notes

1. I included in Table 3.1 the research questions and validity/reliability strategies, aspects that are often ignored in research reporting, in response to the call for methodological reform and the enhancement of study quality (e.g., Paquot & Plonsky, 2017; Plonsky, 2014).
2. The format of Table 3.2 is adapted from the Research Timeline articles published in Language Teaching: Surveys and Studies.
References

Amundrud, T. (2015). Individual feedback consultations in Japanese tertiary EFL: A systemic semiotic exploration. English Australia Journal, 30(2), 40–64.
Bakla, A. (2017, April 27–28). An overview of screencast feedback in L2 writing: Fad or the future? International Foreign Language Education and Turkish as a Foreign Language Education Symposium, Ankara University, TÖMER, Bursa.
Bakla, A. (2020). A mixed-methods study of feedback modes in EFL writing. Language Learning and Technology, 24(1), 107–128.
Biber, D., Connor, U., & Upton, T. A. (2007). Discourse on the move: Using corpus analysis to describe discourse structure. John Benjamins.
Chandler, J. (2003). The efficacy of various kinds of error feedback for improvement in the accuracy and fluency of L2 student writing. Journal of Second Language Writing, 12(3), 267–296.
Cheng, D., & Li, M. (2020). Screencast video feedback in online TESOL classes. Computers and Composition, 58, 102612.
Cunningham, K. J. (2019). Student perceptions and use of technology-mediated text and screencast feedback in ESL writing. Computers and Composition, 52, 222–241.
Ducate, L., & Arnold, D. (2012). Computer-mediated feedback: Effectiveness and students' perceptions of screen-casting software vs. the comment function. In G. Kessler, A. Oskoz, & I. Elola (Eds.), Technology across writing contexts and tasks. CALICO Monograph Series, 10, 31–55.
Elola, I., & Oskoz, A. (2016). Supporting second language writing using multimodal feedback. Foreign Language Annals, 49(1), 58–74.
Ene, E., & Upton, T. A. (2014). Learner uptake of teacher electronic feedback in ESL composition. System, 46, 80–95.
Ene, E., & Upton, T. A. (2018). Synchronous and asynchronous teacher electronic feedback and learner uptake in ESL composition. Journal of Second Language Writing, 41, 1–13.
Ene, E., & Upton, T. A. (2021). Effective teacher-student engagement in chats in ESL composition courses. CALICO Virtual Conference.
Ferris, D. R. (2003). Response to student writing: Implications for second language students. Routledge.
Ferris, D. R. (2011). Treatment of error in second language student writing (2nd ed.). The University of Michigan Press.
Ferris, D. R. (2014). Responding to student writing: Teachers' philosophies and practices. Assessing Writing, 19, 6–23.
Ho, M., & Savignon, S. J. (2007). Face-to-face and computer-mediated peer review in EFL writing. CALICO Journal, 24(2), 269–290.
Hyland, K., & Hyland, F. (2006). Feedback in second language writing: Contexts and issues. Cambridge University Press.
Kim, Y., Choi, B., Kang, S., Kim, B., & Yun, H. (2020). Comparing the effects of direct and indirect synchronous written corrective feedback: Learning outcomes and students' perceptions. Foreign Language Annals, 53(1), 176–199.
Lee, I. (2014). Revisiting teacher feedback in EFL writing from sociocultural perspectives. TESOL Quarterly, 48(1), 201–213.
Lee, I. (2017). Working hard or working smart: Comprehensive versus focused written corrective feedback in L2 academic contexts. In J. Bitchener, N. Storch, & R. Wette (Eds.), Teaching writing for academic purposes to multilingual students (pp. 168–180). Routledge.
Mahboob, A. (2015). Understanding and providing 'cohesive' and 'coherent' feedback on writing. Writing and Pedagogy, 7(2–3), 355–376.
Monteiro, K. (2014). An experimental study of corrective feedback during video-conferencing. Language Learning & Technology, 18(3), 56–79.
Paquot, M., & Plonsky, L. (2017). Quantitative research methods and study quality in learner corpus research. John Benjamins.
Plonsky, L. (2014). Study quality in quantitative L2 research (1990–2010): A methodological synthesis and call for reform. Modern Language Journal, 98, 450–470.
Sherafati, N., Largani, F., & Amini, S. (2020). Exploring the effect of computer-mediated teacher feedback on the writing achievement of Iranian EFL learners: Does motivation count? Education and Information Technologies, 25, 4591–4613.
Shintani, N., & Aubrey, S. (2016). The effectiveness of synchronous and asynchronous written corrective feedback on grammatical accuracy in a computer-mediated environment. The Modern Language Journal, 100(1), 296–319.
Swales, J. (1990). Genre analysis: English in academic and research settings. Cambridge University Press.
Tian, L., & Zhou, Y. (2020). Learner engagement with automated feedback, peer feedback, and teacher feedback in an online EFL writing context. System, 91, 102247.
Truscott, J. (2001). Selecting errors for selective error correction. Concentric: Studies in English Literature and Linguistics, 27(2), 93–108.
Zhang, Z., & Hyland, K. (2018). Student engagement with teacher and automated feedback on L2 writing. Assessing Writing, 36, 90–102.
4 Computer-Mediated Peer Response
Introduction

Peer response has attracted much attention from L2 instructors and researchers for decades, and there has been copious research in this domain (Hyland & Hyland, 2006; Liu & Edwards, 2018; Min, 2005; Villamil & Guerrero, 2006; Zhu & Mitchell, 2012). The development of computer technologies has provided researchers and instructors with a wider range of tools (e.g., Microsoft Word, chat, bulletin boards, blogs, and Turnitin PeerMark) for implementing peer review activities. Online peer feedback has shown advantages over face-to-face (F2F) feedback, including the facilitation of interactive textual exchange, the enhancement of student participation, and more revisions in students' writing products (Tuzi, 2004). Different computer-mediated communication (CMC) modes for peer response have shown distinctive features and benefits. Specifically, asynchronous CMC tools (e.g., Microsoft Word, Blackboard, blogs) afford out-of-class communication in any space and at any time, while synchronous CMC through instant messaging and chat tools features immediacy like F2F
communication and fosters more balanced interaction in a less anxiety-provoking environment than F2F communication (Liu & Edwards, 2018). This chapter discusses research and instructional practice on computer-mediated peer response (CMPR). It begins with the definition of CMPR and rationales for implementing CMPR. It then explains key texts in this domain of research, with important information presented in illustrative tables. After briefly pinpointing the research gap, the chapter addresses specific research directions and ends with teaching recommendations.
Defining Peer Response and Computer-Mediated Peer Response (CMPR)

Peer response, also known as peer feedback and peer review, refers to the activity during which students provide feedback on their peers' writing in the written/oral mode in pairs or small groups (Yu & Lee, 2016a; Zhu, 2001). Peer response is defined as "the use of learners as sources of information and interactants for each other in such a way that learners assume roles and responsibilities normally taken on by a formally trained teacher, tutor, or editor in commenting on and critiquing each other's drafts in both written and oral formats in the process of writing" (Liu & Edwards, 2018, p. 1). As Yu and Lee (2016a) reminded us, peer response addresses both the process of the peer review activity, as reflected in terms such as "peer feedback process" and "peer interaction", and its product, as reflected in terms such as "peer feedback" and "peer comments". With the increasing use of CMC modes for learning in the digital age, CMPR has received wide attention in L2 contexts. CMPR, also known as online peer review, refers to students providing feedback on their peers' writing using asynchronous CMC tools (e.g., MS Word, email, blogs, wikis, Turnitin PeerMark) and/or synchronous CMC tools (e.g., Google Docs, Skype).
Rationales for Implementing Peer Response

Previous research has underscored the importance and many benefits of peer response. It enhances a sense of audience, contributes to learner autonomy, encourages collaborative learning, and fosters ownership of text (Chen, 2012; Tsui & Ng, 2000). It also enables students to negotiate meaning and practice a wide range of language skills that facilitate their language development (Lockhart & Ng, 1995). After training on how to conduct peer feedback, students are capable of providing specific, constructive feedback on their peers' writing (Hedgcock & Lefkowitz, 1992; Min, 2006; Stanley, 1992; Zhu, 2001). Peers may give even better content feedback than teachers if students are paired based on their fields of study (Belcher, 1990). Due to these cognitive, social, and linguistic benefits, peer response has held considerable promise as a viable tool in writing instruction for decades (Min, 2006; Zhu, 2001). The development of computer technologies has brought forth a surge of interest in CMPR. Researchers (e.g., Guardado & Shi, 2007; Liu & Sadler, 2003; Tuzi, 2004) have reported multiple advantages of online peer review compared with face-to-face peer review. Online peer response creates a less threatening environment and thus fosters more participation for ESL students, especially those who are concerned about their spoken English proficiency (Warschauer, 2002) and those whose culture values listening and silence in traditional classrooms (Li & Li, 2017; Liu & Sadler, 2003). The CMC environment, particularly with the use of pseudonyms, not only encourages students to provide honest and critical comments, but also enables student writers to assess reviewers' feedback more objectively (Chang, 2016).
Key Texts

I conducted the literature search on CMPR in L2 contexts via Google Scholar using the keywords "peer response" or "peer feedback", "computer", and "L2", limiting the results to articles published from 2010 to 2020 in peer-refereed journals with a CiteScore higher than 2.0. Fourteen articles were subsequently selected
for illustration. This section begins with a holistic review of these selected articles, addressing context and participants, writing task and technology, theoretical framework, methodological approach, research questions, and validity/reliability strategy (see Table 4.1). Then the articles are further analyzed in terms of thematic categories to suggest the research foci during the last decade (see Table 4.2).
Overview

Table 4.1 summarizes the fourteen key studies from multiple aspects. As the table shows, these studies were predominantly conducted in tertiary-level ESL/EFL contexts. The technology tools used for peer review ranged from Microsoft Word, email, blogging tools (e.g., Blogger, Vox), wikis, and instant messaging tools to dedicated peer review systems (e.g., SWoRD, Turnitin PeerMark). Previous research was mainly informed by sociocultural theory and cognitive theoretical perspectives, and this decade witnessed the increasing adoption of the mixed-methods approach. To ensure research validity and reliability, most qualitative studies employed triangulation and thick description, and most mixed-methods or quantitative studies reported inter-rater reliability, measured by Cronbach's alpha or Cohen's kappa.
Table 4.1 Research matrix of computer-mediated peer response

Jin and Zhu (2010)
Context and participants: 11 ESL college students, USA
Writing task and technology: Multiple tasks of different genres (i.e., compare and contrast, exposition, summary-analysis, argumentation, problem-solution); Wink 2.0 (screen-motion capturing software); instant messaging
Theoretical framework: Sociocultural theory (Vygotsky, 1978); activity theory (Leont'ev, 1978, 1981)
Methodological approach: Qualitative case study
Research question: How does the use of instant messaging mediate ESL students' motives in their participation in CMPR tasks in an ESL academic writing class?
Validity and reliability strategy: Triangulation

Liang (2010)
Context and participants: 12 EFL sophomores at a university in Taiwan
Writing task and technology: Two writing tasks (i.e., a book review, a research paper presentation task); MSN Messenger
Theoretical framework: Sociocultural theory
Methodological approach: Longitudinal mixed-methods study
Research questions: 1. What are the different types of interaction in synchronous online L2 discourse? 2. How does synchronous online peer revision-related discourse facilitate subsequent writing and revision?
Validity and reliability strategy: Triangulation; intercoder agreement

Chang (2012)
Context and participants: 24 undergraduate non-English majors at a university in Taiwan
Writing task and technology: Three paragraph writing assignments (i.e., opinion, comparison and contrast, problem-solving); MSN; Blackboard
Theoretical framework: Sociocultural theory
Methodological approach: Mixed methods
Research questions: 1. How do students engage in peer review tasks via the three modes (face-to-face, synchronous, and asynchronous CMC)? 2. What comments are generated from the three modes and how do these comments benefit peer review? 3. How do students perceive the effectiveness of peer review via the three modes?
Validity and reliability strategy: Triangulation; inter-rater reliability

Chen (2012)
Context and participants: 67 first-year English majors at a university in Taiwan
Writing task and technology: Academic writing essays; weblog
Theoretical framework: Sociocultural theory; process writing
Methodological approach: Mixed methods; action research
Research question: What are the student reactions to a blog-based peer review approach to teaching English writing?
Validity and reliability strategy: Triangulation; reliability test of questionnaire (Cronbach's alpha)

Ciftci and Kocoglu (2012)
Context and participants: 30 first-year Turkish EFL students (English majors) at a university in Turkey
Writing task and technology: Four opinion essays on different topics; Blogger
Theoretical framework: Sociocultural theory
Methodological approach: Mixed methods (quasi-experimental design)
Research question: Does the use of blogs result in a significant difference in writing performance between EFL students who received online peer feedback through blogs and those who received face-to-face peer feedback in the traditional classroom?
Validity and reliability strategy: Triangulation; inter-rater reliability

Yang and Meng (2013)
Context and participants: 50 EFL students (non-English majors) at a university in Taiwan
Writing task and technology: A narrative essay; computer-supported collaborative learning (CSCL) system facilitating students' error correction on peer writing
Theoretical framework: Process writing; collaborative learning theory; sociocultural theory (e.g., Zone of Proximal Development); interactionist perspective
Methodological approach: Mixed methods
Research questions: 1. To what degree do college students' texts improve after online feedback training? 2. What differences are there between the more- and less-proficient students' revisions? 3. What are college students' perceptions towards online feedback training concerning text revision?
Validity and reliability strategy: Triangulation; inter-rater reliability

AbuSeileek and Abualsha'r (2014)
Context and participants: 64 EFL students at a university in Jordan
Writing task and technology: Timed writing task of a 150–200 word essay, an error-correction task; Microsoft Word
Theoretical framework: Sociocultural theory; communicative competence (Canale & Swain, 1980)
Methodological approach: Quantitative study
Research questions: 1. Do learners who receive computer-mediated corrective feedback from their peers perform significantly better in writing posttests? 2. Which type of computer-mediated corrective feedback is more effective for EFL learners' writing performance? 3. Which writing aspect is mainly developed by computer-mediated corrective feedback?
Validity and reliability strategy: Triangulation; inter-rater reliability

Bradley (2014)
Context and participants: 26 Master's students from an American university and a Swedish university
Writing task and technology: Technical writing from the Swedish students, business writing from the US students; Google Docs, Adobe Connect, Wikispaces
Theoretical framework: Zone of Proximal Development (Vygotsky, 1978); intercultural communicative competence
Methodological approach: Qualitative case study
Research questions: 1. What forms of peer review comments are given and received by the students? 2. What are the students' reflections on peer reviewing?
Validity and reliability strategy: Triangulation; thick description

Tsai and Kinginger (2014)
Context and participants: 14 ESL college students in the USA
Writing task and technology: Three course essays; web-based chatroom interface; Microsoft Word
Theoretical framework: Conversation analysis; politeness theory (Brown & Levinson, 1987)
Methodological approach: Qualitative study (conversation analysis approach)
Research question: How did advice givers and recipients manage the asymmetrical participant roles inherent in L2 peer response?
Validity and reliability strategy: Thick description

Pham and Usaha (2016)
Context and participants: 32 EFL students at a university in Vietnam
Writing task and technology: Cause/effect and process essays; Yahoo Mail; weblog
Theoretical framework: Sociocultural theory
Methodological approach: Mixed methods
Research questions: 1. Do students provide more comments on global than local areas? If so, are there any differences between the revision-oriented comments of the two areas? 2. What are the ratios of students' incorporation of blog-based peer comments into revision? And why do the student writers not incorporate some peer comments into revision?
Validity and reliability strategy: Triangulation; thick description; inter-rater reliability

Leijen (2017)
Context and participants: 43 EFL undergraduates at a university in Estonia
Writing task and technology: Argumentative academic text; web-based peer review system SWoRD (newly named Peerceptiv)
Theoretical framework: Zone of Proximal Development
Methodological approach: Mixed methods
Research question: What are the types and traits of feedback in the use of a web-based peer review system, and how does the feedback influence revisions made in subsequent drafts?
Validity and reliability strategy: Triangulation; inter-rater reliability

Li and Li (2017)
Context and participants: 26 first-year students (both ESL students and mainstream students), USA
Writing task and technology: Two writing tasks (i.e., summary and response paper, argumentative paper); Turnitin PeerMark embedded in D2L
Theoretical framework: Sociocultural theory
Methodological approach: Mixed methods
Research questions: 1. What different areas do the students comment on when they provide Turnitin-based peer feedback in the mainstream class and the ESL class? 2. How do mainstream students and ESL students perceive the use of Turnitin for peer review?
Validity and reliability strategy: Triangulation; coding cross-check

van den Bos and Tan (2019)
Context and participants: 114 EFL second-year students at a university in the Netherlands
Writing task and technology: Two five-paragraph argumentative essays (500 words on ethical business cases)
Theoretical framework: Zone of Proximal Development
Methodological approach: Quasi-experimental study
Research questions: 1. What is the effect of anonymous and non-anonymous online peer review on the different types of feedback? 2. What is the effect of the anonymous and non-anonymous online peer review and feedback types on students' revisions? 3. What is the effect of anonymous and non-anonymous online peer feedback and revisions on writing performance?
Validity and reliability strategy: Triangulation; inter-rater reliability

Zaccaron and Xhafaj (2020)
Context and participants: 24 EFL students at a university in Brazil
Writing task and technology: A 300–330-word argumentative essay
Theoretical framework: Process-oriented approach; interaction approach
Methodological approach: Mixed methods
Research questions: 1. Does the effectiveness of peer feedback differ when it is anonymous (and received solely through online asynchronous feedback) from when it is complemented with a peer conference? 2. How do students perceive anonymous and conference peer feedback?
Validity and reliability strategy: Triangulation; inter-rater reliability using Cronbach's alpha
Reference
Jin, L., & Zhu, W. (2010). Dynamic motives in ESL computer-mediated peer response. Computers and Composition, 27(4), 284–303
Liang, M. Y. (2010). Using synchronous online peer response groups in EFL writing: Revision-related discourse. Language Learning & Technology, 14(1), 45–64
Year
2010
2010
Jin and Zhu’s (2010) qualitative case study investigated how the use of instant messaging (IM) mediated student motives while participating in the CMPR task in an ESL academic writing class at an American university. Students conducted peer review regarding five essay assignments (i.e., compare and contrast, exposition, summary-analysis, argumentation, and problem-solution). Triangulated data sources were utilized, including (1) ethnographic survey, (2) on-screen behaviors, (3) beyond-screen behaviors, (4) multiple interviews, (5) researcher reflective journals, and (6) first and second drafts of papers. Results indicated that CMPR was mediated by dynamic motives, in relation to students’ language proficiency, familiarity with technology, interpersonal skills, and ability to comment on writing. The study cautioned us that CMPR using IM may create tension between participants, especially when they do not have skills necessary to perform CMPR tasks Liang (2010) examined EFL writers’ synchronous online interaction within three small groups in a Taiwanese undergraduate EFL writing class. The students submitted essays and provided feedback on their peers’ writing via MSN Messenger. Regarding the assignment of book review, the online chats involved low frequencies of meaning negotiation (i.e., comprehension checks, confirmation checks, and clarification requests), and error correction, with interactions primarily centered on content discussion, task management, and social talk. The students incorporated the majority of content discussions into their revisions. The small group, performing a Research Paper task, had similar chat pattern; however, they exhibited variations as to the incorporation of online discourse into revisions. Liang attributed the differences to group makeups and dynamics
Annotation
Table 4.2 Research timeline of computer-mediated peer response Theme
A.2 B.1 B.2 C
C F
60 M. Li
Reference
Chang, C. F. (2012). Peer review via three modes in an EFL writing course. Computers and Composition, 29(1), 63–78
Chen, K. T. C. (2012). Blog-based peer reviewing in EFL writing classrooms for Chinese speakers. Computers and Composition, 29(4), 280
Year
2012
2012
Annotation
Theme
A.3 D
A.1 A.4 D F
(continued)
Chang (2012) adopted a mixed methods design to examine how F2F, synchronous CMC and asynchronous CMC modes influenced peer review. Two CMC tools (i.e., MSN and Blackboard) were implemented for synchronous/asynchronous peer review. Results indicated that the modality influenced task engagement and comment categories and led to different student perceptions of peer review. Instead of a single mode of peer review, Chang (2012) showed that different modes in the multiple-draft process could accommodate individual preferences, and permutations of review modes at different writing stages and training for specific modes of peer review may have positive impact on CMPR process Chen (2012) investigated Taiwanese EFL learners’ experiences of blog-based peer reviewing in an undergraduate Academic English Writing course. Students’ perceptions of the web-based peer review activities were explored through a Likert-scale questionnaire survey and reflective writing. Students reported that they enhanced their writing behaviors while increasing interactions with their writing partners. From the students’ perspectives, weblog, as a convenient tool, facilitated their revising process and fostered their reflective thinking. Blogs also helped them alleviate task-related stress and build communication/learning confidence
4 Computer-Mediated Peer Response
61
Reference
Ciftci, H., & Kocoglu, Z. (2012). Effects of peer e-feedback on Turkish EFL students’ writing performance. Journal of Educational Computing Research, 46(1), 61–84
Yang, Y. F., & Meng, W. T. (2013). The effects of online feedback training on students’ text revision. Language Learning & Technology, 17(2), 220–238
Year
2012
2013
Table 4.2 (continued)
B.1 D E
Theme A.1 B.1 D
Annotation Ciftci and Kocoglu (2012) investigated the effect of peer e-feedback on Turkish EFL students’ writing performance and their perceptions of conducting peer review using blogs. The control group of the study completed in-class writing activities and utilized F2F peer feedback, while the experimental (blog-based) group attended class in the computer lab and conducted blog-based feedback. Data included (a) students’ background survey, (b) interviews conducted at the beginning and end of the term, (c) first and revised drafts, and (d) end-of-semester questionnaire. The statistical analysis (i.e., ANOVA) of writing products revealed that both groups made improvement in their revised drafts, and the experimental group showed greater improvement. The interview and questionnaires revealed students’ positive perceptions of using blogs for peer review; in particular, they believed that creating a website added authenticity of the task and enabled them to foster their agency Yang and Meng (2013) examined the impact of online feedback training on EFL college students’ text revisions. The students enrolled in a writing program at a Taiwanese university were grouped into the more- proficient and less-proficient cohorts. For one CMPR task, the students did not receive training, and for the other, they received training. The results showed that more-proficient students made little writing progress reflected in small changes in pre-test vs. post-test scores. This cohort of students lacked trust in their peers’ text comments even if they had received online feedback training. In contrast, less-proficient students reported their enhanced ability to detect and correct their peers’ errors. Regarding revisions, less-proficient students improved the quality of their text revisions, i.e., correcting both local and global errors. In addition, both groups concurred that immediate feedback on local and global errors was beneficial for clarification of writing issues
62 M. Li
Reference
AbuSeileek, A., & Abualsha’r, A. (2014). Using peer computer-mediated corrective feedback to support EFL learners’ writing. Language Learning & Technology, 18(1), 76–95.
Bradley, L. (2014). Peer reviewing in an intercultural wiki environment-student interaction and reflections. Computers and Composition, 34, 80–95
Year
2014
2014
(continued)
A.3 D
Theme B.1 B.2
Annotation AbuSeileek and Abualsha’r (2014) examined the effect of computer-mediated corrective feedback on EFL students’ writing performance. The control group of the study received no feedback, and the three treatment groups were “track changes”, recast feedback (via comment bubbles), and metalinguistic feedback (via comment bubbles). Over the course of the 8-week treatments, the results showed that students receiving computer-mediated corrective feedback achieved better overall test scores than those who received no corrective feedback. The “track changes” feedback type tended to have the greatest positive impact on students’ writing development, reflected in improvement in multiple writing aspects) Bradley’s (2014) case study explored the peer review activity in which non-native speakers partnered with native speakers in an intercultural writing project using wikis in an ESP course at a Swedish university. Drawing on the analyses of peer comments and post-task interviews, the study revealed that EFL students developed and refined texts while gaining communicative competence in the asynchronous web-based writing environment. Despite initial reluctance, the students became comfortable with peer feedback, and the feedback showed a high variation in terms of revision-oriented comments. Overall, the students enhanced their intercultural communication skills and critical awareness through the intercultural peer review activity
4 Computer-Mediated Peer Response
63
Reference
Tsai, M. H., & Kinginger, C. (2014). Giving and receiving advice in computer-mediated peer response activities. CALICO Journal, 32(1), 82–112
Pham, V. P. H., & Usaha, S. (2016). Blog-based peer response for L2 writing revision. Computer Assisted Language Learning, 29(4), 724–748
Year
2014
2016
Table 4.2 (continued)
A.3 B.1 E
Theme A.1 C D
Annotation Tsai and Kinginger (2014) applied a conversational analysis approach to closely examine the nature of peer advice from ESL students in a writing course at an American university. They specifically examined the ways in which the parties (giving/receiving feedback) managed interactions and relationships when unbalanced knowledge occurred. Online sessions of CMPR were adopted in combination with F2F peer review. After examining their peer’s essays in F2F sessions, the students logged on to a web-based chatroom developed by the university, and continued to interact using the message and dialogue box. The results showed that compared with F2F, asynchronous CMC peer review allowed time for students to consider their responses, which in turn resulted in more thorough feedback focusing primarily on content-related items. Students were more willing to communicate in the CMC mode, but they were able to mitigate face-threatening acts in relation to advice giving during F2F sessions. Students also found the intercultural aspects to be beneficial in developing multicultural social skills In their research on blog-based peer responses for L2 writing revisions, Pham and Usaha (2016) examined the nature of blog-based peer comments as well as the students’ uptake of suggestions in their revisions. The Vietnamese EFL college students in an academic writing class posted their writings to blogs, provided feedback on their classmates’ writing, and then revised their own drafts based on peer feedback. Worthy of note, the students were trained on blog-based peer response prior to peer review sessions. The analyses of writing drafts, peer comments, revisions, and in-depth interviews revealed that after receiving training, the students were able to provide more global comments, and they also made significant revisions to their own essays
64 M. Li
Table 4.2 (continued)

Reference: Leijen, D. A. (2017). A novel approach to examine the impact of web-based peer review on the revisions of L2 writers. Computers and Composition, 43, 35–54.
Year: 2017
Themes: A.3, B.1
Annotation: Leijen (2017) examined the impact of web-based peer review on the revisions of L2 learners in a study conducted with EFL students at a university in Estonia. The author employed a novel approach to examining peer review comments through the peer review platform SWoRD (now called Peerceptiv). Based on peer feedback, the students revised their papers and submitted them back to SWoRD, which distributed the papers to the same peers for final review. SWoRD was an innovative tool in that it determines the accuracy of student ratings by separating out different kinds of bias in grading; its algorithms ensure that peer reviews are of high quality and that peer grades correlate with those from the instructor (Kaufman & Schunn, 2011). The results suggested that some SWoRD features, such as alteration (feedback pointing to a specific change) and recurring (other peers referring to the same/similar aspect in their feedback), are more likely to lead to revisions than the other features in this model.

Reference: Li, M., & Li, J. (2017). Online peer review using Turnitin in first-year writing classes. Computers and Composition, 46, 21–38.
Year: 2017
Themes: A.3, D
Annotation: Li and Li (2017) examined students' writing feedback using Turnitin PeerMark in freshman composition classes in the US. Turnitin PeerMark features three functions: commenting tools, composition marks (symbols for, e.g., spelling errors, word choice, citation, and run-on sentences), and PeerMark questions, which allow instructors to devise assignment-specific guiding questions. Both the non-ESL and ESL students were found to provide mostly revision-oriented feedback. Contrary to previous research that found online feedback less effective than oral feedback for global issues and the development of critical reading skills, PeerMark feedback was effective for revisions in both global and local areas. PeerMark functions scaffolded students in evaluating multiple writing areas, resulting in more thorough feedback. In terms of student perceptions, positive attitudes were found towards Turnitin peer feedback, with unanimous agreement on the ease and effectiveness of PeerMark. Students in both groups, in addition, supported the anonymity of reviewers and writers in the peer review process.
4 Computer-Mediated Peer Response
65
Table 4.2 (continued)

Reference: van den Bos, A. H., & Tan, E. (2019). Effects of anonymity on online peer review in second language writing. Computers & Education, 10, 36–38.
Year: 2019
Themes: A.3, B.1, F
Annotation: van den Bos and Tan (2019) conducted an empirical study with Dutch second-year university students and investigated the effect of anonymity on peer feedback type and revisions through online anonymous peer review and non-anonymous peer review. Feedback types were classified as directive, non-directive, higher-order concern, and lower-order concern, and students' revisions were classified as processed, partly processed, and not processed. The results showed that the anonymous feedback mode led to more feedback occurrences than the non-anonymous mode, featuring significantly more occurrences of both directive and non-directive higher-order concerns (e.g., content and organization development). Although the overall adoption rate for revisions did not differ significantly between the two conditions, the students in the anonymous condition processed significantly more higher-order feedback and achieved higher scores than their non-anonymous peers.

Reference: Zaccaron, R., & Puntel Xhafaj, D. C. (2020). Knowing me, knowing you: A comparative study on the effects of anonymous and conference peer feedback on the writing of learners of English as an additional language. System, 95(4), 102367.
Year: 2020
Themes: A.1, B.1, D, F
Annotation: Zaccaron and Xhafaj (2020) compared the effectiveness and perceptions of anonymous CMC and F2F peer feedback on the writing of EFL students at a Brazilian university. Two groups (i.e., anonymous asynchronous peer feedback and F2F peer conference) were assigned. The writing tasks involved first drafts, peer feedback, and revised drafts of an argumentative essay. The results showed that the anonymous CMC group outperformed the F2F group in the percentage of valid suggestions, while uptake of valid suggestions was similar. The students in the anonymous group preferred giving feedback over receiving feedback. They agreed that peer feedback was beneficial in exploring their own linguistic gaps; in particular, anonymity allowed more opportunities for them to provide higher-order feedback, which led to higher writing scores.
Thematic Categories

To provide a rough timeline of the empirical studies that reflects the foci of investigation, I present these articles chronologically with respective annotations and research themes (see Table 4.2). The themes are categorized as follows:

A. Different modes of peer feedback
   1. Comparison of electronic/online peer feedback and F2F peer feedback
   2. Synchronous CMC peer feedback
   3. Asynchronous CMC peer feedback
   4. Comparison of synchronous CMC and asynchronous CMC peer feedback
B. Impact of computer-mediated peer response on students' writing
   1. Impact on revisions
   2. Impact on writing development
C. Peer interaction and group dynamics
D. Students' perceptions
E. Peer feedback training
F. Factors influencing computer-mediated peer response
As Table 4.2 shows, quite a few studies (e.g., Chang, 2012) discussed and compared different modes of peer feedback (Category A). Comparing online feedback with F2F feedback, researchers identified that the online review group made significantly more comments than the F2F group, which can be attributed to the heightened sense of responsibility that comes with the high visibility enabled by technology tools (Sengupta, 2001). Chang (2012), in a mixed-methods design, examined how F2F, synchronous CMC, and asynchronous CMC modes influenced peer review. Two CMC tools (i.e., MSN and Blackboard) were implemented for synchronous/asynchronous peer review of student essays. Results indicated that the F2F and asynchronous modes generated higher percentages of on-task episodes than the synchronous mode;
the synchronous and asynchronous modes generated higher percentages of off-task episodes than the F2F mode. In terms of peer comments, the asynchronous CMC mode produced a higher percentage of revision-oriented comments than the other modes because students had more time to think about the writing, thus generating more specific constructive feedback, particularly in local areas. Multiple quantitative studies were conducted to examine the impact of CMPR on revisions and writing development (Category B). For instance, AbuSeileek and Abualsha'r (2014) reported that students receiving computer-mediated corrective feedback achieved better scores than those who did not receive feedback. Similarly, Pham and Usaha's (2016) study revealed that through training, the Vietnamese EFL students were able to provide more global comments and made significant revisions to their own essays. Such a quantitative approach to examining the effect of CMPR remains needed in future research. Moreover, a few previous studies explored peer interaction and group dynamics during online peer review (Category C). Liang (2010) explored EFL students' interaction during peer review in the synchronous CMC mode and found that the major types of interaction were social talk, task management, and content discussion, with occasional error correction. The study revealed that group dynamics influenced students' subsequent revisions. Tsai and Kinginger (2014) explored the interactions between advice givers and receivers during peer review through a conversation analysis approach. Findings showed that the advice recipients oriented to the activity by initiating advice-seeking requests, and the advisors adopted the "institutional" role of evaluating and advising. Interestingly, the participants worked together to implement face-saving strategies in order to mitigate the face-threatening aspects of advice giving.
This pragmatic approach illuminates our understanding of how peers establish social relationships while providing writing feedback. Other prior research reported students' mixed perceptions of CMPR (Category D). Chen (2012) implemented blog-based peer review among EFL freshmen in Taiwan, and the analyses of students' reflective essays and post-task questionnaires revealed the students' preference for reading on computers: in their view, the blogs made it easy to comment on classmates' writing, and they benefited from the paired
peer review process. Moreover, Li and Li (2017) implemented online peer review in American first-year composition classes using Turnitin PeerMark embedded in the CMS (i.e., Desire2Learn). The students commented on the ease and effectiveness of Turnitin PeerMark as a peer review tool and appreciated distinctive features of PeerMark that facilitated their peer review activities. Specifically, they found that PeerMark questions functioned as guidelines for commenting, that composition marks were handy to use and helped identify grammatical errors, and that the commenting tools allowed them to provide feedback on multiple aspects without messing up the paper. In contrast to students' perspectives, research on teachers' perceptions of implementing CMPR is rather scarce and deserves due attention in future research agendas. As in the literature on F2F peer response (Min, 2005, 2006), training on CMPR has received increasing attention from researchers and instructors, as trained peer review was found to have a positive impact on text quality (Liou & Peng, 2009). Regarding this research strand (Category E), Yang and Meng's (2013) study extended this line of inquiry to examine the effects of training in relation to students' language proficiency. The results revealed that more-proficient students made little improvement between pre-test and post-test writing scores compared with less-proficient students. The open-ended questionnaire indicated that more-proficient students lacked trust in their peers' text revisions even though they had received the CMPR training. In contrast, less-proficient students reported their enhanced ability to detect and correct their peers' errors and to improve the overall quality of their text revisions.
Although little previous research has specifically investigated mediating factors of CMPR (Category F), quite a few studies addressed multiple factors influencing the process of CMPR, including peer response modes (Chang, 2012), group composition (Liang, 2010), assignment types, and learners' experiences (Jin & Zhu, 2010). Chang (2012) examined the effects of three different communication modes (i.e., F2F, synchronous CMC, and asynchronous CMC) and found that the modality influenced task engagement and comment categories and led to different student perceptions of peer review. Also, a few studies (e.g., van den Bos & Tan, 2019; Zaccaron & Xhafaj, 2020) reported the positive role of anonymity
that asynchronous CMPR afforded. Zaccaron and Xhafaj (2020) investigated the effectiveness of asynchronous peer feedback compared with F2F peer conferences and found that the anonymous CMC group provided a higher percentage of valid suggestions than the F2F group and that anonymity allowed more opportunities for students to provide higher-order feedback. Moreover, previous studies revealed that students' individual factors mediated CMPR, such as language proficiency (Jin & Zhu, 2010; Yang & Meng, 2013) and familiarity with technology (Jin & Zhu, 2010). For example, Jin and Zhu (2010) explored the dynamic motives of two ESL students within and across CMPR tasks and found that prior and current experience of using Instant Messenger (IM) in peer response tasks influenced the students' motive formation and shift. In their study, because of relatively poor typing skills, one student failed to complete the first synchronous peer review task, leaving the impression of an incompetent IM user and partner. Switching his motive from improving writing skills to maintaining a good-student image, he practiced typing diligently outside class and consequently had a smoother experience of CMPR in the following peer feedback session. Given the scarcity of this research strand, researchers are encouraged to thoroughly examine the various factors mediating CMPR.
Research Directions

Due to the wide acknowledgement of its benefits, CMPR has become a common practice in L2 classrooms. Based on the current body of literature, this section discusses the research gaps and future directions. The research synthesis discussed earlier in this chapter has shown that studies on CMPR have been conducted in a wide range of undergraduate-level ESL/EFL contexts, including America, Asia, and Europe; however, studies in other settings are lacking. Future research can be extended to foreign language classes other than English (e.g., Spanish, German, Arabic, Chinese, and Japanese) as well as to the secondary and graduate levels, to explore the benefits and effects of CMPR in broad instructional contexts. More specifically, peer review among students with different language proficiency levels, particularly those with low proficiency
(e.g., Yu & Lee, 2016b), needs to be further examined. In terms of research approaches, experimental studies with large sample sizes are particularly welcome given their scarcity in the current literature. Multiple research lines await our further exploration. In response to inconclusive findings about the role of different modes of peer response (Category A), the first task is to further examine the affordances of different modes and technology tools for peer review. For instance, future research can investigate how the functions of new technology platforms and diverse communication modes can realize the full potential of peer review, and how the different modes of CMPR can be best combined to maximize the benefits of peer feedback (Yu & Lee, 2016a). Such inquiries will help L2 teachers better implement CMPR in their instructional contexts. Regarding the impact of CMPR on student writing (Category B), longitudinal studies that investigate the long-term effects of CMPR on students' writing development are strongly encouraged; they would deepen our understanding of how engaging in feedback processing leads to tangible language learning outcomes in the long run. Future research should also further explore peer interactions and group dynamics, which have been fully examined in the F2F peer response literature (e.g., Lockhart & Ng, 1995; Villamil & Guerrero, 2006; Zhu, 2001) but remain largely unexamined in the CMPR task environment (Category C). Moreover, perception data in previous studies were mainly derived from students (Category D); the teacher role/perspective has been largely unexplored in the available research. Future research needs to probe into teachers' attitudes, beliefs, and competence regarding the implementation of CMPR (Yu & Lee, 2016a). A theme pertinent to the teacher role is peer feedback training (Category E); the role of teachers in training students for CMPR and implementing CMPR deserves further examination.
An effective approach is for teachers to conduct action research on how to train their students for CMPR, in which teachers, as essential agents in language teaching and learning, identify a problem in their own classes, develop and implement a plan to address the problem, and evaluate and reflect on the results. Studies that investigate mediating factors of CMPR (Category F) are also needed. Culture (e.g., Nelson & Carson, 2006; Zhang, 1995),
students' individual factors (e.g., Goldstein, 2006), and group composition (Zhu, 2001) have been examined in F2F peer review research; if and how such factors mediate CMPR processes deserves further investigation. Also, given the mediating role of languages, future research can explore the use of L1 versus L2 in CMPR, in the hope that we can better understand the optimal uses of languages to facilitate peer review in CMC contexts. Furthermore, how to assess online peer response seems to have been ignored in this domain of research. Pressing questions include: What aspects need to be assessed in CMPR? What rubrics can be developed to not only assess students' online peer review behaviors but also guide the entire peer review process? Only in such ways can CMPR practices become true learning processes.
Teaching Recommendations

With ubiquitous technology in the digital age, CMPR has gradually become the norm in L2 writing classes. To motivate future classroom practices of CMPR, I discuss below research-informed pedagogical recommendations regarding modes/technology selection, training, and follow-up activities.
Selection of Modes and Technology

Previous research has explored peer response in asynchronous CMC, synchronous CMC, and mixed modes (i.e., CMC and F2F) and identified the benefits and constraints of each mode. Due to time/space independence, asynchronous CMC allows sufficient time for more thoughtful peer comments and caters to individual needs and learning styles (Liu & Edwards, 2018). Anonymity, particularly when combined with asynchronous CMC, enables students to make more critical and honest comments. Also, the asynchronous mode is relatively easy to manage when multiple peer response tasks are assigned to each student, in that peer response can be conducted smoothly at students' own pace. However, peer response in the asynchronous CMC mode is largely one-way communication, and
it lacks the two-way interaction that is important especially when ideas need to be clarified and exchanged. Synchronous CMC, on the other hand, allows students to discuss peer feedback without being physically together; it has been found to enhance motivation and group cohesion. Nevertheless, the synchronous CMC mode may lead to more "impression" comments than text-specific comments, and the comments may not be thorough under time pressure (Liu & Edwards, 2018). Also, turn-taking can become problematic, especially when multiple students conduct peer response within a group. Teachers should utilize different modes for peer response at different stages of the writing process based on students' needs, task requirements, and work/time efficiency. For instance, based on the students' perceptions revealed in previous studies (e.g., Ho & Savignon, 2007; Liu & Sadler, 2003), teachers can arrange for asynchronous CMC for peer commenting first and then synchronous CMC or F2F for discussion of peer comments. Mixed modes of peer response may be most effective and beneficial (Chen, 2016). Multiple technology tools have been incorporated into peer response tasks in the past decade, including asynchronous CMC tools (e.g., Microsoft Word, blogs, Turnitin PeerMark) and synchronous CMC tools (e.g., MSN Messenger, Google Docs). Table 4.3 shows relevant information about some representative digital tools available for CMPR.
Training on Computer-Mediated Peer Response

Previous research (e.g., Berg, 1999; Liou & Peng, 2009; Min, 2005, 2006; Rahimi, 2013; Yang & Meng, 2013; Zhu, 1995) has shown the importance of training for peer review in both F2F and CMC settings. Training on peer response led to better student engagement and more specific and constructive feedback, which in turn resulted in better quality writing (Berg, 1999; Min, 2006; Rahimi, 2013; Zhu, 1995). I discuss below how to train students on CMPR. On the one hand, the training strategies for F2F peer review discussed in previous studies (e.g., Berg, 1999; Liu & Sadler, 2003; Rollinson, 2005) are applicable to CMPR. Teachers need to ensure students' clear
Table 4.3 Representative technologies for computer-mediated peer response

Digital technology | Website link | Function
Blogger | https://www.blogger.com/about/ | Free website builder, online content management system
Wordpress | https://wordpress.com/ | Free website builder
Weebly | https://www.weebly.com/ | Free website builder
Wix | https://www.wix.com/ | Free website builder
Turnitin PeerMark | https://help.turnitin.com/feedback-studio/turnitin-website/student/peermark/about-peermark.htm | Online peer review platform; technology partner with CMS (e.g., Blackboard, Canvas, D2L)
Peerceptiv (former SWoRD) | https://peerceptiv.com/ | Online peer review platform; technology partner with CMS (e.g., Blackboard, Canvas)
Peergrade | https://www.peergrade.io/ | Online peer review platform
Facebook | https://www.facebook.com/ | Free social networking tool which can be used for peer response
WeChat (Moments function) | https://www.wechat.com/ | A relatively new free social networking tool which has potential for peer response
understanding of why and how they are engaging in peer response, and to provide guiding questions for peer response. Teachers also need to provide mini lessons (preferably assisted with relevant videos) on how to conduct effective peer response before students start peer review. On the other hand, teachers need to model how to use the features of a particular technology platform for CMPR tasks, and then organize a pilot session in which students utilize all the features of the assigned technology to provide peer feedback, with the teacher monitoring students’ use of the technology (Liu & Edwards, 2018). Additionally, due to the features of CMC modes, teachers can encourage students to use pragmatic cues, such as emojis, effectively to facilitate discussion/interaction.
Follow-Up Activities

Teachers should organize some follow-up activities after CMPR (Liu & Edwards, 2018). Several options may be considered. Peer reviewers can discuss their comments with the writer; if anonymity is used during the CMPR process, the identities of feedback givers/receivers can be revealed in the follow-up stage. Students can also have an individual conference with the instructor to discuss the peer feedback, either F2F or via video conferencing tools (e.g., Zoom, Google Classroom, Skype). Importantly, students should be required to revise their papers based on peer feedback. To ensure full consideration of peer feedback, students can be asked to make a list of comments that they did not incorporate into their revision and briefly explain why. Also, reviewers can be invited to read the writer's final draft to see the effect of their review comments on student revisions. After reviewing peer comments and student revisions, instructors are encouraged to provide additional feedback so as to further assist students' writing development.
References

AbuSeileek, A., & Abualsha'r, A. (2014). Using peer computer-mediated corrective feedback to support EFL learners' writing. Language Learning & Technology, 18(1), 76–95.
Belcher, D. (1990). Peer vs. teacher response in the advanced composition class. Issues in Writing, 2(2), 128–150.
Berg, E. C. (1999). The effects of trained peer response on ESL students' revision types and writing quality. Journal of Second Language Writing, 8(3), 215–241.
Chang, C. (2012). Peer review via three modes in an EFL writing course. Computers and Composition, 29, 63–78.
Chang, C. Y. (2016). Two decades of research in L2 peer review. Journal of Writing Research, 8(1), 81–117.
Chen, K. T. C. (2012). Blog-based peer reviewing in EFL writing classrooms for Chinese speakers. Computers and Composition, 29(4), 280.
Chen, T. (2016). Technology-supported peer feedback in ESL/EFL writing classes: A research synthesis. Computer Assisted Language Learning, 29(2), 365–397.
Ciftci, H., & Kocoglu, Z. (2012). Effects of peer e-feedback on Turkish EFL students' writing performance. Journal of Educational Computing Research, 46(1), 61–84.
Goldstein, L. (2006). Feedback and revision in second language writing: Contextual, teacher, and student variables. In K. Hyland & F. Hyland (Eds.), Feedback in second language writing: Contexts and issues (pp. 185–205). Cambridge University Press.
Guardado, M., & Shi, L. (2007). ESL students' experiences of online peer feedback. Computers and Composition, 24, 443–461.
Hedgcock, J., & Lefkowitz, N. (1992). Collaborative oral/aural revision in foreign language writing instruction. Journal of Second Language Writing, 1, 255–276.
Ho, M., & Savignon, S. (2007). Face-to-face and computer-mediated peer review in EFL writing. CALICO Journal, 24(2), 269–290.
Hyland, K., & Hyland, F. (2006). Feedback on second language students' writing. Language Teaching, 39(2), 83–101.
Jin, L., & Zhu, W. (2010). Dynamic motives in ESL computer-mediated peer response. Computers and Composition, 27(4), 284–303.
Li, M., & Li, J. (2017). Online peer review using Turnitin in first-year writing classes. Computers and Composition, 46, 21–38.
Liang, M. Y. (2010). Using synchronous online peer response groups in EFL writing: Revision-related discourse. Language Learning & Technology, 14(1), 45–64.
Liou, H., & Peng, Z. (2009). Training effects on computer-mediated peer review. System, 37, 514–525.
Liu, J., & Edwards, J. G. H. (2018). Peer response in second language writing classrooms (2nd ed.). University of Michigan Press.
Liu, J., & Sadler, R. W. (2003). The effect and affect of peer review in electronic versus traditional modes on L2 writing. Journal of English for Academic Purposes, 2, 193–227.
Lockhart, C., & Ng, P. (1995). Analyzing talk in peer response groups: Stances, functions, and content. Language Learning, 45, 605–655.
Min, H. (2005). Training students to become successful peer reviewers. System, 33, 293–308.
Min, H. (2006). The effects of trained peer response on EFL students' revision types and writing quality. Journal of Second Language Writing, 15, 118–141.
Nelson, G. L., & Carson, J. G. (2006). Cultural issues in peer response: Revisiting 'culture'. In K. Hyland & F. Hyland (Eds.), Feedback in second language writing: Contexts and issues (pp. 42–59). Cambridge University Press.
Pham, V. P. H., & Usaha, S. (2016). Blog-based peer response for L2 writing revision. Computer Assisted Language Learning, 29(4), 724–748.
Rahimi, M. (2013). Is training student reviewers worth its while? A study of how training influences the quality of students' feedback and writing. Language Teaching Research, 17(1), 67–89.
Rollinson, P. (2005). Using peer feedback in the ESL writing class. ELT Journal, 8, 183–204.
Sengupta, S. (2001). Exchanging ideas with peers in network-based classrooms: An aid or a pain? Language Learning & Technology, 5(1), 103–134.
Stanley, J. (1992). Coaching student writers to be effective peer evaluators. Journal of Second Language Writing, 1, 217–233.
Tsai, M., & Kinginger, C. (2014). Giving and receiving advice in computer-mediated peer response activities. CALICO Journal, 32(1), 82–112.
Tsui, A., & Ng, M. (2000). Do secondary L2 writers benefit from peer comments? Journal of Second Language Writing, 9(2), 147–170.
Tuzi, F. (2004). The impact of e-feedback on the revisions of L2 writers in an academic writing course. Computers and Composition, 21, 217–235.
van den Bos, A. H., & Tan, E. (2019). Effects of anonymity on online peer review in second language writing. Computers & Education, 10, 36–38.
Villamil, O. S., & de Guerrero, M. C. M. (2006). Sociocultural theory: A framework for understanding the social-cognitive dimensions of peer feedback. In K. Hyland & F. Hyland (Eds.), Feedback in second language writing (pp. 105–122). Cambridge University Press.
Warschauer, M. (2002). Networking into academic discourse. Journal of English for Academic Purposes, 1(1), 45–58.
Yang, Y., & Meng, W. (2013). The effects of online feedback training on students' text revision. Language Learning & Technology, 17(2), 220–238.
Yu, S., & Lee, I. (2016a). Peer feedback in second language writing (2005–2014). Language Teaching, 49(4), 461–493.
Yu, S., & Lee, I. (2016b). Understanding the role of learners with low English language proficiency in peer feedback of second language writing. TESOL Quarterly, 50(2), 483–494.
Zaccaron, R., & Xhafaj, D. (2020). Knowing me, knowing you: A comparative study on the effects of anonymous and conference peer feedback on the writing of learners of English as an additional language. System, 95(4), 102367.
Zhang, S. (1995). Reexamining the affective advantage of peer feedback in the ESL writing class. Journal of Second Language Writing, 4, 209–222.
Zhu, W. (1995). Effects of training for peer response on students' comments and interaction. Written Communication, 12(4), 492–528.
Zhu, W. (2001). Interaction and feedback in mixed peer response groups. Journal of Second Language Writing, 10, 251–276.
Zhu, W., & Mitchell, D. A. (2012). Participation in peer response as activity: An examination of peer response stances from an activity theory perspective. TESOL Quarterly, 46(2), 362–386.
5 Digital Multimodal Composing
Introduction

Due to multimodal realities and radical changes in digital environments, multimodal writing practice has attracted instructors' and researchers' wide attention (Li & Akoto, 2021). Writing, reconceptualized as multimodal composing, provides rich opportunities for writers to deploy multiple resources (e.g., linguistic, visual, audio, gestural, and spatial) to make meaning, construct knowledge, and express self-identity (Belcher, 2017; Li & Storch, 2017). Multiliteracies (New London Group, 1996) has been proposed to present a broader view of literacy pedagogy, responding to "the multiplicity of communication channels and increasing cultural and linguistic diversity" (p. 1). Students' authentic writing in out-of-school contexts has become increasingly multimodal during this decade, involving the deployment of multiple semiotic resources (Hyland, 2016). To echo this trend, students are now engaged in digital multimodal composing (DMC) projects in L2 classrooms, such as blogging, digital storytelling, and video-making. In particular, recent years have witnessed a surge of research interest in DMC with the advancement of digital technologies and more multisemiotic digital input in L2 learners' lives
(Belcher, 2017; Hafner, 2014; Street et al., 2011). Research syntheses and research agendas on this topic have subsequently appeared (e.g., Li & Akoto, 2021; Lim & Kessler, 2021; Zhang et al., 2021), stimulating increasing interest in this area. This chapter discusses research and instructional practice on digital multimodal composing (DMC). It first provides a definition of DMC and rationales for implementing DMC activities, and then explains key texts in this domain of research, with important information presented in illustrative tables. The research gap is then addressed based on a synthesis of the main research strands. The chapter ends with recommendations for future research and pedagogy.
Defining Digital Multimodal Composing (DMC)

To understand DMC, we need to first understand the term "multimodality" (Kress, 2003, 2010). Simply put, multimodality refers to the use of different modes (i.e., textual, aural, linguistic, spatial, and visual) for communication and meaning making (Kress, 2003). Kress (2010) interpreted multimodality as the design that organizes diverse modes into a multimodal "ensemble," namely a "plurality of signs in different modes into a particular configuration to form a coherent arrangement" (p. 162). In twenty-first-century social and cultural contexts, meanings are increasingly represented and communicated multimodally with images, sounds, space, and movement (Kress, 2010). The paradigm shift in writing representation moves from the logic of the page to the logic of the screen (Kress, 2003), which coincides with the visual turn in writing studies: seeing texts as visual and treating images as texts (Purdy, 2014). Computer-based digital technologies have afforded us new and easy access to multimodal communication. Miller-Cochran (2017), however, reminds us that multimodal does not always mean digital. Multimodal composing entails expanding the range of communicative options through new genres to new audiences. Therefore, moving a text to a digital space does not necessarily change the function, purpose, and audience of the writing and thus does not necessarily create a new genre, which multimodal composing entails. In light of this viewpoint, DMC
5 Digital Multimodal Composing
81
is defined as a new literacy practice in which students draw on digital technologies to create multimodal products that integrate texts, images, sound, movement, video, and/or hypertext and that address new audiences through new genres (Li & Akoto, 2021).
Rationales for DMC

Given the necessity of reading and writing multimodal texts in contemporary communication (Yi et al., 2020), DMC is now deemed an essential component of L2 writing classrooms, as text-based communication alone cannot adequately support students' writing in the rhetorical situations they encounter in the digital age. DMC supports writing as a discovery process of ideas and forms, a means of self-expression and reflection, and an effective way of communicating (Belcher, 2017). Connecting multilingualism and multimodality, Fraiberg (2010) argues that languages and modalities are both critical writing resources that L2 learners draw upon in the writing class. Incorporating multimodality into L2 writing tasks encourages a focus on process, "as various modes of expression can contribute to invention, drafting, and remixing" (Miller-Cochran, 2017, p. 89). DMC activities have been reported to provide multiple opportunities for L2 learners' learning and growth. First, such activities develop students' critical thinking and digital literacy skills. They enable students not only to acquire technology-mediated literacy skills but also to enhance their awareness of the communicative purpose to "build a community of like-minded individuals and to use that community for professional and personal development" (Dudeney & Hockly, 2016, p. 117). DMC particularly fosters learner autonomy in that it encourages students to take responsibility for, monitor, and reflect on their learning (Hafner, 2014; Jiang & Luk, 2016). Specifically, owing to the authenticity of multimodal tasks and their tangible real-world audiences, students are highly motivated, practice the L2 independently, and search for information to accomplish task goals while monitoring their own learning process (Hafner & Miller, 2011).
82
M. Li
DMC also opens up new identities for L2 learners, including struggling writers. It allows writers to draw on their unique linguistic repertoires as well as non-linguistic alternatives to fully convey meaning and showcase their knowledge of heritage language and culture (Godwin-Jones, 2018; Jiang et al., 2020; Smith et al., 2017). While the classroom environment may give rise to L2 students' sense of exclusion or marginalization, the CMC context can enable them to use and practice English more freely, help them develop a sense of belonging and connectedness to a global English-speaking community, and thereby enhance their self-esteem (Lam, 2000; Smith et al., 2017). For struggling migrant L2 writers, DMC in a supportive environment allows them to share their personal backgrounds, showcase knowledge of their heritage culture and language, and take on identities as productive students (Godwin-Jones, 2018; Smith et al., 2017; Vandommele et al., 2017). Moreover, DMC practices such as blogging and digital storytelling have been found to facilitate L2 learners' writing development (e.g., Bloch, 2007; Oskoz & Elola, 2016a; Vandommele et al., 2017). Previous research has reported L2 students' linguistic and rhetorical gains through digital storytelling projects, including better use of vocabulary, grammar, and writing conventions (e.g., Dzekoe, 2017; Oskoz & Elola, 2016a).
Key Texts

To zoom in on DMC research in L2 contexts, I searched the Google Scholar database using the keywords "multimodal composing" or "multimodal composition" together with "L2," limiting the results to articles published from 2010 to 2020 in journals with a CiteScore higher than 2.0. Sixteen articles were subsequently selected for illustration in this section. As I noted earlier, I do not aim to provide a comprehensive review of the current body of literature, but the selected articles cover a wide range of research themes, and the synthesis of these studies suggests recent trends in research practices. The section begins with an overview of the reported studies, including context and participants, writing task and technology, theoretical framework,
methodological approach, and validity/reliability strategy (as illustrated in Table 5.1). Then, the sixteen articles are further analyzed in terms of thematic categories (see Table 5.2), based on which main research strands are discussed.
Overview

Table 5.1 shows that DMC practices have been implemented in diverse learning contexts: the participants ranged from EFL university and secondary school students in Asia and the Americas and ESL middle school and college students in the USA to tertiary students of Spanish and Dutch as second languages in America and Europe, respectively. However, studies in ESL/EFL settings still dominate the current body of literature. The technologies used for DMC ranged from video creation software (e.g., MovieMaker) and multimodal presentation software (e.g., PowerPoint) to multimodal product-sharing platforms (e.g., YouTube, Edublogs). These studies were informed by theoretical frameworks across multiple disciplines, such as literacy studies (e.g., multimodality, multiliteracies, digital literacies), social semiotics (e.g., synaesthesia, transformation, transduction), linguistics (e.g., systemic functional grammar), and L2 acquisition (e.g., motivation, identity, investment). Moreover, the qualitative research approach dominated, and only a few studies employed quantitative or mixed-methods approaches. To ensure validity and reliability, most qualitative studies adopted triangulation, member checking, and thick description, and most quantitative studies reported inter-rater reliability.
Table 5.1 Research matrix of digital multimodal composing

Yi and Hirvela (2010)
Context and participants: 1 high-school 1.5-generation (Korean immigrant) adolescent female student, secondary level, USA
Writing task and technology: Out-of-school composing; self-sponsored writing activities. Online diary software Cyworld; blog software Xanga
Theoretical framework: New literacy studies (e.g., multiple literacies and multimodality); affinity identity (Gee, 2002)
Methodological approach: Qualitative case study
Research questions: 1. What kinds of self-sponsored writing were preferred by a 1.5-generation adolescent? 2. What purposes motivated her self-sponsored writing activity? 3. What was the role of computers and other technology in her self-sponsored writing?
Validity and reliability strategies: Triangulation; member checking

Yang (2012)
Context and participants: 2 EFL undergraduate students, tertiary level, Taiwan
Writing task and technology: Individual multimodal digital storytelling projects. CyberLink PowerDirector, MovieMaker/iMovie
Theoretical framework: Social semiotic multimodal approach; design, transformation, and transduction (Kress, 2003)
Methodological approach: Qualitative study
Research questions: 1. How do English language learners approach the design and development of their digital stories? 2. How do they construct hybrid texts to deliver their messages? 3. How do they assign meanings to the objects and artifacts used in their digital stories?
Validity and reliability strategies: Thick description

Hafner (2014)
Context and participants: 67 science students from an ESP course, tertiary level, Hong Kong
Writing task and technology: Collaborative digital video project (multimodal scientific documentary). DV camera and editing software; YouTube; Edublogs
Theoretical framework: New literacy studies; situated learning (Gee, 2004; Lave & Wenger, 1991)
Methodological approach: Qualitative study
Research questions: 1. What rhetorical challenges do students perceive in this task? 2. What multimodal rhetorical strategies do they draw on in response? 3. What is the role played by language and other modes in this task?
Validity and reliability strategies: Triangulation; thick description

Hafner (2015)
Context and participants: 52 science students from an ESP course, tertiary level, Hong Kong
Writing task and technology: Collaborative digital video project (multimodal scientific documentary). DV camera and editing software; YouTube
Theoretical framework: Digital literacies (Jones & Hafner, 2012)
Methodological approach: Qualitative study
Research questions: 1. How is the practice of remix evidenced in the multimodal compositions of English language learners? 2. How does remix either promote or compromise the expression of learner voice?
Validity and reliability strategies: Thick description

Oskoz and Elola (2016)
Context and participants: 6 Spanish FL students from a writing course, tertiary level, USA
Writing task and technology: Digital story based on a narrative. Video-editing tool: Final Cut
Theoretical framework: Activity theory (Leontiev, 1978; Engeström, 1987); transformation and transduction (Kress, 2003, 2009)
Methodological approach: Qualitative study
Research questions: 1. How is learners' understanding of the object (digital stories; DSs) mediated by available tools and artifacts? 2. How do learners perceive the use of tools and artifacts as they move from goal-oriented short-term actions to long-term object-oriented activity? 3. What linguistic reorientation takes place when learners move from academic writing to the creation of DSs?
Validity and reliability strategies: Triangulation

Jiang and Luk (2016)
Context and participants: 21 undergraduate students and 5 teachers at an EFL course, tertiary level, China
Writing task and technology: Video projects (e.g., documentaries, TV programs) on a range of topics. Technology not discussed specifically
Theoretical framework: Multiliteracies (New London Group, 1996); motivation theory (Malone & Lepper, 1987)
Methodological approach: Qualitative study
Research question: What did students and teachers perceive as the factors that make multimodal composing motivating for English learning?
Validity and reliability strategies: Triangulation; thick description

Dzekoe (2017)
Context and participants: 22 advanced-low ESL undergraduates, tertiary level, USA
Writing task and technology: Composing written drafts, creating a digital poster, and listening to written texts for revisions. Google Docs, Glogster, NaturalReader
Theoretical framework: Multimodality; noticing hypothesis (Schmidt, 1990); multidimensional model of revision (Stevenson et al., 2006)
Methodological approach: Mixed-methods study
Research question: To what extent do revisions prompted by computer-based multimodal composing activities (CBMCAs) facilitate advanced-low ESL writers' ability to notice gaps in their written drafts and improve the quality of their writing?
Validity and reliability strategies: Member checking; inter-rater reliability; bias clarification

Smith et al. (2017)
Context and participants: 3 bilingual eighth-grade students, secondary level, USA
Writing task and technology: "My Hero" multimodal project. PowerPoint
Theoretical framework: Translanguaging (Canagarajah, 2012); social semiotics (Halliday, 1978)
Methodological approach: Qualitative comparative case study
Research questions: 1. What are eighth-grade bilingual students' multimodal code-meshing processes? 2. How do bilingual students use their heritage languages during the multimodal code-meshing process?
Validity and reliability strategies: Triangulation; member checking

Vandommele et al. (2017)
Context and participants: 52 beginning learners of Dutch as a second language, tertiary level, Belgium
Writing task and technology: Website design; video projects posted to the websites. Technology not discussed specifically
Theoretical framework: Multimodality; task-based learning
Methodological approach: Quantitative study
Research question: Does the incorporation of meaningful multimodal writing activities in in-school and out-of-school settings differentially influence the development of writing skills by adolescent L2 learners of Dutch of low proficiency levels? If so, in what ways?
Validity and reliability strategies: Inter-rater reliability (Krippendorff's α)

Chen (2018)
Context and participants: 46 EFL undergraduate students, tertiary level, Taiwan
Writing task and technology: Video creation on the topic of digital empathy. Video-editing tool; Facebook
Theoretical framework: Multimodality; digital literacy
Methodological approach: Mixed methods
Research questions: 1. How do students perceive the experience of producing a video to advocate digital literacy? 2. What do students learn from producing their videos?
Validity and reliability strategies: Not reported

Yeh (2018)
Context and participants: 69 EFL undergraduate students, tertiary level, Taiwan
Writing task and technology: Producing digital videos and writing reflections on the video-creation process. PowerPoint (video embedded)
Theoretical framework: Multimodality (Kress, 2003); multiliteracies (The New London Group, 1996)
Methodological approach: Qualitative study
Research question: How did the process of producing multimodal videos benefit students' multiliteracies?
Validity and reliability strategies: Triangulation; thick description

Jiang et al. (2019)
Context and participants: 5 teachers of EFL, tertiary level, China
Writing task and technology: Producing videos to reflect knowledge of curricular content. Video creating and editing software, Corel VideoStudio
Theoretical framework: Multimodality (Kress, 2000, 2010)
Methodological approach: Qualitative study (multiple-case study)
Research questions: 1. How did teachers engage with a DMC program that was integrated in their conventional EFL curriculum over one academic year? 2. What factors, if any, influenced teacher engagement with the DMC program?
Validity and reliability strategies: Triangulation; member checking; thick description

Hafner and Ho (2020)
Context and participants: 7 teachers of EAP, tertiary level, China
Writing task and technology: Composing a digital video scientific documentary. Technology not discussed specifically
Theoretical framework: Multimodality (Kress, 2000); multiliteracies (New London Group, 1996)
Methodological approach: Qualitative study
Research questions: 1. What criteria do teachers apply in assessing multimodal compositions? 2. What practical issues and challenges do they perceive in this process?
Validity and reliability strategies: Thick description

Shin et al. (2020)
Context and participants: 1 sixth-grade ESL student, secondary level, USA
Writing task and technology: Two tasks: multimodal expository and argumentative texts. PowerPoint, Glogster, and Edmodo
Theoretical framework: Systemic functional approach to multimodal discourse analysis (SF-MDA) (Jewitt et al., 2016; Kress, 2003; Kress & van Leeuwen, 2001)
Methodological approach: Qualitative study
Research questions: 1. How did an L2 learner orchestrate language and images into multimodal texts with multimedia tools? 2. How did the learner use intermodal relations between language and image, and how did he develop the metalanguage of the semiotic systems?
Validity and reliability strategies: Member checking; triangulation

Kim and Belcher (2020)
Context and participants: 18 EFL undergraduate students, tertiary level, Korea
Writing task and technology: Two tasks: one traditional essay and one multimodal text. Technology not discussed specifically
Theoretical framework: Multiliteracies (The New London Group, 1996)
Methodological approach: Qualitative study
Research questions: 1. Are there any syntactic complexity and accuracy differences in Korean EFL students' writing for traditional essay and DMMC tasks? 2. How do Korean EFL students perceive the helpfulness of traditional writing and DMMC for their own development as L2 writers?
Validity and reliability strategies: Inter-coder reliability

Yang et al. (2020)
Context and participants: 54 EFL middle school students, secondary level, Taiwan
Writing task and technology: Collaborative multimodal digital storytelling projects. Prezi, Google Drive
Theoretical framework: Multiliteracies (The New London Group, 1996); Presentation-Practice-Production (PPP) (Larsen-Freeman, 2015)
Methodological approach: Quasi-experimental study
Research questions: 1. Will students who learn through DST-based instruction demonstrate greater improvement of English speaking in comparison with those who learn through PPP-based instruction? 2. Will students who learn through DST-based instruction demonstrate greater improvement of creative thinking in comparison with those who learn through PPP-based instruction?
Validity and reliability strategies: Inter-rater reliability
Table 5.2 Research timeline of digital multimodal composing

Reference: Yi, Y., & Hirvela, A. (2010). Technology and "self-sponsored" writing: A case study of a Korean-American adolescent. Computers and Composition, 27(2), 94–111.
Themes: A.1, B.1, B.3
Annotation: This study makes a case for out-of-class self-sponsored writing as an authentic literacy practice that should be encouraged and integrated into formal in-class writing instruction. Yi and Hirvela sought to better understand the ways in which a Korean-American teenager engaged in literacy practices by exploring her out-of-school self-sponsored writing activities. The authors found that the participant actively participated in web-based writing environments as a means of connecting to multiple audiences. Moreover, self-sponsored writing also allowed her to better understand and express her biliterate-bicultural identity.

Reference: Yang, Y. F. (2012). Multimodal composing in digital storytelling. Computers and Composition, 29(3), 221–238.
Themes: A.2
Annotation: Yang's study sought to understand Taiwanese EFL students' crafting process in digital storytelling by examining how they designed digital stories and assigned meanings to objects and artifacts. She found that the students employed various multimodal resources such as voice narration, written texts, still images, background music, animated texts, and special effects. She argued that the orchestration of multimodal resources was guided by authorial intents, such as expressing emotional stances, demonstrating voice, and enhancing audience attention and comprehension.

Reference: Hafner, C. (2014). Embedding digital literacies in English language teaching: Students' digital video projects as multimodal ensembles. TESOL Quarterly, 48(4), 655–685.
Themes: A.2, B.2
Annotation: This study reported the implementation of digital literacies practice in an ESP class in Hong Kong, in which students produced multimodal scientific documentaries shared through YouTube with a general audience of non-specialists. Based on interviews, student comments, and the documentaries, the researcher found that the students, despite their lack of prior experience, met the challenge of writing for an authentic audience by orchestrating semiotic resources to develop an effective rhetorical "hook" and appropriate discoursal identities (e.g., scientist, reporter, traveler) to capture their audience's attention.

Reference: Hafner, C. (2015). Remix culture and English language teaching: The expression of learner voice in digital multimodal compositions. TESOL Quarterly, 49(3), 486–509.
Themes: A.1, A.2
Annotation: This innovative study explored L2 students' cultural remixing and voice expression in a collaborative digital video project. The results showed that remix culture was reflected in different forms: mixing sources (chunking); mixing modes (layering); mixing genres (blending); and mixing cultural resources (intercultural blending). Remix can have both positive and negative effects on students' identity expression. While some students were able to appropriate cultural resources by positioning themselves as reporters, secret agents, and so on in their projects, others struggled to remix materials in coherent ways. The author therefore stressed the importance of providing students with clear instructions and training on how to integrate remix practices into their multimodal projects.

Reference: Jiang, L., & Luk, J. (2016). Multimodal composing as a learning activity in English classrooms: Inquiring into the sources of its motivational capacity. System, 59, 1–11.
Themes: B.1, B.2, B.3
Annotation: Jiang and Luk's study built on previous work by Malone and Lepper (1987) and investigated the motivating power of multimodal composing for Chinese EFL students' language learning. Drawing on semi-structured individual interviews and written reflections, the researchers identified several perceived motivating factors, including challenge, curiosity, (language) control, fantasy, cooperation, competition, and recognition (from a wide audience). The study offered pedagogical implications for taking advantage of the motivating factors that DMC affords.

Reference: Oskoz, A., & Elola, I. (2016). Digital stories: Bringing multimodal texts to the Spanish writing classroom. ReCALL, 326–342.
Themes: A.2, B.3
Annotation: This study drew on concepts from social semiotics (i.e., transformation and transduction) and activity theory (i.e., tools and artifacts) to examine Spanish FL learners' perceptions of DMC. The task invited students to transform and transduce traditional academic writing into digital stories. Based on analyses of students' reflections, questionnaires, and online journals, the researchers found that students displayed a profound awareness of the power of semiotic resources and purposefully integrated images, sounds, and music to bring their texts to life. Moreover, the students showed an understanding of the use of tools and artifacts and strategically reoriented their actions to adapt to the new genre. They made specific linguistic and stylistic choices, such as varying tenses and reducing lexical connectors, to ensure the impact of their message.

Reference: Dzekoe, R. (2017). Computer-based multimodal composing activities, self-revision, and L2 acquisition through writing. Language Learning & Technology, 21(2), 73–95.
Themes: A.2, D.1
Annotation: This mixed-methods study explored the ways in which multimodal composing facilitates L2 learners' self-revision, gap noticing, and language learning through writing. Students first wrote an expository essay and an argumentative essay, and then created a digital poster online via Glogster. Based on data from students' revision histories, the online multimodal posters, reflections, surveys, screen recordings of listening activities, and stimulated recall interviews, the researcher found that multimodal composing had a positive effect on the overall quality of students' texts. In particular, the integration of multiple modes helped them make more content-level revisions.

Reference: Smith, B. E., Pacheco, M., & de Almeida, C. (2017). Multimodal code-meshing: Bilingual adolescents' processes composing across modes and languages. Journal of Second Language Writing, 36, 6–22.
Themes: A.1, A.2
Annotation: Smith et al.'s study explored the processes by which American middle school ESL students composed across languages and modalities, which the researchers called multimodal code-meshing. Using an analytical tool named multimodal code-meshing timescapes, they uncovered the multilevel iterative process that students used to construct meaning, including image search, image design, text typing and revision, text design, audio search, audio remix, voice recording, voice remix, transitions, heritage language use, informational internet search, and project review and sharing. The researchers specifically highlighted multiple purposes of using the heritage language in students' multimodal writing projects, namely searching for/accessing information, translating to clarify content, interacting with peers, and engaging with potential readers and establishing voice.

Reference: Vandommele, G., Van den Branden, K., Van Gorp, K., & De Maeyer, S. (2017). In-school and out-of-school multimodal writing as an L2 writing resource for beginner learners of Dutch. Journal of Second Language Writing, 36, 23–36.
Themes: D.1
Annotation: This quantitative study investigated Dutch-as-a-second-language students' writing development via multimodal writing activities in both in-school and out-of-school contexts. The in-school students worked in groups to design a website, with multimodal tasks such as video-based interviews and digital photo comics; the out-of-school students also designed a website, with multimodal tasks such as a documentary featuring newcomers in Antwerp. The results showed that multimodal composition tasks in both learning contexts led to L2 writing development, gauged by multiple writing measures (e.g., complexity, lexical diversity, and text length).

Reference: Chen, C. W. Y. (2018). Developing EFL students' digital empathy through video production. System, 77, 50–57.
Themes: B.1, B.2
Annotation: Chen's study examined Taiwanese EFL students' perceived affordances and constraints of a video production project and how the experience can enhance their awareness of digital empathy. The analysis of students' perceptions was based on a descriptive analysis of Likert-scale questions and a content analysis of responses to open-ended questions, supplemented with students' finished drafts and audio reflective comments. The results indicated that the students perceived the activity to be generally positive, and they displayed creativity and versatility when using videos to express their understanding of digital empathy. For example, some students narrated stories about victims of cyberbullying, which evoked compassion and empathy. However, some students were not successful in addressing the theme of empathy, which was attributed to the lack of more explicit training during the pre-production stage.

Reference: Yeh, H. C. (2018). Exploring the perceived benefits of the process of multimodal video making in developing multiliteracies. Language Learning & Technology, 22(2), 28–37.
Themes: B.1
Annotation: Yeh's study provides empirical support for the use of multimodal video production to cultivate L2 students' multiliteracies and awareness of semiotic resources. The study explored EFL students' perceptions of multimodal video making in a multimedia English class in which they were trained in the use of multimedia for communication and asked to produce a digital video using different modes. The results showed that the majority of the students had a positive learning experience, especially with regard to vocabulary, speaking, translation, and writing. The students also mentioned that the video production project helped them better understand their own culture and enhanced their multimedia skills.

Reference: Jiang, L., Yu, S., & Zhao, Y. (2019). Teacher engagement with digital multimodal composing in a Chinese tertiary EFL curriculum. Language Teaching Research, 1–20.
Themes: C.1
Annotation: This exploratory multiple-case study draws attention to L2 teachers' engagement with DMC within an EFL context in China. After descriptive analyses of data comprising semi-structured interviews, informal conversations, field notes, and classroom observations, three forms of teacher engagement with DMC were identified: incidental, ambivalent, and integral. In the first case, the teachers used DMC merely as a means of reinforcing the acquisition of linguistic skills and paid little or only incidental attention to students' use of semiotic resources. Although the teachers in the second case enjoyed incorporating DMC projects in their classes, they remained ambivalent and uncertain about the role of DMC in EFL learning. In contrast, the teacher in the third case used DMC as an integral part of her teaching praxis, focusing not only on students' language use but also on their strategic orchestration of multimodal resources. Overall, this study showed that teachers' engagement with DMC is largely influenced by internal factors, such as their conceptions of themselves, their students, and language learning, and external factors, such as a prescribed curriculum and high-stakes testing requirements.

Reference: Hafner, C., & Ho, W. (2020). Assessing digital multimodal composing in second language writing: Towards a process-based model. Journal of Second Language Writing, 47, 100710.
Themes: C.2
Annotation: This study is one of the first to propose a process-based model for assessing DMC projects. Hafner and Ho mainly focused on understanding the criteria by which L2 teachers assessed a DMC task (a digital video scientific documentary) and the practical issues and challenges they encountered in the process. Drawing on semi-structured interviews, the study revealed that teachers paid attention to seven main criteria when assessing DMC projects: (1) creativity and originality, (2) organization, (3) language, (4) delivery, (5) modal interaction, (6) variety, and (7) genre. The findings are in line with previous studies (e.g., Burnett et al., 2014) that noted similar themes. However, the teachers in the study reported issues and challenges they faced, such as the workload involved in assessing digital videos.

Reference: Shin, D. S., Cimasko, T., & Yi, Y. (2020). Development of metalanguage for multimodal composing: A case study of an L2 writer's design of multimedia texts. Journal of Second Language Writing, 47, 100714.
Themes: A.2, D.1
Annotation: Drawing on the systemic functional approach to multimodal discourse analysis (SF-MDA), this longitudinal ethnographic study explored a middle school ESL student's development of metalanguage alongside his multimodal composing processes and finished products. The findings showed that the student successfully created multimodal ensembles by employing both linguistic and visual modes as meaning-making resources for the DMC tasks. For example, in the expository essay, the student used language to ask questions and then used different images to respond to those questions. Overall, the researchers found that the student's metalanguage of multimodality developed throughout the process. Accordingly, they emphasized the need to teach students the metalanguage for talking about language, images, texts, and meaning-making interactions.

Reference: Kim, Y., & Belcher, D. (2020). Multimodal composing and traditional essays: Linguistic performance and learner perceptions. RELC Journal, 51(1), 86–100.
Themes: B.1, B.2, D.1
Annotation: In this small-scale exploratory study, Kim and Belcher implemented DMC practice with EFL students in Korea. They compared the students' digital multimodal compositions to traditional essay writing in terms of linguistic complexity and accuracy, and investigated the students' perceptions of the two tasks. The results showed that while the traditional writing task elicited more syntactically complex writing than the multimodal task, there was no statistically significant difference in accuracy. The study also reported the students' perceptions of the two tasks in terms of helpfulness, enjoyment, anxiety, attention to language form, motivation, and effectiveness. The majority of the students found the digital multimodal project more interesting; however, perceptions were mixed as to its helpfulness in improving writing skills.

Reference: Yang, Y. T. C., Chen, Y. C., & Hung, H. T. (2020). Digital storytelling as an interdisciplinary project to improve students' English speaking and creative thinking. Computer Assisted Language Learning, 1–23.
Themes: D.1, D.2
Annotation: This quasi-experimental study investigated the effects of two instructional methods, Digital Storytelling (DST) and Presentation-Practice-Production (PPP), on EFL students' speaking and creative thinking abilities. The experimental group, which learned through the DST-based approach, created collaborative digital stories featuring each group member's personal narratives, while the control group, which learned through the PPP-based approach, engaged in traditional reading activities and controlled practice of grammar and vocabulary. The results indicated that the students who received DST-based instruction improved more than their counterparts in the PPP-based group in terms of English speaking/L2 output and creative thinking, providing empirical support for the use of DST-related tasks in L2 classrooms.
Thematic Categories

To provide a rough timeline of the empirical studies that reflects their research foci, I discuss the sixteen articles chronologically, with respective annotations and research themes provided (see Table 5.2). The themes are categorized as follows:

A. DMC process
1. Expressing identity and voice
2. Orchestrating multiple semiotic resources

B. Students' perspectives
1. Benefits
2. Challenges
3. Mediating factors

C. Teachers' perspectives
1. Teacher investment and factors mediating engagement
2. Assessment

D. Learning outcome and development
1. Language/metalanguage development
2. Other learning-related development

As Table 5.2 shows, earlier studies (2010–2016) examined mainly the multimodal composing process and students' perceptions of DMC, whereas more recent studies (2017–2020) added the strand of learning outcomes and development. Regarding the DMC process (Category A), studies mainly reported the orchestration of multiple semiotic resources, as the multimodal approach to writing studies has enhanced researchers' awareness of the importance of semiotic resources other than language in meaning negotiation and construction (Shin et al., 2021). For instance, Hafner (2015) innovatively explored L2 students' remix culture practices as they conducted a collaborative digital video project. The remix culture took on different forms: mixing sources (so-called chunking); mixing modes (layering); mixing genres (blending); and
mixing cultural resources (intercultural blending). The study revealed that remix practices may promote or compromise the expression of learner voice during the writing process. Smith et al. (2017) described the process of L2 students' composing across languages and modalities, what they called a code-meshing process, in a multimodal presentation project. By specifically analyzing the code-meshing timescape, they uncovered the multilevel iterative process through which students constructed meaning. Moreover, they highlighted multiple purposes of using the heritage language in students' multimodal writing projects, namely searching for/accessing information, translating to clarify content, interacting with peers, and engaging with potential readers and establishing voice. Regarding students' perceptions (Category B), three main subcategories were derived: benefits, challenges, and mediating factors. For example, Yeh (2018) reported students' positive experience of DMC, especially with regard to the learning of language skills (e.g., vocabulary, speaking, and writing), better understanding of their own culture, and enhancement of their multimedia skills. Hafner (2014) specifically examined, via interviews and written reflections, students' perceptions of the challenges they encountered while working on digital video projects. In addition to the lack of prior experience, students perceived two rhetorical challenges: audience attention and multimodal orchestration. They found it challenging to employ a range of modes to develop an effective rhetorical "hook" and an appropriate discoursal identity (e.g., scientist, reporter, or traveler) so as to appeal to their audience.
Jiang and Luk (2016) investigated the motivating factors of DMC from the students' perspectives via semi-structured individual interviews and written reflections, and identified the positive affordances of DMC, including challenge, curiosity, (language) control, fantasy, cooperation, competition, and recognition (from a wide audience). Moreover, the research has gradually diversified to include inquiry into teachers' perspectives on DMC (Category C). Jiang et al. (2019) initially examined EFL teachers' investment and engagement with multiple DMC tasks. They found that teachers' individualized engagements were mediated by their conceptions of language and of instructor and learner roles, as well as by high-stakes testing regimes. Focusing on the under-explored area of assessment, Hafner and Ho (2020) studied
ESP teachers' perceptions of assessment of a digital video documentary, based on which the researchers proposed a process-based model for assessing DMC, integrating formative/summative strategies and the orchestration of multimodal affordances. Moreover, a few quantitative research studies addressed students' writing outcomes and development through DMC (Category D). For instance, Vandommele et al. (2017) explored L2 learners of Dutch and their writing development via multimodal writing activities. In the school context, students in groups designed a website introducing Flanders, completing multimodal tasks such as video-based interviews and digital photo comics; in the out-of-school context, students also designed a website, completing multimodal tasks such as documentaries featuring newcomers in Antwerp. The results showed that DMC tasks in both learning contexts led to L2 writing development, gauged by multiple writing measures, e.g., complexity, lexical diversity, and text length. More recently, Yang et al. (2020) compared two groups learning with different instructional strategies, and the results indicated that the students who received digital storytelling-based instruction showed more improvement than their counterparts in the Presentation-Practice-Production group in terms of both English speaking/L2 output and creative thinking skills.
Research Directions

Given the ubiquity of technology in the digital age, empirical research on DMC warrants expansion in the next decade. Based on the current body of literature, this section addresses research gaps and points to future research directions. Given the dominance of research conducted in ESL/EFL contexts, research needs to be extended to a wider range of foreign language learning settings (e.g., Spanish, Arabic, French, German, Chinese, and Japanese). Moreover, other research approaches seldom used in DMC research deserve researchers' attention, such as narrative inquiry to explore L2 students' stories of engaging in DMC practices outside school, and L2 teachers' action research conducted in their own classrooms with the aim of solving existing pedagogical
problems (Li & Akoto, 2021; Zhang et al., 2021). Also, longitudinal studies examining the long-term effects of DMC on L2 learning, which are scarce in the available research, are strongly encouraged (Li & Akoto, 2021). Furthermore, as the collaborative writing approach to digital multimodal texts has started to capture researchers' attention, future research on collaborative multimodal composing is highly welcome due to its potential for simultaneously developing L2 learners' collaborative skills and digital literacy skills (see Chapter 6 for details). More specifically, multiple research lines await further exploration. The effect of DMC on learning (Category D) is still underexplored in the current body of literature. Future research can examine how DMC, in either school or out-of-school contexts, facilitates writing/language development and digital learning. Of note, while language, as a crucial building block, remains our major concern in L2 classes, it has been cautioned that multimodal discourse might divert L2 students' attention from language to other modes of representation (Qu, 2017). Future studies can thus use cross-sectional designs to investigate L2 learners' possible gains in linguistic development through engaging in DMC tasks (Lim & Kessler, 2021; Zhang et al., 2021). For example, researchers can examine how writing/language development through DMC tasks compares with that through traditional monomodal writing activities. Would DMC tasks decrease the opportunities that monomodal writing environments provide for L2 learners to negotiate language problems and learn linguistic forms? Such inquiries may help us better understand the role of DMC and inform teachers' instructional decision-making on DMC.
Also, the available DMC research seems to be dominated by explorations of composing processes (Category A) and student perceptions (Category B) via exploratory qualitative approaches, while the analysis of DMC products remains scarce. Encouragingly, D'Angelo (2016) proposed a multimodal genre analysis drawing on the grammar of visual design (Kress & Van Leeuwen, 2006) and the metadiscourse framework (Hyland, 2005), in which five types of interactive resources used in academic posters were coded, namely information values, framing, collective elements, graphic elements, and fonts. This multimodal genre
analysis model can provide much insight into the analysis of products in new multimodal genres (e.g., academic posters and infographics). Another spotlight is on how teachers assess DMC (Category C). Assessment techniques and strategies geared toward DMC projects have recently caught L2 researchers' attention (Hafner & Ho, 2020; Hung et al., 2013). Given the lack of coordination between tasks and assessments, future research should develop and test rubrics (e.g., holistic scoring, analytical scoring, multiple-trait scoring) that suit DMC assignments and, meanwhile, accurately evaluate L2 students' learning and writing development. Departing from the element-based and process-based rubrics explored in previous research, Jiang et al.'s (ongoing) study innovatively proposed, empirically tested, and refined a genre-based assessment model entailing multiple layers (i.e., purpose, basic units, layout, navigation, and rhetoric), which is expected to guide teachers to evaluate DMC as purpose-directed and audience-oriented social action employing multiple semiotic resources. Moreover, teachers' perspectives (Category C) on implementing DMC practices deserve further investigation. Teachers' insights on task design and assessment will add to our knowledge of the feasibility and efficiency of DMC. Another needed, and little explored, area of inquiry is what training in-service and pre-service teachers should receive in order to implement DMC activities successfully (Yi et al., 2020). Such extended research will deepen our understanding of new perspectives on digital literacies and empower us to engage L2 students in twenty-first-century learning through DMC practices.
Teaching Recommendations

Incorporating multimodal writing helps expand the range of students' communication options through new genres addressed to new audiences (Miller-Cochran, 2017). DMC is becoming a common practice in L2 writing classes in the digital age. To encourage classroom practices of DMC, I
discuss below research-informed pedagogical recommendations in terms of writing tasks, technology tools, and assessment.
Design of DMC Tasks

To implement DMC activities in L2 classrooms, designing an authentic and effective multimodal task is an essential step. Multimodal composing tasks that have long been adopted in language classes include online book reviews, interactive posters, and movie trailers (Warschauer, 2006). A popular DMC genre in recent years is digital storytelling, in which students write the script, find images, select background music, and narrate the story while producing a 3–5-minute video (Godwin-Jones, 2018). Students can create digital stories to recount historical events, illustrate and explain a particular topic, or depict personal experiences (Lambert, 2010; Robin, 2008). For instance, as an identity text facilitated through digital technologies (Cummins & Early, 2011), digital storytelling connects to students' lives and communities, affirms their identities, and scaffolds meaning making by drawing on their heritage language and culture. This digital genre has also been found to contribute to students' L2 learning, including pronunciation, language production, and the syntactic complexity of writing (Oskoz & Elola, 2016a). As Belcher (2017) and Bloch (2007) posited, the goals of a digital storytelling assignment can align well with those of traditional writing genres (e.g., narrative, exposition, argumentation). Digital documentary is another genre that motivates students to write for a real-life audience while showcasing their learning of content knowledge (e.g., Hafner & Miller, 2011). Through digital documentary projects, students search for and critically evaluate online information, remix online texts, and create multimodal texts to inform audiences about a specialized topic in their disciplines.
In Hafner's (2014) innovative study of DMC in an ESP class, students collaboratively completed scientific documentary videos, and the DMC project fostered their communicative competence, enhanced their audience awareness, and enabled them to establish appropriate discoursal identities. Meanwhile, the scientific documentary task functioned effectively as a "motivational bridge" to
a traditional research writing task, a written lab report. Accordingly, instructors are encouraged to implement digital documentary tasks among students of relatively high language proficiency in ESP classes. Infographics have recently been found to be another effective DMC task in L2 writing classes (Maamuujav, Krishnan, & Collins, 2020). Infographics are visual representations designed to present information, data, or knowledge quickly and clearly (Krauss, 2012). They can be integrated into a process-based writing curriculum to support the writing development of multilingual students and to scaffold the cognitive demands L2 writers encounter while composing. As Maamuujav et al. (2020) reminded us, instructors need to take a series of pedagogical steps to implement infographics writing in their classes, including offering explicit instruction on how to produce effective infographics and how to use digital tools (e.g., Canva, Visme, Snappa) to create them, and providing students with opportunities to present their infographics to classmates and receive feedback. Of particular note, infographics, as a new digital writing genre, have been increasingly used in professional fields such as healthcare. Implementing authentic infographics tasks in ESP classes would better prepare students for their future professional careers. Furthermore, online fan practices, arguably a form of DMC, have been reported to develop L2 learners' literacy skills and empower their global and multilingual identities (e.g., Lam, 2000; Sauro, 2017; Thorne & Black, 2011). Developed from fan fiction and geared toward the needs of fan affinity groups, "fansubbing," namely the amateur subtitling of videos such as movies, TV shows, and anime (Sauro, 2017), has become a new fan practice that is beginning to capture our attention.
Although the task is closely related to translation, students need to reproduce dialogues, taking into consideration purpose, audience, coherence, and language use while watching the video and subtitling. With instructors' careful design and arrangement, fansubbing can be an effective digital writing task integrated into formal L2 classroom contexts. Instead of having students enter fan spaces and engage with actual fans, teachers are encouraged to draw on fan works as models and sources of inspiration for classroom activities (Sauro, 2017). In Talaván
et al.'s (2017) study, native speakers of Spanish collaboratively conducted a fan-subtitling project in which they wrote English subtitles for short videos adapted from a popular Spanish sitcom. Results revealed that the students praised the authenticity and creativity of the fansubbing task, and they also demonstrated remarkable improvements in writing skills, particularly in cohesion, coherence, and idea structuring. Although still in its infancy, fansubbing has great potential to support digital L2 writing practices.
Selection of Technology Tools

Multimodal design provides teachers with opportunities to enhance current language teaching strategies, such as integrated skills-oriented task-based instruction and learning (Belcher, 2017). By using digital technology tools in DMC, students integrate multiple modes and language skills, which is expected to optimize their communication opportunities and enhance their L2 acquisition. To take advantage of technology tools, instructors need to introduce technology options to students and have them compare the characteristics and functions of each tool so as to ensure that the selected digital tools align with the task design. For example, if peer collaboration is a required component of a digital video project, WeVideo is one of the optimal digital tools because it allows multiple users to create videos jointly. To take another example, if process writing and collaborative learning are essential components of a digital storytelling project, blogs, wikis, and/or Google Docs can be used, via which students share initial thoughts, conduct peer feedback, and jointly complete their stories. To enrich teachers' digital writing toolkit, Table 5.3 lists digital tools available for DMC projects, including digital storytelling, multimodal presentation, infographics, and video-making, with website links, key functions, and sample writing tasks.
Table 5.3 Representative technologies for digital multimodal composing

Storybird (https://storybird.com/). Function: artful storytelling. Sample task: digital storytelling.
StoryJumper (https://www.storyjumper.com/). Function: artful storytelling. Sample task: digital storytelling.
Glogster (http://edu.glogster.com/). Function: multimodal posters. Sample task: advertisement; research poster presentation.
Weebly (https://www.weebly.com/). Function: website builder and blogging. Sample task: webpage production; peer feedback.
WordPress (https://wordpress.com). Function: website builder and blogging. Sample task: webpage production; peer feedback.
VoiceThread (https://voicethread.com/). Function: web-based multimedia presentation. Sample task: video-making project.
WeVideo (https://www.wevideo.com). Function: collaborative web-based video maker and editor. Sample task: individual or collaborative video-making project; digital documentary.
PowToon (https://www.powtoon.com). Function: web-based animated video maker. Sample task: cartoons.
Aegisub (http://www.aegisub.org/). Function: subtitle creator and modifier. Sample task: movie/documentary subtitling project.
Visme (https://www.visme.co/). Function: graphic design/poster platform. Sample task: infographic project; poster presentation.
Snappa (https://snappa.com/). Function: graphic design platform. Sample task: infographic project; poster presentation.
Canva (https://www.canva.com/). Function: graphic design platform. Sample task: infographic project; poster presentation.
Adobe Spark (https://www.adobe.com/products/spark.html). Function: media creation application for mobile and web. Sample task: personal narrative; personal webpage.
Assessment of DMC

How to assess DMC projects is a critical and challenging question (Belcher, 2017; Yi et al., 2017). Previous studies (e.g., Tan et al., 2010; Yi & Choi,
2015) reported a lack of coordination between DMC activities and nondigital formal assessments in classroom settings. Rubrics for DMC projects should not assess merely the traditional aspects of writing; they also need to address the orchestration of multimodal affordances (Hafner & Ho, 2020) and the aesthetics of design (Li & Storch, 2017). Despite the scarcity of assessment literature, a few rubrics discussed in previous research deserve our attention. Informed by the New London Group's (1996) pedagogy of multiliteracies, Hung et al. (2013) developed a rubric for a DMC project conducted with EFL students, covering five design aspects: linguistic design, visual design, gestural design, auditory design, and spatial design. Example evaluation questions include: "Was the linguistic content structured in a logical and organized manner?", "Did the author carefully design the use of color and typography to reflect the selected visual theme?", and "Were the auditory elements used purposefully and meaningfully to complement or supplement the other design modes for meaning construction in a cohesive manner?" (Hung et al., 2013, p. 402). Hung et al.'s (2013) rubric can be a good starting point for analyzing various forms of multimodal texts, and the grading criteria can be adapted to suit practitioners' own instructional purposes. For instance, if language development is one of the main goals, linguistic forms should be taken into account, and a descriptor of language accuracy can be added to the category of linguistic design. VanKooten (2013), working with first-year composition students, developed an assessment model addressing both process and product, which involved students in assessment throughout the DMC project.
Three criteria were adopted: (1) purpose and audience; (2) multifaceted logic and layers of media (e.g., use of figurative devices, juxtaposition and collage, completion and reinforcement); and (3) rhetorical and technical features (e.g., ethos, pathos, and logos; fonts, color, and animation). VanKooten had students set initial functional and rhetorical goals at the beginning of the project and invited them to review and revise those goals, with reference to the assessment criteria, until they completed the project. The frequent assessment and reflection helped students develop critical thinking skills and digital literacies. Of note, VanKooten (2013) explicitly listed "citation and attribution" under the criterion of technical features, which enhanced
students' awareness of copyright issues and fair use, which is crucial for digital learning today. Hafner and Ho (2020) recently proposed a process-based model to assess L2 DMC. They created a grading rubric guiding instructors to assess students' digital video science documentaries, taking into account the orchestration of multimodal affordances. Specifically, they designed five bands (i.e., outstanding, good, satisfactory, marginal, failed) for each of three categories: organization and content (including creativity and originality), multimedia and visual effect, and language. Based on the instructors' insights revealed in interviews, the researchers developed a model of assessment applied to four stages of DMC: pre-design (e.g., notes, mindmaps), design (e.g., scripts, storyboards, film clips), sharing (i.e., multimodal ensembles), and reflecting (e.g., conversations with students, student reports). Moreover, as noted earlier in this chapter, the genre-based assessment model developed by Jiang et al. (ongoing) will provide a fresh lens on DMC assessment practice. Building on the previous literature, we may continue to reconceptualize assessment for DMC and design or adapt rubrics that reflect and promote students' learning of L2 and digital literacy skills. L2 practitioners can evaluate the above-mentioned assessment models and tailor them to their own instructional/curricular contexts. They are also encouraged to develop their own assessment rubrics for the specific DMC tasks implemented in their classes, after weighing different types of rubrics (e.g., holistic scoring, analytical scoring, multiple-trait scoring) and different approaches to assessing DMC (e.g., process-based, genre-based).
As Blair (2019) reminded us, instructors need to collaborate with students in developing formative assessment criteria; the rubric then functions both as a form of instructive evaluation that connects the use of technology to specific instructional goals and as a learning tool that helps students understand rhetorical contexts.
References

Belcher, D. (2017). On becoming facilitators of multimodal composing and digital design. Journal of Second Language Writing, 38, 80–85.
Blair, K. (2019). Teaching multimodal assignments in OWI contexts. In S. Khadka & J. C. Lee (Eds.), Bridging the multimodal gap: From theory to practice (pp. 471–491). The University Press of Colorado.
Bloch, J. (2007). Abdullah's blogging: A generation 1.5 student enters the blogosphere. Language Learning & Technology, 11(2), 128–141.
Cummins, J., & Early, M. (2011). Identity texts: The collaborative creation of power in multilingual schools. Trentham Books.
D'Angelo, L. (2016). Academic posters: A textual and visual metadiscourse analysis. Peter Lang.
Dudeney, G., & Hockly, N. (2016). Literacies, technology and language teaching. In F. Farr & L. Murray (Eds.), The Routledge handbook of language learning and technology (pp. 115–126). Routledge.
Dzekoe, R. (2017). Computer-based multimodal composing activities, self-revision, and L2 acquisition through writing. Language Learning & Technology, 21(2), 73–95.
Fraiberg, S. (2010). Composition 2.0: Toward a multilingual and multimodal framework. College Composition and Communication, 62(1), 100–126.
Godwin-Jones, R. (2018). Second language writing online: An update. Language Learning & Technology, 22(1), 1–15.
Hafner, C. (2014). Embedding digital literacies in English language teaching: Students' digital video projects as multimodal ensembles. TESOL Quarterly, 48(4), 655–685.
Hafner, C. (2015). Remix culture and English language teaching: The expression of learner voice in digital multimodal compositions. TESOL Quarterly, 49(3), 486–509. https://doi.org/10.1002/tesq.238
Hafner, C., & Ho, W. (2020). Assessing digital multimodal composing in second language writing: Towards a process-based model. Journal of Second Language Writing, 47, 1–14. https://doi.org/10.1016/j.jslw.2020.100710
Hafner, C., & Miller, L. (2011).
Fostering learner autonomy in English for science: A collaborative digital video project in a technological learning environment. Language Learning & Technology, 15(3), 68–86.
Hung, H., Chiu, Y., & Yeh, H. (2013). Multimodal assessment of and for learning: A theory-driven design rubric. British Journal of Educational Technology, 44, 400–409.
Hyland, K. (2005). Metadiscourse: Exploring interaction in writing. Continuum.
Hyland, K. (2016). Teaching and researching writing (3rd ed.). Routledge.
Jiang, L., & Luk, J. (2016). Multimodal composing as a learning activity in English classrooms: Inquiring into the sources of its motivational capacity. System, 59, 1–11.
Jiang, L., Yang, M., & Yu, S. (2020). Chinese ethnic minority students' investment in English learning empowered by digital multimodal composing. TESOL Quarterly, 54(4), 954–979.
Jiang, L., Yu, S., & Lee, I. (ongoing). Developing a genre-based model for assessing digital multimodal composing in second language writing: Integrating theory with practice. Journal of Second Language Writing (2022 special issue).
Krauss, J. (2012). Infographics: More than words can say. Learning and Leading with Technology, 39(5), 10–14.
Kress, G. (2003). Literacy in the new media age. Routledge.
Kress, G. (2010). Multimodality: A social semiotic approach to contemporary communication. Routledge.
Kress, G., & Van Leeuwen, T. (2006). Reading images: The grammar of visual design. Routledge.
Lam, W. S. E. (2000). L2 literacy and the design of the self: A study of a teenager writing on the internet. TESOL Quarterly, 34(3), 457–482.
Lambert, J. (2010). Digital storytelling cookbook. Digital Diner Press. https://wrd.as.uky.edu/sites/default/files/cookbook.pdf
Li, M., & Akoto, M. (2021). Review of recent research on L2 digital multimodal composing. International Journal of Computer-Assisted Language Learning and Teaching, 11(3), 1–16.
Li, M., & Storch, N. (2017). Second language writing in the age of CMC: Affordances, multimodality, and collaboration. Journal of Second Language Writing, 36, 1–5.
Lim, J., & Kessler, M. (2021). Expanding research agendas: SLA, writing, and multimodality. In R. M. Manchón & C. Polio (Eds.), The handbook of second language acquisition and writing. Routledge.
Maamuujav, U., Krishnan, J., & Collins, P. (2020).
The utility of infographics in L2 writing classes: A practical strategy to scaffold writing development. TESOL Journal, 11(2), e484.
Manchón, R. (2017). The potential impact of multimodal composition on language learning. Journal of Second Language Writing, 38, 94–95.
Miller-Cochran, S. (2017). Understanding multimodal composing in an L2 writing context. Journal of Second Language Writing, 38, 88–89.
New London Group. (1996). A pedagogy of multiliteracies: Designing social futures. Harvard Educational Review, 66(1), 60–92.
Oskoz, A., & Elola, I. (2016a). Digital stories: An overview. CALICO Journal, 33(2), 157–173.
Oskoz, A., & Elola, I. (2016b). Digital stories: Bringing multimodal texts to the Spanish writing classroom. ReCALL, 28(3), 326–342.
Palmeri, J. (2012). Remixing composition: A history of multimodal writing pedagogy. Southern Illinois University Press.
Purdy, J. (2014). What can design thinking offer writing studies? College Composition and Communication, 65(4), 612–641.
Qu, W. (2017). For L2 writers, it is always the problem of the language. Journal of Second Language Writing, 38, 92–93.
Robin, B. (2008). Digital storytelling: A powerful technology tool for the 21st century classroom. Theory into Practice, 47(3), 220–228.
Sauro, S. (2017). Online fan practices and CALL. CALICO Journal, 34(2), 131–146.
Shin, D., & Cimasko, T. (2008). Multimodal composition in a college ESL class: New tools, traditional norms. Computers and Composition, 25(4), 376–395.
Shin, D., Cimasko, T., & Yi, Y. (2021). Multimodal composing in K-16 ESL and EFL education: Multilingual perspectives. Springer.
Smith, B., Pacheco, M., & de Almeida, C. (2017). Multimodal code-meshing: Bilingual adolescents' processes composing across modes and languages. Journal of Second Language Writing, 36, 6–22.
Street, B., Pahl, K., & Rowsell, J. (2011). Multimodality and new literacy studies. In C. Jewitt (Ed.), The Routledge handbook of multimodal analysis (pp. 191–200). Routledge.
Talaván, N., Ibáñez, A., & Bárcena, E. (2017). Exploring collaborative reverse subtitling for the enhancement of written production activities in English as a second language. ReCALL, 29(1), 39–58.
Tan, L., Bopry, J., & Guo, L. (2010). Portraits of new literacies in two Singapore classrooms. RELC Journal, 41(1), 5–17.
Thorne, S. L., & Black, R. W. (2011). Identity and interaction in Internet-mediated contexts.
In C. Higgins (Ed.), Identity formation in globalizing contexts (pp. 257–277). Mouton de Gruyter.
Vandommele, G., Van den Branden, K., Van Gorp, K., & De Maeyer, S. (2017). In-school and out-of-school multimodal writing as an L2 writing resource for beginner learners of Dutch. Journal of Second Language Writing, 36, 23–36.
VanKooten, C. (2013). Toward a rhetorically sensitive assessment model for new media composition. In D. N. DeVoss & H. A. McKee (Eds.), Digital writing assessment and evaluation. Computers and Composition Digital Press/Utah State University Press.
Warschauer, M. (2003). Technology and social inclusion: Rethinking the digital divide. MIT Press.
Warschauer, M. (2006). Laptops and literacy: Learning in the wireless classroom. Teachers College Press.
Yang, Y., Chen, Y., & Hung, H. (2020). Digital storytelling as an interdisciplinary project to improve students' English speaking and creative thinking. Computer Assisted Language Learning. https://doi.org/10.1080/09588221.2020.1750431
Yeh, H. C. (2018). Exploring the perceived benefits of the process of multimodal video making in developing multiliteracies. Language Learning & Technology, 22(2), 28–37.
Yi, Y., & Choi, J. (2015). Teachers' views of multimodal practices in K–12 classrooms: Voices from teachers in the United States. TESOL Quarterly, 49, 838–847.
Yi, Y., King, N., & Safriani, A. (2017). Re-conceptualizing assessment of digital multimodal literacy. TESOL Journal, 8(4), 878–885.
Yi, Y., Shin, D., & Cimasko, T. (2020). Special issue: Multimodal composing in multilingual learning and teaching contexts. Journal of Second Language Writing, 47, 100717.
Zhang, M., Akoto, M., & Li, M. (2021). Digital multimodal composing in post-secondary L2 settings: A review of the empirical landscape. Computer Assisted Language Learning.
6 Computer-Mediated Collaborative Writing
Introduction

Collaborative writing, as an effective instructional activity, has been widely implemented in L2 classrooms over the last few decades. Currently, the proliferation of social software (e.g., wikis and Google Docs) has brought renewed attention to collaborative writing (Storch, 2013). CMC technologies have allowed for collaboration on an unprecedented scale. These technology tools, owing to their user editability, detailed page histories, and time/space independence, can encourage interaction and collaboration, continual revision, and reflection on writing (Li, 2018). Lei and Liu (2019), in a recent review, reported that applied linguists' interest in collaborative writing increased more than sevenfold between 2005 and 2016. We have witnessed a notable surge of interest in collaborative writing, particularly after the publication of Storch's (2013) monograph on collaborative writing in L2 classrooms. Research syntheses and research agendas on this topic have subsequently appeared (e.g., Li, 2018; Li & Zhang, 2021; Zhang et al., 2021), stimulating an even wider interest in this area.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. M. Li, Researching and Teaching Second Language Writing in the Digital Age, https://doi.org/10.1007/978-3-030-87710-1_6
This chapter discusses current research and pedagogy on computer-mediated collaborative writing (CMCW). The chapter begins with a definition of CMCW and rationales for implementing it. It then reviews key texts in this domain of research, with important information presented in illustrative tables. Research gaps are then identified based on a synthesis of the main research strands. The chapter ends with suggestions for future research and pedagogical practice.
Defining Computer-Mediated Collaborative Writing (CMCW)

Collaborative writing refers to an activity in which students interact, negotiate meaning, and make joint decisions throughout the writing process and produce a single text with shared responsibility and co-ownership (Storch, 2013). Ede and Lunsford (1990) initially clarified the concept by articulating three distinctive features of collaborative writing: (1) substantive interaction throughout the writing process; (2) shared decision-making and responsibility for the text produced; and (3) a single written product. Storch (2013) later elucidated true collaborative writing activities: on the one hand, "participants work together and interact throughout the writing process, contributing to the planning, generation of ideas, deliberations about the text structure, editing and revision" (p. 2); on the other hand, a collaborative writing product is a co-owned and jointly produced text that cannot be traced back to the separate input of individual members (Li, 2018; Stahl, 2006; Storch, 2013). Based on this perspective, CMCW can be defined as a technology-based collaborative writing activity in which students negotiate meaning and writing tasks, co-construct and co-revise texts, and jointly produce a single text online through collaborative effort (Li, 2018). Collaborative tasks that focus on language/task negotiation in computer-mediated communication (CMC) contexts but do not require the production of a joint written text cannot, strictly speaking, be referred to as CMCW.
Rationales for Implementing CMCW

Previous research has reported many benefits of collaborative writing, such as enhancing audience awareness and reflective thinking (Storch, 2013) and providing opportunities to pool language resources and co-construct knowledge and writing through scaffolded interaction (Donato, 1994; Swain & Lapkin, 1998). In collaborative writing, L2 students engage in deliberations about structure, form, and language choice (DiCamilla & Anton, 1997; Storch, 2013; Storch & Wigglesworth, 2007; Swain & Lapkin, 1998), which affords "languaging" (Swain, 2006, 2010), reflected in language-related episodes (LREs). Students are also provided with opportunities to apply newly learned knowledge (Hirvela, 1999) and to practice team writing that resembles writing in the authentic workplace (Li, 2018; Storch, 2013).

With the development of Web 2.0 tools (e.g., wikis, Google Docs), collaborative writing in the online mode has become a promising research topic. While first-generation web applications (Web 1.0) such as discussion boards and chat rooms mainly facilitated negotiation in the CMC context, Web 2.0 tools support the entire writing process, including task negotiation, languaging, text co-construction, and revising and editing. Li (2018, p. 883) discussed wikis' multiple functions that facilitate collaboration at all writing stages: wiki "Discussion" allows users to communicate and negotiate writing tasks via asynchronous messaging (pre-writing stage); "Edit" enables users to freely write and revise the "Page" in terms of texts, images, or hyperlinks (writing, revising, and editing stages); "Comment" allows users to provide feedback or raise questions regarding specific texts via pop-up boxes (revising and editing stages); "History" reveals all the changes that the wiki page has gone through, with color coding of additions and deletions (revising stage); and "Page" displays the final wiki writing product (publishing stage). Technologies such as wikis and Google Docs, due to their collaborative nature, offer learners optimal opportunities to communicate and write jointly beyond the time constraints of onsite classrooms. Compared with face-to-face collaborative writing, CMCW conducted over a period of time better represents an authentic team writing project in real-life professional settings (Li, 2018; Storch, 2013).
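The wiki affordances summarized from Li (2018) above can be restated compactly as a mapping from wiki function to the writing stage(s) it supports. The sketch below is only an illustrative summary of that description; the dictionary structure and helper function are not part of the original analysis.

```python
# Summary of the wiki affordances described by Li (2018), expressed as a
# mapping from wiki function to the writing stage(s) it supports.
# The data structure is illustrative; stage labels follow the text above.
WIKI_AFFORDANCES = {
    "Discussion": ["pre-writing"],                     # asynchronous task negotiation
    "Edit":       ["writing", "revising", "editing"],  # free writing/revision of the Page
    "Comment":    ["revising", "editing"],             # feedback via pop-up boxes
    "History":    ["revising"],                        # color-coded record of all changes
    "Page":       ["publishing"],                      # the final wiki writing product
}

def functions_for_stage(stage):
    """Return the wiki functions that support a given writing stage."""
    return sorted(f for f, stages in WIKI_AFFORDANCES.items() if stage in stages)

print(functions_for_stage("revising"))  # ['Comment', 'Edit', 'History']
```

Read this way, the revising stage is the most richly supported: three of the five wiki functions bear on it, which is consistent with the chapter's emphasis on continual revision.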
Key Texts

I conducted a literature search on CMCW in L2 contexts via Google Scholar, using the keywords "collaborative writing," "computer," and "L2," and limited the results to articles published from 2010 to 2020 in journals with a CiteScore above 2.0. Sixteen articles were subsequently selected for illustration. This section begins with an overview of the selected studies in terms of context and participants, writing task and technology, theoretical framework, methodological approach, and validity/reliability strategy (as illustrated in Table 6.1). I then analyze the thematic categories emerging from the data to document the foci of investigation during the past decade (see Table 6.2).
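The screening criteria just described (publication window 2010–2020, CiteScore above 2.0) can be sketched as a small filtering routine. The records and field names below are hypothetical illustrations, not the actual search results or dataset used for this review.

```python
# Illustrative sketch of the screening criteria described above:
# keep articles published 2010-2020 in journals with CiteScore > 2.0.
# The sample records are hypothetical, not the actual search results.

def screen(records, start=2010, end=2020, min_citescore=2.0):
    """Return the records meeting the year window and CiteScore threshold."""
    return [
        r for r in records
        if start <= r["year"] <= end and r["citescore"] > min_citescore
    ]

candidates = [
    {"title": "Wiki-based collaborative writing", "year": 2012, "citescore": 3.1},
    {"title": "Early CMC study", "year": 2004, "citescore": 4.0},        # outside window
    {"title": "Low-impact venue study", "year": 2015, "citescore": 1.2}, # below threshold
]

selected = screen(candidates)
print([r["title"] for r in selected])  # only the first record passes both filters
```

In practice such inclusion criteria are applied manually during abstract screening; the sketch simply makes the two cut-offs explicit.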
Overview

Table 6.1 summarizes the overall features of the sixteen studies. As the table shows, the studies were predominantly conducted in tertiary L2 contexts. Most of the Web 2.0 tools used for collaborative writing were wiki applications, including Wikispaces, PBwiki (PBworks), and MediaWiki; a few studies examined CMCW using Google Docs. The research has been largely informed by sociocultural theory/social constructivism; other frameworks guiding CMCW research include collaborative learning, autonomous learning, content-based instruction, process writing, and task-based learning. Multiple methodological approaches (i.e., qualitative, quantitative, and mixed methods) were employed in the previous research. The most frequently reported validity strategies were data triangulation and inter-rater/coder reliability.
Table 6.1 Research matrix of computer-mediated collaborative writing

Bradley et al. (2010)
- Context and participants: 56 software engineering students in an ESP course, tertiary level, Sweden
- Writing task and technology: Four writing tasks involving argumentation, technical reports, and summary-response essays; wikis
- Theoretical framework: Multiliteracies; sociocultural perspective
- Methodological approach: Qualitative case study
- Research questions: (1) In an environment allowing for user-generated content, what interactive work are students engaged in? (2) What is the nature of the interaction regarding student participation as far as cooperation and collaboration are concerned? (3) What is the potential of wikis for language learning?
- Validity/reliability strategy: Triangulation; inter-rater reliability

Elola and Oskoz (2010)
- Context and participants: 8 advanced Spanish FL students, tertiary level, USA
- Writing task and technology: Two argumentative essays, completed collaboratively and individually; PBwiki
- Theoretical framework: Sociocultural theory
- Methodological approach: Mixed-methods study
- Research questions: (1) What are the differences between collaborative and individual writing? (2) How do writers approach collaborative writing through the use of social tools? (3) What are students' perceptions of writing individually and collaboratively, and how do they perceive collaborative work performed with the use of social tools?
- Validity/reliability strategy: Triangulation
Kessler and Bikowski (2010)
- Context and participants: 40 pre-service NNS teachers, tertiary level, Mexico
- Writing task and technology: Collective class wiki defining the term "culture"; wikis
- Theoretical framework: Littlewood's (1996) framework of learner autonomy
- Methodological approach: Qualitative study
- Research questions: (1) What is the nature of individual and group behavior when attending to meaning in a long-term wiki-based collaborative activity? (2) How do students demonstrate collaborative autonomous language learning in wiki space? (3) How can the development of collaborative autonomous language learning abilities inform computer-mediated language learning?
- Validity/reliability strategy: Triangulation; member checking

Kost (2011)
- Context and participants: 8 German as a FL students, tertiary level, Canada
- Writing task and technology: Descriptive writing; creative writing (i.e., from the perspective of a character in a movie); wikis
- Theoretical framework: Social constructivist; collective scaffolding (Donato, 1994)
- Methodological approach: Qualitative study
- Research questions: (1) What kinds of strategies do learners use when they engage in collaborative writing? (2) What kinds of revisions do learners make to the common text? (3) How do learners perceive the use of the wiki?
- Validity/reliability strategy: Triangulation; member checking
Arnold et al. (2012)
- Context and participants: 53 intermediate German as a FL students, tertiary level, Canada
- Writing task and technology: Class 1: 400-word essay on historical topics; Classes 2 & 3: novel-based writing; wikis
- Theoretical framework: Socioconstructivism
- Methodological approach: Quantitative study
- Research questions: (1) Did students complete the task in a cooperative manner or a truly collaborative manner? (2) Were formal revisions more successful when students edited their own contributions or those of others? (3) While working on their wiki, did students develop unique task roles?
- Validity/reliability strategy: Triangulation; coding verification

Li and Zhu (2013)
- Context and participants: 9 EFL college students, tertiary level, China
- Writing task and technology: Three collaborative writing tasks: narration, exposition, and argumentation; Wikispaces
- Theoretical framework: Sociocultural theory; collective scaffolding
- Methodological approach: Qualitative case study
- Research questions: (1) What patterns of group interaction can be found when Chinese EFL students work on their collaborative writing? (2) If there are differences in patterns of group interaction, do they influence the students' perceptions of their learning experiences?
- Validity/reliability strategy: Triangulation; inter-rater reliability
Strobl (2014)
- Context and participants: 48 advanced German learners, tertiary level, Belgium
- Writing task and technology: One synthesis paper based on written sources; Google Docs
- Theoretical framework: Socioconstructivist theory
- Methodological approach: Quasi-experimental study
- Research questions: (1) What impact does online collaboration have on the final text with regard to complexity, accuracy, and fluency (CAF) and/or content and coherence? (2) What impact does online collaboration have on the writing process?
- Validity/reliability strategy: Triangulation

Wang (2015)
- Context and participants: 48 EFL students enrolled in a "Business English Writing" class, tertiary level, Taiwan
- Writing task and technology: Business letter; wikis
- Theoretical framework: Sociocultural theory
- Methodological approach: Mixed-methods study
- Research questions: (1) Does the collaborative learning environment afforded by wikis have any effect on the improvement of EFL students' business writing skills? (2) What are EFL students' experiences, attitudes, and perceptions regarding learning business writing through wikis?
- Validity/reliability strategy: Triangulation; writing tests verified for content validity
Bikowski and Vithanage (2016)
- Context and participants: 59 ESL students, tertiary level, USA
- Writing task and technology: Four writing tasks, including two compare/contrast essays, one descriptive essay, and one persuasive essay; Google Docs
- Theoretical framework: Sociocultural theory; collaborative autonomous pedagogy (Kessler et al., 2012)
- Methodological approach: Mixed-methods study
- Research questions: (1) To what extent do in-class web-based collaborative writing tasks help second language English writers improve their overall performance in their individual writing? (2) What are the perceptions of teachers and second language writers toward web-based collaborative writing compared to individual writing in terms of perceived writing development and the writing experience?
- Validity/reliability strategy: Triangulation; inter-rater reliability

Li and Kim (2016)
- Context and participants: 29 ESL graduate students enrolled in an EAP course, graduate level, USA
- Writing task and technology: Two research tasks: Research Proposal and Annotated Bibliography; Wikispaces
- Theoretical framework: Sociocultural theory; collective scaffolding; regulation
- Methodological approach: Qualitative study
- Research questions: (1) How do students in small groups negotiate writing tasks and engage with each other's ideas via wikis? (2) How do students in small groups co-construct written texts via wikis? (3) How do students scaffold each other during wiki-based small group writing? (4) What patterns of interaction occur for each group across two wiki writing tasks?
- Validity/reliability strategy: Triangulation; inter-coder reliability
Vorobel and Kim (2017)
- Context and participants: 4 adolescent ELL students, secondary level, USA
- Writing task and technology: Expository writing about important things/people in students' lives and an essay about life goals; Penzu.com
- Theoretical framework: Ecological approach (van Lier, 2004)
- Methodological approach: Multiple case study
- Research questions: (1) How do adolescent ELLs perceive collaborative writing in face-to-face and online contexts? (2) How do adolescent ELLs develop their writing in L2 through collaboration in face-to-face and online contexts?
- Validity/reliability strategy: Thick description; triangulation; member checking

Cho (2017)
- Context and participants: 3 Asian students enrolled in an English Club at a Canadian university
- Writing task and technology: Two summary reports of what was discussed in the club meetings; Google Docs; Skype
- Theoretical framework: Activity theory (Engeström, 1987)
- Methodological approach: Case study
- Research questions: (1) What interaction patterns occur when a group of three L2 writers engage in synchronous web-based collaborative writing tasks? (2) To what extent and how do individual goals influence group interactions and, conversely, how do group interactions influence individuals' goals? (3) In addition to goals, what other factors influence peer interaction in web-based collaborative writing?
- Validity/reliability strategy: Triangulation; inter-rater reliability
Abrams (2019)
- Context and participants: 28 German FL students, tertiary level, USA
- Writing task and technology: Creative writing; Google Docs
- Theoretical framework: Not explicitly stated
- Methodological approach: Mixed-methods study
- Research questions: (1) In CSCW tasks, is there a relationship between a group's participatory pattern (Abrams, 2017; Storch, 2002) and learners' writing performance in terms of syntactic complexity, grammatical accuracy, writing fluency, propositional content, and lexical sophistication? (2) What are the characteristics of effective collaborative writing in Google Docs?
- Validity/reliability strategy: Triangulation; inter-rater reliability

Selcuk et al. (2019)
- Context and participants: 6 adolescent EFL students, secondary level, Turkey
- Writing task and technology: Short story writing; Facebook discussion boards
- Theoretical framework: Sociocultural theory
- Methodological approach: Qualitative study
- Research questions: (1) What are the factors that lead to the choice of group leader by group members in the context of a web-based CW activity? (2) How can group leaders influence their group members' writing process in English through the activity?
- Validity/reliability strategy: Not reported
Abe (2020)
- Context and participants: 54 EFL students, tertiary level, Japan
- Writing task and technology: Three collaborative creative writing tasks; Quip
- Theoretical framework: Interactionist approach/conversation analysis; collective scaffolding
- Methodological approach: Qualitative case study (CA approach)
- Research questions: (1) How does a learner negotiate with others in online text-based interactions? (2) How do these interactional practices change across different occasions?
- Validity/reliability strategy: Triangulation

Hsu (2020)
- Context and participants: 26 EFL learners, tertiary level, Taiwan
- Writing task and technology: Two 500-word essays (i.e., argumentative and descriptive); Google Docs
- Theoretical framework: Cognition Hypothesis (Robinson, 2001, 2003, 2005); Triadic Componential Framework for task design (Robinson, 2005, 2007)
- Methodological approach: Qualitative case study
- Research question: Does task complexity influence patterns of peer interaction during web-based asynchronous L2 collaborative writing?
- Validity/reliability strategy: Triangulation; inter-rater reliability
Table 6.2 Research timeline of computer-mediated collaborative writing

2010 — Bradley, L., Lindström, B., & Rystedt, H. (2010). Rationalities of collaboration for language learning in a wiki. ReCALL, 22(2), 247–264.
- Themes: A.1, A.3
- Annotation: In this qualitative case study, Bradley et al. investigated what types of activities users engaged in when using wikis to complete four collaborative writing tasks, and what types of interactions occurred while students co-constructed texts. The participants were university software engineering students in an ESP course in Sweden. Three patterns of interaction (i.e., collaborative, cooperative, and no interaction) were identified through analyses of archived wiki records, and the majority of members were found to be engaged in wiki collaboration. Results also showed that collaborating groups produced more revised texts, with a higher number of edits. The authors attributed this finding to the ease and autonomy afforded by the wiki.

2010 — Elola, I., & Oskoz, A. (2010). Collaborative writing: Fostering foreign language and writing conventions development. Language Learning & Technology, 14(3), 51–71.
- Themes: A.1, B.1, B.2, D.1
- Annotation: Elola and Oskoz conducted an empirical study with eight Spanish majors at a US university and examined the differences in writing between a wiki collaborative writing group and an individual writing group. The students in pairs employed wikis and text/voice chats for the collaborative writing tasks, and the writing products were then compared. No statistically significant differences were detected in fluency, accuracy, or syntactic complexity. Further qualitative analysis revealed that chats focused on content development, while wikis served to reformulate and organize the global ideas developed in chats. An interesting result regarding student perceptions was that although many preferred writing individually, they acknowledged that wiki collaboration enhanced not only their writing content but also the overall quality of their essays.
2010 — Kessler, G., & Bikowski, D. (2010). Developing collaborative autonomous learning abilities in computer-mediated language learning: Attention to meaning among students in wiki space. Computer Assisted Language Learning, 23(1), 41–58.
- Themes: A.3, D.1
- Annotation: Kessler and Bikowski's longitudinal case study sought to determine the nature of individual and group behaviors in a long-term wiki-based collaborative activity. Informed by Littlewood's (1996) autonomy framework, the study worked with 40 EFL students at a Mexican university and examined their collaborative construction of a wiki project. All iterations of the text that included at least one meaning-related change (MRC) were examined, with the final wiki consisting of 160 iterations. While all students participated, only five students were responsible for the majority of the iterations, categorized as adding new information, deleting information, clarifying/elaborating on information, synthesizing information, and adding web links. Owing to the long-term nature of the task, a holistic perspective allowed the researchers to identify student group behaviors over time. The results showed that students tended to avoid acts such as synthesis, which involved critical thinking, instead preferring adding, deleting, and clarifying.

2011 — Kost, C. (2011). Investigating writing strategies and revision behavior in collaborative wiki projects. CALICO Journal, 28(3), 606–620.
- Themes: A.1, A.3, D.1, D.2
- Annotation: Kost's qualitative study examined collaborative strategies and revision behaviors when students collaboratively wrote texts using wikis. Pairs of students in tertiary-level German FL classes collaborated on a joint wiki writing task. Based on the archived wiki records, writing strategies and revision types were analyzed according to Arnold, Ducate, and Kost's (2009) taxonomy of revisions. Data collected from the planning, drafting, and writing processes revealed students' diverse behaviors, ranging from brainstorming only to continuous discussion throughout the process. Of note, the participants in this study made fewer revisions than anticipated from previous research, a finding the author partially attributed to the relatively short duration of the project. Moreover, questionnaire data showed that participants found the project positive and useful, although some felt there was unequal participation among members.
2012 — Arnold, N., Ducate, L., & Kost, C. (2012). Collaboration or cooperation? Analyzing group dynamics and revision processes in wikis. CALICO Journal, 29(3), 431–448.
- Themes: A.3, A.4, D.1, D.2
- Annotation: Following up on a previous study of collaborative wiki writing (Arnold et al., 2009) that focused on students' revision behaviors, this study examined specifically whether students edited only their own wiki writing (cooperation) or engaged in both self- and other-directed writing (collaboration), and what the success rates of formal revisions were. The study was conducted with German FL students from three classes at three different universities in the USA. The results showed that students' content changes were primarily cooperative, while formal revisions were found to be both cooperative and collaborative. In relation to task roles and group dynamics, students' behaviors varied, ranging from free riders who contributed minimally to the wiki or not at all to leaders who worked on the wiki writing extensively. Questionnaire data revealed highly divergent opinions on the project, reflected in varying levels of student engagement. The authors highlighted the importance of student training on collaborative wiki writing.

2013 — Li, M., & Zhu, W. (2013). Patterns of computer-mediated interaction in small writing groups using wikis. Computer Assisted Language Learning, 26(1), 61–82.
- Themes: A.1, A.2, A.4, D.1, D.2
- Annotation: Li and Zhu explored the patterns of group interaction when Chinese EFL students worked on three wiki-based collaborative writing tasks, and possible connections between group interactions and students' perceptions of their learning experience. Guided by sociocultural theory and collective scaffolding, the authors examined patterns of interaction and the role of scaffolding in collaborative wiki writing activities. Drawing on Damon and Phelps's (1989) two indexes of interaction, equality and mutuality, which Storch (2002) successfully applied to L2 collaborative writing, the authors identified three distinct patterns of interaction after examining the wiki records, supplemented with interview data: collectively contributing/mutually supportive, authoritative/responsive, and dominant/withdrawn. The study also revealed that the patterns of interaction influenced students' opinions of collaborative wiki writing: the two collaboration-oriented patterns were linked to more positive perceptions of learning.
2014 — Strobl, C. (2014). Affordances of Web 2.0 technologies for collaborative advanced writing in a foreign language. CALICO Journal, 31(1), 1–18.
- Themes: B.1, B.2, D.1
- Annotation: Strobl's quasi-experimental study sought to shed light on the impact of online collaboration on final writing products with regard to complexity, accuracy, and fluency (CAF), and content and coherence. Forty-eight Dutch learners of German worked on collaborative and individual essays in a two-task crossover design of synthesis and summary papers. Quantitative analysis of textual features of the students' writing revealed that collaboratively produced texts were better than individual texts in fluency, content selection, and organization, but not in accuracy, complexity, or coherence. Meanwhile, regarding the writing process, individual writing was found to be more linear, while collaborative writing was more recursive. An interesting finding was that most students' comments were directly incorporated into revisions without further discussion.

2015 — Wang, Y. C. (2015). Promoting collaborative writing through wikis: A new approach for advancing innovative and active learning in an ESP context. Computer Assisted Language Learning, 28(6), 499–512.
- Themes: B.2, D.1
- Annotation: Wang (2015) employed a mixed-methods approach to investigate the effect of the wiki collaborative learning environment on Taiwanese EFL students' business writing skills. In an ESP course at a university in Taiwan, students were divided into wiki and non-wiki groups for writing tasks. A pretest and posttest were administered to both groups, and students' business letters were graded on aspects including organization, content, style, format, and grammar. Paired-samples t-tests and ANOVA revealed that the wiki groups performed better in writing content, organization, and grammar, but not in format/layout. The post-task survey also indicated that the students enjoyed the convenience of wikis and appreciated the increased agency afforded by the platform.
2016 — Bikowski, D., & Vithanage, R. (2016). Effects of web-based collaborative writing on individual L2 writing development. Language Learning & Technology, 20(1), 79–99.
- Themes: B.2, B.3, D.1
- Annotation: Bikowski and Vithanage sought to understand the impact of web-based collaborative writing tasks via Google Docs in college ESL classes. Fifty-nine ESL students at a US university participated in this mixed-methods study: an experimental group of 32 participants completing in-class web-based collaborative writing and a control group of 27 in-class individual writers, both engaged in four in-class web-based writing tasks. A paired-samples t-test showed that participants in the collaborative web-based writing group produced significantly better writing in terms of content, organization, academic style, and grammar than those who wrote papers individually during the same time frame. Three types of collaborator roles emerged from the study: explicit collaborators, who collaborated throughout the process; budding collaborators, who improved with each collaborative session after initial difficulties; and resistant collaborators, who had difficulty communicating with partners, which led to apparent mistrust. Both groups enjoyed the in-class web-based writing and felt their writing improved, and some students who worked individually indicated that they would have preferred to do group writing in class. However, a few participants from the collaborative group expressed a preference for teacher corrective feedback over peer feedback.
2016 — Li, M., & Kim, D. (2016). One wiki, two groups: Dynamic interactions across ESL collaborative writing tasks. Journal of Second Language Writing, 31, 25–42.
- Themes: A.1, A.2, A.3, A.4, C
- Annotation: Li and Kim's study explored dynamic group interactions across two wiki writing tasks. ESL graduate students in an EAP course at a US university performed two collaborative writing tasks in small groups: a Research Proposal and an Annotated Bibliography. Extending Storch's (2002) analysis of face-to-face peer interaction based on equality and mutuality, the authors developed a new coding framework to analyze the students' collaborative writing process, covering language functions, writing change functions, and scaffolding strategies. The analyses of wiki records showed distinctive patterns of interaction: Group 1 exhibited a Collective pattern during Task 1 and a more Active/Withdrawn pattern during Task 2, while Group 2 was Dominant/Defensive during Task 1 and became Collaborative during Task 2. The patterns of interaction were thus shown to be unstable, with dynamic changes detected across the tasks. The authors attributed the findings to the fluidity of peer scaffolding and the influence of tasks.

2017 — Vorobel, O., & Kim, D. (2017). Adolescent ELLs' collaborative writing practices in face-to-face and online contexts: From perspectives to action. System, 65, 78–89.
- Themes: A.3, D.1, D.2
- Annotation: Vorobel and Kim's multiple case study examined adolescent ELL students' perceptions of collaborative writing and the impact of peer feedback on their L2 writing development. Two focal pairs were assigned to work collaboratively on two writing assignments. Analyses of data from multiple sources, such as semi-structured interviews, observations, the researcher's and participants' e-journals, and artifacts, yielded several interesting findings. First, the students perceived collaborative writing to be helpful to their L2 development. Meanwhile, they reported challenges they identified in collaborative writing, such as gaps in L2 proficiency, fear of critiquing and hurting a peer's feelings, and misunderstandings due to cultural differences.
2017 — Cho, H. (2017). Synchronous web-based collaborative writing: Factors mediating interaction among second language writers. Journal of Second Language Writing, 36, 37–51.
- Themes: A.1, A.4, C, D.1, D.2
- Annotation: Informed by Activity Theory as the framework for analysis, Cho's case study investigated mediating factors and learning outcomes as three Asian ESL learners wrote collaboratively within one hour, using Google Docs with text chat and Google Docs with voice chat (Skype). The students were recruited from an English Debate Club at a Canadian university. The group was tasked with writing two collaborative summary reports for debate meetings. Multiple streams of data included a survey questionnaire, debate summaries, screen recordings, and stimulated recalls. Using Storch's (2002) dyadic interaction model, the author found that participants exhibited a facilitator/participants pattern in Task 1 and a collaborative pattern in Task 2. From an expanded activity model, results suggest that communication mode (text or voice), task, perceptions of roles, and perceptions of feedback mediated the quality of collaboration. Pedagogical takeaways point to the importance of establishing interaction guidelines for participants as well as providing a clear understanding of why group work is beneficial.

2019 — Abrams, Z. I. (2019). Collaborative writing and text quality in Google Docs. Language Learning & Technology, 23(2), 22–42.
- Themes: A.3, A.4, B.1, C
- Annotation: In this study, Abrams examined the relationship between a group's participatory pattern (Abrams, 2017; Storch, 2002) and learners' writing performance in terms of syntactic complexity, grammatical accuracy, writing fluency, propositional content, and lexical sophistication. Twenty-eight first-year learners of German as a FL at a US university collaborated on a creative writing task using Google Docs. Self-selected groups of 3–4 collaborated synchronously in class for 15 minutes, then completed the collaborative writing asynchronously within two days. After analyzing the writing quality and participatory patterns, the author reported that more collaborative groups produced writing with better text coherence, higher fluency, and better propositional content, but no connections were detected between collaborative patterns and coherence. In addition, the author found that group dynamics played a more influential role in group success than learner proficiency levels.
2019 — Selcuk, H., Jones, J., & Vonkova, H. (2019). The emergence and influence of group leaders in web-based collaborative writing: Self-reported accounts of EFL learners. Computer Assisted Language Learning.
- Themes: C
- Annotation: This study explored six adolescent Turkish EFL students' writing processes with the help of self-selected group leaders during a collaborative writing task. Drawing on data such as group interviews and written chats on Facebook discussion boards, the authors identified two main types of group leaders: group leaders as facilitators and group leaders as affective domain supporters. Group leaders as facilitators focused on the pre-writing stage and helped their group members with feedback on linguistic problems, while group leaders as affective domain supporters were concerned with motivating their group members as an important way of improving their writing process. This study offered initial support for the value of a peer leadership approach to collaborative writing.

2020 — Abe, M. (2020). Interactional practices for online collaborative writing. Journal of Second Language Writing, 49, 100752.
- Themes: A.1, A.4
- Annotation: In this study, Abe used conversation analysis (CA) to explore how one focal Japanese EFL student negotiated with peers in online writing tasks, and how these interactional practices changed across occasions. The study examined the focal student's changing interactional practices when negotiating over the essay in a large group of eight to nine students. Interactional practices were found to rely on the student's ability to navigate not only the asynchronous nature of online interaction but also timing while performing collaborative tasks. Aya, the focal student, was able to change her interactional practices as a means of negotiating with other students and navigating the asynchronous nature of the talk to successfully construct a coherent text. At times she made a direct request, which made an acceptance (or decline) the expected response, while at other times she directly stated her intention to perform a specific task, depending on the responses from other participants. Overall, this study lends support to the role of CA in analyzing interactional practices in web-based collaborative writing environments.
Reference: Hsu, H. C. (2020). The impact of task complexity on patterns of interaction during web-based asynchronous collaborative writing tasks. System, 93, 102328.

Year: 2020

Theme: A.1, A.4; C

Annotation: Hsu investigated how task complexity influenced patterns of peer interaction during web-based asynchronous L2 collaborative writing. Twenty-six Taiwanese EFL students in pairs completed one complex writing task and one simple writing task via Google Docs, with the two tasks counterbalanced. Robinson's Triadic Componential Framework for task design was used to determine task complexity, with the complex task requiring learners to provide reasoning for their arguments and coherent logic for their position, and the simple task requiring only clear information and basic facts. Based on Li and Kim's (2016) analytical framework, the author analyzed the language functions, writing change functions, text contributions, and scaffolding strategies during the collaborative writing tasks. The results showed a limited effect of task complexity on interaction patterns, and the authoritative/withdrawn pattern was found to be the most common overall.
6 Computer-Mediated Collaborative Writing
Thematic Categories

To provide a rough timeline of the empirical studies that reflects the remaining and/or shifting foci of investigation, I present below the sixteen articles chronologically with respective annotations and research themes (Table 6.2). The themes are categorized as follows:

A. Interaction/writing process
1. Online discussion/chats
2. Languaging/LREs
3. Revision behaviors
4. Patterns of interaction
B. Writing product/outcome
1. Features/qualities of written products (e.g., accuracy, complexity, fluency, coherence)
2. Comparison (online CW vs. individual writing; online CW vs. F2F CW)
3. Effect on individual writing development

C. Factors influencing CMCW

D. L2 students' perceptions
1. Benefits and challenges
2. Peer/group interactions

As Table 6.2 shows, one main strand of research on CMCW was interaction/writing processes (Category A). Some studies focused on students' revision behaviors, ranging from writing change functions (Li, 2013; Li & Zhu, 2013; Mak & Coniam, 2008) and the focus of revisions (Abrams, 2016; Elola & Oskoz, 2010) to the comparison of formal changes and meaning changes (Kessler, 2009; Kost, 2011). Specifically, archived online writing records revealed multiple writing change functions, such as adding, deleting, reorganizing, and correcting, during collaborative writing (e.g., Li & Zhu, 2013; Mak & Coniam, 2008), as well as multiple categories of revisions (e.g., content, stylistics, structure, and grammar) (e.g., Elola & Oskoz, 2010). Also, a few studies (e.g., Kost, 2011)
distinguished formal changes (addressing grammar or spelling issues) from meaning changes (addressing semantics and discourse). Moreover, some studies (e.g., Kessler, 2009; Kost, 2011; Rouhshad & Storch, 2016) addressed students' attention to form and reported language-related episodes (LREs) that occurred during the CMCW process. In addition, researchers (e.g., Abe, 2020) recently started to examine interactional practices by conducting conversation analysis of online chat/discussion records. These empirical studies investigating online writing/revising behaviors shed much light on the recursive writing process that CMCW affords and on L2 learners' attention to multiple aspects of writing. Patterns of peer interaction also constituted an important area of research. Earlier studies (e.g., Arnold et al., 2012; Bradley et al., 2010) noted two distinctive patterns in CMCW, namely the cooperative pattern (i.e., online texts are processed by individuals working in a parallel fashion) and the collaborative pattern (i.e., online texts are jointly written by individuals who engage with each other's contributions). Extending previous studies, Abrams (2016) detected three participatory patterns in collaborative Google Docs writing: low, sequentially additive, and collaborative. Along another line, informed by Storch's (2002) analytical framework, Li and Zhu (2013) examined the "equality" and "mutuality" of interaction (Damon & Phelps, 1989) while taking group members' roles into account, and identified three distinctive patterns of interaction in wiki-based collaborative writing: collectively contributing/mutually supportive; authoritative/responsive; and dominant/withdrawn. Li and Kim (2016) later investigated patterns of interaction in greater depth by analyzing each group's language functions, writing change functions, and scaffolding strategies, and probed group dynamics across writing tasks.
Echoing previous research on peer interaction in F2F collaborative writing (e.g., Storch, 2002; Watanabe, 2008), this research line reinforces the positive effect of a collaborative pattern on writing outcomes and students' learning experiences. The second main strand of research was online writing products/outcomes, including joint writing quality and the effect of CMCW on individual writing development (Category B). Previous studies (e.g., Elola & Oskoz, 2010; Strobl, 2014) compared online group
writing and individual writing via quantitative analyses and detected no statistically significant differences in fluency, accuracy, and complexity, although collaborative texts were found to score statistically higher in content selection and organization in Strobl's (2014) study. Focusing on post-task writing development, Bikowski and Vithanage (2016) identified statistically higher gains in EAP students' writing (assessed in terms of content, organization, academic style, and grammar) in the experimental group (i.e., collaborative web-based writing) than in the control group (i.e., individual web-based writing). Likewise, Wang (2014) found greater gains in business writing in the collaborative wiki writing group than in the F2F collaborative writing group. Such studies enhanced our understanding of collaborative writing quality and motivated L2 scholars to further explore the potential advantages of CMCW. The third strand of research, although limited, addressed factors that mediated peer interaction and accounted for group dynamics (Category C). Cho (2017) drew on Engeström's (1999) activity system and identified multiple factors mediating EFL college students' Google Docs-based collaborative writing, namely goals, mode of communication, task representation, and matches/mismatches between participants' self- and other-perceived roles. Selcuk et al. (2019), from a new perspective, examined the role of a group leader in Turkish high school students' web-based collaborative writing. The results revealed the positive influence of the group leader on peer interaction and learning motivation. Along this line of inquiry, future research can further examine whether mediating factors identified in F2F collaborative writing (Storch, 2013; Zhang, 2018), such as task type, grouping (e.g., size and relative language proficiency), and use of L1 vs. L2 (Yanguas, 2020), also apply to CMCW environments.
Another preoccupation of research in this domain concerns students' perceptions, including perceived benefits and challenges of CMCW and students' attitudes toward peer interaction (Category D). Many studies (Ducate et al., 2011; Elola & Oskoz, 2010; Kost, 2011; Li & Zhu, 2013; Mak & Coniam, 2008; Wang, 2015) reported perceived advantages of CMCW, including developed communication skills, improved language and writing skills, enhanced audience awareness, and higher motivation.
On the other hand, L2 students reported a few challenges in CMCW, such as difficulty in managing time constraints and merging different opinions from group members (Bikowski & Vithanage, 2016), maintaining co-ownership (Arnold et al., 2012; Kessler & Bikowski, 2010), and possible unequal participation among group members (Arnold et al., 2012; Kessler & Bikowski, 2010). Regarding group work and peer interaction, L2 students had mixed feelings. Some commended the teamwork, featuring shared insights and shared workload, as reported in multiple studies (e.g., Bikowski & Vithanage, 2016; Kost, 2011; Li & Zhu, 2013). Others were dissatisfied with their group interaction, as they lacked a sense of belonging and perceived themselves as a minority (Bikowski & Vithanage, 2016; Li & Zhu, 2017a), or thought that their individual contribution was not fully recognized by their groupmates (Ducate et al., 2011). This strand of research provided valuable pedagogical insights into how to promote group cohesion and create favorable co-working environments.
Research Directions

Due to the increasingly important role of technology, CMCW will continue to gain momentum in L2 research. Based on the above research synthesis, this section addresses the research gaps and points to future research directions. As displayed in Table 6.1, studies were primarily conducted at the tertiary educational level. There is a pressing need for research on CMCW conducted in K-12 settings. Apart from ESL/EFL, German, Spanish, and French foreign language classes were found to be the most popular L2 settings in which CMCW practices were implemented. However, research in other, underrepresented language learning settings (e.g., Chinese, Japanese, Arabic) is largely lacking. Implementing CMCW in broader contexts will deepen our understanding of how technology can afford collaborative writing and language development. Methodologically, the predominantly used qualitative research approach was the case study, and other research approaches such as conversation analysis (e.g., Abe, 2019), phenomenological studies, and ecological approaches (e.g., Vorobel & Kim, 2017) deserve further exploration. Of
note, interventionist research on CMCW is rather limited, and more interventionist studies controlling confounding variables (i.e., task and learner variables) will definitely advance our knowledge of CMCW across task conditions and learner populations (Li & Zhang, 2021; Zhang et al., 2021; Zhang & Plonsky, 2020). Longitudinal studies, scarce in the available research, are also strongly encouraged, as they could deepen our understanding of the long-term effect of CMCW on L2 learners' writing development. Furthermore, experimental studies with large sample sizes are still lacking in the current body of literature. Quantifiable information, triangulated with qualitative evidence, can become a trend for future research inquiries (Yim & Warschauer, 2017). Specifically, future research can build on the existing literature (e.g., Li & Kim, 2016; Li & Zhu, 2017a, 2017b) and continue to explore collaborative writing processes (Category A), evaluate collaborative writing products (Category B), examine the possible connection between writing products and processes, and investigate individual/contextual factors that shape CMCW activities (Category C). For example, regarding interaction processes, only a few studies reported peers' languaging (LREs) during CMCW, which is regarded as a crucial component of collaborative writing in the F2F collaborative writing literature. One pressing question might be how L2 students engage in languaging, a site for language learning, during various CMCW tasks. Far fewer studies have addressed students' LREs in CMCW. Might this observation be related to the constraints of technology (e.g., wikis being asynchronous CMC tools)? How would L2 learners engage in LREs in collaborative writing using synchronous CMC tools (e.g., Google Docs, chatting apps)?
Also, informed by Li and Zhu’s (2017b) study that assessed collaborative wiki writing qualities (i.e., rhetorical structures, coherence, and language accuracy) and explored the connection between the writing products and patterns of interaction, future research can further examine the links between collaborative writing processes and products. Moreover, future studies can examine, on a larger scale, the impact of CMCW on L2 writing development (Category B), for instance, by comparing the effectiveness
of CMCW and that of face-to-face collaborative writing or computer-based individual writing, and delving into the affordances of different technologies (e.g., wikis, Google Docs, and their combination with chats) for collaborative writing tasks. Of particular note, a few new tools (e.g., SCAPES, Docuviz) were recently created to better capture L2 students' collaborative writing processes. According to Yim and Warschauer (2017), text mining and visualization techniques can elucidate the entire process of collaborative writing by quantifying and visually representing interaction patterns in large datasets, involving revision/editing records and comparison of each member's text contribution. For example, we can get a holistic picture of learners' individual contributions to joint texts through Docuviz, an add-on application plugged into Google Docs. Moving beyond the existing literature, multiple new research lines await future investigation. One emerging area is to connect collaborative writing with digital multimodal composing (DMC) tasks, given the wider attention to multimodality in the digital age (Akoto, 2021; Akoto & Li, 2021; Li & Zhang, 2021). Collaborative multimodal writing has the potential to enhance learners' collaborative skills and digital literacy skills, both deemed essential for completing authentic tasks encountered in the digital world (Li & Zhang, 2021). Future research could examine how L2 learners jointly orchestrate multiple semiotic resources to construct meaning during collaborative multimodal writing, and what the qualities of L2 learners' collaborative multimodal products are compared with those of individual multimodal composing products. In response to the concern that DMC might reduce learners' attention to linguistic features (Qu, 2017), future studies may focus on examining L2 learners' attention to language forms during collaborative multimodal writing.
Methodologically, L2 learners' joint digital composing behaviors can be better tracked by screen recordings and later interpreted with stimulated recall interviews with the participants. Another new research area, little explored in the current body of literature, concerns how we assess collaborative writing tasks. An initial empirical endeavor in this inquiry is Zhang and Chen's ongoing study, in which they developed a rubric assessing both the product (in terms of
140
M. Li
content, organization, and language use) and the process of CMCW (using a five-point scale of mutuality and equality of peer interaction). Also, considering the lack of reported teachers' perspectives on collaborative writing (Zheng et al., 2021), future research can probe teachers' roles in L2 CMCW, including training, monitoring, scaffolding, and assessment.
Teaching Recommendations

To encourage practices of CMCW in L2 classes, I discuss below pedagogical recommendations, informed by previous research, from the perspectives of writing tasks, technology tools, grouping, training, and assessment.
Writing Tasks and Technology

As mentioned earlier in this chapter, task type has been considered a critical component of CMCW. Researchers (Lee, 2010; Mak & Coniam, 2008) posit that it is the nature of the task, more than the technology itself, that influences students' interaction and collaboration. As Storch (2013) suggested, teachers should consider writing tasks that "do not easily lend themselves to a division of labor so that learners collaborate rather than cooperate" (p. 159). Of note, authentic group projects that motivate students to engage in writing can be conducive to collaboration. A good example is Mak and Coniam's (2008) study on collaborative wiki writing, in which ESL students worked in small groups and collaboratively produced a school brochure to be distributed to their parents. Moreover, instructors need to distinguish meaning-focused from language-focused collaborative tasks and link CMCW tasks to pedagogical goals (Storch, 2017). If the focus is on learning a new writing genre (e.g., research articles), meaning-focused tasks such as research proposals and annotated bibliographies can be deployed (Li & Kim, 2016; Li & Zhu, 2017a). If the learning focus is on language structures, language-focused tasks such as dictogloss can be applied, in which L2 learners
listen to brief texts, take notes, and then reconstruct the originally dictated texts. In the digital age, teachers will be more likely to design and implement collaborative multimodal writing tasks in language classrooms. Teachers could innovate writing pedagogy by juxtaposing digital composing and traditional writing tasks (Hafner, 2014); in Hafner's (2014) study, for example, university students in an ESP class collaboratively worked on a scientific documentary, followed by a traditional research genre—an individual written lab report. Such tasks can enhance students' audience awareness and genre awareness, and foster their communication and digital literacy skills. Multiple technology tools have been developed for CMCW tasks. Table 6.3 shows a list of options. These tools can be used separately or in combination. For example, chatting applications such as Skype and Zoom can be combined with wikis/Google Docs to maximize L2 learners' interactions during collaborative writing tasks. Also, as social networking applications are soaring in popularity and widely used among students in some regions, teachers can explore the use of such novel tools for collaborative writing. Teachers are reminded to critically examine available tools against course objectives, the characteristics and needs of the student population, and instructional contexts.
Grouping

Different grouping configurations have been addressed in previous research. It is suggested that we limit the group size to four, because in larger groups students are likely to have a weaker sense of text co-ownership, resulting in one or more "social loafers" or "free riders" (Kessler & Bikowski, 2010). Pair work, as discussed in previous studies (e.g., Elola & Oskoz, 2010; Kost, 2011), enabled students to have more accountability, thus facilitating individual participation. Small groups of three or four turned out to be more advantageous than pairs because more students can pool more language and writing resources and work more effectively (Fernández Dobao, 2012). However, whether the small group is superior to the pair remains in debate (Elola & Oskoz, 2010).
Table 6.3 Representative technologies for computer-mediated collaborative writing

Google Docs (https://www.google.com/docs/about/): An online word processor that is commonly used for information sharing and collaborative writing

PBWorks (https://www.pbworks.com/education): A free and professional wiki hosting tool that can be used for information sharing, collaboration, and the development of online classrooms

Wikidot (https://www.wikidot.com): A free and professional wiki hosting tool that can be used for information sharing, collaboration, and the development of online classrooms

Fandom (https://free-anime.fandom.com/wiki/Free!_Wiki): Formerly known as Wikia, a free wiki hosting tool, used particularly by groups with common interests in a certain pop culture

Etherpad (https://etherpad.org/): A downloadable open-source online editor that supports collaborative writing and editing

Edmodo (https://new.edmodo.com/): A learning management system commonly used in K-12 settings for written and video communication and collaboration

Manubot (https://manubot.org/): A free and open-source platform for collaborative writing and automatic reference management

Padlet (https://padlet.com/): A cloud-based real-time collaborative web platform on which users can upload, organize, and share content to virtual bulletin boards
That said, group size alone cannot predict students' ways of collaborating. Li and Kim (2016) reported that small groups of three can demonstrate different patterns of interaction, some being collaborative (e.g., collective, expert/novice) and others non-collaborative (e.g., dominant/defensive, cooperating in parallel). How to group students based on their L1 and cultural background also seems important. To promote intercultural understanding and communication in the target language, teachers may form groups of students from different L1/cultural backgrounds. Meanwhile, teachers need to help students develop positive attitudes toward working with group partners from different backgrounds. It is also necessary for instructors to detect and address students' potential feelings of isolation in group work, particularly when a student does not share the same cultural background as other group members (Li & Zhu, 2017a). In addition, previous research (e.g., van Lier, 1996) indicated that grouping students with mixed language proficiency levels would allow more scaffolding occasions to occur and better facilitate peer interaction. Nevertheless, Storch (2017) reminded us that the goal of the activity needs to be considered when we decide on proficiency pairing/grouping: if the goal is to have learners use the L2 and engage in collaborative writing tasks, students with similar proficiency can be paired/grouped; if the goal focuses on learning forms, mixed-proficiency pairing/grouping works better.
Student Training

Well-structured training is crucial for the success of CMCW, including both training on the functions of specific technologies for collaborative writing and training on how to conduct collaborative writing (Li, 2018). In a wiki-based collaborative writing project, for example, teachers could deliver a workshop, showcase collaborative writing behaviors at different writing stages adopted from a former wiki project, and train students to use multiple wiki functions (i.e., discussion, edit, comments, and history) via a trial writing task. As supported by Chen and Hapgood's (2021) finding that students' knowledge about collaborative
writing affected patterns of interaction and learning, teachers need to explain what characterizes collaborative writing and highlight concepts such as co-ownership, collective cognition, and individual accountability. Students will need to understand that co-ownership, for example, does not mean simply combining texts contributed by individual members; instead, it "entails joint exploration of task direction, meaningful textual contribution by individual members, and collective efforts to achieve coherence of the wiki text through joint revision" (Li & Zhu, 2017b, p. 51).
Assessment

Assessment, as noted earlier in this chapter, is another essential component that deserves our close attention when we implement CMCW. Due to the technology's transparency, instructors can easily track each individual's writing behavior, the recursive joint writing process, and the final product. Despite the debate over assigning a group project a group grade or individual grades, I opt for the group receiving the same grade, which tends to foster positive interdependence and enhance accountability (Pfaff & Huddleston, 2003). Also, assessment criteria need to reward both the writing process and the product, and both the quality of individual contributions and that of the jointly constructed text (Storch, 2013; Trentin, 2009). Specifically, regarding grading rubrics, instructors can include a criterion (with a certain portion of points assigned) evaluating the collaboration process of the pair/group as a whole (see Zhang and Chen's ongoing study as an example), which assesses (1) the quantity and quality of each individual member's online posts and (2) the degree of group members' mutual engagement (e.g., positively responding language functions and other writing change functions reported in Li & Kim, 2016) throughout the collaborative writing process. Alternatively, instructors may consider a composite grade, which includes a common overall grade for the final text and an individual component capturing each member's contribution during the writing process (Storch, 2017). Additionally, it would be helpful for advanced working groups to jointly assess the equality and mutuality of their ongoing peer interactions at
different writing stages, in the hope that doing so would enhance their learner autonomy, help them continually monitor and evaluate the collaborative writing process, and facilitate interaction and collaboration.
References

Abe, M. (2019). L2 interactional competence in asynchronous multiparty text-based communication: A study of online collaborative writing. Computer Assisted Language Learning, 1–25.
Abe, M. (2020). Interactional practices for online collaborative writing. Journal of Second Language Writing, 49, 100752.
Abrams, Z. (2016). Exploring collaboratively written L2 texts among first-year learners of German in Google Docs. Computer Assisted Language Learning, 29(8), 1259–1270.
Abrams, Z. (2019). Collaborative writing and text quality in Google Docs. Language Learning & Technology, 23(2), 22–42.
Akoto, M. (2021). Collaborative multimodal writing via Google Docs: Perceptions of French FL learners. Languages, 6(3).
Akoto, M., & Li, M. (2021). Exploring the processes and products of collaborative multimodal writing in a French FL class. In J. Colpaert & G. Stockwell (Eds.), Smart CALL: Personalization, contextualization, and socialization. Castledown Publishers.
Arnold, N., Ducate, L., & Kost, C. (2012). Collaboration or cooperation? Analyzing group dynamics and revision processes in wikis. CALICO Journal, 29(3), 431–448.
Bikowski, D., & Vithanage, R. (2016). Effects of web-based collaborative writing on individual L2 writing development. Language Learning & Technology, 20(1), 79–99.
Bradley, L., Linstrom, B., & Rystedt, H. (2010). Rationalities of collaboration for language learning on a wiki. ReCALL, 22(2), 247–265.
Chen, W., & Hapgood, S. (2021). Understanding knowledge, participation and learning in L2 collaborative writing: A metacognitive theory perspective. Language Teaching Research, 25(2), 256–281.
Cho, H. (2017). Synchronous web-based collaborative writing: Factors mediating interaction among second-language writers. Journal of Second Language Writing, 36, 31–57.
Damon, W., & Phelps, E. (1989). Critical distinctions among three approaches to peer education. International Journal of Educational Research, 58, 9–19.
DiCamilla, F., & Anton, M. (1997). The function of repetition in the collaborative discourse of L2 learners. The Canadian Modern Language Review, 53, 609–633.
Donato, R. (1994). Collective scaffolding in second language learning. In J. Lantolf & G. Appel (Eds.), Vygotskian approaches to second language research (pp. 33–56). Ablex.
Ducate, L., Anderson, L., & Moreno, N. (2011). Wading through the world of wikis: An analysis of three wiki projects. Foreign Language Annals, 44(3), 495–524.
Ede, L., & Lunsford, A. (1990). Singular texts/plural authors. Southern Illinois University Press.
Elola, I., & Oskoz, A. (2010). Collaborative writing: Fostering L2 development and mastery of writing conventions. Language Learning & Technology, 14(3), 51.
Engeström, Y. (1999). Activity theory and individual and social transformation. In Y. Engeström, R. Miettinen, & R.-L. Punamäki (Eds.), Learning in doing: Social, cognitive, computational perspectives on activity theory (pp. 19–38). Cambridge University Press.
Fernández Dobao, A. (2012). Collaborative writing tasks in the L2 classroom: Comparing group, pair, and individual work. Journal of Second Language Writing, 21, 40–58.
Hafner, C. (2014). Embedding digital literacies in English language teaching: Students' digital video projects as multimodal ensembles. TESOL Quarterly, 48(4), 655–685.
Hirvela, A. (1999). Collaborative writing instruction and communities of readers and writers. TESOL Quarterly, 8(2), 7–12.
Kessler, G. (2009). Student-initiated attention to form in wiki-based collaborative writing. Language Learning & Technology, 13(1), 79–95.
Kessler, G., & Bikowski, D. (2010). Developing collaborative autonomous learning abilities in computer-mediated language learning: Attention to meaning among students in wiki space. Computer Assisted Language Learning, 23(1), 41–58.
Kost, C. (2011). Investigating writing strategies and revision behavior in collaborative wiki projects. CALICO Journal, 28(3), 606–620. https://doi.org/10.11139/cj.28.3.606-620
Lee, L. (2010). Exploring wiki-mediated collaborative writing: A case study in an elementary Spanish course. CALICO Journal, 27(2), 260–276.
Lei, L., & Liu, D. (2019). Research trends in applied linguistics from 2005 to 2016: A bibliometric analysis and its implications. Applied Linguistics, 40(3), 540–561.
Li, M. (2013). Individual novices and collective experts: Collective scaffolding in wiki-based small group writing. System, 41(3), 752–769.
Li, M. (2018). Computer-mediated collaborative writing in L2 contexts: An analysis of empirical research. Computer Assisted Language Learning, 31(8), 882–904.
Li, M., & Kim, D. (2016). One wiki, two groups: Dynamic interactions across ESL collaborative writing tasks. Journal of Second Language Writing, 31, 25–42.
Li, M., & Zhang, M. (2021, online first). Collaborative writing in L2 classrooms: A research agenda. Language Teaching.
Li, M., & Zhu, W. (2013). Patterns of computer-mediated interaction in small writing groups using wikis. Computer Assisted Language Learning, 26(1), 62–81.
Li, M., & Zhu, W. (2017a). Explaining dynamic interactions in wiki-based collaborative writing. Language Learning & Technology, 21(2), 96–120.
Li, M., & Zhu, W. (2017b). Good or bad collaborative wiki writing: Exploring links between group interactions and writing products. Journal of Second Language Writing, 35, 38–53.
Mak, B., & Coniam, D. (2008). Using wikis to enhance and develop writing skills among secondary school students in Hong Kong. System, 36, 437–455.
Pfaff, E., & Huddleston, P. (2003). Does it matter if I hate teamwork? What impacts student attitudes toward teamwork. Journal of Marketing Education, 25(1), 37–45.
Qu, W. (2017). For L2 writers, it is always the problem of the language. Journal of Second Language Writing, 38, 92–93.
Rouhshad, A., & Storch, N. (2016). A focus on mode: Patterns of interaction in face-to-face and computer-mediated contexts. In M. Sato & S. Ballinger (Eds.), Peer interaction and second language learning: Pedagogical potential and research agenda (pp. 267–289). John Benjamins.
Selcuk, H., Jones, J., & Vonkova, H. (2019). The emergence and influence of group leaders in web-based collaborative writing: Self-reported accounts of EFL learners. Computer Assisted Language Learning.
Stahl, G. (2006). Group cognition: Computer support for building collaborative knowledge. MIT Press.
Storch, N. (2002). Patterns of interaction in ESL pair work. Language Learning, 52(1), 119–158.
Storch, N. (2013). Collaborative writing in L2 classrooms. Multilingual Matters.
Storch, N. (2017). Implementing and assessing collaborative writing activities in EAP classes. In J. Bitchener, N. Storch, & R. Wette (Eds.), Teaching writing for academic purposes to multilingual students: Instructional approaches (pp. 130–142). Routledge.
Storch, N., & Wigglesworth, G. (2007). Writing tasks: Comparing individual and collaborative writing. In M. P. García Mayo (Ed.), Investigating tasks in formal learning (pp. 157–177). Multilingual Matters.
Strobl, C. (2014). Affordances of Web 2.0 technologies for collaborative advanced writing in a foreign language. CALICO Journal, 31(1), 1–18.
Swain, M. (2006). Languaging, agency and collaboration in advanced second language learning. In H. Byrnes (Ed.), Advanced language learning: The contributions of Halliday and Vygotsky (pp. 95–108). Continuum.
Swain, M. (2010). Talking-it-through: Languaging as a source of learning. In R. Batstone (Ed.), Sociocognitive perspectives on language use and language learning (pp. 112–130). Oxford University Press.
Swain, M., & Lapkin, S. (1998). Interaction and second language learning: Two adolescent French immersion students working together. The Modern Language Journal, 82(3), 320–337.
Trentin, G. (2009). Using a wiki to evaluate individual contribution to a collaborative learning project. Journal of Computer Assisted Learning, 25(1), 43–55.
van Lier, L. (1996). Interaction in the language curriculum: Awareness, autonomy and authenticity. Longman.
Vorobel, O., & Kim, D. (2017). Adolescent ELLs' collaborative writing practices in face-to-face and online contexts: From perceptions to action. System, 65, 78–89.
Wang, Y. C. (2014). Using wikis to facilitate interaction and collaboration among EFL learners: A social constructivist approach to language teaching. System, 42, 383–390.
Wang, Y. C. (2015). Promoting collaborative writing through wikis: A new approach for advancing innovative and active learning in an ESP context. Computer Assisted Language Learning, 28(6), 499–512.
Watanabe, Y. (2008). Peer-peer interaction between L2 learners of different proficiency levels: Their interactions and reflections. The Canadian Modern Language Review, 64(4), 605–663.
6 Computer-Mediated Collaborative Writing
149
Yanguas, I. (2020). L1 vs L2 synchronous text-based interaction on computermediated L2 writing. System, 88, 102169. Yim, S., & Warschauer, M. (2017). Web-based collaborative writing in L2 contexts: Methodological insights from text mining. Language Learning & Technology, 21(1), 146–165. Zheng, Y., Yu, S., & Lee, I. (2021). Implementing collaborative writing in Chinese EFL classrooms: Voices from tertiary teachers. Frontiers in Psychology. Zhang, M. (2018). Collaborative writing in the EFL classroom: The effects of L1 and L2 use. System, 76 , 1–12. Zhang, M., & Chen, W. (ongoing). Exploring the effects of assessment approaches in synchronous computer-mediated collaborative writing. Journal of Second Language Writing 2022 Special Issue. Zhang, M., Gibbon, J., & Li, M. (2021, online first). Computer-mediated collaborative writing in L2 classrooms: A systematic review. Journal of Second Language Writing. Zhang, M., & Plonsky, L. (2020). Collaborative writing in face-to-face settings: A substantive and methodological review. Journal of Second Language Writing, 49, 100753.
7 Automated Writing Evaluation
Introduction

Automated writing evaluation (AWE), earlier referred to as automated essay scoring (AES), is receiving increasing attention in L2 teaching and learning. Since the first AES engine was created in the 1960s, multiple AWE software programs (e.g., My Access!, Criterion, Intelligent Essay Assessor, WriteToLearn, Turnitin, and Writing Pal) have been developed to provide numerical scores and evaluative feedback on the writing of both native speakers and L2 learners (Warschauer & Ware, 2006). Rather than being merely an automated scoring tool, as the name AES suggests, AWE is now widely regarded as a language learning tool, since many programs provide model essays, graphic organizers, and dictionaries and also generate reports on students' writing progress. AWE systems continue to become more effective and diverse, addressing different writing foci and target learners, and more accessible in various learning contexts (Hegelheimer & Ralli, 2020). This chapter discusses research and instructional practice on AWE, which is mostly used to provide formative feedback that improves the quality of students' writing. The chapter begins with the definition of AWE and
rationales for using AWE. It then reviews key texts in this domain of research, with important information presented in illustrative tables. Research gaps are then addressed based on a synthesis of the main research strands. The chapter ends with suggestions for future research and pedagogy.
Defining Automated Writing Evaluation (AWE)

AWE systems are engines that utilize sophisticated natural language processing (NLP) techniques and machine learning to analyze a wide range of text features at the lexical, syntactic, semantic, and discourse levels and subsequently provide scores and evaluative feedback on writing (Li, Link, & Hegelheimer, 2015; Li, Dursun, & Hegelheimer, 2020). AWE systems have been used for writing assessment in high-stakes contexts and for instructional support of writing in classroom settings (Shermis et al., 2016). More simply, AWE is computer-generated scoring and feedback used for assessment and instructional purposes (Stevenson, 2016). The use of AWE for both formative and summative assessment has been examined. Although researchers (e.g., Chapelle et al., 2015; Li et al., 2014) found high human–computer inter-rater reliability in testing settings, critics have questioned AWE's ability to judge critical thinking, rhetorical knowledge, and creativity (Hockly, 2019). Therefore, the use of AWE is more commonly acknowledged for formative assessment.
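The feature-analysis step described above can be illustrated with a toy sketch. The features below (word count, type–token ratio, mean sentence length) are invented for illustration and bear no relation to any commercial AWE engine; real systems extract far richer lexical, syntactic, semantic, and discourse information before scoring.

```python
# Toy illustration of AWE-style surface feature extraction (not a real engine).
def extract_features(text: str) -> dict:
    """Compute a few simple lexical and syntactic proxies for an essay."""
    words = text.split()
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    types = {w.strip(".,;:!?\"'").lower() for w in words}
    return {
        "word_count": len(words),                                     # fluency proxy
        "type_token_ratio": len(types) / max(len(words), 1),          # lexical diversity
        "mean_sentence_length": len(words) / max(len(sentences), 1),  # syntactic proxy
    }

essay = "The festival begins in spring. Families gather and share food. It is joyful."
features = extract_features(essay)
```

An engine would feed features like these into a model trained on human-rated essays to produce a score and targeted feedback.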
Rationales for Utilizing AWE

Research on AWE systems has yielded favorable findings in both testing and classroom contexts. Previous correlational studies have shown that AWE scores correlate highly with human scores in various testing settings (Li et al., 2014). AWE systems can effectively evaluate multiple aspects of writing, such as coherence, discourse structure, vocabulary usage, content, and grammar (Warschauer & Ware, 2006). The high human–computer inter-rater reliability can validate the utility of AWE systems in large-scale standardized testing such as TOEFL, thus innovating testing practice globally (Chapelle et al., 2015). Also importantly, AWE systems have played important roles in writing instruction. Previous research has suggested that AWE systems provide effective feedback on language use, adoption of writing strategies, and conformity with genre conventions (Chapelle et al., 2015). By performing error diagnosis, generating individualized feedback, and offering self-access resources such as thesauri and editing tools, AWE programs provide students with opportunities to direct their learning and improve their writing in a self-regulated learning environment (Chen & Cheng, 2008; Link et al., 2014; Wang et al., 2013). Moreover, positive perceptions of AWE have been reported from both students and teachers, despite some inconclusive findings. L2 students perceived that AWE motivated their learning (Grimes & Warschauer, 2010), helped them with error correction (Li et al., 2015), enhanced their metalinguistic knowledge (Liao, 2016), and fostered rhetorical development in writing (Cotos, 2011). According to Link et al. (2014), teachers conveyed positive perspectives on the effectiveness of AWE tools, especially the automated grammar feedback: AWE tools enabled them to spend less time on language issues and more time on global issues such as content and organization. Compared with teacher feedback, AWE exhibits discernible advantages in timeliness, support for multiple resubmissions, and potential for learner autonomy (Chen & Cheng, 2008; Hyland, 2019). With wider awareness of AWE's positive role and potential, AWE programs are expected to gain popularity in L2 writing classes.
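The human–machine agreement these validation studies report is, at its core, a correlation between two sets of ratings. A minimal sketch of that check, using invented scores on a 1–5 band (not data from any study cited here):

```python
# Minimal sketch: Pearson correlation between human and machine essay scores.
# The two score lists are invented for illustration only.
from statistics import mean

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sdx = sum((a - mx) ** 2 for a in x) ** 0.5
    sdy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sdx * sdy)

human_scores   = [3, 4, 4, 5, 2, 3, 5, 4]   # trained human rater
machine_scores = [3, 4, 5, 5, 2, 3, 4, 4]   # AWE engine
r = pearson_r(human_scores, machine_scores)  # ≈ 0.87 on this toy data
```

Validation studies typically report such correlations alongside exact- and adjacent-agreement rates before drawing conclusions about scoring validity.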
Key Texts

To zoom in on research on automated writing evaluation in the L2 context, I searched the Google Scholar database using the keywords "automated writing evaluation" and "L2," limiting the results to articles published from 2010 to 2020 in journals with a CiteScore higher than 2.0. Sixteen articles were subsequently selected for illustration. Worthy of note, Grimes and Warschauer (2010) conducted a large-scale empirical study on AWE with middle school students in the USA in which native speakers of English and English language learners were not distinguished. As the study was not carried out exclusively with ESL students, this article was not selected, given the focus of this book. In this section, I first present an overview of the reported studies in terms of context and participants, writing task and technology, theoretical framework, methodological approach, and validity/reliability strategy (as illustrated in Table 7.1). The sixteen articles are then further analyzed in terms of thematic categories to show the investigation foci of the past decade (see Table 7.2).
Overview

Table 7.1 provides a holistic picture of the sixteen studies. As the table shows, the bulk of the literature reports using AWE with ESL students at the tertiary level in the USA, supplemented by studies of EFL students in Asia. The tools utilized in previous research included popular AWE systems such as My Access!, Criterion, Grammarly, and Pigai, as well as a few self-developed tools such as the Intelligent Academic Discourse Evaluator (IADE). These studies were mainly informed by socio-cognitive theory, sociocultural theory, and feedback theory. The research largely drew upon quantitative and mixed-methods approaches, supplemented with qualitative studies. To ensure validity and reliability, prior studies adopted strategies such as triangulation, member checking, thick description, and inter-rater reliability measured by Cronbach's alpha and Cohen's kappa.
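Cohen's kappa, named above as a reliability strategy, corrects the raw agreement between two raters (or a rater and an engine) for the agreement expected by chance. A hedged sketch with invented categorical codes, not data from any of the reviewed studies:

```python
# Cohen's kappa for two raters' categorical judgments (illustrative data only).
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two equal-length label lists."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    # Chance agreement from each rater's marginal category proportions.
    expected = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)
    return (observed - expected) / (1 - expected)

human  = ["error", "ok", "error", "ok", "ok", "error"]
engine = ["error", "ok", "ok",    "ok", "ok", "error"]
kappa = cohens_kappa(human, engine)  # 5/6 raw agreement corrects to kappa = 2/3
```

Unlike simple percentage agreement, kappa stays low when two coders agree mostly because one category dominates, which is why coding studies report it.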
Table 7.1 Research matrix of automated writing evaluation

Wang et al. (2013)
Context & participants: 57 EFL students, tertiary level, Taiwan
Writing task & technology: Essay writing on "My Favorite Chinese Festival"; Correct English
Theoretical framework: No theoretical framework explicitly stated
Methodological approach: Quasi-experimental study
Research questions: 1. Is there a significant difference in the writing accuracy of the experimental group compared with the control group after the application of AWE? 2. Is there a significant difference in the writing accuracy before and after the employment of AWE? 3. What is the overall self-report of using AWE on the improvement of student writing in terms of accuracy, learner autonomy, and interaction? 4. What are students' attitudes toward the usage of AWE on their writing development?
Validity & reliability strategy: Validity and reliability tested via Cronbach's alpha

Link et al. (2014)
Context & participants: 3 ESL instructors of academic writing, tertiary level, USA
Writing task & technology: Essay writing based on prompts from Criterion; Criterion
Theoretical framework: Sociocultural theory (Lantolf, 2000; Lantolf & Thorne, 2006)
Methodological approach: Longitudinal case study
Research questions: 1. How are ESL instructors implementing the AWE tool in the ESL writing classroom? 2. What are instructors' perceptions of their experiences with the AWE tool? 3. What areas of concern and suggestions do ESL instructors have for future AWE practitioners?
Validity & reliability strategy: Triangulation; external auditor checks; peer debriefing

Chapelle et al. (2015)
Context & participants: 20 undergraduate ESL students and 105 graduate ESL students, tertiary level, USA
Writing task & technology: Different types/genres of essays for undergraduate students; research articles for graduate students; Criterion and Intelligent Academic Discourse Evaluator (IADE)
Theoretical framework: Argument-based validity framework (Clauser et al., 2002)
Methodological approach: Mixed-methods study
Research question: What is the validity of two AWE-based diagnostic assessment systems (i.e., Intelligent Academic Discourse Evaluator (IADE) and Criterion), and what kinds of inferences and claims can be made based on the goals of each AWE system and its use?
Validity & reliability strategy: Intercoder reliability

Lavolette et al. (2015)
Context & participants: 32 ESL students from five academic writing classes, tertiary level, USA
Writing task & technology: Four essays written based on prompts from Criterion; Criterion
Theoretical framework: Feedback in SLA (Polio, 2012); skill-acquisition theory (Evans et al., 2011; Hartshorn et al., 2010)
Methodological approach: Quantitative study
Research questions: 1. How well can Criterion give language feedback? a. What types of language errors does Criterion correctly detect? b. To what extent does Criterion miss errors? 2. How do type of error, correctness of the error code, timeliness of the feedback, and practice over time affect student response to the feedback? 3. Does immediate feedback result in more grammatically accurate writing?
Validity & reliability strategy: Intercoder reliability

Li et al. (2015)
Context & participants: 4 ESL writing instructors and 70 ESL students, tertiary level, USA
Writing task & technology: Three papers in the lower-level class and four papers in the higher-level class, written based on prompts from Criterion; Criterion
Theoretical framework: Input hypothesis (Long, 1996); noticing hypothesis (Schmidt, 1990, 1994)
Methodological approach: Mixed-methods study
Research questions: 1. How do the instructors use and view Criterion corrective feedback in light of their writing pedagogy? 2. How do the students view the AWE corrective feedback? 3. How does the use of the AWE corrective feedback affect the students' writing practice? 4. How does the use of the AWE corrective feedback affect the change of writing accuracy from the first draft to the final draft?
Validity & reliability strategy: Triangulation; peer debriefing

Liao (2016)
Context & participants: 66 EFL university students, Taiwan
Writing task & technology: Four comparison essays using a process-writing approach; Criterion
Theoretical framework: Interlanguage theory (Ortega, 2009)
Methodological approach: Quasi-experimental time series design
Research questions: 1. What are the participants' primary pre-treatment grammatical error types? 2. Does using AWE in a process-writing approach improve learner linguistic accuracy in revising texts regarding these error types? 3. Does using AWE in a process-writing approach improve learner linguistic accuracy in writing new texts regarding these error types?
Validity & reliability strategy: No validity/reliability measures reported

Liu and Kunnan (2016)
Context & participants: 163 EFL students, tertiary level, China
Writing task & technology: Essay writing based on prompts from WriteToLearn; WriteToLearn
Theoretical framework: Socio-cognitive theory
Methodological approach: Quantitative study
Research questions: 1. How does WriteToLearn perform in scoring Chinese undergraduate English majors' essays compared to scoring by trained human raters? 2. How accurate is WriteToLearn's feedback to Chinese undergraduate English majors compared to trained human raters?
Validity & reliability strategy: Internal consistency via many-facet Rasch measurement (MFRM)

Ai (2017)
Context & participants: 6 undergraduate Chinese-as-a-foreign-language students, tertiary level, USA
Writing task & technology: An English–Chinese translation task; a Chinese ICALL program
Theoretical framework: Sociocultural theory (Vygotsky, 1978) and associated constructs such as mediation and the ZPD
Methodological approach: Qualitative study
Research questions: 1. What effect, if any, does graduated CF in an ICALL environment have on language learning? 2. What are the participants' perceptions of this type of CF?
Validity & reliability strategy: Triangulation; member checks

Ranalli (2018)
Context & participants: 82 ESL students, tertiary level, USA
Writing task & technology: Corpus of essays submitted via Criterion; Criterion
Theoretical framework: Input hypothesis (Long, 1996); noticing hypotheses (Gass, 1988, 1990, 1991; Schmidt, 1990, 1994)
Methodological approach: Quantitative study
Research questions: 1. How successfully can college-level ESL student writers use the AWCF provided by Criterion to correct written errors? How do feedback explicitness, course level, and the need to evaluate accuracy influence this ability? 2. How much mental effort do college-level ESL student writers perceive in using the AWCF provided by Criterion to correct errors? How do feedback explicitness, course level, and the need to evaluate accuracy influence these perceptions? 3. How clear and helpful do college-level ESL student writers find the AWCF provided by Criterion? How do feedback explicitness and course level influence their evaluations?
Validity & reliability strategy: Inter-rater reliability measured by Cohen's kappa

Matthews and Wijeyewardene (2018)
Context & participants: 104 text samples from Chinese EFL college students; 2 assessors
Writing task & technology: 104 standardized text samples gathered from the corpus; Coh-Metrix
Theoretical framework: Assessment of writing quality, i.e., cohesion (Halliday & Hasan, 1976), lexical features (Ellis, 2002; Nation, 2001), and syntactic complexity (Ortega, 2003)
Methodological approach: Quantitative study
Research questions: 1. What is the relationship between automatically generated indices and human assessors' scores? 2. What is the difference in automatically generated indices between lower- and higher-quality texts? 3. Do automatically generated indices predict human assessors' scores?
Validity & reliability strategy: Intercoder reliability

Li et al. (2019)
Context & participants: 245 EFL students, tertiary level, China
Writing task & technology: Unspecified writing tasks; Pigaiwang/Pigai
Theoretical framework: Technology Acceptance Model (TAM) (Davis, 1989)
Methodological approach: Quantitative study
Research questions: 1. What are the internal factors of TAM that influence EFL learners' use of AWE? 2. What are the effects of computer self-efficacy on EFL learners' use of AWE? 3. What are the effects of computer anxiety on EFL learners' use of AWE?
Validity & reliability strategy: Confirmatory factor analysis; Cronbach's α and composite reliability

Saricaoglu (2019)
Context & participants: 31 ESL students, tertiary level, USA
Writing task & technology: Two cause-and-effect essays; automated causal discourse evaluation tool (ACDET)
Theoretical framework: Interaction hypothesis (Long, 1983)
Methodological approach: Quasi-experimental study
Research questions: 1. To what extent does automated formative feedback provided by ACDET lead to improvement of ESL learners' written causal explanations within essays? 2. To what extent does automated formative feedback provided by ACDET lead to improvement of ESL learners' written causal explanations across pre- and post-tests?
Validity & reliability strategy: Inter-rater reliability measured by Cohen's kappa

Link et al. (2020)
Context & participants: 32 EFL students, tertiary level, Iran
Writing task & technology: Essay writing based on prompts from Criterion; Criterion
Theoretical framework: Socio-cognitive theory (Hyland, 2003)
Methodological approach: Quantitative study
Research questions: 1. Is there a difference in amount and level (higher- versus lower-level) of teacher feedback between the AWE and teacher groups? 2. Are there differences in student revision practices between the two groups? 3. Are there differences in short- and long-term effect on the students' writing improvement in terms of CAF (complexity, accuracy, and fluency) between the two groups?
Validity & reliability strategy: Intercoder reliability

Jiang, Yu, and Wang (2020)
Context & participants: 11 teachers of a College English course, tertiary level, China
Writing task & technology: Essay writing based on prompts from Pigai; Pigai
Theoretical framework: Mediated learning experience (MLE) theory (Feuerstein, 1990; Lee, 2014)
Methodological approach: Mixed-methods study
Research questions: 1. Are there any changes in teachers' writing feedback practice when an AWE program is integrated into their teaching? If yes, what may be the changes? 2. What factors may mediate such changes, if any?
Validity & reliability strategy: Triangulation; intercoder reliability

Koltovskaia (2020)
Context & participants: 2 ESL students, tertiary level, USA
Writing task & technology: Writing a literature review; Grammarly
Theoretical framework: Student engagement framework (Ellis, 2010)
Methodological approach: Qualitative study
Research question: How do students behaviorally, cognitively, and affectively engage with AWCF provided by Grammarly when revising their final draft?
Validity & reliability strategy: Triangulation; intercoder reliability

Zhang (2020)
Context & participants: 3 EFL students, tertiary level, China
Writing task & technology: Essay writing based on prompts from Pigai; Pigai
Theoretical framework: Cognitive dimension of writing and revision (Faigley & Witte, 1981; Flower & Hayes, 1981; Flower et al., 1986)
Methodological approach: Qualitative study
Research questions: 1. What AWE feedback do the students receive on their L2 writing? 2. How do the students perceive the AWE feedback on their L2 writing? 3. How do the students engage with the AWE feedback to make revisions?
Validity & reliability strategy: Thick description; intercoder reliability
Table 7.2 Research timeline of automated writing evaluation

Reference: Wang, Y. J., Shang, H. F., & Briody, P. (2013). Exploring the impact of using automated writing evaluation in English as a foreign language university students' writing. Computer Assisted Language Learning, 26(3), 234–257.
Themes: B.2, C.1, C.2
Annotation: Wang et al.'s quasi-experimental study makes a strong case for the implementation of AWE as a tool for improving writing. They sought to determine whether using AWE leads to the improvement of Taiwanese EFL students' writing in terms of accuracy, learner autonomy, and interaction. They also examined students' perceptions toward using AWE. A comparison of the pre-test and post-test scores revealed that the students in the experimental group not only made fewer errors than their counterparts in the control group but also performed significantly better after receiving feedback from the AWE program. In addition, based on the questionnaire survey, they found that students in the experimental group generally expressed positive attitudes toward the effect of AWE on the improvement of their writing accuracy.

Reference: Link, S., Dursun, A., Karakaya, K., & Hegelheimer, V. (2014). Towards better ESL practices for implementing automated writing evaluation. CALICO Journal, 31(3), 323–344.
Themes: B.1, C.3
Annotation: This longitudinal case study explored the ways in which ESL instructors incorporated the AWE tool Criterion into their writing classes as well as the instructors' perceptions of the efficacy of Criterion for writing/learning development. The findings indicated that the AWE tool not only helped to boost students' sense of autonomy and motivation but also improved their metalinguistic skills. This was attributed to the fact that Criterion's grammar checker function allowed the instructors to spend more time providing feedback on organization and meaning instead of focusing on students' linguistic errors. Apart from these reported benefits, the instructors also expressed some concerns with the AWE tool, such as reserved use of the tool's features due to the students' lack of familiarity and low satisfaction with misleading holistic scores. Consequently, the authors concluded with pedagogical recommendations, such as adequate training of instructors on the use of AWE tools and open communication and sharing of teaching strategies between instructors.

Reference: Chapelle, C. A., Cotos, E., & Lee, J. (2015). Validity arguments for diagnostic assessment using automated writing evaluation. Language Testing, 32(3), 385–405.
Themes: A.2, B.2
Annotation: This study evaluates two AWE-based diagnostic assessment systems (i.e., Intelligent Academic Discourse Evaluator (IADE) and Criterion) in terms of the validity of their inferences, uses, and consequences, using Clauser et al.'s (2002) argument-based validity framework. Data were collected from two different case studies, one from an undergraduate EAP class and the other from a graduate EAP class. The first case examined how students made revision choices based on the feedback received from Criterion, whereas the second case specifically investigated whether the feedback provided by IADE helped students focus on how meaning is expressed in research papers. The results of the first case showed that the AWE feedback positively influenced students' revision process, even though some students disregarded 50% of the Criterion feedback. Drawing on data from surveys, think-aloud protocols, screen recordings of students' interaction with IADE, and interviews, the second case revealed that although students did not initially think about meaning while making revisions, they later turned their attention to the functional meaning due to IADE's color-coded feedback. Taken together, this study provides strong evidence in support of AWE-based diagnostic assessment tools for L2 writing and learning.

Reference: Lavolette, E., Polio, C., & Kahng, J. (2015). The accuracy of computer-assisted feedback and students' responses to it. Language Learning & Technology, 19(2), 50–68.
Themes: A.2, C.1
Annotation: Lavolette et al.'s (2015) study examined the AWE tool Criterion in terms of the accuracy of its feedback. Additionally, they sought to ascertain whether immediate feedback on writing was more helpful than delayed feedback. After coding the feedback and categorizing the different error types found in students' writing, students' responses to the feedback were also coded. The results indicated that the error codes produced by Criterion were only accurate 75% of the time, and the AWE tool missed about 46% of errors. Additionally, with regard to the immediacy of Criterion's feedback, no significant difference was found in students' responses to immediate and delayed feedback. This further confirms that the immediacy of AWE feedback does not guarantee improvement in students' revisions.

Reference: Li, J., Link, S., & Hegelheimer, V. (2015). Rethinking the role of automated writing evaluation (AWE) feedback in ESL writing instruction. Journal of Second Language Writing, 27, 1–18.
Themes: B.1, B.2, C.1, C.2
Annotation: This mixed-methods study explored the ways in which Criterion's corrective feedback was used by instructors in their writing classes and its impact on students' writing practice. Based on the students' draft submissions, Criterion error reports, and semi-structured interviews with the instructors and students separately, Li et al. found that although the instructors were not completely satisfied with the quality of Criterion's feedback, they generally agreed that since AWE provides feedback on the grammatical and mechanical aspects of students' writing, this afforded them the opportunity to provide feedback on content, organization, and other important aspects of writing. The students also expressed similar sentiments in terms of the value of Criterion's feedback. Moreover, the results also showed that the use of AWE encouraged students to write more; over half of the students turned in at least three or more submissions. Lastly, with regard to the change in writing accuracy, the study revealed that the AWE corrective feedback contributed to the improvement of students' linguistic accuracy.

Reference: Liao, H. C. (2016). Using automated writing evaluation to reduce grammar errors in writing. ELT Journal, 70(3), 308–319.
Themes: C.1, C.2
Annotation: Utilizing a quasi-experimental time series design with 66 Taiwanese university students, this study examined how the AWE tool Criterion impacted the correction of grammatical errors during writing performance. Employing a multiple-draft process-writing approach, the students were required to complete four comparison-essay assignments. Each essay followed the same four steps: (1) initial drafting; (2) addressing teacher meaning-based commentary; (3) attending to linguistic/grammatical aspects of writing; and (4) producing the revised draft based on AWE feedback. AWE feedback focused on grammar, identified in nine categories of linguistic errors. Results indicated that although patterns of error varied, the trend was toward a gradual and non-linear development of grammatical skills across the four essays. The use of the AWE appeared effective at reducing the number of grammatical errors in both the revisions of each paper and new text production. This improvement can be ascribed to the AWE facilitating students' repeated, self-directed practice. These findings deepened our understanding of using AWE in the L2 writing classroom as a supplementary tool.

Reference: Liu, S., & Kunnan, A. J. (2016). Investigating the application of automated writing evaluation to Chinese undergraduate English majors: A case study of "WriteToLearn". CALICO Journal, 33(1), 71–91.
Themes: A.1
Annotation: This large-scale study, conducted with Chinese L2 students, investigated WriteToLearn's error detection and scoring ability by comparing its feedback and scores to those graded by four human raters. The students were asked to submit two essays in response to two writing prompts (one expository and the other persuasive). The essays were then evaluated based on six writing traits (each rated on a six-point scale): ideas, organization, conventions, sentence fluency, word choice, and voice. Liu and Kunnan found that although WriteToLearn had a better scoring ability compared to the human raters in terms of consistency, the human raters were more lenient. With regard to error feedback, however, WriteToLearn performed poorly, with an overall rate of 49% in precision and 18.7% in recall. This means that the feedback received from this AWE tool was not always accurate and was unable to detect errors such as the use of articles, prepositions, word choice, and expression. Consequently, the authors recommended using this AWE tool as a supplementary assessment method in addition to human raters because of its scoring performance.

Reference: Ai, H. (2017). Providing graduated corrective feedback in an intelligent computer-assisted language learning environment. ReCALL, 29(3), 313–334.
Themes: B.2, C.1, C.2
Annotation: Ai's study examined the effectiveness of a Chinese AWE system and Chinese-as-a-foreign-language students' perceptions of the automated corrective feedback (CF). The participants were asked to complete an English-to-Chinese translation task that required them to use various syntactic aspects of the Chinese ba-construction. Drawing on microgenetic analyses of screen recordings, website logs, and audio and video recordings, the author discovered that graduated CF does facilitate learners' ability to identify and make corrections to grammatical problems on their own. Additionally, analysis of interview data indicated that students generally preferred indirect/implicit CF, as it allows them to think further and come up with correct answers by themselves, which ultimately contributed to their L2 development.

Reference: Ranalli, J. (2018). Automated written corrective feedback: How well can students make use of it? Computer Assisted Language Learning, 31(7), 653–674.
Themes: A.2, B.2, D
Annotation: This classroom-based study examined how ESL students responded to the CF provided by the AWE tool. Building on prior studies which suggested that certain factors could influence L2 students' use of AWE (e.g., explicitness and accuracy, individual cognitive demands, learners' perceptions of the tool, and their course level), Ranalli sought to understand the factors that affect students' incorporation of Criterion's feedback into revisions and the students' perceptions of this AWE tool. The participants completed an error-correction task in which they provided scores on mock-ups of Criterion feedback, ratings of perceived mental effort, as well as the clarity and helpfulness of the feedback. To investigate the students' ability to successfully incorporate Criterion's feedback, a descriptive statistical analysis was conducted on four main categories: (1) error-correction scores, (2) perceived mental effort, (3) clarity, and (4) helpfulness. The results showed that generic feedback required more mental effort than specific feedback, and students struggled to make successful corrections whenever they received generic feedback with unfamiliar terms. However, students' perceptions of the clarity and helpfulness of Criterion's feedback were generally positive.

Reference: Matthews, J., & Wijeyewardene, I. (2018). Exploring relationships between automated and human evaluations of L2 texts. Language Learning & Technology, 22(3), 143–158.
Themes: A.1
Annotation: Matthews and Wijeyewardene's study compared automated computer-based evaluation to human evaluation by analyzing the correlation between the indices generated by the computational tool Coh-Metrix and those derived from human assessors. Based on the evaluation of 104 standardized text samples gathered from a corpus, Coh-Metrix generated 108 indices, which were then categorized into six types, i.e., referential cohesion, connectives, lexical density, word information, syntactic complexity, and syntactic pattern density. The two human assessors also used an analytic rubric with four main criteria: (a) coherence and cohesion, (b) lexical resources, (c) grammatical range and accuracy, and (d) task achievement. The results indicated that several automated indices, especially referential cohesion, were significantly correlated with the human assessors' evaluations of text quality. Additionally, the key index, global argument, had a significant correlation to all the criteria on the analytic rubric used by the human assessors. Taken together, these findings highlight the potential importance of AWE in the evaluation of textual quality.

Reference: Li, R., Meng, Z., Tian, M., Zhang, Z., Ni, C., & Xiao, W. (2019). Examining EFL learners' individual antecedents on the adoption of automated writing evaluation in China. Computer Assisted Language Learning, 32(7), 784–804.
Themes: D
Annotation: In this large-scale (N = 245) study, Li et al. sought to identify the factors that influence Chinese EFL learners' adoption of AWE by adding two external factors (i.e., computer self-efficacy and computer anxiety) to Davis' (1989) technology acceptance model (TAM), which originally had four constructs, i.e., perceived usefulness, perceived ease of use, attitude toward using, and behavioral intention to use. The results from the analyses of three questionnaires showed that participants' behavioral intention to use AWE is mainly dependent on factors such as its perceived usefulness, their attitude toward the technology use, and computer self-efficacy. Moreover, the added factors of computer self-efficacy and computer anxiety were found to significantly impact students' perceived ease of using the AWE tool.

Reference: Saricaoglu, A. (2019). The impact of automated feedback on L2 learners' written causal explanations. ReCALL, 31(2), 189–203.
Themes: C.2
Annotation: This study examined the impact of AWE on students' L2 writing development as reflected in their written causal explanations. Using a pre-test/post-test design, students were asked to write one essay in the pre-test and one in the post-test. Students had opportunities to make revisions based on the automated feedback. Drawing on students' pre- and post-test drafts, feedback reports gathered from the AWE tool, and screen-capturing videos, the study indicated that although no statistically significant changes in causal language features were observed in the second essay, a statistically significant reduction in the number of causal conjunctions and an increase in the number of adverbs and adjectives were detected. This study shows AWE's potential role in developing students' writing.
Theme
Annotation
168 M. Li
Reference
Link, S., Mehrzad, M., & Rahimi, M. (2020). Impact of automated writing evaluation on teacher feedback, student revision, and writing improvement. Computer Assisted Language Learning, 1–30
Jiang, L., Yu, S., & Wang, C. (2020). Second language writing instructors’ feedback practice in response to automated writing evaluation: A sociocultural perspective. System, 93, 102302
Year
2020
2020
C.1 C.2 C.3
This quantitative study investigates the impact of AWE on L2 writing by comparing the use and non-use of Criterion in two sections of an EFL writing course in Iran. The participants were randomly assigned to two groups, i.e., the AWE group or the teacher group. While the AWE group students received automated feedback as well as content feedback from the teacher, the teacher group students only received feedback from the teacher. The goal was not only to see the impact of automated feedback on teaching practices, students’ revision process but also to determine whether there were any short- or long-term improvements in L2 writing. The findings on teacher feedback indicated that the use of AWE did not increase the amount and level of feedback provided. In fact, the teacher from the AWE group provided less feedback than the one from the Teacher group. The authors attributed this result to influencing factors such as assessment criteria and differences in teaching abilities. With regard to students’ revision, the study showed that students made more revisions to not only surface-level errors but also content-based ones based on both AWE and teacher feedback. Lastly, the short-term gains in terms of improvement in textual accuracy were noted Jiang et al.’s study sought to determine whether teachers’ writing feedback practices were impacted when AWE was introduced and if so, what potential factors could mediate teachers’ feedback change in response to AWE use. Data were collected from 11 teachers in a Chinese EFL context. A descriptive statistical analysis of data on feedback types and levels before and after using AWE revealed a significant difference in terms of both feedback types (5 out of the 6) and also feedback levels (7 out of the 9). 
Furthermore, three patterns of teachers’ use of AWE and two layers of change in teachers’ feedback practices were noted, namely a) changes in feedback mode, feedback time/workload, and feedback types and levels, and b) changes in teachers’ intentionality, reciprocity, transcendence, and meaning. While some teachers refused to accept the new automated feedback system, others used the AWE as a supplementary tool. This resulted in making their feedback more intentional, focused, and selective. Consequently, the authors also found several individual and contextual factors influencing instructors’ use of AWE, such as teacher beliefs/disbeliefs in AWE feedback, their willingness and agency to supplement AWE with scaffolding, workload, and class size
(continued)
C.3 D
Theme
Annotation
7 Automated Writing Evaluation
169
Reference
Koltovskaia, S. (2020). Student engagement with automated written corrective feedback (AWCF) provided by Grammarly: A multiple case study. Assessing Writing, 44, 100450
Zhang, Z. V. (2020). Engaging with automated writing evaluation (AWE) feedback on L2 writing: Student perceptions and revisions. Assessing Writing, 43, 100439
Year
2020
2020
Table 7.2 (continued) A2 C1
Koltovskaia’s case study examines two L2 students’ engagement with the automated feedback received via Grammarly by drawing on Ellis’ (2010) engagement framework to explore three main dimensions of engagement, i.e., behavioral, cognitive, and affective. The analyses of data in the form of screencasts, stimulated recall, and semi-structured interview revealed that with regard to behavioral engagement, students devoted a lot of time on surface-level issues and worked hard to eliminate the errors identified by Grammarly. In terms of affective and cognitive engagement, students displayed varying degrees of engagement. While one student showed overdependence on Grammarly and rarely questioned the automated feedback received, the other showed some mistrust and consulted the internet to verify the accuracy of the Grammarly feedback. Taken together, this study suggests that although automated feedback can have a positive effect on students’ behavioral engagement that alone is insufficient to language learning. Moreover, the overdependence on automated feedback can inhibit students from fully processing feedback and making necessary revisions This study takes a close look at how students engage with the feedback generated by Pigai in order to make revisions to their writing and how they perceive this way of receiving feedback. Based on a textual analysis of student drafts and AWE feedback, the author found that students use AWE feedback not just as a helpful tool to make corrections on language errors but also as a means of expanding their linguistic knowledge. Additionally, a qualitative analysis of interview transcripts, students’ reflective journals, and teaching documents revealed six different types of revision behaviors: correction, no correction, addition, deletion, substitution, and reorganization. 
Lastly, in relation to students’ perceptions, two out of the three students expressed positive attitudes about AWE feedback while the other remained indifferent A.2 B.2 C.1
Theme
Annotation
170 M. Li
7 Automated Writing Evaluation
171
Thematic Categories

To provide a rough timeline of these empirical studies and reflect their foci of investigation, I present below the sixteen articles chronologically, with respective annotations and research themes (see Table 7.2). The themes are categorized as follows:

A. Validity of AWE
   1. Correlation between AWE feedback (holistic scores) and scores/annotations of human raters
   2. Interaction of students and the AWE system
B. Perceptions of AWE
   1. Instructors' perceptions
   2. Students' perceptions
C. Impact of AWE
   1. Impact on writing revisions
   2. Impact on students' test scores and writing development
   3. Impact on teachers' writing feedback practice
D. Factors influencing students'/instructors' use of AWE

As Table 7.2 shows, some studies (e.g., Li et al., 2014; Matthews & Wijeyewardene, 2018) explored the validity of AWE by examining the correlation between AWE scores and human raters' scores (Category A). For instance, Li and colleagues (2014) conducted statistical analyses (i.e., ANOVA and chi-square) on different groups of scores and detected low to moderate correlations between the Criterion scores and the instructors' grades and analytic ratings. The study thus questioned the independent use of AWE scores for summative assessment of students' performance and instead favored using AWE as a means of formative assessment. In another study, Liu and Kunnan (2016) reported that the AWE tool WriteToLearn scored more consistently than the human raters but was unable to fully detect errors in the use of articles, prepositions, and word choice. This study confirmed the
use of AWE tools for supplementary, formative assessment. The other line of research on the validity of AWE examined students' responses to AWE. For example, Chapelle et al. (2015) evaluated two AWE-based diagnostic assessment systems (i.e., Intelligent Academic Discourse Evaluator [IADE] and Criterion) in terms of the validity of their inferences, uses, and consequences, drawing on Clauser et al.'s (2002) argument-based validity framework. The first case examined how students made revision choices based on the Criterion feedback, whereas the second case specifically investigated whether the feedback provided by IADE enabled students to focus on functional meanings in research papers. The results revealed that the AWE feedback positively influenced students' revision processes and enhanced their attention to both form and meaning, which provided sound evidence in support of using AWE for diagnostic assessment and feedback on writing.

An important research preoccupation in this domain concerns students' perceptions of AWE (Category B). For instance, Wang et al. (2013) reported students' positive attitudes toward AWE tools and acknowledged the role of AWE in developing students' writing accuracy. More recently, studies began to include instructors' perspectives. Link et al. (2014), for example, explored ESL instructors' perceptions of using the AWE tool Criterion in writing classrooms. The instructors claimed that AWE tools not only helped to boost learners' autonomy and motivation but also improved their metalinguistic knowledge. Also, Criterion's grammar checker function relieved instructors from offering specific error feedback so that they could spend more time commenting on global writing issues such as organization and meaning. Nevertheless, the instructors expressed some concerns relating to their lack of familiarity with the tool and some students' dissatisfaction due to misleading holistic scores.
Such perception data have, to some extent, informed instructors who want to implement AWE in their own instructional contexts.

Regarding the impact of AWE (Category C), a growing body of studies has explored the influence of AWE on students' revisions. For instance, Zhang (2020) examined Chinese EFL students' use of AWE feedback for the revision of their essays. Based on a textual analysis of writing drafts and AWE feedback, he found that students used AWE feedback
not just as a helpful tool for correcting language errors but also as a means of expanding their linguistic knowledge. Other studies have also reported the positive impact of AWE on students' writing development. In Wang et al.'s (2013) study, the comparison of pre-test and post-test scores revealed that the students in the AWE group made fewer errors than their counterparts and performed significantly better in the subsequent writing test after engaging with AWE feedback on previous writing tasks. Li et al.'s (2015) study likewise found that AWE corrective feedback contributed to the improvement of students' linguistic accuracy. Moreover, researchers have started to examine the influence of AWE on teachers' feedback practice. Link et al. (2014) reported that instructors spent more time providing feedback on the organization and meaning of students' writing, leaving AWE to help resolve students' linguistic errors. Jiang et al.'s (2020) recent study thoroughly examined instructors' feedback practice in response to AWE. Descriptive statistical data collected before and after using AWE revealed a significant difference in feedback types and feedback levels. Changes were also identified in teachers' intentionality, reciprocity, and transcendence. The instructors used AWE as a supplementary tool, and AWE made their feedback more intentional, focused, and selective.

Research efforts in this area have also progressed to explore the factors that influence students' and instructors' use of AWE (Category D). For example, Chen and Cheng (2008) noted that the effectiveness of AWE was influenced by teachers' attitudes toward the use of AWE and by individual learners' goals for learning to write. In a recent study, Li, Meng, et al. (2019) identified a few more factors that affected EFL students' adoption of AWE, including the perceived usefulness of AWE and its ease of use, which were highly related to students' computer self-efficacy and anxiety.
By contrast, focusing on instructors' use of AWE, Jiang et al. (2020) reported several individual and contextual factors, such as teacher beliefs/disbeliefs in AWE feedback, their willingness and agency to supplement AWE with scaffolding, teaching workload, and class size. A deepened understanding of such mediating factors will enable instructors to implement AWE wisely and effectively.
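Validity studies in Category A rest on correlating automated holistic scores with human ratings. As a minimal illustration of that computation (the scores below are invented for demonstration, not data from any study cited in this chapter), Pearson's r can be calculated as follows:

```python
# A minimal sketch of the correlation analyses used in Category A studies:
# relating automated holistic scores to human raters' scores.
# All score values below are hypothetical.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

awe_scores = [4.0, 3.5, 5.0, 2.5, 4.5, 3.0]    # e.g., AWE holistic scores
human_scores = [3.5, 3.0, 4.5, 3.0, 4.0, 2.5]  # e.g., instructor ratings

r = pearson_r(awe_scores, human_scores)
print(f"r = {r:.2f}")
```

In practice, researchers report such coefficients for each index or score pair, as Matthews and Wijeyewardene (2018) did for the Coh-Metrix indices against rubric criteria.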
Research Directions

As AWE tools continue to become more powerful and accessible in terms of revision foci, diverse areas, and target audiences (Hegelheimer & Ranalli, 2020), research on AWE will continue to burgeon in the years to come. Based on the synthesis of the above-mentioned studies, this section addresses the research gaps and points to directions for future research.

First, the implementation of AWE is expected to reach broader learning contexts, including graduate programs and sub-tertiary educational settings. The research review (Table 7.1) has shown that AWE research has predominantly focused on ESL/EFL settings (except Chinese as a foreign language in Ai's 2017 study), and developing AWE systems for languages other than English has not received due attention. Therefore, research on AWE in other language learning contexts would be extremely welcome. Also, empirical studies addressing how learners interact with AWE tools and how AWE supports their writing and language learning are far from sufficient. Given the lack of available research, longitudinal studies are encouraged to explore L2 learners' long-term learning with AWE tools. In particular, longer-term implementations and case studies of individual users observed over time are a pressing need (Hegelheimer & Ranalli, 2020). In addition, diverse AWE tools have been adopted in language/writing classes, but little research has compared the affordances of different AWE tools for learning and writing development. Such inquiries would help instructors choose tools that cater to their students' needs and meanwhile align with curricular goals.

Drawing upon the multiple themes discussed earlier in this chapter, I point out a few research strands that await further exploration. One strand is the effect of AWE on learners' writing development in the long run (Category C).
A handful of studies (e.g., Liao, 2016; Wang et al., 2013) have shown students' improvements in grammatical accuracy over a short period of time, but insufficient evidence of gains across writing tasks over time has been reported. While we call for further research illuminating feedback for acquisition in contrast to feedback for accuracy (Manchón, 2011), the impact of AWE on SLA constitutes another important line of inquiry in the research agenda. Little research has addressed
students' growth in writing competence, including writing strategies and the acquisition of genre conventions. By combining process and product approaches (Warschauer & Ware, 2006), future research can examine how L2 learners use AWE tools and, in turn, how the use of AWE influences their learning outcomes.

The next research strand concerns mediating factors (Category D), which have only recently begun to capture researchers' attention: the individual and contextual factors that influence learners' engagement with AWE feedback, such as learner motivation, self-regulated learning strategies, and teachers' beliefs and agency (Hegelheimer & Ranalli, 2020; Zhang, 2020). Examining the interplay of these important constructs and AWE will shed new light on how to implement AWE effectively in order to facilitate L2 learning.

Another important avenue worth exploring is the role of AWE in writing assessment (Li et al., 2014). In addition to providing formative assessment in writing classes, whether and how AWE can be a valid diagnostic tool for summative assessment deserves further investigation (Category A). Despite the mixed findings about the role of AWE as a summative assessment tool, Chapelle et al. (2015) maintained a positive view of the validity of AWE, opening the discussion about the future prospects of using AWE for high-stakes testing. Moreover, in relation to formative assessment, future research needs to develop and evaluate AWE systems drawing on diverse disciplinary knowledge, which can help students learn various academic writing genres, particularly at the discourse level (e.g., Knight et al., 2020). In addition, scholarly dialogue on how to complement the implementation of AWE with teacher feedback and peer feedback would contribute to innovative L2 writing pedagogies in the digital era.
Teaching Recommendations

The positive role of AWE gleaned from the research review encourages language/writing teachers to incorporate AWE into their instruction. I end this chapter with pedagogical recommendations in terms of technology selection, training, and implementation of AWE. Of note, this
section focuses on the use of AWE for formative assessment both in and outside the class.
Selection of AWE Tools

Table 7.3 below shows representative AWE tools commonly used in language classrooms, with each tool's website link, company/institution, and features/applications. In a synthesis paper, Hockly (2019) noted that some AWE programs allow additional feedback from teachers, while some are linked to a course management system (CMS), which enables students to upload work and create writing portfolios so that teachers and the students themselves can track writing progress. Other AWE tools allow students to access writing samples and online dictionaries. Turnitin, in addition, allows teachers to run plagiarism checks. With access to diverse AWE systems, instructors need to critically examine and evaluate different AWE tools' performance before deciding on a suitable and effective tool to utilize in their own classes (Ranalli, 2018). While selecting tools, instructors need to take into account multiple factors, such as the objectives of the writing tasks, students' language proficiency, and the teaching context.
Training and Implementation of AWE

As noted earlier in this chapter, AWE has received wide recognition as a formative instructional tool. For students to make full use of AWE, teachers need to organize in-depth training and tutorial activities. To ensure the success of the training, teachers themselves should first explore the AWE tools' features and learn how to utilize these features efficiently in their own classrooms (Jiang et al., 2020; Link et al., 2014). During the training session, teachers can first inform students of the rationale for using AWE systems and clearly explain the expected changes in the learner role and the importance of learner autonomy throughout the process (Liao, 2016). Teachers should focus on training students in how to use and engage with AWE systems, including how to respond to AWE feedback on grammar, content,
Table 7.3 Representative automated writing evaluation systems/tools

Criterion | Educational Testing Service (ETS) | https://www.ets.org/criterion
Criterion (E-rater) provides holistic scores for prompted essays along with other resources, such as sample essays, graphic organizers, dictionaries, and thesauri.

Intelligent Essay Assessor | Pearson Education | https://pmark.pearsoncmg.com/templates/assets/upload/IEA-FactSheet.pdf
Provides feedback on grammar, style, and mechanics; it also evaluates the meaning of texts and short constructed responses.

WriteToLearn | Pearson Education | https://www.pearsonassessments.com/store/usassessments/en/Store/Professional-Assessments/Academic-Learning/WriteToLearn/p/100000030.html
A web-based AWE tool for building writing skills and developing reading comprehension; used in K-12 settings.

Turnitin | IParadigms, LLC | https://www.turnitin.com
Can be integrated with LMSs such as Blackboard, Moodle, Canvas, and Desire2Learn.

Writing Pal (W-Pal) | Arizona State University | http://www.adaptiveliteracy.com/writing-pal
A writing strategy instruction tool, including both game-based and essay-based practices; uses Coh-Metrix for computing cohesion.

My Access! | Vantage Learning | https://www.vantagelearning.com/products/my-access-school-edition/
IntelliMetric analyzes semantic, syntactic, and discourse features in comparison with sample essays; provides other resources such as sample essays, graphic organizers, dictionaries, and thesauri.

Pigai | E-learning | http://en.pigai.org/
Cloud-based automatic essay scoring service providing overall scores, comments, and sentence-by-sentence feedback; it can point out learners' Chinglish and recommend native-like ways of expression.

Grammarly | Grammarly, Inc. | https://www.grammarly.com/
A popular free editing tool focusing on corrective feedback.

Write&Improve | Cambridge English | https://writeandimprove.com/
Free automatic evaluation service grading writing according to the Common European Framework of Reference and providing feedback.

AcaWriter | University of Technology Sydney | https://acawriter.uts.edu.au/
A new automatic evaluation tool offering feedback on analytical and reflective writing across disciplines, covering rhetorical moves, idea development, and style.

Research Writing Tutor | Iowa State University | https://cce.grad-college.iastate.edu/resources/writing-resources
A web-based writing platform that provides individualized automated feedback on scientific writing; its interactive modules enable students to progress toward deeper understanding, autonomous use of genre conventions, and research writing competence.

Paperrater | A team of computational linguists and subject matter experts | www.paperrater.com
A free AWE tool providing grammar/spelling checks, plagiarism checks, and writing suggestions.
and organization, and make corresponding revisions beyond mere error correction. After the training, teachers can embed AWE in their instruction and guide students to interact with AWE systems both to learn new writing genres and to revise their papers. During the process, teachers play a pivotal role; they ought to monitor how students use the AWE tool and provide contingent scaffolding where needed. Meanwhile, teachers should find ways to sustain students' motivation and foster their cognitive strategies so as to take full advantage of AWE tools. On the other hand, AWE systems usually generate scores, and the scores can be inaccurate on some occasions. Teachers, therefore, should help students avoid misusing and misinterpreting AWE scores and prevent the pitfall of aiming for higher AWE scores rather than actually improving writing quality (Li et al., 2014). In addition, it is recommended that AWE be used in tandem with other forms of assessment, such as portfolio assessment, and with other feedback methods, such as teacher multimodal feedback and computer-mediated peer feedback (Hockly, 2019; Hyland, 2019; Zhang, 2020).
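The caution about score-chasing can be made concrete with a toy example. The scorer below is entirely invented (it does not represent any system cited in this chapter); it rewards essay length and long words, the kind of surface proxy features automated engines also weigh, and shows how padding a text can inflate a score without improving the writing:

```python
# A deliberately naive "AWE scorer" (hypothetical, for illustration only).
# It awards points for word count and for the proportion of long words.
import re

def naive_score(essay):
    words = re.findall(r"[A-Za-z']+", essay)
    if not words:
        return 0.0
    length_pts = min(len(words) / 50, 3.0)               # more words, more points
    vocab_pts = sum(len(w) > 7 for w in words) / len(words) * 3.0
    return round(length_pts + vocab_pts, 2)

concise = "Feedback helps students revise."
padded = concise + " " + " ".join(["Furthermore, notwithstanding circumstances,"] * 10)
print(naive_score(concise), "<", naive_score(padded))  # padding inflates the score
```

A student aiming only at the number would simply pad; this is why teachers are advised to treat AWE scores as formative signals rather than targets.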
References

Chapelle, C. A., Cotos, E., & Lee, J. (2015). Validity arguments for diagnostic assessment using automated writing evaluation. Language Testing, 32(3), 385–405.
Chen, C. F. E., & Cheng, W. Y. E. (2008). Beyond the design of automated writing evaluation: Pedagogical practices and perceived learning effectiveness in EFL writing classes. Language Learning & Technology, 12(2), 94–112.
Clauser, B. E., Kane, M. T., & Swanson, D. B. (2002). Validity issues for performance-based tests scored with computer-automated scoring systems. Applied Measurement in Education, 15(4), 413–432.
Cotos, E. (2011). Potential of automated writing evaluation feedback. CALICO Journal, 28, 420–459.
Grimes, D., & Warschauer, M. (2010). Utility in a fallible tool: A multi-site case study of automated writing evaluation. Journal of Technology, Learning, and Assessment, 8(6), 1–44.
Hegelheimer, V., & Ranalli, J. (2020). Call for papers for a special issue on automated writing evaluation: Impacting classrooms, supporting learners. Language Learning & Technology, 24(2), 35–36.
Hockly, N. (2019). Automated writing evaluation. ELT Journal, 73(1), 82–88.
Hyland, K. (2019). Second language writing (2nd ed.). Cambridge University Press.
Jiang, L., Yu, S., & Wang, C. (2020). Second language writing instructors' feedback practice in response to automated writing evaluation: A sociocultural perspective. System, 93, 102302.
Knight, S., Shibani, A., Abel, S., Gibson, A., Ryan, P., Sutton, N., Wight, R., Lucas, C., Sándor, Á., Kitto, K., Liu, M., Vijay Mogarkar, R., & Buckingham Shum, S. (2020). AcaWriter: A learning analytics tool for formative feedback on academic writing. Journal of Writing Research, 12(1), 141–186.
Li, Z., Link, S., Ma, H., Yang, H., & Hegelheimer, V. (2014). The role of automated writing evaluation holistic scores in the ESL classroom. System, 44, 66–78.
Li, Z., Dursun, A., & Hegelheimer, V. (2020). Technology and L2 writing. In C. A. Chapelle & S. Sauro (Eds.), The handbook of technology and second language teaching and learning (pp. 77–92). John Wiley & Sons.
Li, J., Link, S., & Hegelheimer, V. (2015). Rethinking the role of automated writing evaluation (AWE) feedback in ESL writing instruction. Journal of Second Language Writing, 27, 1–18.
Liao, H. C. (2016). Enhancing the grammatical accuracy of EFL writing by using an AWE-assisted process approach. System, 62, 77–92.
Link, S., Dursun, A., Karakaya, K., & Hegelheimer, V. (2014). Towards better ESL practices for implementing automated writing evaluation. CALICO Journal, 31(3), 323–344.
Liu, S., & Kunnan, A. J. (2016). Investigating the application of automated writing evaluation to Chinese undergraduate English majors: A case study of "WriteToLearn." CALICO Journal, 33(1), 71–91.
Manchón, R. M. (2011). The language learning potential of writing in foreign language contexts. In M. Reichelt & T. Cimasko (Eds.), Foreign language writing: Research insights (pp. 44–64). Parlor Press.
Matthews, J., & Wijeyewardene, I. (2018). Exploring relationships between automated and human evaluations of L2 texts. Language Learning & Technology, 22(3), 143–158.
Ranalli, J. (2018). Automated written corrective feedback: How well can students make use of it? Computer Assisted Language Learning, 31(7), 653–674.
Shermis, M. D., Burstein, J., Elliot, N., Miel, S., & Foltz, P. W. (2016). Automated writing evaluation: An expanding body of knowledge. In C. A. MacArthur, S. Graham, & J. Fitzgerald (Eds.), Handbook of writing research (pp. 395–409). The Guilford Press.
Stevenson, M. (2016). A critical interpretative synthesis: The integration of automated writing evaluation into classroom writing instruction. Computers and Composition, 42, 1–16.
Wang, Y. J., Shang, H. F., & Briody, P. (2013). Exploring the impact of using automated writing evaluation in English as a foreign language university students' writing. Computer Assisted Language Learning, 26(3), 234–257.
Warschauer, M., & Ware, P. (2006). Automated writing evaluation: Defining the classroom research agenda. Language Teaching Research, 10(2), 157–180.
Zhang, Z. V. (2020). Engaging with automated writing evaluation (AWE) feedback on L2 writing: Student perceptions and revisions. Assessing Writing, 43, 100439.
8 Corpus Analysis and Corpus-Based Writing Instruction
Introduction

With the advancement of technology, many corpus-based tools have been developed and have served as resources for L2 writers to gain access to authentic language use (Li, Dursun, & Hegelheimer, 2020). Corpus-based pedagogy and data-driven learning (DDL) have therefore been increasingly implemented in L2 classes, particularly EAP and ESP classes (Cortes, 2007). In L2 writing classes, drawing upon online corpora, students analyze academic texts, ranging from textual features such as collocations to the rhetorical structure of writing. It has been reported that students greatly improve their academic writing skills through corpus-based instruction (Flowerdew, 2005). This chapter discusses research and instructional practice on corpus approaches in L2 contexts. The chapter begins with the definition of corpus analysis and the rationales for corpus-based writing instruction. It then reviews key texts in this domain of research, with important information presented in illustrative tables. Research gaps are then addressed based on the synthesis of the main research strands. The chapter ends with recommendations for future research and pedagogical practice.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. M. Li, Researching and Teaching Second Language Writing in the Digital Age, https://doi.org/10.1007/978-3-030-87710-1_8
Defining Corpus and Corpus Analysis

A corpus is a collection of authentic language, either written or spoken, compiled for a particular purpose (Flowerdew, 2012, p. 3). Consisting of naturally occurring data and representing a particular language or genre, a corpus is assembled in accordance with explicit design criteria. It differs from a database in that a database is a large repository of unstructured text, whereas a corpus is built through systematic sampling techniques (Flowerdew, 2012). Corpora are of two types: (1) general corpora, which contain large volumes of text illustrating the grammatical and lexical features of a certain language, such as the Corpus of Contemporary American English (COCA), and (2) specialized corpora, which are much smaller and intend to describe language use in specific contexts, such as the Michigan Corpus of Academic Spoken English (MICASE), a popular corpus of spoken language. In terms of academic writing corpora, the Hong Kong University of Science and Technology (HKUST) computer science corpus and the Jiaotong Daxue English of Science and Technology Corpus are two examples. Linked to language for specific purposes, specialized corpora represent the language characteristic of registers and genres in specific areas (Cotos, 2014).

Corpus analysis is the analysis of linguistic patterns in and across naturally produced texts, usually with the aid of a computer. The most common tools used in corpus analysis are concordancing programs, such as AntConc (Anthony, 2014) and WordSearch (Cortes, 2007). Concordancers extract a "collection of the occurrences of a word-form, each in its textual environment" (Sinclair, 1991, p. 32), displayed as lists of key words in context called concordance lines. The natural texts are sorted and resorted for learners to discover linguistic patterns. Corpus analysis has been widely used in language learning contexts, particularly for the learning of vocabulary and grammar.
For example, corpus-based explorations of lexical bundles and constructions were conducted to suggest disciplinary variations, which subsequently provided pedagogical insights for EAP/ESP classes (e.g., Lake & Cortes, 2020; Reppen & Olson, 2020; Yilmaz & Römer, 2020).
Rationales for Corpus-Based Writing Instruction

Using corpora in writing instruction dates back to the 1990s with the publications of Tribble and Jones (1990) and Johns (1991). Corpus-based writing instruction has been extolled for replacing traditional instruction with learner discovery and for moving the study of language from correctness to typicality (Hyland, 2019). Leech (1997), for instance, discussed multiple benefits of concordancing for teaching, including automatic searching, sorting, and scoring; the promotion of a learner-centered approach; and tailored learning processes. The use of corpora in EAP and ESP classes has gradually become a common practice (Cortes, 2007; Flowerdew, 2005; Swales, 2004). Drawing on corpora, students observe and analyze the linguistic conventions well established in their academic disciplines, a practice called data-driven learning (DDL). Motivated by findings that DDL activities increase students' lexicogrammatical knowledge (Johns, 1991), researchers and instructors have shown increasing interest in corpus-based instruction, which contextualizes corpus data so as to enhance learners' language awareness, particularly of academic register and genre (Biber & Conrad, 2009). Students explore multiple genres by means of corpus analysis in EAP writing classes (Cortes, 2007), in which they draw on corpora to explore and analyze the linguistic patterns and organizational conventions of writing in specific genres. Research has shown that L2 doctoral students advanced their academic writing skills through corpus analysis comparing the rhetorical and linguistic features of their own writing with those shown in corpora of published research articles (Lee & Swales, 2006; Yoon & Hirvela, 2004). Corpus-based writing instruction has also led to students' increased confidence in academic writing (Lee & Swales, 2006; Yoon & Hirvela, 2004).
Corpora have also been used as valuable reference resources for L2 writing (Chang, 2014), including corpus consultation for error correction in writing (e.g., O'Sullivan & Chambers, 2006). Moreover, move-annotated corpora have recently been developed to assist advanced students' learning of research genres (e.g., Cotos et al., 2017; Gray et al., 2020). In short, corpus-based pedagogy can enhance L2 students' command of lexical items, collocations, and move structures, benefiting their learning of English for both general academic purposes and specific purposes.
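Corpus consultation for error correction of this kind amounts to comparing the corpus frequencies of competing phrasings. The sketch below is a minimal illustration: the `consult` helper is hypothetical, and the toy reference string stands in for a large resource such as COCA:

```python
import re

def phrase_count(corpus, phrase):
    """Count exact occurrences of `phrase` in `corpus` (case-insensitive)."""
    return len(re.findall(rf"\b{re.escape(phrase)}\b", corpus, re.IGNORECASE))

def consult(corpus, candidates):
    """Rank candidate phrasings by corpus frequency, the way a learner
    might consult a corpus to choose between competing collocations."""
    counts = {c: phrase_count(corpus, c) for c in candidates}
    return sorted(counts.items(), key=lambda kv: -kv[1])

# Toy reference text standing in for a large general corpus.
reference = ("The committee depends on the report. Success depends on effort. "
             "Everything depends on the context. He depended on his notes.")

ranking = consult(reference, ["depends on", "depends of"])
print(ranking)  # the attested collocation ranks first
```

A learner correcting a coded preposition error would read the ranking as evidence that the first candidate is the conventional choice.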
Key Texts

I conducted the literature search on corpus analysis and corpus-based writing instruction in L2 contexts via Google Scholar by entering the key words "corpus," "writing," and "L2," limiting the results to articles published from 2010 to 2020 in journals with a CiteScore higher than 2.0. Thirteen articles were subsequently selected for illustration. In this section, I first present an overview of these reported studies, including context and participants, writing task and technology, theoretical framework, methodological approach, and validity/reliability strategies (see Table 8.1). I then discuss the selected articles in terms of thematic categories to indicate the current research strands (see Table 8.2).
Overview

Table 8.1 summarizes the thirteen studies that probed multiple themes related to corpus analysis and corpus-based instruction. As the table shows, all the studies were conducted in L2 tertiary contexts, primarily in the USA, with participants being both graduate and undergraduate students. The corpora and tools ranged from general resources, such as the Corpus of Contemporary American English (COCA) and the AntConc concordancer, to specialized resources addressing the registers and genres of specific areas, such as the Research Writing Tutor (RWT). Although some studies did not report theoretical frameworks, the domain of research is informed by multiple L2 acquisition constructs
Table 8.1 Research matrix of corpus analysis and corpus-based writing instruction

Study: Lu (2011)
Context & participants: 3,678 essays written by English majors aged 18–22 years from nine Chinese universities
Writing task/corpus/technology: Argumentative and narrative essays; Written English Corpus of Chinese Learners (WECCL); Syntactic Complexity Analyzer (Lu, 2010)
Theoretical framework: Syntactic complexity
Research questions: 1. What is the impact of sampling condition, including institution, genre, and timing condition, on the mean values of any given syntactic complexity measure? 2. Which measures show significant between-proficiency differences? What is the magnitude at which between-proficiency differences in each measure reach statistical significance? 3. What are the patterns of development for the measures that show significant between-proficiency differences? 4. What is the strength of the relationship between different pairs of syntactic complexity measures?
Methodological approach: Quantitative study
Validity & reliability strategy: F score for production unit and structure identification; Bonferroni correction
Study: Park (2012)
Context & participants: 3 intermediate ESL students at a US university
Writing task/corpus/technology: Three major assignments (summary paper, response paper, and a research report) and six smaller in-class writings; iShowU screen capture software; Google's Custom Search
Theoretical framework: Sociocultural theory: microgenesis
Research questions: 1. What are the processes through which learners interact with a corpus system? 2. How do microgenetic developments emerge from learners' interactions in this context?
Methodological approach: Qualitative case study
Validity & reliability strategy: Triangulation; inter-rater reliability

Study: Chang (2014)
Context & participants: 10 Korean EFL students, five master's and five doctoral students, Korea
Writing task/corpus/technology: A variety of academic writing tasks: journal articles, conference papers, abstracts, project proposals, emails, etc.; Corpus of Contemporary American English (COCA); Michelangelo journal and conference paper corpora; AntConc freeware concordancer
Theoretical framework: Learner autonomy (Chambers & O'Sullivan, 2004); feedback-driven learning (Gaskell & Cobb, 2004)
Research questions: 1. What are the benefits and drawbacks of a general corpus as a reference source for academic English writing? 2. What are the benefits and drawbacks of a specialized corpus as a reference source for academic English writing? 3. How do the participants evaluate corpora as reference sources for academic English writing?
Methodological approach: Qualitative case study
Validity & reliability strategy: Triangulation; member checking
Study: Cotos (2014)
Context & participants: 41 international graduate ESL students at a North American university
Writing task/corpus/technology: Native-speaker corpus of research articles (NSC); learning-driven data (LDD) from the local learner corpus
Theoretical framework: Learner autonomy (Chambers, 2010; Boulton, 2009, 2010); schema theory (Bartlett, 1932; Barlow, 1996)
Research questions: 1. What are the changes in written production and knowledge of linking adverbials before and after DDL activity types? 2. What are the effects of DDL activity types on observed and perceived performance between the LDD and NSC groups according to the type of activity they completed?
Methodological approach: Quasi-experimental mixed-methods study
Validity & reliability strategy: Triangulation; Cohen's d

Study: Lu & Ai (2015)
Context & participants: U.S. university students and college-level EFL learners with seven different L1 backgrounds
Writing task/corpus/technology: Louvain Corpus of Native English Essays: 200 argumentative essays written by native speakers; International Corpus of Learner English Version 2.0: 1,400 argumentative essays produced by EFL learners with seven different L1 backgrounds; L2 Syntactic Complexity Analyzer (Lu, 2010)
Theoretical framework: Syntactic complexity
Research question: Are there systematic differences in the syntactic complexity of English writing among college-level writers with different L1 backgrounds and, if yes, what are these differences?
Methodological approach: Quantitative study
Validity & reliability strategy: Correlations between the syntactic complexity scores by human annotators and L2SCA; Bonferroni correction
Study: Quinn (2015)
Context & participants: 58 undergraduate students in two EFL writing courses at a Japanese university
Writing task/corpus/technology: Collins WordBanks Online, chosen for its relatively straightforward search capabilities
Theoretical framework: Not explicitly stated
Research question: What are the students' reactions to the first-time experience of using corpora to self-correct writing errors?
Methodological approach: Descriptive study
Validity & reliability strategy: Not reported

Study: Staples & Reppen (2016)
Context & participants: 240 pieces of writing from a first-year writing course at a U.S. university; students were native English speakers, L1 Arabic, and L1 Chinese students
Writing task/corpus/technology: Rhetorical analysis and long argument; corpus consisting of two components: (a) responses to the TOEFL iBT independent and integrated writing tasks, and (b) academic papers submitted for courses by the same participants
Theoretical framework: Grammatical complexity
Research questions: 1. How are lexical, grammatical, and lexico-grammatical structures associated with academic writing produced by students in first-year writing courses? 2. What differences can be seen based on the writer's L1 background and the genre? 3. How is the use of these structures related to language rating?
Methodological approach: Quantitative study
Validity & reliability strategy: Inter-rater reliability
Study: Cotos et al. (2017)
Context & participants: 23 graduate students in writing for academic publication courses at a US university
Writing task/corpus/technology: Top-down analysis of the rhetorical composition of annotated introduction sections; bottom-up analysis using the move or step concordancer in RWT; Research Writing Tutor (RWT): a web-based platform containing an annotated English language corpus (900 published RAs in 30 disciplines)
Theoretical framework: Knowledge-telling/knowledge-transformation model (Bereiter & Scardamalia, 1987); constructivist learning theory; academic literacy
Research questions: 1. Do DDL learning events enabled by RWT contribute to genre awareness? How? 2. Do DDL learning events enabled by RWT contribute to improvement in the quality of genre writing? How?
Methodological approach: Mixed methods
Validity & reliability strategy: Inter-coder reliability (Cohen's Kappa); effect sizes

Study: Staples et al. (2018)
Context & participants: 106 L2 English writers from three universities (in the US, UK, and New Zealand); undergraduate and graduate levels, with different first languages (e.g., Chinese, German)
Writing task/corpus/technology: Responses to the TOEFL iBT independent and integrated writing tasks; academic papers submitted for courses by the same participants; British Academic Writing in English (BAWE) Corpus
Theoretical framework: Corpus-based register framework; multi-dimensional analysis; situational analysis framework (Biber & Conrad, 2009)
Research questions: 1. How linguistically similar is the language produced in the written TOEFL iBT to that produced in disciplinary writing tasks by the same writers? 2. To what extent do writers who receive higher TOEFL iBT scores produce writing in their test performance that is more similar to that of disciplinary writing tasks?
Methodological approach: Quantitative study
Validity & reliability strategy: Inter-rater agreement
Study: Crosthwaite et al. (2019)
Context & participants: 327 postgraduate students from a Hong Kong university
Writing task/corpus/technology: 22 tasks in a postgraduate thesis writing course; Hong Kong Graduate Corpus (HKGC) accessed through Moodle
Theoretical framework: Constructivist learning theory
Research questions: 1. How do postgraduate students engage with corpora in terms of their actual corpus usage, query function preferences, and query syntax for DDL? 2. What is the extent of disciplinary variation in the usage of and engagement with corpora for postgraduate DDL involving a multi-disciplinary corpus platform?
Methodological approach: Mixed methods
Validity & reliability strategy: Triangulation

Study: Dong and Lu (2020)
Context & participants: 30 electrical engineering or mechanical engineering master's students enrolled in a discipline-specific academic writing course in China
Writing task/corpus/technology: Genre analysis of introduction sections of engineering research articles (RAs); corpus developed by students and annotated with rhetorical functions; AntMover for annotating the rhetorical functions
Theoretical framework: Noticing Hypothesis (Schmidt, 1990); constructivist learning theory; learner autonomy
Research questions: 1. Do guided corpus-based genre analysis activities using a self-compiled specialized corpus contribute to improvement of EFL learners' genre knowledge and genre-based writing skills? 2. How do EFL learners perceive their experiences with the corpus-based genre analysis activities?
Methodological approach: Mixed methods
Validity & reliability strategy: Triangulation; inter-annotator reliability (evaluated using Cohen's Kappa)
Study: Lei and Yang (2020)
Context & participants: Student corpora from a university in China; highly proficient and advanced-level native and non-native student writers at the University of Michigan
Writing task/corpus/technology: 142 RA manuscripts by Chinese PhD candidates; 71 unpublished research papers by native undergraduates and master's-level students (Native Beginner Students, NBS); 128 published RAs by native experts (NE) in the field of science and engineering
Theoretical framework: Lexical Frequency Profile (Laufer & Nation, 1995)
Research questions: 1. What are the similarities and differences in terms of lexical richness in the RA manuscripts by Chinese PhD candidates, the unpublished research papers by NBS, and the published RAs by native experts (NE)? 2. What roles may nativeness and academic expertise play in regard to lexical richness in RA writing?
Methodological approach: Quantitative study
Validity & reliability strategy: Bonferroni adjustment to control Type I error
Study: Satake (2020)
Context & participants: 55 undergraduate Japanese freshman history majors from a private university in Tokyo enrolled in a compulsory English writing course
Writing task/corpus/technology: Descriptive, narrative, and argumentative essays followed by revision tasks, collected during the course of a semester; Corpus of Contemporary American English (COCA)
Theoretical framework: Noticing Hypothesis (Schmidt, 1990)
Research questions: 1. Is there an interaction between learners' choice of reference resources (i.e., a corpus, dictionaries, no use of both) and accuracy in correcting particular error types? 2. When given a choice for a reference resource for L2 error correction, was learners' choice of reference resource influenced by error types? 3. Is there an interaction between different types of feedback (teacher vs. peer feedback) and accuracy in correcting particular error types?
Methodological approach: Mixed methods
Validity & reliability strategy: Not stated
Table 8.2 Research timeline of corpus analysis and corpus-based writing instruction

Year: 2011
Reference: Lu, X. (2011). A corpus-based evaluation of syntactic complexity measures as indices of college-level ESL writers' language development. TESOL Quarterly, 45(1), 36–62.
Theme: A.1
Annotation: Lu performed a corpus-based analysis of 14 syntactic complexity measures on college-level ESL writers. The researcher analyzed the impact of the sampling condition (institution, genre, timing), the relationship between syntactic complexity and language development, and which measures show significant between-proficiency differences. Furthermore, the researcher assessed patterns of development associated with each measure and the strength of the relationship between different pairs of syntactic complexity measures. ESL writing data from the Written English Corpus of Chinese Learners (Wen, Wang, & Liang, 2005) were analyzed using the Syntactic Complexity Analyzer. Lu found that institution, genre, and timing condition have significant effects on the observed mean values of most measures. Lu also identified seven candidate measures for developmental indices. Of particular interest to L2 researchers, the results provide evidence that the clause is potentially a more informative unit of analysis.

Year: 2012
Reference: Park, K. (2012). Learner–corpus interaction: A locus of microgenesis in corpus-assisted L2 writing. Applied Linguistics, 33(4), 361–385.
Theme: C.1, C.2
Annotation: This research is part of a larger project in which Park conducted an empirical study with an intermediate ESL writing class over the course of a semester. This quasi-experimental case study explored the processes of learners' interaction with a corpus system and the microgenetic development emerging from that interaction. Microgenesis is a Vygotskian methodological construct and analytical framework used to investigate changes in learners' progression from other- to self-regulation over a relatively short period of time. The paper analyzes the interactions among three focal ESL students participating in an academic writing course. Using triangulated data from real-time screen recordings, corpus queries, and oral/written reflections, Park found that learner success depends both on learners' ability to interpret and exploit the search results and on the corpus system's ability to provide results applicable to learners' specific needs. Based on the research findings, the author called attention to higher levels of interactivity and intelligent search tools catering to specific student needs when developing corpus search engines.
Year: 2014
Reference: Chang, J. Y. (2014). The use of general and specialized corpora as reference sources for academic English writing: A case study. ReCALL, 26(2), 243.
Theme: C.2, E
Annotation: This qualitative case study examined the benefits and drawbacks of general and specialized corpora as reference sources for academic English writing. Over the course of 22 weeks, Korean university students in an engineering lab used general and specialized corpora for academic writing. Ten Korean EFL students (five master's and five doctoral students) conducted a variety of academic writing tasks using the Corpus of Contemporary American English (COCA) and the Michelangelo journal and conference paper corpora. Results were drawn primarily from the transcripts of weekly interviews and the students' written responses to a second survey. While both corpora were considered effective as reference sources, the specialized corpus was particularly valued because the graduate engineering students wanted to follow the writing conventions of their discourse community. Students had mixed attitudes about the time taken to use corpora due to differences in academic experience, search purposes, and writing tasks.

Year: 2014
Reference: Cotos, E. (2014). Enhancing writing pedagogy with learner corpus data. ReCALL, 26(2), 202.
Theme: B.1, E
Annotation: Cotos' mixed-methods study investigated the effects of two types of data-driven learning (DDL) activities on students' use of linking adverbials. Participants were 31 ESL graduate students in an advanced academic writing course. The control group used a native-speaker corpus (NSC) and the experimental group used the combined corpora of the NSC and a local learner corpus collected during the experimental implementation of the DDL activities. Data were obtained from trial writing samples, pre/post-tests, and questionnaires. The results showed an increase in frequency, diversity, and accuracy in all participants' use of adverbials, but students exposed to the local corpus of their own writing showed more significant improvement. The findings suggest that combining learner and native-speaker data is an effective practice when instructors implement DDL-based instruction.
Year: 2015
Reference: Lu, X., & Ai, H. (2015). Syntactic complexity in college-level English writing: Differences among writers with diverse L1 backgrounds. Journal of Second Language Writing, 29, 16–27.
Theme: A.1, A.2
Annotation: Lu and Ai provide empirical support for the claim that there are significant and varied patterns of difference in the syntactic complexity of the writing of college-level English writers from different L1 backgrounds. The authors sampled 200 argumentative essays written by native speakers and 1,400 argumentative essays produced by EFL learners with seven different L1 backgrounds. Using the L2 Syntactic Complexity Analyzer (Lu, 2010), the authors found that across 14 syntactic complexity measures, significant differences emerged in only three of the 14 measures between the NNS group and the NS group. However, when learners were grouped by their L1 backgrounds, significant differences emerged between the NS group and one or more NNS groups in all 14 measures. The NNS groups also revealed a greater variety in patterns of difference than the NS group. The three upper-intermediate NNS groups (Chinese, Japanese, and Tswana) and the four advanced NNS groups (Bulgarian, French, German, and Russian) showed a variety of patterns in the syntactic complexity measures. This study suggests that the intergroup variation in syntactic complexity cannot be accounted for by proficiency alone; L1 may partially contribute to the syntactic complexity of L2 writing.

Year: 2015
Reference: Quinn, C. (2015). Training L2 writers to reference corpora as a self-correction tool. ELT Journal, 69(2), 165–177.
Theme: E
Annotation: This pedagogy-oriented article reports on a corpus training module that helped students self-correct teacher-coded errors by referring to corpus resources in an intermediate-level EFL writing course at a Japanese university. The post-task questionnaire indicated that the students held very positive attitudes toward the corpus-based writing approach. They appreciated the opportunities to research collocations, which improved their lexical usage and boosted their confidence in their linguistic choices. They were also largely satisfied with the class training module. However, a few students found it difficult to navigate the all-English corpus interface and stated that they preferred online Japanese–English resources. Quinn attributed the students' reactions to their English proficiency, computer skills, and interest in experimenting with a new reference tool.
Year: 2016
Reference: Staples, S., & Reppen, R. (2016). Understanding first-year L2 writing: A lexico-grammatical analysis across L1s, genres, and language ratings. Journal of Second Language Writing, 32, 17–35.
Theme: A.1, A.2, A.3
Annotation: Staples and Reppen investigated the language use in university students' first-year writing across three L1s (English, Arabic, and Chinese) and two genres (argumentative and rhetorical analysis). Student corpora were examined to see how these differences impacted language ratings (language and organization). The samples were taken from 240 pieces of writing from a first-year writing course at a US university. Students were native English speakers, L1 Arabic, and L1 Chinese students. Applying a lexico-grammatical approach, the researchers examined students' use of vocabulary and grammar for eight lexico-grammatical features. These features were also connected to textual functions (stance and argumentation). The researchers found differences in the degree of lexical diversity and the syntactic patterns used by the English L1 group and the two ESL groups, as well as across the two genres. However, there were also important similarities in textual features, attributed to all the groups' status as developing writers. This research points to the need for more research to further examine the patterns of language use for different populations of writers from the same background and different backgrounds.

Year: 2017
Reference: Cotos, E., Link, S., & Huffman, S. (2017). Effects of technology on genre learning. Language Learning & Technology, 21(3), 104–130.
Theme: A.2, B.2, C.3, D
Annotation: Cotos et al.'s mixed-methods study investigated the effectiveness of the Research Writing Tutor (RWT) on genre awareness and quality of writing. RWT is a web-based platform containing an English language corpus and a searchable concordancer. Two groups of graduate students participated in the study over two academic semesters, one group per semester. Data were collected from written responses to DDL tasks and from writing progress from first to last drafts. Through an analysis of student texts and open-ended questions, students showed greater genre awareness and comparable reported frequencies in noticing EAP discourse patterns and rhetorical moves.
Year: 2018
Reference: Staples, S., Biber, D., & Reppen, R. (2018). Using corpus-based register analysis to explore the authenticity of high-stakes language exams: A register comparison of TOEFL iBT and disciplinary writing tasks. The Modern Language Journal, 102(2), 310–332.
Theme: A.1, A.3
Annotation: Staples and colleagues apply multi-dimensional analysis to compare the lexico-grammatical characteristics of texts produced by L2 writers on the TOEFL iBT with those in academic texts produced by the same writers. Multi-dimensional (MD) analysis identifies co-occurrences of linguistic features based on the factor analysis of a text or a group of texts (co-occurrence patterns are called dimensions). The researchers found that the language of TOEFL iBT tasks is both similar to and different from that of disciplinary tasks across four linguistic dimensions: (a) compressed procedural information versus stance toward the work of others, (b) personal stance, (c) possible versus completed events, and (d) information density. There was also a divergence of results within the iBT tasks, that is, between the Integrated iBT (which relies on source-based informational writing) and Independent iBT tasks (which require the writer to use personal opinions, anecdotes, and generalizations to support arguments). The Independent iBT failed to extrapolate to major discourse registers, while the results do support extrapolation of Integrated iBT tasks to many of the disciplinary writing registers included in the study. This study has important implications for language teachers and test designers alike: careful consideration of the communicative purposes and other situational characteristics of disciplinary registers needs to be taken into account when creating tasks.

Year: 2019
Reference: Crosthwaite, P., Wong, L. L., & Cheung, J. (2019). Characterising postgraduate students' corpus query and usage patterns for disciplinary data-driven learning. ReCALL, 31(3), 255–275.
Theme: C.2, C.3
Annotation: Crosthwaite et al.'s mixed-methods study reports on how postgraduate students engage with corpora and the extent of disciplinary variation in the use of corpora. Students at a Hong Kong university accessed through Moodle the Hong Kong Graduate Corpus (HKGC) developed by the university. Each of the ten faculties recommended one thesis deemed excellent, for a total of ten theses used in the creation of the small corpus. An online corpus query platform was also developed and accessed through Moodle. The platform tracked learners' corpus use according to multiple parameters, including (1) time, date, and duration; (2) individual corpus query syntax; (3) filters applied to query results; and (4) corpus functions used. Quantitative data revealed significant interdisciplinary and inter-/intra-user variation in the use of corpus functions. Qualitative results from three focal users' activity logs revealed distinctive individual corpus engagement in terms of query frequency and function. The logs revealed temporal changes in the use of corpus functions, with early queries typically involving frequency searches to discover new vocabulary, then switching to the collocation function in later queries.
Year: 2020
Reference: Dong, J., & Lu, X. (2020). Promoting discipline-specific genre competence with corpus-based genre analysis activities. English for Specific Purposes, 58, 138–154.
Theme: B.2, C.3, E
Annotation: Dong and Lu explored the use of a self-compiled specialized corpus during guided corpus-based activities to improve EFL learners' genre knowledge and genre-based writing skills. Thirty electrical engineering (EE) or mechanical engineering (ME) master's students enrolled in a discipline-specific academic writing course at a university in China participated in the study. The instructor and students first collaboratively compiled a specialized corpus consisting of the introduction sections of 150 research articles (RAs) in EE and ME. The RAs were annotated for rhetorical moves and functions following the Create a Research Space (CARS) model (Swales, 2004). Next, corpus-based genre analysis activities were used to help the students better understand the rhetorical structures and the linguistic features associated with specific rhetorical moves. Analyses of triangulated data sources (i.e., pre- and post-instruction questionnaires, interviews, students' reflective journals, and student writing samples) revealed that the corpus-integrated approach improved both genre awareness and the quality of the writing product.

Year: 2020
Reference: Lei, S., & Yang, R. (2020). Lexical richness in research articles: Corpus-based comparative study among advanced Chinese learners of English, English native beginner students and experts. Journal of English for Academic Purposes, 47, 100894.
Theme: A.1, A.2
Annotation: Lei and Yang's quantitative study investigated lexical richness in RA manuscripts by Chinese PhD candidates (CPhD), unpublished research papers from native final-year undergraduates and master's-level students (Native Beginner Students, NBS), and published RAs by native experts (NE). An examination of three dimensions of lexical richness (lexical diversity, lexical density, and lexical sophistication) revealed that RA manuscripts by CPhD were at the intermediate level, surpassing NBS but not at the level of NE. To better understand the function of nativeness and academic expertise in lexical richness, the two native writer groups were compared, and predictably the experts far outpaced the NBS group in lexical diversity and sophistication, indicating the importance of expertise for these indices. There were observed imbalances in the mean rank of the three measures of lexical richness in the corpus of CPhD when compared to the other two groups, with lexical diversity similar to NBS but lexical density closer to NE than NBS. These results suggest that expertise may play a more important role than nativeness in the writing of RAs, alluding to the value of discipline-specific research training in the development of students' RA writing.
Reference
Satake, Y. (2020). How error types affect the accuracy of L2 error correction with corpus use. Journal of Second Language Writing, 50, 100757
Year
2020
Theme C.1
Annotation Drawing on Schmidt’s (1990) noticing hypothesis, Satake’s mixed-methods study examined the Japanese EFL students’ interactions with different types of reference resources (i.e., a corpus, dictionaries, no use of both), the accuracy in correcting particular error types, and if the error type impacts the reference resource selection. For this experimental treatment, 55 undergraduate EFL students wrote an essay in 25 min without access to reference resources. After receiving teacher or peer feedback on their errors, students performed revision tasks for 15 min with either the use or non-use of reference resources. This procedure was iterated 9–11 times during the course of the semester. An error analysis of the student papers revealed that there were 10 error types with greater than 80% accuracy in correction rate, and the rate of accurate corrections with corpus use was higher than with dictionary use or no use of both. However, corpus use was found to be less helpful in correcting voice (active/passive) errors, and this was reflected in a lower rate of accurate correction with corpus use than with dictionary use or no use of reference resources at all. This research study overall points to the effectiveness of corpus integration into the classroom for correcting particular language errors, in concert with other modes of corrective feedback
8 Corpus Analysis and Corpus-Based Writing Instruction
(e.g., learner autonomy and noticing hypothesis) and social constructivism/sociocultural theory. The research approaches included qualitative, quantitative, and mixed-methods studies. To ensure research validity and reliability, researchers mostly utilized triangulation and inter-rater reliability.
Thematic Categories

To provide a rough timeline of empirical studies that reflects the enduring and/or shifting foci of investigation, I present below the thirteen articles chronologically with their annotations and research themes (see Table 8.2). The themes explored are categorized as follows:

A. Corpus analysis of textual features of writing
   1. Analysis and comparison of L2 students' writing (e.g., syntactic complexity, lexico-grammatical features)
   2. Comparison of native vs. non-native speakers' writing
   3. Comparison of different writing tasks
B. Corpus-based instruction of L2 writing
   1. Lexical features of general academic writing
   2. Rhetorical moves of research genres (e.g., research report)
C. Learner-corpus interaction
   1. Use of corpora for L2 error correction
   2. Use of corpora for consultation on L2 use
   3. Use of corpora for genre knowledge
D. Effect of DDL on genre learning and/or writing development
E. Perceptions

As Table 8.2 shows, a substantial body of research has focused on textual features of writing, drawing on corpus analysis (Category A). Lu (2011) conducted a corpus-based evaluation of fourteen syntactic complexity measures in college-level ESL students' writing. Based on two important research syntheses, Wolfe-Quintero et al. (1998) and
Ortega (2003), Lu (2011) evaluated five types of syntactic complexity measures, namely length of production, sentence complexity, subordination, coordination, and particular structures. A computational system was devised to automate the fourteen measures of syntactic complexity for ESL students' writing samples. The results showed that argumentative essays exhibited higher syntactic complexity than narrative essays, and untimed essays higher syntactic complexity than timed essays. The best measures for predicting ESL writers' language development were found to be complex nominals per clause and mean length of clause, followed by complex nominals per T-unit, mean length of sentence, and mean length of T-unit.

Using corpus analysis, Lei and Yang (2020) examined lexical richness in science and engineering research articles (RAs) written by non-native speakers in comparison with those written by native speakers. Three corpora were selected: (1) the Michigan Corpus of Upper-Level Student Papers (MICUSP), from which native final-year college students' papers were analyzed; (2) the English Paper Revising Project at a top Chinese research university, from which unpublished RAs written by Chinese Ph.D. students were analyzed; and (3) high-impact-factor journal articles written by native-speaker disciplinary experts. Dimensions of lexical richness were measured separately using corresponding tools, such as lexical diversity measured by the moving-average type-token ratio (Covington & McFall, 2010) and lexical sophistication measured by the Lexical Frequency Profile (Laufer & Nation, 1995) using AntWordProfiler (Anthony, 2014). Results revealed that native experts excelled in all dimensions of lexical richness, native beginner students received the lowest scores, and non-native doctoral students performed in between. The study supports the claim that multilingual writers' literacy acquisition is related more to expertise than to nativeness (e.g., Flowerdew, 2013; Hyland, 2015).
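For readers unfamiliar with the measure, the moving-average type-token ratio can be sketched in a few lines of Python. This is a simplified illustration of Covington and McFall's (2010) idea, assuming a pre-tokenized, lowercased text; for actual research, the published tools should be used.

```python
def mattr(tokens, window=50):
    """Moving-average type-token ratio (MATTR).

    Slides a fixed-size window over the token sequence, computes the
    type-token ratio (unique types / window size) in each window, and
    averages the ratios, which reduces the text-length sensitivity of
    a plain type-token ratio.
    """
    if len(tokens) < window:  # fall back to plain TTR for short texts
        return len(set(tokens)) / len(tokens)
    ratios = [
        len(set(tokens[i:i + window])) / window
        for i in range(len(tokens) - window + 1)
    ]
    return sum(ratios) / len(ratios)

# A repetitive text scores lower than a varied one of the same length.
varied = [f"w{i}" for i in range(200)]          # 200 distinct tokens
repetitive = ["the", "cat", "sat", "mat"] * 50  # 4 types recycled
print(mattr(varied))       # -> 1.0, every window is all types
print(mattr(repetitive))   # ~0.08, only 4 types per 50-token window
```

The averaging over windows is what distinguishes MATTR from the raw type-token ratio, which inevitably falls as texts grow longer.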
The second strand of research, still rather limited in the current body of literature, is corpus-based instruction (Category B), which addresses the learning of both general academic writing (e.g., lexical features) and specific research genres (e.g., rhetorical structures). Cotos (2014) examined the effect of two corpora (one a native-speaker corpus and the other combining a native-speaker corpus and a learner corpus) on L2 learners' acquisition of adverbials. The students performed self-queries
of the corpora with concordancing tools, through which they explored their own written products archived in the local corpus. The results showed that students enhanced their use of adverbials in frequency, diversity, and accuracy after working with both types of corpora, but the students exposed to the corpora containing their own writing made more significant improvement. This study suggests that supplementing corpus materials with learner output can be an effective pedagogical practice.

Bridging corpus-based instruction and genre-based instruction, Dong and Lu (2020) explored a pedagogical approach to teaching rhetorical structures in a graduate ESP course at a Chinese university. In collaboration with the instructor and their respective graduate supervisors, students compiled a specialized corpus of the introduction sections of engineering RAs. Students were then guided to annotate the rhetorical moves and the associated linguistic features following Swales' (2004) Create a Research Space (CARS) model. Specifically, the instructor introduced students to the concept of rhetorical function tagging and demonstrated the use of AntMover, a free computer program designed to annotate RAs with a taxonomy of rhetorical functions, such as claiming centrality, making topic generalizations, indicating a gap, and announcing the present study. Afterward, genre-based pedagogy was implemented sequentially through modeling, joint negotiation of text, and learners' independent construction of texts (Hammond et al., 1992). The researchers found that this integrated approach enhanced students' genre knowledge and academic writing skills, and thus recommended further implementation of corpus-based genre pedagogy.

Another research strand related to corpus-based instruction, learner-corpus interaction (Category C), remains under-explored (Crosthwaite et al., 2019; Park, 2012).
Park (2012) drew on triangulated data sources (e.g., real-time screen recordings, corpus queries, written/oral reflections, and stimulated recall interviews) to examine how three undergraduate ESL learners at a US university resolved language issues by retrieving, evaluating, and appropriating corpus search results. The corpus used in this study was compiled from specialized academic papers published in free web-based journals, and the students referred to the exemplar papers while making queries about lexico-grammatical and rhetorical features of academic genres. Park
attributed the L2 learners' achievement both to the learners' ability to interpret and exploit corpus search results and to the corpus system's ability to respond to learners' particular needs. More recently, Crosthwaite et al. (2019) examined over 300 Hong Kong graduate students' actual engagement with corpora while working on disciplinary theses in English via a corpus query and data visualization platform named the Hong Kong Graduate Corpus (HKGC). HKGC is sorted by section (e.g., abstract, introduction, methodology), faculty, and discipline. Students' data-driven learning process was examined in terms of corpus usage history (e.g., time of access), query syntax (e.g., use of part-of-speech tags), query function (e.g., frequency lists and concordance sorting), and query filters (e.g., searches by discipline). The results revealed variation in individual use of corpora as well as diverse use across disciplines. For instance, those searching the arts/education sub-corpora tended to focus on concordance output and rarely used other corpus functions, in contrast to the regular use of the frequency-breakdown function by those searching the architecture/engineering sub-corpora.

Moreover, researchers have started to examine the effect of DDL on students' actual writing development (Category D). Cotos, Link, and Huffman (2017) designed the Research Writing Tutor (RWT), which contained "an English language corpus annotated to enhance rhetorical input, a concordancer that was searchable for rhetorical functions, and an automated writing evaluation engine that generated rhetorical feedback" (p. 104). They examined 23 ESL graduate students' writing progress from the first to the last draft across a semester and found that technology-mediated corpora fostered novice writers' learning of genre conventions and helped enhance the rhetorical, formal, and procedural aspects of their genre knowledge.
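Paired first-draft/last-draft comparisons of this kind are commonly evaluated with the Wilcoxon signed-rank test for paired samples. As a rough illustration, its W statistic can be computed in pure Python as below; the rubric scores are invented for the example, and in practice one would use a statistics package (e.g., SciPy's scipy.stats.wilcoxon).

```python
def wilcoxon_w(before, after):
    """Wilcoxon signed-rank statistic W = min(W+, W-).

    Drops zero differences, ranks the absolute differences (tied
    values share the average rank), and sums the ranks of positive
    and negative differences separately. A small W means nearly all
    of the shift went in one direction.
    """
    diffs = [b - a for b, a in zip(before, after) if b != a]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1  # 1-based average rank of the tied block
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    w_minus = sum(r for r, d in zip(ranks, diffs) if d < 0)
    return min(w_plus, w_minus)

# Hypothetical rubric scores (0-100) for five students' first and
# last drafts; four improved, one regressed slightly.
first = [62, 58, 70, 65, 55]
last = [70, 65, 72, 71, 54]
print(wilcoxon_w(first, last))  # -> 1.0 (only the lone regression ranks against the trend)
```

The resulting W is then compared against the test's critical values (or a normal approximation) to obtain the p-value reported in studies such as this one.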
The students showed higher writing performance in content, lexis, grammar, structure, and mechanics, based on the results of the Wilcoxon signed-rank test. This study advanced our understanding of DDL as an effective means of developing academic writing.

Furthermore, previous research has delved into students' perceptions of corpora (Category E). For instance, Chang (2014) examined ten Korean EFL graduate students' perceptions of the role of both general corpora and specialized corpora in their learning of academic
writing. This group of non-native English-speaking engineering students consulted two types of reference resources while working on academic writing: (1) a general corpus, namely COCA, and (2) a specialized corpus named Michelangelo, which consists of journal articles and conference papers written by both NES and NNES authors in the field of computer science and engineering. Drawing on interviews and students' written responses to questions about their purposes for using the corpora and their comments on them, Chang identified perceived benefits and drawbacks of both types of resources: COCA was helpful for collocations and synonyms but lacked target expressions from the students' academic field, whereas Michelangelo offered target expressions in the engineering field but returned insufficient results. In another study discussed earlier, Dong and Lu (2020) implemented a new pedagogical approach to teaching the introductions of engineering RAs by having Chinese EFL graduate students work with a self-compiled specialized corpus. They investigated the students' perceived experiences with the corpus-based genre analysis activities. Specifically, students did not find it difficult to identify linguistic patterns and rhetorical moves, and they found the corpus activities fun, reporting enhanced interest and confidence in writing RA introductions. This research strand on students' perspectives will inform both future research and instructional practice.
Research Directions

Based on the current literature, this section addresses research gaps and points to directions for future research. The research synthesis discussed earlier in this chapter has shown that previous studies on corpus-based writing instruction were predominantly conducted in college undergraduate/graduate programs, with little research in K-12 settings. Similar to what I found concerning research on AWE, the majority of the studies were conducted in the USA, followed by China. Also, previous research has predominantly involved ESL/EFL students; research using corpora to inform the learning of other languages (e.g., Spanish, French, Chinese) is rather scarce. Designing and implementing
corpus-based learning in broader language contexts is warranted in future studies and will benefit language learners around the globe. Moreover, the predominant research approaches have been quantitative and mixed methods; qualitative research exploring learner-corpus interaction and examining both learners' and teachers' perceptions, in particular, deserves further attention. Longitudinal studies that investigate the long-term effectiveness of corpus-based instructional approaches are also strongly encouraged.

Drawing on the research themes addressed earlier in this chapter, I discuss a few research strands that await further investigation. First, future research (Category A) should continue to employ the corpus approach to analyzing writing quality at various linguistic levels, including the morphological, phrasal, clausal, and discourse levels (Biber et al., 2014; Lu, 2011). Analyses of L2 learners' writing measures in relation to native speakers' and non-native experts' measures would inform L2 instructors of the discrepancies among the groups, which would help them develop corresponding goals for their pedagogical practices. Given the current call for attention to phrasal complexity, deemed a critical component of academic writing (Biber et al., 2020), future corpus analysis could compare phrasal complexity with other traditional metrics (e.g., syntactic complexity) across multiple writing tasks.

Second, regarding corpus-based instruction (Category B), we need expansive studies on the design and implementation of new corpus-based writing tools that address specialized discourses and that teach languages other than English. For instance, a corpus-based platform could be designed to teach learners of Chinese as a foreign language how to write business letters. Such computer-annotated corpora will collectively empower language learners by tailoring to their specific learning goals.
Of note, corpus activities should not merely focus on lower-level features (e.g., grammar, lexis); they should also facilitate the analysis of higher-level features (e.g., discourse, genre). Another important research area concerns how DDL can be effective in the long run (Category D). Future experimental studies can investigate the longitudinal effects of corpora and DDL on L2 writing or language development over a span of at least one year. For instance, we
can examine how computer-based corpora impact L2 writers' long-term learning of specific writing genres. Given the lack of available research, future studies can, through a multiple-case-study approach, further explore L2 writers' actual use of corpora during the drafting and revising stages (Crosthwaite et al., 2019) and connect learner-corpus interaction to students' final writing products following DDL (Category C). In response to the limited exploration of teachers' perspectives on DDL, future research also needs to delve into teachers' insights on "DDL materials, corpus platform, and effectiveness of DDL for disciplinary writing" (Crosthwaite et al., 2019, p. 21). Such research studies would collectively enhance our understanding of the role of corpora in L2 learning and writing development.
Teaching Recommendations

Corpus-based instruction has been increasingly implemented in L2 classrooms over the past decade. Corpora can be used effectively both for the instruction of a new genre and for students' self-evaluation of their writing. The corpus-based approach holds immense potential for L2 learning and writing development in various contexts. To encourage classroom practice of this approach, I discuss below pedagogical recommendations concerning the selection of corpora and training in their use, informed by previous research.
Selection of Corpora

Both general corpora and learner/DIY corpora have been found to be helpful in L2 classrooms. Instructors need to select corpora based on course goals, students' needs, and ease of use. Table 8.3 below presents a list of commonly used corpora. General corpora (e.g., COCA) are used to guide L2 writers' language learning in general, mainly at the lexical and syntactic levels. Learner corpora (e.g., MICUSP, TECCL) are used by language instructors who need to develop pedagogical methods and procedures that accurately target the needs of language
Table 8.3 Representative corpora and writing analysis tools

English-corpora.org
Website: https://www.english-corpora.org/
Institution/Developer: Brigham Young University/Mark Davies
Description: Most widely used corpora, including many types of corpora and corpus-based resources

COCA (Corpus of Contemporary American English)
Website: https://www.english-corpora.org/coca/
Institution/Developer: Brigham Young University/Mark Davies
Description: Large, genre-balanced corpora of American English; formerly known as BYU corpora

COBUILD (Collins Birmingham University International Language Database)
Website: https://collins.co.uk/pages/elt-cobuild-reference-the-collins-corpus
Institution/Developer: Collins Publisher/University of Birmingham/John Sinclair
Description: Corpus of contemporary English texts, leading to the production of the Collins COBUILD English Language Dictionary

BAWE (British Academic Written English corpus)
Website: https://www.coventry.ac.uk/research/research-directories/current-projects/2015/british-academic-written-english-corpus-bawe/
Institution/Developer: Coventry University/Hilary Nesi and colleagues
Description: Archive of proficient native English university-level student writing at the turn of the twenty-first century; part of the research project "An investigation of genres of assessed writing in British higher education"

CROW (Corpus and Repository of Writing)
Website: https://writecrow.org/2020/07/02/ten-million-words/
Institution/Developer: University of Arizona and other universities
Description: A learner corpus containing texts produced by undergraduate students in first-year writing

MICUSP (The Michigan Corpus of Upper-Level Student Papers)
Website: https://elicorpora.info/
Institution/Developer: University of Michigan/Ute Römer and colleagues
Description: A learner corpus containing texts produced by upper-level students across disciplines

TECCL (Ten-Thousand English Compositions of Chinese Learners)
Website: http://corpus.bfsu.edu.cn/info/1070/1449.htm
Institution/Developer: Beijing Foreign Studies University/Xizhe Xue & Jiajin Xu
Description: A learner corpus of written English by Chinese EFL students, containing essays written by Chinese secondary school and college EFL students

BLC (Business Letter Corpus)
Website: http://www.someya-net.com/concordancer/
Institution/Developer: Yasumasa Someya
Description: A corpus of business letters from both the US and the UK

AntConc
Website: https://www.laurenceanthony.net/software/antconc/
Institution/Developer: Laurence Anthony
Description: A freeware corpus analysis toolkit for concordancing and text analysis

AMW (Analyze My Writing)
Website: https://www.analyzemywriting.com/about_us.html
Institution/Developer: Unknown
Description: A free online text content and readability analyzer that reports basic text statistics, lexical density, readability, and grammar use (e.g., passive voice)

LCA/L2SCA (Lexical Complexity Analyzer/L2 Syntactic Complexity Analyzer)
Website: https://aihaiyang.com/software/
Institution/Developer: Haiyang Ai
Description: The web-based LCA automatically analyzes 25 measures of lexical complexity, covering lexical density, lexical sophistication, and lexical variation or range of texts online. The web-based L2SCA automatically analyzes 14 measures of syntactic complexity, covering length of production units, amount of coordination, amount of subordination, degree of phrasal sophistication, and overall sentence complexity

Coh-Metrix
Website: http://cohmetrix.com/
Institution/Developer: The University of Memphis/Danielle S. McNamara & Arthur C. Graesser
Description: Using text processing mechanisms, Coh-Metrix analyzes texts on over 200 measures of language, cohesion, and readability. Coh-Metrix is sensitive to cohesion relations and language and discourse characteristics

Academic Vocabulary List (AVL)
Website: www.academicwords.info/
Institution/Developer: Mark Davies & Dee Gardner
Description: Academic vocabulary lists of English based on 120 million words of academic texts in COCA, showing some advantages over Coxhead's (2000) Academic Word List, including much more information about word families and more high-frequency words. Learners can identify and interact with the AVL easily in the new web-based interface

Compleat Lexical Tutor
Website: www.lextutor.ca
Institution/Developer: University of Quebec at Montreal (UQAM)/Tom Cobb
Description: A website with a concordancer, vocabulary profiler, and interactive exercises, including "customisable word lists, and self-quizzing features based around several different corpora in addition to a concordance" (Hyland, 2019, p. 167). Consisting of three sections (tutorial, research, and teachers), it is of great interest to practitioners working with EAP students
learners based on their own analyses of learner corpora. Moreover, DIY corpora are usually developed to meet target students' academic needs and instructional goals. For instance, Cotos (2014) identified the challenges that graduate students encountered when writing research articles in English, so she created the Intelligent Academic Discourse Evaluation (IADE) system. IADE was found to have helped graduate students discover and acquire linguistic patterns and organizational conventions of RAs published in their disciplines. Of note, input from disciplinary specialists is crucial for the design of DIY discipline-related writing corpora, as they can provide invaluable guidance on what DDL materials to include as helpful resources.
Training/Instruction on Using Corpora

It is important for language teachers to offer training sessions that enable L2 learners to use corpora independently and effectively. Two approaches deserve explanation here: bottom-up and top-down (Cotos, 2017). In the bottom-up approach, which focuses on analyzing lexico-grammatical features via concordancers, teachers can train students to interpret concordance output vertically by examining frequency lists and concordance lines to identify patterns of language use (Cotos, 2020). They can alternatively train students to interpret concordances horizontally by reading the surrounding context and noting specific language choices (Cotos, 2020). In these ways, students can search the corpus and examine key words (e.g., technical vocabulary) in context so as to better understand appropriate language use. In contrast, the top-down approach, suitable for learning research genres, begins with identifying patterns of text organization using analytical frameworks of possible discourse units (Biber et al., 2007). Informed by genre analysis (Swales, 1990; Swales & Feak, 2012), teachers can first introduce the concepts of moves (i.e., communicative goals) and steps (i.e., rhetorical strategies that accomplish communicative goals), followed by instruction on the CARS model, an influential model for analyzing the introductions of RAs (Swales, 1990). Afterward, teachers can have students identify relevant moves and steps via a corpus platform.
For instance, in Cotos, Link, and Huffman's (2016) study, students were given access to a corpus of research articles annotated (with color highlighting) for moves and steps, which enabled them to observe the distribution of moves, the sequence of moves, and the occurrence of steps within each move. Combining these two approaches, teachers will be able to raise their students' awareness of discourse patterns at the macro-level of rhetorical functions as well as the micro-level of lexico-grammatical features. By embracing innovative technologies, corpus-based writing instruction will continue to flourish in the years to come.
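The bottom-up concordance work described above is easy to demonstrate. The toy keyword-in-context (KWIC) concordancer below, sketched purely for illustration with an invented two-sentence corpus (classroom practice would rely on dedicated tools such as AntConc), shows why the output supports both reading strategies: the node word forms a scannable column (vertical reading) while each line keeps enough context for horizontal reading.

```python
def kwic(text, keyword, width=3):
    """Return keyword-in-context lines: `width` tokens of left and
    right context around each occurrence of `keyword`, with the left
    context right-aligned so the node words line up in a column."""
    tokens = text.lower().split()
    lines = []
    for i, tok in enumerate(tokens):
        if tok == keyword:
            left = " ".join(tokens[max(0, i - width):i])
            right = " ".join(tokens[i + 1:i + 1 + width])
            lines.append(f"{left:>30}  [{keyword}]  {right}")
    return lines

corpus = ("the results suggest that the approach is effective and "
          "these findings suggest that further research is needed")
for line in kwic(corpus, "suggest"):
    print(line)
# Scanning the bracketed node-word column shows that 'suggest' is
# repeatedly followed by a that-clause, a pattern students can then
# verify horizontally by reading each full line.
```

A learner scanning even this tiny output could induce the pattern "findings/results + suggest + that-clause," which is exactly the kind of discovery the bottom-up training aims at.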
References

Anthony, L. (2014). AntWordProfiler [Computer software]. Waseda University.
Biber, D., Connor, U., & Upton, T. A. (2007). Discourse on the move: Using corpus analysis to describe discourse structure. John Benjamins.
Biber, D., & Conrad, S. (2009). Register, genre, and style. Cambridge University Press.
Biber, D., Gray, B., & Staples, S. (2014). Predicting patterns of grammatical complexity across language exam task types and proficiency levels. Applied Linguistics, 37(5), 639–668.
Biber, D., Gray, B., Staples, S., & Egbert, J. (2020). Investigating grammatical complexity in L2 English writing research: Linguistic description versus predictive measurement. Journal of English for Academic Purposes, 46, 100869.
Chang, J. Y. (2014). The use of general and specialized corpora as reference sources for academic English writing: A case study. ReCALL, 26(2), 243.
Cortes, V. (2007). Exploring genre and corpora in the English for academic writing class. The ORTESOL Journal, 25, 8–14.
Cotos, E. (2014). Genre-based automated writing evaluation for L2 research writing: From design to evaluation and enhancement. Palgrave Macmillan.
Cotos, E. (2020). Language for specific purposes and corpus-based pedagogy. In C. A. Chapelle & S. Sauro (Eds.), The handbook of technology and second language teaching and learning (pp. 248–264). Wiley-Blackwell.
Cotos, E., Link, S., & Huffman, S. (2016). Studying disciplinary corpora to teach the craft of discussion. Writing & Pedagogy, 8(1), 33–64.
Cotos, E., Link, S., & Huffman, S. (2017). Effects of DDL technology on genre learning. Language Learning & Technology, 21(3), 104–130.
Covington, M., & McFall, J. D. (2010). Cutting the Gordian knot: The moving-average type-token ratio (MATTR). Journal of Quantitative Linguistics, 17(2), 94–100.
Crosthwaite, P., Wong, L. L., & Cheung, J. (2019). Characterising postgraduate students' corpus query and usage patterns for disciplinary data-driven learning. ReCALL, 31(3), 255–275.
Dong, J., & Lu, X. (2020). Promoting discipline-specific genre competence with corpus-based genre analysis activities. English for Specific Purposes, 58, 138–154.
Flowerdew, L. (2005). An investigation of corpus-based and genre-based approaches to text analysis in EAP/ESP: Countering criticisms against corpus-based methodologies. English for Specific Purposes, 24, 321–332.
Flowerdew, L. (2012). Corpora and language education. Palgrave Macmillan.
Flowerdew, J. (2013). English for research publication purposes. In B. Paltridge & S. Starfield (Eds.), The handbook of English for specific purposes (pp. 301–321). Wiley-Blackwell.
Gray, B., Cotos, E., & Smith, J. (2020). Combining rhetorical move analysis with multi-dimensional analysis: Research writing across disciplines. In U. Römer, V. Cortes, & E. Friginal (Eds.), Advances in corpus-based research on academic writing (pp. 137–168). John Benjamins.
Hammond, J., Burns, A., Joyce, H., Brosnan, D., & Gerot, L. (1992). English for social purposes: A handbook for teachers of adult literacy. National Centre for English Language Teaching and Research, Macquarie University.
Hyland, K. (2015). Academic publishing: Issues and challenges in the construction of knowledge. Oxford University Press.
Hyland, K. (2019). Second language writing (2nd ed.). Cambridge University Press.
Johns, T. (1991).
Should you be persuaded: Two examples of data-driven learning. English Language Research Journal, 4, 1–16.
Lake, W. M., & Cortes, V. (2020). Lexical bundles as reflections of disciplinary norms in Spanish and English literary criticism, history, and psychology research. In U. Römer, V. Cortes, & E. Friginal (Eds.), Advances in corpus-based research on academic writing (pp. 183–204). John Benjamins.
Laufer, B., & Nation, P. (1995). Vocabulary size and use: Lexical richness in L2 written production. Applied Linguistics, 16(3), 307–322.
Lee, D., & Swales, J. (2006). A corpus-based EAP course for NNS doctoral students: Moving from available specialised corpora to self-compiled corpora. English for Specific Purposes, 25(1), 56–75.
Leech, G. (1997). Teaching and language corpora: A convergence. In A. Wichmann, S. Fligelstone, T. McEnery, & G. Knowles (Eds.), Teaching and language corpora (pp. 1–23). Longman.
Lei, S., & Yang, R. (2020). Lexical richness in research articles: Corpus-based comparative study among advanced Chinese learners of English, English native beginner students and experts. Journal of English for Academic Purposes, 47, 100894.
Li, Z., Dursun, A., & Hegelheimer, V. (2020). Technology and L2 writing. In C. A. Chapelle & S. Sauro (Eds.), The handbook of technology and second language teaching and learning. Wiley-Blackwell.
Lu, X. (2011). A corpus-based evaluation of syntactic complexity measures as indices of college-level ESL writers' language development. TESOL Quarterly, 45(1), 36–62.
O'Sullivan, I., & Chambers, A. (2006). Learners' writing skills in French: Corpus consultation and learner evaluation. Journal of Second Language Writing, 15, 49–68.
Ortega, L. (2003). Syntactic complexity measures and their relationship to L2 proficiency: A research synthesis of college-level L2 writing. Applied Linguistics, 24(4), 492–518.
Park, K. (2012). Learner–corpus interaction: A locus of microgenesis in corpus-assisted L2 writing. Applied Linguistics, 33(4), 361–385.
Reppen, R., & Olson, S. B. (2020). Lexical bundles across disciplines: A look at consistency and variability. In U. Römer, V. Cortes, & E. Friginal (Eds.), Advances in corpus-based research on academic writing (pp. 137–168). John Benjamins.
Sinclair, J. (1991). Corpus, concordance, collocation. Oxford University Press.
Swales, J. (1990).
Genre analysis: English in academic and research settings. Cambridge University Press.
Swales, J. (2004). Research genres: Explorations and applications. Cambridge University Press.
Swales, J., & Feak, C. (2012). Academic writing for graduate students: Essential tasks and skills. University of Michigan Press.
Tribble, C., & Jones, G. (1990). Concordancing in the classroom. Longman.
Wolfe-Quintero, K., Inagaki, S., & Kim, H.-Y. (1998). Second language development in writing: Measures of fluency, accuracy, & complexity. University of Hawai'i, Second Language Teaching & Curriculum Center.
Yilmaz, S., & Römer, U. (2020). A corpus-based exploration of constructions in written academic English as a lingua franca. In U. Römer, V. Cortes, & E. Friginal (Eds.), Advances in corpus-based research on academic writing (pp. 59–88). John Benjamins.
Yoon, H., & Hirvela, A. (2004). ESL student attitudes towards corpus use in L2 writing. Journal of Second Language Writing, 13(4), 257–283.
9 Resources
To motivate readers to further explore the topic of technology-based L2 writing, I include in this section additional resources of multiple types, namely books, journals, professional conferences, webinars, and personal websites. These resources are compiled mainly from my own professional experiences, interests, and trajectory, supplemented with the insights of colleagues1 in the field of L2 writing shared via personal contact, social media, or academic work. Although these resources cannot be comprehensive, L2 scholars can gain much guidance from them for relevant pedagogical and research practices. Such resources can be consulted for course development in language/writing teacher education programs or considered for language/writing practitioners' individual use in their profession. It is hoped that this chapter can partially answer the question of how writing practitioners can be well equipped to teach writing in the digital environment.
Books

I provide brief information about fifteen scholarly books, in which the six research areas discussed in Chapters 3 through 8 are well represented.

• Bitchener, J., & Storch, N. (2016). Written corrective feedback for L2 development. Multilingual Matters.
Drawing on cognitive and sociocultural theoretical perspectives, this book examines relevant research and explains whether, and the extent to which, written corrective feedback facilitates L2 development over time. It specifically addresses the topic of computer-mediated written corrective feedback.

• Liu, J., & Edwards, J. G. H. (2018). Peer response in second language writing classrooms (2nd ed.). The University of Michigan Press.
This book offers an extensive and intensive examination of the issues and strategies involved in implementing peer response in multilingual contexts. Responding to the changing landscape of current language classrooms, it specifically discusses computer-mediated peer response and training for CMC modes of peer response.

• Cotos, E. (2014). Genre-based automated writing evaluation for L2 research writing: From design to evaluation and enhancement. Palgrave Macmillan.
This book innovatively presents how to design, implement, and evaluate a genre-based AWE system for L2 research writing. Based on multiple theoretical constructs, the author created a pedagogical tool named the Intelligent Academic Discourse Evaluation system, which helps ESP students construct texts that adhere to the genre and linguistic conventions of their disciplines.

• Storch, N. (2013). Collaborative writing in L2 classrooms. Multilingual Matters.
This is the first authoritative book that discusses theoretical, empirical, and pedagogical rationales for collaborative writing and offers readers an illuminating and comprehensive understanding of the role of collaborative writing in L2 learning. It includes one chapter specifically discussing computer-mediated collaborative writing.
9 Resources
219
• Li, M., & Zhang, M. (under contract). L2 collaborative writing in diverse learning contexts. John Benjamins.
This edited volume provides L2 researchers and practitioners with a collection of conceptual papers and empirical studies that explore theoretical, methodological, and pedagogical approaches to L2 collaborative writing in face-to-face, online, and hybrid learning contexts. It specifically examines the implementation of various collaborative writing tasks and assessments across modes, genres, and language learning settings and explores how diverse ways of collaborative writing can facilitate students' language learning and writing development.

• Shin, D., Cimasko, T., & Yi, Y. (2021). Multimodal composing in K-16 ESL and EFL education: Multilingual perspectives. Springer.
This book provides a comprehensive picture of multimodal composing and literacies in ESL and EFL contexts. Drawing on multiple theoretical perspectives (e.g., multiliteracies, systemic-functional linguistics, and social semiotics), it explores multilingual students' diverse multimodal composing practices.

• Bloch, J. (2021). Creating digital literacy spaces for multilingual writers. Multilingual Matters.
This book critically evaluates the current use of technology in the multilingual writing space and focuses on the role of teachers in facilitating digital literacy practices. It specifically discusses digital writing activities such as blogging, multimodal writing, and writing in the publishing space.

• Flowerdew, L. (2012). Corpora and language education. Palgrave Macmillan.
This book provides a comprehensive overview of the field of corpus linguistics and explains the application of corpus linguistics in diverse research fields. Quite relevant to the themes of corpus analysis discussed in Chapter 8 is the chapter on applying corpus linguistics in teaching arenas.

• Bennett, G. (2010). Using corpora in the language learning classroom: Corpus linguistics for teachers. University of Michigan Press.
This book provides guidance for teachers on using the applications of corpus linguistics in their language classrooms. It briefly introduces the theory and principles of corpus linguistics, reviews commonly used corpora (e.g., MICASE, COCA), and presents a set of corpus-designed activities for teaching multiple language skills.

• Lee, I. (2017). Classroom writing assessment and feedback in L2 school contexts. Springer Nature.
This book examines how effective classroom assessment and feedback facilitate the teaching and learning of writing in L2 contexts. It discusses the topics of teacher feedback, peer feedback, portfolio assessment, and technology-enhanced classroom writing assessment. In particular, one chapter is devoted to technology in L2 writing assessment and feedback, discussing various technology-enhanced writing tasks (e.g., storytelling, blog writing, collaborative wiki writing) and the evaluation of writing tasks (e.g., automated writing evaluation, screencast feedback).

• González-Lloret, M., & Ortega, L. (2014). Technology-mediated TBLT: Researching technology and tasks. John Benjamins.
This edited volume investigates the intersection between tasks and technology from educational, cognitive, and sociocultural perspectives. It consists of chapters reporting studies on the design and implementation of various writing tasks mediated by technology tools such as wikis, blogs, and fanfiction sites. All the reported tasks were guided by task-based language teaching and learning principles.

• Chapelle, C., & Sauro, S. (2020). The handbook of technology and second language teaching and learning. Wiley Blackwell.
This edited volume presents a comprehensive examination of the important role of technology in the field of second language learning. It covers multiple aspects of language learning, including grammar, vocabulary, reading, writing, listening, speaking, and pragmatics. One chapter is devoted to technology and L2 writing, and another chapter specifically discusses corpus-based pedagogy.

• Hyland, K. (2021). Teaching and researching writing (4th ed.). Routledge.
This book provides language researchers and educators with a comprehensive and authoritative guide to writing research and
teaching. It thoroughly examines the key concepts, issues, controversies, and approaches in the field of L2 writing. It particularly discusses the incorporation of technology in writing instruction, such as Web 2.0 writing tools, automated marking, and multimodal and social media writing. Of note, a few websites introduced by Hyland are presented in the following section.

• Oskoz, A., & Elola, I. (2020). Digital L2 writing literacies: Directions for classroom practice. Equinox.
This book provides an updated, comprehensive overview of digital writing in L2 contexts and examines the increasingly important role that digital media play in researching and teaching L2 literacies. It particularly discusses multilingual collaborative writing, multimodal composing, and digital writing assessment in second language, foreign language, and heritage language classrooms.

• Jones, R., & Hafner, C. (2012). Understanding digital literacies: A practical introduction. Routledge.
This book provides an accessible introduction to the emerging field of new media literacies. It presents an overview of major concepts, issues, and debates around the topic of digital literacies. It specifically discusses diverse digital tools and important related constructs such as critical literacy, online culture, and multimodality.
Journals

This section lists the main international peer-refereed journals that regularly publish articles on L2 writing and technology, although some of them do not publish research on L2 writing exclusively. Based on information from the journals' websites, I summarize the special issues (sections), forums, and special articles that address areas related to the topics discussed in previous chapters.

• Journal of Second Language Writing https://www.journals.elsevier.com/journal-of-second-language-writing. (A leading SSCI journal reporting theoretically grounded L2 writing research and exploring
central issues in second and foreign language writing and writing instruction.)

– 2017 Special Section: L2 writing in the age of computer-mediated communication.
This section highlights what it means to be literate in the twenty-first century, how new technologies transform the teaching and learning of L2 writing, and how these new technologies are incorporated into L2 writing. It specifically discusses the themes of multimodal literacy practices, computer-mediated collaborative writing, and affordances of electronic technologies for writing.

– 2017 Disciplinary Dialogues: Perspectives on multimodal composition.
This forum, consisting of contributions from six L2 writing experts, draws on diverse theoretical and disciplinary perspectives to further explore conceptions of multimodal literacy and its roles in L2 writing research and instruction.

– 2020 Special Issue (SI): Multimodal composing in multilingual learning and teaching contexts.
This issue critically examines the discursive, social, and cultural processes of multimodal meaning-making practices with the aim of advancing research and pedagogy on multimodal composing in the field of L2/multilingual writing. It specifically discusses the themes of linguistic and non-linguistic repertoires, the processes and products of multimodal composing, and assessment and teacher knowledge of multimodal composing.

– 2022 SI: L2 writing assessment in the digital age.
This issue explores theoretical, methodological, and pedagogical approaches to L2 writing assessment in response to the development of digital technologies. It examines how technology and new forms of writing intersect with assessment practices to facilitate L2 students' writing development. It is a collective inquiry into what to assess and how to assess it in relation to new forms of writing tasks and novel technology-mediated formative assessments.

• Assessing Writing www.elsevier.com/locate/asw. (An important SSCI journal on the assessment of written language.
It addresses all kinds of
writing assessments and values all perspectives on writing assessment as process, product, and politics.)

– 2018 SI: The comparability of paper-based and computer-based writing: Process and performance.
This issue addresses multiple dimensions of computer-based writing assessment use in different contexts and across levels of L2 learner proficiency. It discusses specific themes about the impact of delivery modes on test-takers' processes and performance, the use of online writing assessment, and the use of technologies to provide feedback.

– 2019 SI: Framing the future of writing assessment.
This issue, revisiting 25 years of research on assessing writing, illustrates the evolution of ideas, questions, and concerns essential to the field of writing assessment and explains their relevance to current and future writing practices.

• TESOL Quarterly https://onlinelibrary.wiley.com/journal/15457249. (A leading SSCI journal that reports theoretically grounded research on English as a second language teaching and learning. It represents cross-disciplinary interests, including the psychology and sociology of language learning and teaching, instructional methods and curriculum design, and professional preparation.)

– 2015 SI: Multimodality: Out from the margins of English language teaching.
This issue draws readers' attention to the topic of multimodality in TESOL and highlights the possibilities and challenges that a multimodal lens brings to language education, particularly in the contested contexts where English language education is granted special priority. It discusses the themes of multimodal genres, multimodal literacy, multimodal ensembles, etc.

– 2022 SI: Digital literacies in TESOL: Mapping out the terrain.
This issue explores functional and critical digital literacies by delving into theoretical constructs in digital literacies research, approaches to integrating digital literacies in language classrooms
(e.g., autonomous learning), and critical perspectives in the teaching of digital literacies (e.g., inequity and power).

• Language Learning & Technology https://www.lltjournal.org. (A leading SSCI open-access journal that disseminates research to foreign and second language teachers and researchers on issues related to technology and language education.)

– 2015 SI: Digital literacies and language learning.
This issue examines digital literacy practices in language education, responding to new forms of multimodal representation (e.g., CMC and interactive hypermedia), new kinds of joint composing practices (featuring remix and collaboration), and the formation of globalized online affinity spaces. It discusses the themes of digital translanguaging, online intercultural learning, and computer-mediated socialization.

– 2017 SI: Corpora in language learning and teaching.
This issue publishes theoretically grounded empirical studies of language learning processes or outcomes in data-driven learning (DDL) contexts using expert or native-speaker corpora and learner corpora. It discusses the use of corpora in different areas of L2 learning, including academic writing, reading, and pragmatics.

– 2022 SI: Automated writing evaluation.
This issue seeks to illuminate the current state of automated writing evaluation (AWE) by presenting theoretically grounded research that combines process and product approaches to examine how AWE systems are used by participants and how the use of AWE influences language learning outcomes.

• System https://www.sciencedirect.com/journal/system. (An important SSCI journal devoted to the applications of educational technology and applied linguistics to problems of foreign language teaching and learning.)

– 2016 SI: Language pedagogy and the changing landscapes of digital technology.
This issue aims to enhance language pedagogy via technology tools in the early twenty-first century. The papers in this issue focus
primarily on the supportive role technology plays in the informal learning of language in daily language use, such as texting and videoconferencing. Topics include the integration of technologies as pedagogical resources as well as methods of adoption in and out of the classroom.

– 2019: Twenty-five years of research on oral and written corrective feedback in System.
This article is an extensive review providing a critical analysis of the research on oral and written corrective feedback (CF) published in System over the past 25 years.

• Computers and Composition www.journals.elsevier.com/computers-and-composition/. (An important journal devoted to exploring the use of computers in writing classes, writing programs, and writing research.)

– 2005 SI: Second language writers in digital contexts.
This issue builds a foundation for theorizing the work of L2 writing in digital contexts as a field of inquiry, addressing a range of topics including multimodal genres, computer-based reading and writing, and online writing tutoring.

– 2012 SI: New literacy narratives: Stories about reading and writing in a digital age.
This issue focuses on new literacy narratives and presents research exploring new methods and theories for the study of stories, literacies, and the teaching of writing. The selected articles situate composition research, identity, and teaching context as integral to interconnected twenty-first-century literacy practices. Topics range from literacy narratives in the classroom, teacher training, and the collection and analysis of cultural constructs to narrative studies as a tool for social change.

– 2014 SI: Multimodal assessment.
This issue captures the complexities that lie at the intersections of multimodal writing assessment with the texts themselves, administration, history and context, and theory and praxis. It specifically discusses the topic of assessing multimodal processes and products in writing.
• Written Communication https://journals.sagepub.com/home/wcx. (A leading interdisciplinary journal that publishes research, theory, and applications of writing in schools, workplaces, and communities.)

– 2008 SI: Writing and new media.
This issue explores writing in the new media age. Topics include a social semiotic account of multimodal texts, students' composing during written and video production, online sports commentaries across multiple languages, and Wikipedia article editing.

– 2013 SI: New methods for the study of written communication.
This issue discusses relevant theoretical frameworks (e.g., systemic-functional linguistics, situated literacies) and updated methods used in the study of written communication. It particularly illuminates new strategies and tools for examining writing processes, products, and perceptions, including discourse analysis, keystroke logging, and survey design.

• Computer Assisted Language Learning https://www.tandfonline.com/toc/ncal20/current. (A leading SSCI interdisciplinary journal devoted to exploring the use of computers in L1/L2 learning, teaching, testing, and curriculum development.)

• ReCALL https://www.cambridge.org/core/journals/recall. (A leading SSCI journal focused on CALL, which pursues the aims of EUROCALL: encouraging the use of technology for teaching and learning languages and cultures. It publishes innovative CALL-related research in relation to applied linguistics, corpus linguistics, digital pedagogy, digital literacies, computer-mediated communication, learning analytics, second language acquisition, and educational science.)

• Language Teaching: Surveys and Studies https://www.cambridge.org/core/journals/language-teaching. (A leading SSCI journal providing rich and expert overviews of research in second language teaching and learning.
It boasts multiple distinctive article types, including state-of-the-art articles, plenary speeches, topic-based research timelines, theme-based research agendas, replication studies, and surveys of doctoral dissertations.)
• International Journal of Computer-Assisted Language Learning and Teaching https://www.igi-global.com/journal/international-journal-computer-assisted-language/41023. (An important journal devoted to reporting innovative language teaching approaches and strategies that integrate new technologies.)

• Journal of Response to Writing https://scholarsarchive.byu.edu/journalrw/. (An important open-access journal reporting both research and pedagogy-oriented work on L1, L2, and foreign language writing and response practices.)

• Journal of English for Academic Purposes https://www.sciencedirect.com/journal/journal-of-english-for-academic-purposes. (An important SSCI journal devoted to linguistic, sociolinguistic, and psycholinguistic examinations of English as it is used for the purposes of academic study and scholarly exchange.)

• English for Specific Purposes https://www.sciencedirect.com/journal/english-for-specific-purposes. (An important SSCI journal that publishes research and discussion notes on topics relevant to the teaching and learning of discourse for specific communities: academic, occupational, or otherwise specialized.)
Professional Conferences and Association SIGs

In this section, I include information about professional conferences and association SIGs that regularly address issues in L2 writing and technology. Although some of them are not exclusively devoted to writing, they provide valuable resources for teaching and researching L2 writing in the digital era. More details can be accessed via the corresponding websites.

• Symposium on Second Language Writing (SSLW): "An annual international conference that brings teachers and researchers who work with second- and foreign-language writers to discuss important issues in the field of second language writing." http://sslw.asu.edu.
• American Association for Applied Linguistics (AAAL) is "an organization of scholars interested in applied linguistics that provides an annual venue for scholars of the multi-disciplinary field of applied linguistics to share their research. It promotes an evidence-based and principled approach to language-related research including language education, acquisition and loss, sociolinguistics, psycholinguistics, literacy, rhetoric, discourse analysis, language assessment, policy, planning, and more." https://www.aaal.org.

– The Reading, Writing, and Literacy (RWL) strand features "presentations that examine issues relating to first-language, second-language, and multilingual readers, writers, texts, teachers, policies, and practices related to reading, writing, and literacy." https://www.aaal.org/RWL.

– The Language and Technology (TEC) strand "features research at the interface of language teaching and learning and technology. Empirical studies on the use of contextually appropriate technologies could include (but are not limited to) the following: adaptive technology, blended/hybrid formats, computational linguistics, computer-assisted language learning (CALL), computer-assisted language assessment, computer-mediated communication (CMC), and corpus linguistics." https://www.aaal.org/TEC.

– The Corpus Linguistics (COR) strand "features presentations that report on analyses of spoken and written texts using corpus-based methodology, in which principled, representative collections of texts are examined through the aid of computers. Analyses should be quantitative in nature, allowing researchers to study a greater number of texts and more features than in other approaches to text analysis. However, qualitative methods (e.g., move analysis, critical discourse analysis, interview data) may be used in conjunction with the quantitative analysis. Researchers may identify central tendencies and compare features statistically.
Presentations may include analyses of grammar, vocabulary, lexico-grammar, phraseology, prosody, stylistics, or keyness, and may focus on contrastive analyses of register, L1 background, developmental levels, and more. Presentations may also focus on corpus construction and the use of corpus-based
methodologies in pedagogy and teacher training." https://www.aaal.org/cor.

• TESOL International Association is "the largest professional organization for teachers of English as a second or foreign language." TESOL publishes two peer-reviewed academic journals, TESOL Quarterly and TESOL Journal, and "holds professional development seminars called 'TESOL Academies' across the United States. It also hosts an annual convention." https://www.tesol.org.

– The English for Specific Purposes Interest Section (ESPIS) "exists to serve the needs of those teachers, program developers, consultants and researchers who are interested in the design and delivery of courses for individuals with identifiable academic and professional/occupational goals." https://my.tesol.org/communities/interest-sections.

– The Second Language Writing Interest Section (SLWIS) "exists to facilitate interaction among TESOL members who desire to further the teaching and research of second language (L2) writing in different contexts and settings, including ESL and EFL." https://my.tesol.org/communities/interest-sections. TESOL also has an SLWIS book club on Facebook and on its website.

– The Computer-Assisted Language Learning Interest Section (CALLIS) "exists to facilitate interaction among members of TESOL who desire to further the teaching of ESL, EFL, and languages in general through the medium of CALL." https://my.tesol.org/communities/interest-sections.

• Conference on College Composition and Communication (CCCC) SLW Standing Group: "The Second Language Writing Standing Group works with the CCCC Committee on Second Language Writing," explores issues related to L2 writing research and teaching, and "offers recommendations to the Committee on needs within the CCCC community." https://secondlanguagewriting.wordpress.com.

• Computers and Writing Conference: It is "the main conference for those who use computers and networks to teach writing. It brings
together scholars, teachers, and professionals from all over the world in an intimate, welcoming setting to discuss the problems, successes, innovations, and logistics of computer- and network-based writing instruction." https://cccc.ncte.org/cccc/committees/7cs/candwcall.

• CALICO (Computer-Assisted Language Instruction Consortium) is "a professional organization that serves a membership involved in both education and technology. CALICO has an emphasis on language teaching and learning but reaches out to all areas that employ the languages of the world to instruct and to learn. CALICO is a recognized international clearinghouse and leader in computer-assisted learning and instruction. It is a premier global association dedicated to computer-assisted language learning (CALL)." It holds an annual convention and publishes a peer-reviewed academic journal, the CALICO Journal. https://calico.org.

• IALLT (The International Association for Language Learning Technology) is "a professional organization whose members provide leadership in the development, integration, evaluation and management of instructional technology for the teaching and learning of language, literature and culture. Its strong sense of community promotes the sharing of expertise in a variety of educational contexts." https://iallt.org.
Webinars

This section provides information about webinars on L2 writing. To be well equipped to teach writing in digital environments, language teachers need to constantly refresh their knowledge and pedagogy in multiple ways. In addition to reading up-to-date books and journals, attending academic conferences, and joining interest groups, they are highly encouraged to attend webinars regularly, one of the most efficient means of professional development. Watching recordings of past webinars is also of great help. I list below a few relevant webinar resources.
• TESOL SLW Interest Section: Organizing free professional development webinars on L2 writing, covering a wide range of up-to-date topics, such as writing and technology, writing assessment, and writing teacher education. Past recorded webinars are freely accessible on the TESOL YouTube page: https://www.youtube.com/user/tesolinc/search.

• TESOL Virtual Seminars: Providing webinars on various TESOL topics, including teaching writing, reading, and grammar. The webinars are free to TESOL members. https://www.tesol.org/connect/tesol-resource-center.

• National Foreign Language Resource Center/ACTFL's Distance Learning Special Interest Group: Providing a series of webinars on online language teaching and learning, including technology-supported writing pedagogy. Past recorded webinars are freely accessible via http://nflrc.hawaii.edu/events/view/105/.

• IALLT Webinars: Providing interactive professional development workshops on various topics regarding language teaching and technology. Participation is open to all, but only IALLT members have access to the archives. Register for webinars via https://iallt.org/resources/webinars/.

• Oxford University Press: Providing free professional development webinars on English language teaching, including ones addressing writing pedagogy. Registration is required. Previous webinars are also freely accessible in the webinar library. https://elt.oup.com/feature/global/webinars/?cc=us&selLanguage=en.

• Macmillan English Interactive: Providing free professional development webinars on teaching and learning, including online teaching, hybrid learning, and digital technologies. Registration is required. Webinar archives are freely accessible online. https://www.macmillanenglish.com/us/training-events/events-webinars.

• Pearson: Providing free professional development webinars on English language teaching and assessment, including ones addressing writing pedagogy and online/hybrid teaching. Registration is required.
Webinar archives are freely accessible online. https://www.pearson.com/english/professional-development/webinars.html.
• Cambridge Assessment English: Providing free professional development webinars on teaching and learning, including effective teaching methods addressing writing skills. Registration is required. Webinar archives are freely accessible online. https://www.cambridgeenglish.org/teaching-english/resources-for-teachers/webinars/.

• British Council: Providing free professional development webinars on English language teaching, covering a wide range of topics in the teaching profession. https://www.britishcouncil.me/en/teach/webinars.
Websites and More

In this section, I list web sources that can be helpful for teachers and researchers who work with L2 writers, including websites, blogs, networking tools, reference-managing tools, and Google resources.

• Second Language Writing: A website for the SLW Standing Group of the CCCC, discussing issues related to L2 writing research and teaching and providing various resources, including conference panel/workshop information, webinars, and SLW publications. https://secondlanguagewriting.wordpress.com.

• Second Language Writing by Charles P. Nelson: A website that contains rich resources on second language writing (i.e., ESL/EFL writing). It provides links to books, articles, journals, blogs, and more. http://www.secondlanguagewriting.com/.

• Web cloud resources by John Swales: This freely accessible Google Drive folder includes conference presentations and other small research projects that draw on data from MICASE and MICUSP to provide pedagogically relevant insights into important features of advanced student academic speech and academic writing. https://drive.google.com/drive/folders/1BNwXY_FSiYVR-9wVg55tZA043C5IsuvK.

• Using corpora for language learning and teaching: A free resource from the TESOL website. https://www.tesol.org/read-and-publish/bookstore/using-corpora.
• Dave's ESL Café: Dave's ESL Café provides resources for EFL teachers, including lesson plans and quizzes for writing classes. https://www.eslcafe.com.

• Guide to Grammar and Writing: Sponsored by the Capital Community College Foundation and dedicated to the memory of Dr. Charles Darling, this website offers various free resources on grammar and writing. http://guidetogrammar.org/grammar/.

• Google Scholar: A powerful, freely accessible web search engine that indexes the full text or metadata of scholarly literature, including peer-refereed academic journals and books, conference papers, theses and dissertations, preprints, and abstracts. https://scholar.google.com.

• ResearchGate: A professional network for scientists and researchers in diverse disciplines, including applied linguistics. Over 20 million members from all over the world use it to share, discover, and discuss research. https://www.researchgate.net.

• LINGUIST List: The LINGUIST List "undertakes major research and service projects devoted to enhancing the field of linguistics. The aim of the list is to provide a forum where academic linguists can discuss linguistic issues and exchange linguistic information." https://linguistlist.org.

• Purdue OWL: The Online Writing Lab (OWL) at Purdue University "houses writing resources and instructional material, and provides these as a free service of the Writing Lab at Purdue. Students and users worldwide find helpful information to assist with many writing projects. Teachers and trainers use this material for in-class and out-of-class instruction." https://owl.purdue.edu/index.html.

• Plagiarism and how to avoid it: By David Gardner, a website providing valuable resources about plagiarism and techniques and strategies for avoiding it. http://www4.caes.hku.hk/plagiarism/introduction.htm.

• RefWorks: It is a "web-based reference management software package.
It is a helpful reference managing tool in that a word processor integration utility called Write-N-Cite enables users to insert reference codes from their RefWorks accounts into documents, which can then be formatted to produce in-text citations and reference lists in various styles.” https://www.refworks.com/.
• EasyBib: It is an "intuitive information literacy platform that provides citation, note taking, and research tools that are easy-to-use and educational. EasyBib is not only accurate, fast, and comprehensive, but helps educators teach and students learn how to become effective and organized researchers." https://www.easybib.com/guides/company/.

• Mendeley: It is a reference manager and academic social network that helps "organize research, collaborate with others online, and discover the latest research." https://www.mendeley.com.

• Zotero: Zotero is an application that collects, manages, and cites research sources. It saves all the citations in a single place and allows users to organize references into collections for different projects and courses. https://www.zotero.org.
Note

1. I appreciate the insights from Drs. Meixiu Zhang, Betsy Gilliland, and Ken Hyland.
10 Conclusion
Recap of the Content The development of digital technologies has brought simultaneously opportunities and complexities to the writing field by dramatically changing the research and pedagogical practices in L2 writing classrooms. Responding to the changes, this book has provided an extensive and up-to-date coverage of the main areas in L2 writing and technology. It particularly highlights the new directions toward which writing in the digital age has been developing: digital multimodal composing, computer-mediated collaborative writing, online writing feedback, and computer-assisted instruction and assessment. In this book, I have discussed six major areas of technology-mediated L2 writing. As shown in Fig. 10.1, the teacher, students, and technology are three interdependent components in digital L2 writing classrooms. Computer-mediated peer response (CMPR) and computer-mediated collaborative writing (CMCW) are mainly implemented through the interactions between students with the technology being the medium. Digital multimodal composing (DMC) and automated writing evaluation (AWE) primarily rely on the interactions between students and © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 M. Li, Researching and Teaching Second Language Writing in the Digital Age, https://doi.org/10.1007/978-3-030-87710-1_10
Fig. 10.1 Interaction of teacher, students, and technology in digital L2 writing classrooms
technology. In contrast, computer-mediated teacher feedback (CMTF) and corpus-based writing instruction (CBWI) draw on the interplay of all three components: teacher, students, and technology. The factors of teacher, students, and technology are equally crucial for the implementation of digital L2 writing activities. Notably, the six key areas can be interrelated. For instance, combining CMCW and DMC, L2 researchers and teachers are encouraged to inquire into collaborative multimodal writing using digital technologies. CMCW practice can also be linked to the use of AWE and the application of corpus-based instruction, thus extending the prominent collaborative approach into more teaching and research arenas. To take another example, considering all the feedback methods together, L2 practitioners are expected to delve into the potential benefits of incorporating AWE alongside CMTF and/or CMPR in L2 writing classes. Such connections among multiple interrelated areas will inevitably become new research and pedagogical trends in technology-supported L2 classes in the digital age. Figure 10.2 depicts what I have covered in this book. From the research perspective, I have examined the theoretical frameworks that inform research on technology-mediated L2 writing, including sociocultural theory, second language acquisition, computer-mediated communication, corpus linguistics, and genre analysis. In terms of
Fig. 10.2 Contents covered in this book
research methods, quantitative, qualitative, and mixed-methods studies straddle all six areas, though with varying degrees of emphasis. Similarly, all six research areas have addressed the strands of students' perceptions, writing processes or products, and learning outcomes. There is a pressing need for more nuanced research that employs less commonly used methodological approaches (e.g., ethnographic case study, narrative inquiry, action research) and is conducted in underrepresented L2 learning contexts (e.g., less commonly taught languages and nontertiary-level settings). From the teaching perspective, I have discussed state-of-the-art technology tools for each of the six research areas and provided guidelines for integrating technology-mediated writing tasks into L2 classes. For some areas, I particularly highlighted the importance of training and/or assessment and informed readers of effective strategies for implementing them. L2 writing practitioners are expected to draw on these resources and enrich their toolkits to better serve their students in the digital era.
Limitations

I need to point out that the areas expounded in this book are not comprehensive, and a few limitations should be acknowledged. First, the selection of articles for close examination (Chapters 3 through 8) may have been influenced by my research interests and preconceived beliefs. Second, I did not discuss cutting-edge technologies such as keystroke logging and eye tracking, which have been found to help researchers explore and understand writing processes and writing behaviors. Information tracked during the planning, formulation, and revising stages can play a diagnostic role for teachers and also enable learners to be more reflective in their learning. Third, although I discussed assessment in relation to new forms of classroom writing tasks (e.g., digital multimodal writing, computer-based collaborative writing), I did not explore computer-based writing tests (e.g., high-stakes standardized tests and placement tests). It would be important to bring together the collective efforts of teachers, administrators, and other stakeholders to explore the effective role of technology-mediated writing tests and examine how such tests can inform teaching and learning today. Fourth, this book focused purely on the use of technologies for L2 writing instruction and did not delve into the affordances of technologies for other relevant skills, such as the reading-writing connection through social annotation tools (e.g., Perusall), or for the development of digital literacies in out-of-school contexts. The integration of writing and other skills in both formal and informal settings deserves future investigation. Furthermore, teacher education with regard to technology-enhanced writing is beyond the scope of this book; this important, under-examined topic nevertheless merits future exploration. Scholars and administrators from different disciplines (e.g., rhetoric and composition, applied linguistics, English for specific purposes, education, technical communication) should unite to strive for optimal writing teacher education. In addition, a few emerging trends, such as digital translanguaging, digital publishing, and artificial intelligence, could be addressed in future inquiries.
Concluding Remarks

The field of L2 writing has been flourishing over the past two decades, and research and practice on L2 writing and technology will continue to blossom as artificial intelligence grows in capacity and applications and as technology options become more widely accessible around the globe (particularly in underdeveloped regions). This monograph responds to the changing landscape of L2 writing in the digital age. I sincerely hope this book will enable L2 scholars, established and emerging, to keep abreast of up-to-date knowledge in L2 writing and embrace the fresh landscape of digital writing and learning. I anticipate fanning sparks and motivating both researchers and practitioners to undertake new L2 writing inquiries in technologically supported educational contexts. I look forward to seeing insightful research studies and instructional practices grow and bloom in diverse L2 writing contexts in the digital age.
Index
A
Activity theory 13, 55, 85, 92, 122, 131
Assessment 3, 7, 9, 96, 98–100, 102, 103, 106–108, 140, 144, 152, 156, 160, 164, 166, 169, 171, 172, 175–177, 179, 219–223, 225, 228, 231, 232, 235, 237, 238
Automated writing evaluation (AWE) 2–4, 10–12, 15, 43, 151–155, 157, 162–177, 179, 205, 206, 235, 236

C
Cognitive theory 12, 28
Collaborative writing 4, 9, 12–15, 17, 25, 32, 38, 41, 101, 113–116, 118, 119, 121, 122, 125, 127, 128, 130, 132–145
Computer-mediated collaborative writing (CMCW) 1, 3, 4, 9, 114–117, 125, 134–138, 140, 141, 143, 144, 235, 236
Computer-mediated communication (CMC) 1, 3, 4, 7–9, 14, 29, 51–53, 61, 64, 66–70, 72–74, 82, 98, 113–115, 138, 236
Computer-mediated peer response (CMPR) 3, 43, 52, 53, 55, 60, 62, 64, 67–75, 235, 236
Computer-mediated teacher feedback (CMTF) 3, 23–27, 33, 35, 39–43, 45, 46, 236
Corpora 10, 11, 16, 183–186, 188, 192, 195–199, 203–209, 212
Corpus analysis 3, 10, 16, 43, 183–187, 195, 202, 203, 207
Corpus linguistics 16, 219, 236
Course management system (CMS) 24, 26, 46, 69, 74, 176

D
Data-driven learning (DDL) 4, 10, 183, 185, 189, 191, 192, 198, 202, 205, 207, 208, 212, 224
Digital literacies 3, 7, 11, 81, 83, 85, 91, 102, 107, 108, 221, 223, 226, 238
Digital multimodal composing (DMC) 1, 3, 4, 12, 79–84, 88, 90, 92, 95–108, 139, 235, 236

E
English for Academic Purposes (EAP) 9, 15, 16, 27, 29, 36, 39, 43, 88, 121, 130, 136, 164, 183, 185, 198
English for Specific Purposes (ESP) 8, 15, 63, 85, 91, 100, 103, 104, 117, 125, 128, 141, 183, 185, 204, 218

G
Genre analysis 15, 101, 102, 192, 200, 206, 212, 236
Genres 1, 8, 10, 15, 16, 55, 80, 81, 91, 98, 102, 103, 156, 175, 179, 184–186, 198, 204, 208, 209, 212, 219, 223, 225

I
Identity(ies) 1, 3, 7, 11, 13, 82, 83, 90, 91, 98, 99, 103, 104, 225
Interactionist 12, 14, 28, 56, 124
Interaction pattern 122, 133, 139

L
Linguistics 4, 15, 16, 83, 224, 226, 228, 233, 238

M
Multimodality 25, 40, 42, 80, 81, 83, 84, 86–88, 96, 139, 221, 223

P
Peer response 4, 10, 51–53, 64, 69–74, 218
Pragmatics 16, 220
Process writing 15, 56, 116

R
Resources 4, 7, 8, 10, 11, 37, 40, 79, 81, 90–92, 95, 96, 98, 99, 101, 102, 115, 139, 141, 153, 167, 177, 183, 185, 194, 197, 201, 206, 209, 212, 217, 227, 230, 232, 233, 237

S
Sociocultural theory 4, 13, 26, 28, 32, 54–58, 116, 117, 120, 121, 123, 127, 154, 155, 158, 188, 202, 218, 236

T
Teacher feedback 9, 12, 23–27, 29, 33–36, 39–44, 153, 161, 169, 175, 220
Technology 1–4, 9, 12, 14, 17, 24, 25, 27–33, 37, 38, 42, 44–46, 54–60, 62, 63, 67, 70–74, 81, 82, 84–89, 100, 103, 105, 106, 108, 113, 114, 116–124, 137, 138, 140–142, 144, 154–162, 175, 177, 178, 183, 186, 205, 217, 219–222, 224–228, 230, 231, 235–238
Training 53, 67–69, 71–73, 91, 94, 102, 127, 140, 143, 163, 175, 176, 179, 208, 212, 218, 225, 228, 237

W
(Writing) task 1, 3, 7, 8, 13–17, 25, 41, 54, 81, 82, 103–105, 114, 115, 135, 139–141, 143, 154, 173, 176, 186, 202, 207, 219, 220, 222
Written corrective feedback (WCF) 24, 25, 28, 35, 38, 41, 42, 44, 167, 218