VOCABULARY IN LANGUAGE TEACHING
SECOND EDITION
NORBERT SCHMITT
DIANE SCHMITT
Vocabulary in Language Teaching Second Edition
Internationally recognized as one of the leading texts in its field, this volume offers a comprehensive introduction to vocabulary for language teachers who would like to know more about the way vocabulary works. Two leading specialists make research and theory accessible, providing the background knowledge necessary for practitioners to make informed choices about vocabulary teaching and testing. This second edition retains the popular format of the first edition, and has been rewritten to take account of the many developments in the past twenty years. There is a greater focus on the vocabulary learning process, with new chapters on incidental learning and intentional learning, and a new wide-ranging discussion of formulaic language. The book now also includes extensive treatment of word lists and vocabulary tests, with explanations of their various strengths and limitations. Updated further reading sections, and new Exercises for Expansion, make this volume more invaluable than ever.

Norbert Schmitt is Professor of Applied Linguistics at the University of Nottingham. He is interested in all aspects of second language vocabulary, and has published 8 books and over 100 journal articles and book chapters on vocabulary and applied linguistics topics.
Diane Schmitt works in the areas of syllabus design, materials development, and test development with emphases on vocabulary acquisition, all aspects of English for Academic Purposes, plagiarism, and English Medium Instruction. She has published articles, book chapters, and textbooks on these topics. She was a Senior Lecturer in EFL/TESOL for twenty-five years at Nottingham Trent University.
Vocabulary in Language Teaching Second Edition
Norbert Schmitt University of Nottingham
Diane Schmitt Nottingham Trent University
CAMBRIDGE UNIVERSITY PRESS
CAMBRIDGE UNIVERSITY PRESS
University Printing House, Cambridge CB2 8BS, United Kingdom
One Liberty Plaza, 20th Floor, New York, NY 10006, USA
477 Williamstown Road, Port Melbourne, VIC 3207, Australia
314-321, 3rd Floor, Plot 3, Splendor Forum, Jasola District Centre, New Delhi - 110025, India
79 Anson Road, #06-04/06, Singapore 079906

Cambridge University Press is part of the University of Cambridge. It furthers the University's mission by disseminating knowledge in the pursuit of education, learning, and research at the highest international levels of excellence.

www.cambridge.org
Information on this title: www.cambridge.org/9781108476829
DOI: 10.1017/9781108569057

First edition © Cambridge University Press 2000
Second edition © Norbert Schmitt and Diane Schmitt 2020

This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published 2000
11th printing 2010
Second edition 2020

Printed in the United Kingdom by TJ International Ltd, Padstow, Cornwall, 2020

A catalogue record for this publication is available from the British Library.

Library of Congress Cataloging-in-Publication Data
Names: Schmitt, Norbert, 1956- author. | Schmitt, Diane, 1963- author.
Title: Vocabulary in language teaching / Norbert Schmitt, University of Nottingham, Diane Schmitt, Nottingham Trent University.
Description: Second edition. | New York, NY : Cambridge University Press, 2020. | Includes bibliographical references and index.
Identifiers: LCCN 2020009726 (print) | LCCN 2020009727 (ebook) | ISBN 9781108476829 (hardback) | ISBN 9781108701600 (paperback) | ISBN 9781108569057 (epub)
Subjects: LCSH: Language and languages-Study and teaching. | Vocabulary.
Classification: LCC P53.9 .S37 2020 (print) | LCC P53.9 (ebook) | DDC 418.0071-dc23 LC record available at https://lccn.loc.gov/2020009726 LC ebook record available at https://lccn.loc.gov/2020009727 ISBN 978-1-108-47682-9 Hardback ISBN 978-1-108-70160-0 Paperback Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.
Norbert: In the first edition (which I wrote just after completing my Ph.D.), I dedicated the book to my mentors. Looking back, I would like to continue that, because without them, my career would have been much less successful: To Ron Carter, Mike McCarthy, Paul Meara, and Paul Nation for taking me under their wings when I was just starting out. This second edition was written toward the end of my career, and I would like to dedicate it looking forward: To my former Ph.D. apprentices who are now publishing exciting research of their own (in chronological order): Tomoko Ishii, Wen-ta (Thomas) Tseng, Faisal Alhomoud, Phil Durrant, Anna Siyanova-Chanturia, Ron Martinez, Ana Pellicer-Sanchez, Suhad Sonbul, Hilde van Zeeland, Kholood Saigh, Marijana Macis, Melodie Garnier, Laura Vilkaite-Lozdiene, Benjamin Kremmel, and Beatriz Gonzalez-Fernandez. Also to other former members of my Vocabulary Research Group: Sam Barclay and Pawel Szudarski. I look forward to seeing how you move the field far beyond what I could ever have imagined, and how your students (my academic grandchildren) develop the next generation of ideas.

Diane: For my parents, Norman and Donna, who supported me in going to Japan so many years ago, where I unexpectedly found my applied linguistics career.
CONTENTS

List of Figures
List of Tables
Preface
Acknowledgments

1 The Nature and Size of Vocabulary
1.1 The Nature of Vocabulary
1.2 Vocabulary Size
1.3 Summary
Exercises for Expansion
Further Reading

2 History of Vocabulary in Language Teaching
2.1 Language Teaching Methodologies through the Ages
2.2 The Vocabulary Control Movement
2.3 Vocabulary Trends in the New Century
2.4 Historical Overview of Vocabulary Testing
2.5 Summary
Exercises for Expansion
Further Reading

3 What Does It Mean to "Know" a Word?
3.1 Frameworks for Conceptualizing Vocabulary Knowledge
3.2 Types of Word Knowledge
3.3 Applications to Teaching
3.4 Summary
Exercises for Expansion
Further Reading

4 Corpus Insights: Frequency and Formulaic Language
4.1 Corpora and Their Development
4.2 Frequency
4.3 Formulaic Language
4.4 Summary
Exercises for Expansion
Further Reading

5 Categories of Vocabulary and Word Lists
5.1 Categories of Vocabulary
5.2 Using Word Lists to Guide Language Teaching
5.3 Word Lists
5.4 Summary
Exercises for Expansion
Further Reading

6 Incidental Vocabulary Learning from Language Exposure
6.1 Child L1 Vocabulary Acquisition
6.2 Incidental L2 Vocabulary Acquisition from Reading
6.3 Incidental L2 Vocabulary Acquisition from Listening
6.4 Incidental L2 Vocabulary Acquisition from Watching Television and Movies
6.5 Incidental L2 Vocabulary Acquisition from Extramural Exposure
6.6 Applications to Teaching
6.7 Summary
Exercises for Expansion
Further Reading

7 Intentional Vocabulary Learning
7.1 The Need for Explicit Vocabulary Instruction
7.2 Issues in Explicit Vocabulary Instruction
7.3 Strategy Use
7.4 Applications to Teaching
7.5 Summary
Exercises for Expansion
Further Reading

8 Vocabulary in the Curriculum
8.1 Vocabulary and Reading
8.2 Vocabulary and Listening
8.3 Vocabulary and Speaking
8.4 Vocabulary and Writing
8.5 Vocabulary in the Wider Curriculum
8.6 Conclusion
8.7 Summary
Exercises for Expansion
Further Reading

9 Assessing Vocabulary Knowledge
9.1 Why Do You Want to Test?
9.2 What Words Do You Want to Test?
9.3 What Aspects of These Words Do You Want to Test?
9.4 How Will You Elicit Students' Knowledge of These Words?
9.5 Examples of Current Vocabulary Test Development
9.6 Applications to Teaching
9.7 Summary
Exercises for Expansion
Further Reading

Appendix A Frequency of Selected Words/Phrases in COCA and BNC
Appendix B Concordance for made it plain
Appendix C Missing Words from the Reading Passages
Appendix D Example of the Six-T's Approach (Stoller & Grabe, 1997)
Appendix E Lexical Cohesion
References
Index
FIGURES

3.1 Semantic features of cat
4.1 Coverage provided by all lemmas vs. coverage provided by content lemmas only across the frequency continuum
4.2 Three-part categorization of vocabulary frequency
4.3 Concordances of cause and provide from the Cambridge International Corpus
4.4 The Frequency-Transparency Framework (FTF)
4.5 Definition of bank, Longman Dictionary of Contemporary English, 5th edn. (2009)
4.6 Definition of bank, Macmillan English Dictionary for Advanced Learners, 2nd edn. (2007)
4.7 Definition of bank, Cambridge Advanced Learner's Dictionary (2003)
5.1 Lexical growth curve for the psychology corpus (not including proper nouns)
7.1 Typical pattern of forgetting
7.2 Pattern of forgetting with expanding rehearsal
7.3 A simplified version of Tseng and Schmitt's (2008) Model of Vocabulary Acquisition
8.1 Lexical cohesion in conversation
TABLES

1.1 Counting units for persist
1.2 English vocabulary size of foreign learners
3.1 What is involved in knowing a word (types of word knowledge)
3.2 Framework for defining recognition and recall knowledge
3.3 Sense relations
4.1 The most frequent general, spoken, and automotive words
5.1 The comparison of the new-GSL with the GSL and the AWL
7.1 Intralexical factors that affect vocabulary learning
7.2 Top ten vocabulary strategies of L2 English learners
9.1 Comparison of three types of writing
PREFACE
This book is for language teachers and other people who would like to know more about the way vocabulary works. It attempts to provide the background knowledge necessary for practitioners to make informed choices about vocabulary teaching and testing. In most chapters, key ideas are first discussed, and then the pedagogical implications of those ideas are explicitly stated in an Applications to Teaching section. Thus, the overall theme of the book is making research and theory accessible enough to be of use in informing best classroom practice. As such, we have written this book to be much more than a "how-to-do-it" manual. By the time you finish it, you should be aware of the major issues in the field and be equipped to read more-advanced writings on them if you so wish. To encourage this, we have included a Further Reading section in each chapter which suggests appropriate follow-up readings. We have also included a relatively large bibliography to provide leads for further exploration of issues.

Chapters 1, 2, and 3 provide some linguistic and historical background. In particular, Chapter 3 is the "heart" of the book, describing the various kinds of knowledge a person can have about words and phrases, which informs all issues in the book. Chapter 4 continues this, and pays special attention to formulaic language, which has now been shown to be a major component of vocabulary knowledge. Chapter 5 describes the various types of vocabulary and the word lists which identify them. Chapters 6-9 move to a more pedagogical focus. Chapters 6 and 7 discuss how vocabulary can be learned both incidentally from exposure, and intentionally from study. Chapter 8 focuses on embedding vocabulary in the curriculum and in the teaching of the four skills (reading, listening, writing, and speaking), while Chapter 9 discusses how to assess vocabulary knowledge and learning.
There are Exercises for Expansion at the end of each chapter, which are designed to help you consider some of the key issues in more depth. As their purpose is to help you formulate your own views stemming from an interaction of the information in this book and your own experience, there are generally no "right" or "wrong" answers, and thus only a few exercises have an answer key. The value of the exercises comes from developing answers that make sense for you. We have tried not to assume any prior knowledge about lexical issues in this book, but do assume you will have some general linguistic background. For example, we assume you know what nouns and affixes are. Without this
assumption, the book would become too cluttered with basic definitions to be coherent. Important terms concerning vocabulary are printed in bold and are defined or described in the surrounding text. At all times, we have tried to make the text as direct and accessible as possible. Vocabulary is a big topic area, and a number of perspectives are possible. A point worth remembering when reading this book is that the material contained within is not totally unbiased, and it reflects our personal experience and research. We have tried to present an account of the field that is as broad and balanced as is possible under length constraints, but accept responsibility if our perspective highlights issues other than those you would have chosen.

Norbert Schmitt
Diane Schmitt
ACKNOWLEDGMENTS
It is difficult to decide on who to acknowledge in a book like this, for so many people have influenced our thinking about language and linguistics in general, and vocabulary in particular. Special thanks go to friends who have indulged us over the years in long discussions about vocabulary, which have sharpened our understanding. In this, Norbert has been particularly inspired by his former Ph.D. students listed in the dedication, and much of the discussion in this book has been informed by their research. The editorial team at Cambridge University Press has been friendly, helpful, and efficient throughout the publishing process, which makes writing much more enjoyable. Thanks to Rebecca Taylor, Isabel Collins, Rachel Norridge, and Sue Browning for that. We would also like to thank Julia Dahm for permission to use her fabulous Bauhaus print as our cover art.
The Nature and Size of Vocabulary
• Are there any patterns among the many thousands of words in a language?
• How should we count vocabulary?
• How much vocabulary do L1 English speakers know? Second language learners?
• How much vocabulary does it take to operate in English?
The White Rabbit put on his spectacles. "Where shall I begin, please your Majesty?" he asked. "Begin at the beginning," the King said, very gravely, "and go on till you come to the end: then stop."
Lewis Carroll, Alice's Adventures in Wonderland, p. 106

The advice given in the above quote from Alice in Wonderland seems to be appropriate for an introductory text, so to start at the beginning we must consider what we mean by vocabulary. The first idea which probably springs to mind is words. Every language is made up of words, and they come in an amazing variety. Some are short (a), some are long (antidisestablishmentarianism), some seem old (anon), and some have just entered the language (bling: expensive, ostentatious clothing or jewelry). Some have a single, straightforward meaning (quarantine), some carry several different meanings (bank: financial institution, riverside, a row of things, turning an airplane), and some convey a load of positive/negative connotations (aroma/stench). Some are common (sky, eat) and some are hardly ever used (punctilious).

This chapter will discuss the nature of vocabulary, and will show that there are useful patterns in the sea of words. If the patterns are understood, it can make teaching and learning vocabulary much more systematic and successful. The chapter will also discuss how much vocabulary there is in language (vocabulary size), and how much must be known in order to be functional in English.
1.1 The Nature of Vocabulary

1.1.1 The Connection between Meaning and Form

At its most basic, vocabulary connects the real world with language. That is, it connects meaning which comes from life experience (the color of leaves, the way
you feel when someone insults you) with linguistic form (i.e., words) which represents those meanings (green, angry). However, it would be a mistake to think that meanings are always connected to just single word forms. To illustrate this, let us consider the following items:

die
expire
pass away
bite the dust
kick the bucket
give up the ghost

The six examples are synonymous, with the meaning "to die." However, they are made up of from one to four words. Die and expire are single words, with die being by far the more common. Pass away could probably best be described as a phrasal verb, and the last three are idioms. (An idiom is a string of words which taken together has a different meaning than the individual component words. Similarly, a phrasal verb is made up of a verb plus one or more other words, which usually has an idiosyncratic meaning compared to the component words.) Thus, it is clear that there is not necessarily a one-to-one correspondence between a meaning and a single word. Very often, in English at least, meanings are represented by strings of multiple words.

Conversely, a single word form often represents several meanings. If the meanings are related, this is called polysemy (e.g., a chip of wood, a computer chip, a gambling chip [all have a small thin shape]). If the meanings are considered unrelated, this is called homonymy (e.g., grave means a place for burial, serious problems or situations, and an accent mark (à)).

Multi-word units are very common. This book will use formulaic language as a cover term for all of the types of multi-word unit, and each individual unit will be called a formulaic sequence. Because the extent of formulaic language only became clear once corpus evidence was available, it will mainly be discussed in Chapter 4 - Corpus Insights: Frequency and Formulaic Language.
If we wish to refer to vocabulary which includes both single words and formulaic sequences, the terms lexeme or lexical item can be used: "an item that functions as a single meaning unit, regardless of the number of words it contains." However, to enhance the accessibility of this book, we will use the term word unless more precise terminology is required to make a point.
1.1.2 Content Words and Function Words

The meaning-form connection is a key aspect of most vocabulary, whether individual words or formulaic sequences. But not all, as some vocabulary performs grammatical functions. For example, articles (a, an, the) show whether previous information has already been mentioned or not (among other functions), and prepositions show relationships (under, by). These words are called function words (or grammatical words), and they "knit together" the content words in a sentence. They are necessary regardless of the topic being discussed. The following two sentences concern very different topics, and have a different level of formality, with the first one being written and the second one spoken. Regardless, both include many function words (underlined):

Stephen Hawking described the motion of planets moving around the sun and how gravity exerts force between bodies in space.

I don't want to put you on the spot, Sam, but is this really a good idea?

Being necessary for all topics, function words are extremely frequent, and make up a large percentage of any spoken or written discourse, as we will see in Chapter 4. (See a webpage by Vivian Cook for one listing of function words: www.viviancook.uk/Words/StructureWordsList.htm.)
1.1.3 The Meaning Relationships between Words

While function words are a closed set (~150-300 words), the number of content words is huge, with new words and phrases being added all the time. For example, phast (a "phone fast": a period of time during which someone chooses not to use their smartphone), knitflixing (the activity of knitting and watching a TV program on Netflix at the same time), and jackpotting (the crime of hacking into a cash machine in order to obtain money) were new words noted by the Cambridge Dictionary blog in 2018 (https://dictionaryblog.cambridge.org/category/newwords/). But all of the numerous content words in a language are not completely independent of each other. Rather, many have various semantic (meaning-based: also termed paradigmatic) relationships with each other. The following are among the most common semantic relationships:

Synonyms are words which have approximately the same meaning (new/fresh, beautiful/handsome). It is argued that there are no fully interchangeable synonyms, as every pair would have some slight meaning or collocational difference (Paradis, 2012). For example, new can be used to describe virtually anything that is innovative or which replaces something (car, idea, year), but fresh is largely connected to air or food (bread, vegetables). Beautiful and handsome both mean "pleasing to the eye" when describing people, but beautiful is preferred for women, and handsome for men.

Antonyms are words which have approximately opposite meanings (hot/cold, expensive/inexpensive). There are two kinds of antonymy: ungraded and graded. Ungraded antonyms are exclusively opposite, either one or the other (dead/alive, pass/fail). Graded antonyms convey oppositeness on a continuum (hot/warm/cool/cold).

Hyponyms are sets of words that have a hierarchical relationship from more general to more specific (vehicle/car/Audi, fruit/apple/crab apple). The more general or inclusive word is a superordinate (vehicle/car), the more specific word is a subordinate (vehicle/car), and words at the same level of generality are coordinates (car/bus/truck).

Meronyms have a whole-part relationship (bicycle - wheels, handle, seat).

These categories are important because words do not exist in isolation, and for many, their meanings are defined in relation to other words (Carter, 1998). For example, dead can only be defined as "no longer being alive." Thus, knowledge of these categories can be useful when trying to explain meaning in instructional contexts (Chapters 3, 7, and 8).
1.1.4 The Sequential Relationships between Words

Words also relate to each other in sequential ways (sometimes called syntagmatic relationships). Although it is possible to combine words in a virtually unlimited range of ways, in day-to-day usage people prefer their language to be more predictable and thus easy to comprehend and produce. Sinclair (1991) was one of the first scholars to discuss these two approaches to language, making the distinction between the open-choice principle and the idiom principle. The open-choice principle conveys the idea that language is creative, and in most instances, there is a wide variety of possible words which could be put into any grammatical "slot." For example, if we wished to express the idea of a torrential downpour, the adjective slot in the sentence The ___ rain caused widespread flooding could potentially be filled with several synonyms (e.g., heavy, strong, powerful, forceful, dense). This is the traditional way of viewing language, and Sinclair stated that "virtually all grammars are constructed on the open-choice principle" (p. 110). However, complementary to this freedom of choice, he observed that language also has a systematicity which constrains vocabulary choice in discourse; constraints which the open-choice principle does not capture. To some extent this systematicity merely reflects real-world phenomena: fishing is often done in close physical proximity to a lake, so the words expressing these concepts will naturally co-occur as well. But much of the systematicity is strictly linguistic: There is no reason why we do not say *strong rain or *powerful rain, but proficient members of the English-speaking speech community know that the appropriate phrase is heavy rain (an asterisk before a word indicates that it is inappropriate, ungrammatical, or otherwise nonstandard). The idiom principle highlights the fact that there are patterns in how words co-occur with each other.
In the above discussion of formulaic language, we saw how the sequential patterning resulted in various kinds of formulaic sequences with their own meanings, e.g., idioms and phrasal verbs. But the sequential patterning also extends to word pairs (or sometimes three words) which tend to co-occur in discourse. This "word partnership" is called collocation. J. R. Firth first brought this notion to prominence in 1957, and it has become increasingly important since. It refers to
the fact that some pairings are preferred in language use and sound "natural" (cause pain, inflict pain), while other possibilities which would convey the same meaning are not typically used, and just sound wrong (*produce pain, *create pain). There are two factors that are key to the notion of collocation. The first is that words co-occur together and the second is that these relationships have varying degrees of exclusivity. A commonly given example of collocation involves the word blonde. Blonde occurs almost exclusively with the word hair and a few other animate nouns like woman or lady. But it never occurs with words like paint or wallpaper, even though there is no reason semantically why they should not fit together. Because blonde has such an exclusive relationship with hair, they are said to collocate strongly. Most words do not collocate this strongly, however. Sometimes the collocation can be much weaker, as in the case of the word nice. This commonly occurs with almost any noun which one would want to associate with pleasantness, such as a nice view, nice car, or nice salary. These combinations could be said to collocate weakly. Some words combine so indiscriminately that there is not enough exclusivity to warrant the notion of collocation. An example is the word the, which co-occurs with virtually every non-proper noun. So to be considered a collocation, words must co-occur in discourse, but there must also be an element of exclusiveness.

Most authors who discuss collocation agree that there are two basic kinds of collocations: grammatical/syntactic collocations and semantic/lexical collocations (e.g., Benson, 1985; Biskup, 1992; Bahns, 1993). Grammatical collocations are the type in which a dominant word "fits together" with a function word, typically a noun, verb, or adjective followed by a preposition. Examples are abide by, access to, and acquainted with. Lexical collocations, on the other hand, normally consist of combinations of two basically "equal" words such as noun + verb (ball bounces), verb + noun (spend money), and adjective + noun (chilly night), in which both words contribute to the meaning (Benson, 1985). In addition to these two basic collocational categories, Allerton (1984) proposes a third, consisting of collocations that are not based on grammatical or semantic patterning. The relatively arbitrary prepositions attached to time fit in this category, since there does not seem to be any logical reason why we should say at six o'clock but on Monday.
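The two criteria just described, co-occurrence and exclusivity, are exactly what corpus linguists try to quantify with statistical association measures. As an illustration only, here is a minimal Python sketch using pointwise mutual information (PMI), one common such measure; the adjective-noun pair counts below are invented toy data, not drawn from any real corpus:

```python
import math
from collections import Counter

# Invented adjective+noun pair counts (toy data for illustration only).
pairs = (
    [("blonde", "hair")] * 50 + [("blonde", "woman")] * 5 +
    [("nice", "view")] * 20 + [("nice", "car")] * 20 +
    [("nice", "salary")] * 10 + [("great", "view")] * 30 +
    [("fast", "car")] * 30 + [("big", "salary")] * 20 +
    [("heavy", "rain")] * 40
)

pair_counts = Counter(pairs)
adj_counts = Counter(adj for adj, _ in pairs)
noun_counts = Counter(noun for _, noun in pairs)
n = len(pairs)

def pmi(adj, noun):
    """Pointwise mutual information: how much more often the pair occurs
    than chance, given each word's individual frequency. High PMI means
    the words keep exclusive company (like blonde + hair)."""
    p_pair = pair_counts[(adj, noun)] / n
    return math.log2(p_pair / ((adj_counts[adj] / n) * (noun_counts[noun] / n)))

# blonde+hair is exclusive (strong collocation); nice+view is not,
# because both words also combine freely with other partners.
print(round(pmi("blonde", "hair"), 2))  # higher score
print(round(pmi("nice", "view"), 2))    # lower score
```

On real corpus data, the same calculation separates strong collocations like blonde hair from weak ones like nice view, because PMI rewards pairs that occur together more often than their individual frequencies would predict.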
1.1.5 Grammatical and Morphological Relationships between Words

Words are also related through word forms which reflect grammatical and morphological relationships. Walk, walked, walking, and walks are closely related, consisting of the simplest verb form walk (base, root, or stem form), and its grammatical inflections walked, walking, and walks (base form + grammatical suffixes). Similarly, the noun base walk has the plural inflection walks. Inflected forms are created according to regular and transparent grammatical rules, which do
not change a word's meaning. If we package a base word and its inflections together, we call this unit a lemma. Often, we also want to use a word in contexts that require a different word class, for example, persist:

Noun: The judge changed his mind because of the lawyer's persistence.
Verb: The lawyer persisted until the judge changed his mind.
Adjective: The persistent lawyer persuaded the judge to change his mind.
Adverb: The lawyer argued persistently.
In these cases, the meaning does not change, but the word class (part of speech) does. These word-class variations are called derivations. If we package the base word, its inflections, and its derivatives together, the unit is called a word family. There is clearly a relationship between all members of lemmas and word families, but the choice of which package to use will depend on pedagogical purpose and the proficiency of the learner.
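The two counting units can be sketched computationally. The Python fragment below (our own toy groupings for walk and persist, not taken from any published list) shows how the same surface forms map to different "packages" depending on whether inflections only (lemma) or inflections plus derivations (word family) are bundled together:

```python
# Toy groupings illustrating the lemma vs. word-family distinction.
# A lemma packages a base form with its grammatical inflections only;
# a word family additionally packages derivations across word classes.

def build_index(packages):
    """Map every member form back to the base form of its package."""
    return {form: base for base, forms in packages.items() for form in forms}

lemmas = {
    "walk": ["walk", "walks", "walked", "walking"],
    "persist": ["persist", "persists", "persisted", "persisting"],
}

families = {
    "persist": ["persist", "persists", "persisted", "persisting",
                "persistence", "persistent", "persistently"],
}

lemma_index = build_index(lemmas)
family_index = build_index(families)

# A derivation like "persistently" belongs to the word family of
# "persist" but falls outside its lemma.
print("persistently" in lemma_index)   # False
print(family_index["persistently"])    # persist
```

As the surrounding discussion notes, which package is appropriate depends on pedagogical purpose and the proficiency of the learner.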
1.1.6 Frequency: How Commonly Is a Word Used?

One does not need to know much about a language before realizing that some words occur more often than others. For instance, almost everyone with much exposure to English would have the intuition that weak is much more common than puny. Weak occurs frequently in many different contexts (weak economy, weak knees, weak arguments), and thus is much more frequent than puny, which is used mainly to refer to people and body parts (puny arms, puny muscles). So, some words are more frequent than others. But vocabulary frequency does not follow a simple, linearly decreasing curve as frequency steadily tapers off. Rather, it follows what is called Zipf's Law, a pattern where a relatively small number of high-frequency items are extremely frequent, but then frequency drops off exponentially, with the vast majority of items becoming relatively rare quite quickly. This is good news for language learners, because it means that a large percentage of language is made up of a relatively small set of words. In English, the top 10 words make up about 20 percent, the top 50 words 35 percent, and the top 100 words 41 percent of all words in a typical written text. The top 2,000 words usually make up around 80 percent of typical English texts (Nation & Waring, 1997). Considering that the vocabulary of the English language is very large (see below), we find that a relative handful of words contribute the bulk of the vocabulary that a person will come across when reading and listening, while the others occur rather infrequently. While people have intuitions about the frequency of vocabulary, they are best at differentiating between very frequent and very infrequent words (Schmitt & Dunham, 1999; Alderson, 2007; McCrostie, 2007). We can get much more robust and fine-grained results about frequency from counting how many times words occur in large corpora (language databases). For instance, weak occurs 19,839 times in
the 560-million-word Corpus of Contemporary American English (COCA) (Davies, 2008-), while puny only occurs 753 times. Synonyms fragile, frail, and flimsy occur 7,036, 2,366, and 1,369 times respectively (on October 13, 2018). We will discuss frequency and its pedagogical ramifications in much more detail in Chapter 4.
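Coverage figures like these are straightforward to check on any text: count the tokens, rank the word types by frequency, and sum the share of the running text that the top-ranked types account for. A minimal Python sketch (the sample passage is our own; published coverage figures come from multi-million-word corpora):

```python
import re
from collections import Counter

# Any running text will do; real coverage studies use large corpora.
text = """The rain caused widespread flooding. The rain was heavy, and the
flooding damaged many homes. People said the heavy rain was the worst in years."""

tokens = re.findall(r"[a-z]+", text.lower())
counts = Counter(tokens)

def coverage(top_n):
    """Percentage of all tokens accounted for by the top_n most frequent types."""
    return 100 * sum(freq for _, freq in counts.most_common(top_n)) / len(tokens)

# Even in this tiny sample, a handful of high-frequency items (led by the
# function word "the") cover a disproportionate share of the running text,
# the skewed pattern that Zipf's Law describes at corpus scale.
for n in (1, 5, 10):
    print(n, round(coverage(n), 1))
```

Run over a large English corpus, the same computation yields figures close to those cited above (the top 10 words covering roughly 20 percent of tokens).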
1.1.7 General and Specialized Vocabulary

Another way of finding patterning in the many words in a language is to consider how they are used and what topics they cover. In an average day, a person will encounter vocabulary across a range of domains (e.g., various topics when reading the newspaper, the vocabulary for their particular occupation, business vocabulary when doing taxes or paying bills, and fairy tales when reading to their children at night). The categories that have been found useful in describing this diverse range of vocabulary include "general vocabulary," "academic vocabulary," and "technical vocabulary."

General vocabulary is the term that is used to describe vocabulary that is useful across a wide range of topics and contexts, in both speech and writing. This consists of high-frequency vocabulary, which is very frequent precisely because it occurs regularly across a wide range of contexts. It is impossible to say exactly how much general vocabulary there is, as words gradually become less common and occur in fewer topics as frequency decreases. That is, there is no obvious boundary where general vocabulary stops and thereafter all words are more specialized. For a long time, the definition of general vocabulary has been synonymous with the General Service List (GSL), a list developed by West (1953) (see Chapter 2) which contains about 2,000 headwords. More recently, Gardner (2013) offered lists of core vocabulary totalling 2,857 words and Davies and Gardner (2010) produced a dictionary of the 5,000 most frequently used words in American English. The 5,000 figure chimes with our experience that at around the 5,000 frequency level, vocabulary use is better indicated by topic or domain than by frequency.

Academic vocabulary is vocabulary which is particularly useful for engaging with academic contexts, particularly reading academic texts.
It is not topic-specific but rather serves to provide a greater level of precision which contributes to the academic tone of academic texts and speech. Compare the following two sentences:

A. The company changed its marketing ideas to try to make more money.
B. The company modified its marketing strategy to try to increase revenue.

Sentence A uses only high-frequency words and is easy to understand, but words like change, ideas, and money have many possible meanings, so the sentence lacks the precision required in academic texts. The meaning of Sentence B is nearly the same, but the words modified, strategy, and revenue are more precise. For example, modify means to make small changes to something to improve it, strategy means a planned series of
actions, and revenue is not just any money, but money which is earned from doing business or from collecting taxes. Academic vocabulary also represents academic activities (to define, investigate, hypothesize) and signals rhetorical functions (e.g., furthermore, nevertheless, whereas) in texts. Technical vocabulary is the jargon that is specific to particular domains (e.g., business, medicine, chemistry) and that represents the concepts and ideas specific to those domains (ledger, scalpel, catalyst). Technical vocabulary is crucial to understanding particular domains, because many of the key concepts are represented by this vocabulary. In this book, some of the technical vocabulary you have been exposed to includes lexeme, collocation, and meronym, which are unlikely to occur very prominently in other domains like chemistry or politics. General vocabulary is important for all language use and so will need to be prioritized as part of teaching. Specialized vocabulary, whether academic or technical, may be important for learners, depending on which purposes they wish to use language for. There will be more discussion on identifying and teaching these categories of vocabulary in Chapter 5: Categories of Vocabulary and Word Lists.
1.2 Vocabulary Size Languages contain a lot of words. They have enough words to represent all of the things and concepts that a culture wants to talk about in the world, ranging from remembering last weekend's family get-together to musing about the origin of the universe. Most languages have vocabularies reaching into the hundreds of thousands. English is commonly believed to have the largest of all, largely because it has freely borrowed from the many other languages it came into contact with during the years of the far-flung British empire, and then later American influence.
1.2.1 How Many Words Are There in English? Reports of the size of the English language in the popular press range widely: from 400,000 to 600,000 words (Claiborne, 1983, p. 5), from half a million to over 2 million (Crystal, 1988, p. 32), about 1 million (Nurnberg & Rosenblum, 1977, p. 11), and 200,000 words in common use, although adding technical and scientific terms would stretch the total into the millions (Bryson, 1990). The largest English dictionary, the Oxford English Dictionary, claims to include more than 600,000 words, although many of these are historical and no longer in general usage. The discrepancy in size estimates is due largely to differing definitions of a word, and so a study attempted to produce a more reliable estimate by using word families instead of words as the unit of counting. Goulden, Nation, and Read (1990) counted the number of word families in Webster's Third International Dictionary (1963), which was one of the largest non-historical dictionaries of English at the
time. After excluding entries like proper names and alternative spellings, Goulden et al. found that the dictionary contained about 54,000 word families. Dictionaries obviously cannot contain every current word family, but they were the best resource available before the widespread use of large corpora, and therefore early estimates of the number of words in a language were usually based on them. Brysbaert et al. (2016) describe a more recent statistical analysis of a trillion-word English corpus. They calculated that it contained about 10 million different individual words. Ultimately, it is impossible to say precisely how many words there are in English, as new words are always being added and old words are falling out of usage. However, what we can take from both dictionary and corpus approaches is that English clearly has a huge vocabulary (remembering that each word family can contain numerous individual words, as in the persist example below).
1.2.2 Counting Unit: Individual Word, Lemma, and Word Family It may seem that counting words is easy, but we have seen that estimates of vocabulary size depend on how a "word" is defined, and what counting unit is used. Ideally, we would count meanings rather than word forms (i.e., words, lemmas, word families), but this would entail a manual analysis, as computer software is currently limited in how well it can discern meaning. Due to the large number of words in a language, we usually need computers to do the counting automatically for us. This limits us to counting word forms, which computers are very good at. To illustrate this, let us take the example of persist, persisted, persisting, persists, persistence, persistent, and persistently. We could count these as seven different individual words. However, we could also use a lemma counting unit, where the base word persist and its inflections persisted, persisting, and persists are packaged together as a single unit. The other derivative forms would count as their own lemmas. But there is also a close semantic relationship between all of the words, so it may make sense to package them together (base + inflections + derivatives) and count them as one word family (Table 1.1).

TABLE 1.1 COUNTING UNITS FOR PERSIST

Counting unit   | Words                                                                                      | Number of units
Individual word | persist; persisted; persisting; persists; persistence; persistent; persistently           | 7
Lemma           | persist (includes persisted, persisting, persists); persistence; persistent; persistently | 4
Word family     | persist (includes all seven words)                                                         | 1
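The three counting units can be made concrete with a short sketch. Note that the lemma and family groupings below are hard-coded purely for illustration; real counting relies on lemmatizers and published family lists rather than hand-built mappings.

```python
# Counting the seven "persist" forms under three counting units.
# The lemma_of and family_of mappings are hand-coded for this one
# example; automated counting would derive them from morphological tools.

words = ["persist", "persisted", "persisting", "persists",
         "persistence", "persistent", "persistently"]

# Lemma unit: a base word plus its inflections counts as one unit.
lemma_of = {"persisted": "persist", "persisting": "persist", "persists": "persist"}

# Word-family unit: base + inflections + derivatives count as one unit.
family_of = {w: "persist" for w in words}

individual_words = len(set(words))                 # each form counted separately
lemmas = len({lemma_of.get(w, w) for w in words})  # inflections collapse onto persist
families = len({family_of.get(w, w) for w in words})

print(individual_words, lemmas, families)  # 7 4 1
```

The same text therefore yields three quite different totals, which is why size estimates must always report their counting unit.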
So how do we decide which counting unit to use? The most principled way of deciding is based on how well people can perceive the relationships between the various word forms, but this will differ depending on the language user in question. Nation (2016) argues that word families make sense for L1 English speakers, as they are likely to have a reasonable grasp of morphology and therefore will recognize the various members of a family as related. He also believes that word families might be appropriate for second language learners for the receptive skills of reading and listening, arguing that if learners know one family member (persist), they should be able to recognize or work out unfamiliar derivatives as semantically related words when they are encountered in a meaningful context (e.g., persistence, persistent). However, some evidence shows that this may be overly optimistic (especially for beginners), as recognizing derivative word forms is often more difficult than Nation has proposed (McLean, 2018). In terms of productive ability, almost all scholars agree that even at advanced levels, learners do not reliably know all of the various word-family members and so will not necessarily be able to use the appropriate derivative forms when required in speaking and writing (e.g., Schmitt & Zimmerman, 2002). However, learners appear to be better able to produce the inflected forms (McLean, 2018), presumably because they are based on rules which generally operate in a regular and consistent manner, e.g., the progressive form of a verb involves -ing. The upshot is that there is probably no single counting unit that is best in all circumstances. The choice will depend on the proficiency of the learner and whether the focus of use/learning is receptive or productive ability. Few researchers use the "individual word" unit, as learners typically exhibit some knowledge of relationships between the words.
A number of researchers support the use of the lemma counting unit (e.g., Gardner, 2013; Kremmel, 2016; Schmitt, 2010). The argument is that this more conservative unit is less likely to overestimate learners' knowledge. This reflects the evidence that learners are more likely to know and use the inflectional morphemes that make up the lemma than they are to know the derivational morphology needed to make up a word family. An increasing amount of research uses lemmas when dealing with second language learners (e.g., Brezina & Gablasova, 2015; Gardner & Davies, 2014; Kremmel, 2016). Despite the apparent movement of the field toward using lemmas as the most common counting unit, most vocabulary size research to date has been denominated in word families (e.g., Nation, 2006). Therefore, the vocabulary size literature at the time of writing this book is denominated in several different units, and so it is important to carefully note the counting unit when considering research which reports vocabulary size, as the reported figures will always vary to some extent depending on the counting unit chosen.
1.2.3 How Many Words Do L1 English Speakers Know? Estimates of the vocabularies of L1 English speakers provide an idea of what the vocabulary size of lexically proficient users might be. Most studies have estimated that L1 English speakers (usually university students) have a vocabulary size of between 10,000 and 20,000 word families. Goulden et al. (1990) and D'Anna,
Zechmeister and Hall (1991) found mean scores of about 17,000 families, albeit with very small numbers of participants (20 and 62 respectively). These studies used a Yes/No test format, where participants merely reported whether they thought they knew the words on the test or not, without any demonstration of that knowledge. Treffers-Daller and Milton (2013) studied more participants (179 undergraduate students), and required them to provide a synonym or explanation for the words known. They found lower average sizes, around 10,000-11,000 word families. Brysbaert et al. (2016) tested a much wider range of participants (221,268 individuals) using a Yes/No format and found that the median score of 20-year-olds was 42,000 lemmas (≈ 11,100 word families) and that of 60-year-olds was 48,200 lemmas (≈ 13,400 families). On balance, it seems that the average educated L1 English speaker knows broadly 10,000 to 13,000 word families, based on studies that had controls in place to keep participants from overestimating their knowledge. However, we must remember that the tests used did not measure how well the participants could actually use the words they indicated as known, and so we must view these size figures with some caution. (See Chapter 9 for more on the strengths and limitations of vocabulary tests.) Let us put the scope of learning this much vocabulary into perspective. Imagine learning 10,000 to 13,000 telephone numbers. For each of these numbers you must remember the person and address connected with those numbers. This might be somewhat analogous to learning all of the various kinds of lexical knowledge attached to each word. Then, because these are word families and not single words, you would have to learn not only the single number, but also the home, work, and mobile variants. Of course, vocabulary and phone numbers are not directly comparable, but the example does indicate the magnitude of achievement in learning such a large vocabulary.
1.2.4 How Many Words Does It Take to Function in English? We have seen that L1 English speakers have a vocabulary size of about 10,000 to 13,000 word families. Luckily, L2 learners of English do not need to have L1-like vocabulary sizes to be proficient in English. Zipf's Law indicates that the most important and useful vocabulary is a relatively limited set of the most frequent words, and that knowing this higher-frequency vocabulary will allow a person to do much in a language, even if lower-frequency words are not known. The question of how many frequent words a person needs to know to use English will depend on what a person wishes to do. Many people will simply wish to be able to participate in daily communication about everyday topics (i.e., chat). Van Zeeland and Schmitt (2013a) found that their participants were able to understand most of everyday spoken narratives if they knew 95 percent of the words in those narratives. Looking at spoken corpora, they found that 2,000-3,000 word families were enough to reach this 95 percent lexical coverage. Thus, 2,000-3,000 families is a reasonable estimate of the lexical requirements for understanding daily discourse.
However, if a person wishes to comprehend speech across a wider range of genres (e.g., news programs, movies, and lectures) and a wide range of topics (e.g., politics, science, sports), a wider range of vocabulary is obviously required. For more complex topics, knowing 98 percent of the words in a text is more likely to ensure comprehension. Nation (2006) calculated that 6,000-7,000 word families (+ proper nouns) must be known to reach this percentage. (See Chapter 6 for more on the percentage of words needed for comprehension.) In terms of speaking, things get trickier to calculate. For listening, people need to be able to comprehend whatever speech comes their way, and we can identify what vocabulary this typically consists of by looking at spoken corpora. But when speaking, people are in control of their own language, and can adjust it to their own strengths and levels of proficiency. If they do not know a word, they can paraphrase, use gestures, or change topics to compensate for this lack of vocabulary. Some people with smaller vocabularies can use the words they do know very well and thus function at a higher level than might be expected. The upshot is that it is very difficult to set out the vocabulary size requirements necessary to speak competently, as these will depend on the domain of use and interact with each person's speaking strategies. We find the same problem with estimating the vocabulary necessary for writing. The best we can do at the moment (until further research on this subject is carried out) is to assume that the size targets necessary for the receptive skills (listening, reading) will also allow competency in the productive ones (speaking, writing). Most people will also wish to read, and here we find that the vocabulary requirements are higher than for listening. This is mainly because written discourse is denser and uses a wider range of vocabulary than spoken discourse (McCarthy & Carter, 1997). 
Laufer and Ravenhorst-Kalovski (2010) carried out one of the most extensive studies of lexical reading requirements to date and identified two vocabulary thresholds. The first relates to the ability to understand authentic English texts adequately with some support (e.g., teachers or resources like dictionaries). They called this the minimal threshold, and it requires 95 percent text coverage. This translates to a vocabulary size requirement of 4,000 to 5,000 word families. The second relates to the ability to read independently. They called this the optimal threshold, and it requires 98 percent coverage. This translates to knowing 8,000 word families. This study supports Nation's (2006) conclusion that wide reading requires knowing around 8,000-9,000 word families. Language users might also wish to watch movies and television in English, and there are now some studies looking at how much vocabulary this requires. Webb and Rodgers (2009a) analyzed the scripts of 318 movies (2.8 million words - 601 hours) and found that it took the most frequent 6,000 word families (+ proper nouns) to achieve 98.1 percent coverage, although this depended on the genre of the movie (e.g., animation, drama, horror). Similarly, Webb and Rodgers (2009b) investigated 88 television programs (264,384 words - 35 hours) and found that 7,000 families (+ proper nouns) gave 98.3 percent coverage, although again this
varied greatly according to episode. These two studies suggest that the amount of vocabulary necessary to watch movies and TV is in line with Nation's (2006) calculations of 6,000-7,000 word families for listening widely. (See Rodgers & Webb, 2011, for more on vocabulary and television viewing.)
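The coverage percentages quoted in this section all rest on one simple calculation: the proportion of a text's running words (tokens) that fall within the most frequent N items of a frequency-ranked reference list. A minimal sketch follows, using an invented mini-list and mini-text, and treating each distinct word form as its own item rather than grouping families, purely to keep the illustration short.

```python
# Sketch of a lexical-coverage calculation. The ranked list and text
# below are invented for illustration; studies like Nation (2006) use
# corpus-derived word-family lists and full-length texts.

def coverage(tokens, ranked_list, n):
    """Percentage of tokens covered by the top-n items of ranked_list."""
    known = set(ranked_list[:n])
    covered = sum(1 for t in tokens if t in known)
    return 100 * covered / len(tokens)

ranked = ["the", "a", "dog", "cat", "ran", "sat", "quickly", "obstreperous"]
text = "the dog ran quickly and the cat sat".split()

print(round(coverage(text, ranked, 6), 1))  # 6 of 8 tokens known -> 75.0
```

Raising n adds progressively rarer words to the known set, which is why each extra percentage point of coverage (say, from 95 to 98 percent) demands disproportionately many additional word families.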
1.2.5 How Many Words Do Second Language Learners of English Typically Know? The previous section suggested that it takes at least 2,000 to 3,000 word families for learners to understand everyday conversation in English. If they wish to engage with a wider range of listening contexts or with authentic reading, the lexical requirements are closer to 6,000 to 9,000 families. The upshot is that learners must learn a very large number of lexical items to be able to operate in English, especially considering that the above figures do not take into account the multitude of formulaic sequences which have been shown to be extremely widespread in language use. Learning such a large number of lexical items is one of the greatest challenges facing learners in acquiring English. Moreover, it is one which a great many learners fail to meet successfully, as the vocabulary sizes of learners reported in research studies typically fall well short of these size requirements (Table 1.2).
TABLE 1.2 ENGLISH VOCABULARY SIZE OF FOREIGN LEARNERS (a)

Country / Learners           | Vocab size  | Hours of instruction (b) | Source (for vocab size)
Japan EFL University         | 2,000-2,300 | 800-1,200                | Shillaw, 1995; Barrow et al., 1999
China English majors         | 4,000       | 1,800-2,400              | Laufer, 2001
Indonesia EFL University     | 1,220       | 900                      | Nurweni & Read, 1999
Oman EFL University          | 2,000       | 1,350+                   | Horst et al., 1998
Israel High school graduates | 3,500       | 1,500                    | Laufer, 1998
France High school           | 1,000       | 400                      | Arnaud et al., 1985
Greece Age 15, high school   | 1,680       | 660                      | Milton & Meara, 1998
Germany Age 15, high school  | 1,200       | 400                      | Milton & Meara, 1998

a. Table is taken from Laufer (2000a), p. 48, slightly adapted.
b. The data on hours of instruction was largely obtained by Laufer's personal communication with colleagues from the respective countries.
Other more recent research also indicates that many English learners struggle to learn the thousands of words required. For example, McLean (2018) studied 279 Japanese university students. While a few (17; 6%) scored well on a written recognition test at the 4,000 and 5,000 frequency levels, the majority (177; 63%) scored well at the 2,000 and 3,000 levels, and a considerable number (85; 31%) failed to fully master the 2,000 level. Al-Homoud and Schmitt (2009) found that their Saudi students in a pre-sessional university course only recognized 71-83 percent of the 2,000-level word families and 42-55 percent of the 3,000-level word families, even after a seven-week intensive/extensive reading program. The scope of the vocabulary-learning task, and the fact that many learners fail to achieve even moderate vocabulary-learning goals, indicates that it can no longer be assumed that a sufficient amount of vocabulary will simply be "picked up" from exposure to language tasks focusing either on other linguistic aspects (e.g., grammatical constructions) or on communication alone (e.g., communicative language teaching). Rather, a more proactive, principled approach needs to be taken in promoting vocabulary learning, which includes both explicit teaching and exposure to large amounts of language input. Chapters 6, 7, and 8 give more details on how such an approach can be realized.
1.3 Summary Languages have very large numbers of words and phrases, as they need to represent all of the things in the world worth referring to. English has an especially rich vocabulary, ranging from 54,000, if word families is the counting unit being used, to many millions if individual words is the unit used. The numerous lexical items in a language are not just random but are related to each other in numerous ways. Appreciating these relationships can help provide structure to our understanding of vocabulary, and insights into how to deal with it pedagogically in a more principled manner. Understanding the complex nature of vocabulary requires realizing that meaning does not always map onto form in a one-to-one manner, that much vocabulary consists of formulaic sequences, that content and function words behave very differently, and that words have semantic, syntagmatic, grammatical, and morphological relationships. Frequency of occurrence is one of the best tools for describing vocabulary and for prioritizing the vocabulary most worth learning, with high-frequency vocabulary being essential for all language use. Breaking vocabulary into specialist categories (e.g., academic vocabulary, technical vocabulary) can also be useful for prioritizing vocabulary for particular learner groups with specialist needs. Studies have shown that thousands of lemmas / word families are required to use English, and a large proportion of learners around the world fail to meet these targets. With the background knowledge from this chapter in hand, you should be ready to explore the fascinating world of how vocabulary is learned and used. But first we start by considering how people have viewed vocabulary over the ages, and how this has led to our current thinking in the field.
EXERCISES FOR EXPANSION

1. Take a text several pages long and choose a few relatively common words. Count how often they occur according to the "individual word" versus "lemma" versus "word family" definitions. Is there a great deal of difference in the counts?

2. Make your own estimate of the number of words in a language. Take a dictionary and find the average number of words defined on a page. Then multiply this by the number of pages in the dictionary. From this total, scholars have typically eliminated classes of words like proper names (Abraham Lincoln) and compound words (dishwasher). Do you agree with this, and should any other classes be disregarded? How does the size of the dictionary affect the total size estimate?

3. To estimate how many word families you know, take this test developed by Goulden et al. (1990). You will find below a list of fifty words which is part of a sample of all the words in the language. The words are arranged more or less in order of frequency, starting with common words and going down to some very unusual ones.

Procedure
a. Read through the whole list. Put a tick next to each word you know, i.e., you have seen the word before and can express at least one meaning of it. Put a question mark next to each word that you think you know but are not sure about. (Do not mark the words you do not know.)
b. When you have been through the whole list of fifty words, go back and check the words with question marks to see whether you can change the question mark to a tick.
c. Then find the last five words you ticked (i.e., the ones that are furthest down the list). Show you know the meaning of each one by giving a synonym or definition or by using it in a sentence or drawing a diagram, if appropriate.
d. Check your explanations of the five words in a dictionary. If more than one of the explanations is not correct, you need to work back through the list, beginning with the sixth to last word you ticked.
Write the meaning of this word and check it in the dictionary. Continue this process until you have a sequence of four ticked words (which may include some of the original five you checked) that you have explained correctly.
e. Calculate your score for the 50-item test by multiplying the total number of known words by 500. Do not include the words with a question mark in your scoring.
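The arithmetic in step (e) reflects the test's sampling rate: each of the fifty items stands in for roughly 500 word families, so the estimate is simply the number of confidently ticked words times 500. A trivial sketch:

```python
# Sketch of the Goulden et al. (1990) scoring rule: the 50-word test
# samples the language at roughly 1 item per 500 word families, so the
# size estimate is ticked words x 500 (question marks are excluded).

def estimate_vocab_size(ticked):
    """Estimate word-family vocabulary size from the 50-item test."""
    return ticked * 500

print(estimate_vocab_size(34))  # 34 confident ticks -> estimate of 17,000 families
```

Because each tick swings the estimate by 500 families, the dictionary-check in steps (c) and (d) matters: uncorrected overconfidence on even a handful of rare words inflates the result considerably.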
 1 bag        11 avalanche   21 bastinado         31 detente       41 gamp
 2 face       12 firmament   22 countermarch      32 draconic      42 paraprotein
 3 entire     13 shrew       23 furbish           33 glaucoma      43 heterophyllous
 4 approve    14 atrophy     24 meerschaum        34 morph         44 squirearch
 5 tap        15 broach      25 patroon           35 permutate     45 resorb
 6 jersey     16 con         26 regatta           36 thingamabob   46 goldenhair
 7 cavalry    17 halloo      27 asphyxiate        37 piss          47 axbreaker
 8 mortgage   18 marquise    28 curricle          38 brazenfaced   48 masonite
 9 homage     19 stationary  29 weta              39 loquat        49 hematoid
10 colleague  20 woodsman    30 bioenvironmental  40 anthelmintic  50 polybrid

(Adapted from Goulden et al., 1990, pp. 358-359)
Goulden, Nation, and Read reported that their 20 L1 English-speaking university students scored between 13,200 and 20,700 word families, with a mean score of 17,200. How do you compare? Why do you think you are above or below the figures they mentioned? How accurate do you think this test is? See Chapter 9 for more on the strengths and limitations of this and other vocabulary tests.

4. Choose two or three words. List everything you know about these words. Do the same after you have read Chapter 3. Does the second list indicate a greater awareness of vocabulary knowledge? If so, recommend this book to a friend. If not, try to sell your copy to them.
FURTHER READING
This book gives a good overview of vocabulary issues, with a pedagogical focus: Webb and Nation (2017).
This reference book (624 pages) provides an extensive discussion of vocabulary research: Nation (2013). These three books focus on vocabulary research methodology: Nation (2011), Nation and Webb (2011), and Schmitt (2010). This is currently the most-cited article spelling out the lexical requirements for using English (in word families): Nation (2006). Several vocabulary scholars have websites which provide useful research publications, tools, and word lists: Paul Nation: www.wgtn.ac.nz/lals/about/staff/paul-nation Paul Meara: www.lognostics.co.uk Batia Laufer: haifa.academia.edu/BatiaLaufer Norbert Schmitt: www.norbertschmitt.co.uk Stuart Webb: www.edu.uwo.ca/faculty-profiles/stuart-webb.html Elke Peters: www.kuleuven.be/wieiswie/en/person/00018466 Tom Cobb (Lextutor): www.lextutor.ca Mark Davies (COCA): www.english-corpora.org/
History of Vocabulary in Language Teaching
What methodologies have been used to teach second languages through the ages?
What has been the role of vocabulary in these methodologies?
What was the "Vocabulary Control Movement"?
How has the assessment of vocabulary developed?
People have attempted to study second languages from at least the time of the Romans, and perhaps before. In this period of more than two thousand years, there have been numerous different approaches to language teaching, each with a different perspective on vocabulary. At times, vocabulary has been given pride of place in teaching methodologies, and at other times it has been neglected. In order to help you better understand the current state of vocabulary studies as discussed in subsequent chapters, this chapter will first briefly review some of the historical influences that have shaped the field as we know it today. (Instead of digressing to explain terminology in this historical overview, key terms are cross-referenced to the page in the book where they are discussed.)
2.1 Language Teaching Methodologies through the Ages Records of second language learning extend back at least to the second century BC, when Roman children studied Greek. In early schools, students learned to read by first mastering the alphabet, then progressing through syllables, words, and connected discourse. Some of the texts gave students lexical help by providing vocabulary which was either alphabetized or grouped under various topic areas (Bowen, Madsen, & Hilferty, 1985). We can only assume that lexis was considered important at this point in time, as the art of rhetoric was highly prized and would have been impossible without a highly developed vocabulary. Later, in the medieval period, the study of grammar became predominant, as students studied Latin. Language instruction during the Renaissance continued to have a grammatical focus, although some reforming educators rebelled against the
overemphasis on syntax. In 1611, William of Bath wrote a text which concentrated on vocabulary acquisition through contextualized presentation, presenting 1,200 proverbs which exemplified common Latin vocabulary and demonstrating homonyms in the context of sentences. In 1658, John Amos Comenius created a textbook drawing on this idea of contextualized vocabulary. He suggested an inductive approach to language learning, with a limited vocabulary of 8,000 common Latin words, which were grouped according to topics and illustrated with labeled pictures. The notion of a limited vocabulary was important, and would be developed further in the early twentieth century as part of the "Vocabulary Control Movement." Scholars such as William and Comenius attempted to raise the status of vocabulary, while promoting translation as a means of directly using the target language, getting away from rote memorization, and avoiding such a strong grammatical focus. Unfortunately, the emphasis of language instruction remained firmly on deductive, rule-oriented treatments of Latin grammar. This preoccupation filtered over to English as well. The eighteenth and nineteenth centuries brought the Age of Reason when people believed that there were natural laws for all things and that these laws could be derived from logic. Language was no different. Latin was held up as the language least corrupted by human use, so many grammars were written with the intent of purifying English based on Latin models. It was a time of prescription, when authors of grammar books took it upon themselves to decide correct usage and to condemn what seemed to them to be improper. Usually they had no qualifications to do so, other than being important men in the world. Robert Lowth's A Short Introduction to English Grammar (1762) was one of the most influential of the prescriptive grammars, outlawing features in common use, such as double negatives (I don't want to study no more grammar rules!). 
These grammars received general acceptance, which helped prolong the domination of grammar over vocabulary. Attempts were also made to standardize vocabulary, which resulted in dictionaries being produced. The first was Robert Cawdrey's A Table Alphabetical (1604). (Kelly (1976, p. 24) notes that the first bilingual lexicon dates from around 2500 BC.) Many others followed until Samuel Johnson brought out his Dictionary of the English Language in 1755, which soon became the standard reference. With the exception of printing in general, his dictionary did more to fix standard spelling and lexical usage than any other single development in the history of English. Johnson's genius lay in his utilization of contemporary pronunciation and usage to guide his spellings and definitions. Only in ambiguous cases did he resort to arbitrary decisions based on logic, analogy, or personal taste. The result was a dictionary which would remain unchallenged in influence until Noah Webster published an American version in the following century. The main language teaching methodology from the beginning of the nineteenth century was Grammar-Translation. A lesson would typically have one or two new grammar rules, a list of vocabulary items, and some practice examples to translate from L1 (first language) into L2 (second language) or vice versa. The approach was
originally reformist in nature, attempting to make language learning easier through the use of example sentences instead of whole texts (Howatt & Widdowson, 2004, p. 152). However, the method grew into a very controlled system, with a heavy emphasis on accuracy and explicit grammar rules, many of which were quite obscure. The content focused on reading and writing literary materials, which highlighted the obsolete vocabulary of the classics. In fact, the main criterion for vocabulary selection was often its ability to illustrate a grammar rule (Zimmerman, 1997). Students were largely expected to learn the necessary vocabulary themselves through bilingual word lists, which made the bilingual dictionary an important reference tool. As the method became increasingly pedantic, a new pedagogical direction was needed. One of the main problems with Grammar-Translation was that it focused on the ability to analyze language, and not the ability to use it. In addition, the emphasis on reading and writing did little to promote an ability to communicate orally in the target language. By the end of the nineteenth century, new use-based ideas had coalesced into what became known as the Direct Method. This emphasized exposure to oral language, with listening as the primary skill. Meaning was related directly to the target language without the step of translation, and explicit grammar teaching was downplayed. It imitated how a first language is naturally learned, with listening first, then speaking, and only later reading and writing. The focus was squarely on use of the second language, with some of the stronger proponents banishing any use of the L1 in the classroom. It was thought that vocabulary would be acquired naturally through interaction during lessons. Concrete vocabulary was explained with pictures or through physical demonstration, with initial vocabulary being kept simple and familiar, for example, objects in the classroom or clothing.
Thus, vocabulary was connected with reality as much as possible. Only abstract words were presented in the traditional way of being grouped according to topic or association of ideas (Zimmerman, 1997). Like all other approaches, the Direct Method had its problems. It required teachers to be highly proficient in the target language, which was not always the case. It mimicked L1 learning, but did not take into account the differences between L1 and L2 acquisition. One key difference is that L1 learners have abundant exposure to the language, whereas learners of a second language typically have little, usually only a few hours per week for a limited number of years. In the United States, the 1929 Coleman Report took this limited instruction time into account, and concluded that it was not sufficient to develop overall language proficiency. It decided to recommend a more limited goal: teaching secondary students how to read in a foreign language. This was considered the most useful skill that could be taken from schooling, particularly as relatively few people traveled internationally in the early twentieth century. At the same time in India and Britain, Michael West was stressing the need to facilitate reading skills by improving vocabulary learning. The result was an approach called the Reading Method, and it held sway, along with Grammar-Translation and the Direct Method, until World War II.
During the war, the weaknesses of all of the above approaches became obvious, as the American military found itself short of people who were conversationally fluent in foreign languages. It needed a means to quickly train its soldiers in oral/aural skills. American structural linguists stepped into the gap and developed a program which borrowed from the Direct Method, especially its emphasis on listening and speaking. It drew its rationale from Behaviorism, which essentially said that language learning was a result of habit formation. Thus, the method included activities which were believed to reinforce "good" language habits, such as close attention to pronunciation, intensive oral drilling, a focus on sentence patterns, and memorization. In short, students were expected to learn through drills rather than through an analysis of the target language. The students who went through this "Army Method" were mostly mature and highly motivated, and their success was dramatic. This success meant that the method naturally continued on after the war, and it came to be known as Audiolingualism. Because the emphasis in Audiolingualism was on teaching structural patterns, the vocabulary needed to be relatively easy, and so was selected according to its simplicity and familiarity (Zimmerman, 1997). New vocabulary was rationed and only added when necessary to keep the drills viable. "It was assumed that good language habits, and exposure to the language itself, would eventually lead to an increased vocabulary" (Coady, 1993, p. 4), so no clear method of extending vocabulary later on was spelled out. A similar approach was current in Britain during the 1940s to 1960s. It was called the Situational Approach, from its grouping of lexical and grammatical items according to what would be required in various situations (e.g., at the post office, at the store, at the dinner table) (Celce-Murcia, 2014). 
Consequently, the Situational Approach treated vocabulary in a more principled way than Audiolingualism. Noam Chomsky's attack on the behaviorist underpinnings of Audiolingualism in the late 1950s proved decisive, and it began to fall out of favor. Supplanting the Behaviorist idea of habit formation, language was now seen as governed by cognitive factors, particularly a set of abstract rules which were assumed to be innate. In 1972, Hymes added the concept of communicative competence, which emphasized sociolinguistic and pragmatic factors. This helped to swing the focus from language "correctness" (accuracy) to how suitable language was for a particular context (appropriateness). The approach which developed from these notions emphasized using language for meaningful communication - Communicative Language Teaching (CLT). The focus was on the message and fluency rather than grammatical accuracy. Language was taught through problem-solving activities which required students to transact information, such as information-gap exercises. In these, one student is given information the other does not have, with the two having to negotiate the exchange of that information. In the 2000s, CLT evolved into Task-Based Language Teaching (TBLT), where learners carry out a series of tasks devised to emphasize various linguistic features, and to provide meaningful practice in using those features (e.g., Ellis, 2003). For example, a task based on discussing last week's activities could be designed to focus on past-tense verbs.
In any meaning-based approach, one would expect vocabulary to be given a prominent place. Once again, however, vocabulary was given a secondary status, this time to issues of mastering functional language (e.g., how to make a request, how to make an apology), how language connects together into larger discourse, and the emphasis on completing tasks. CLT/TBLT gives little guidance about how to handle vocabulary, other than as support vocabulary for the functional language use mentioned above. As in previous approaches, it was assumed that L2 vocabulary, like L1 vocabulary, would take care of itself (Coady, 1993). It is now clear that mere exposure to language and practice with functional communication will not ensure the acquisition of an adequate vocabulary (or an adequate grammar for that matter), so current best practice includes both a principled selection of vocabulary, often according to frequency lists, and an instructional methodology which encourages meaningful engagement with words over a number of recyclings.
2.2 The Vocabulary Control Movement

This survey has shown that language teaching methodology has swung like a pendulum between language instruction as language analysis and as language use. Likewise, vocabulary has had differing fortunes in the various approaches. However, a recurring thread is that most approaches did not really know how to handle vocabulary, with many relying on bilingual word lists or hoping it would just be acquired naturally. This does not mean, however, that no scholars were interested in vocabulary and how to teach it effectively. In fact, systematic work on vocabulary began in the early twentieth century which is still influential today. It focused on efforts to systematize the selection of vocabulary. Since it also included an attempt to make vocabulary easier by limiting it to some degree, the research came to be collectively known as the Vocabulary Control Movement. There were two competing approaches. The first attempted to limit English vocabulary to the minimum necessary for the clear statement of ideas. C. K. Ogden and I. A. Richards developed a vocabulary with only 850 words (known as Basic English) in the early 1930s, which they claimed could be quickly learned and could express any meaning which could be communicated in regular English. This was done by paraphrasing, e.g., the words ask and want were not included in Basic English, but could be expressed as put a question and have a desire for, respectively (Carter, 1998, p. 25). Basic English consisted of 150 items representing Qualities (essentially adjectives), 600 Things (nouns), and 100 Operations (a mixture of word classes). However, the suffixes -ed and -ing could be attached to the Things, so many could be used as verbs (dust → dusted). In the end, though, for a number of reasons, Basic English did not have much lasting impact. First, it was promoted as a replacement language for English itself, which was never going to happen.
More important, perhaps, despite the small number of words, it was not necessarily that much easier to use. The same
number of concepts existed in the world which needed to be addressed, but instead of learning many words to cover these concepts, Basic English merely shifted the learning burden to learning many meaning senses. In fact, it has been estimated that the 850 words of Basic English have 12,425 meanings (Nation, 1983a, p. 11). Learning multiple meaning senses is not necessarily any easier than learning multiple words, so Basic English's apparent simplicity is largely an illusion. Two practical problems also counted against the adoption of Basic English. First, teachers would have had to be retrained to use this essentially "new" language. Second, it was not very suitable for social interaction, as key items like Goodbye, Thank you, Mr., and Mrs. were not included, nor were very common words like big, never, sit, or want. In the end, Basic English produced what seemed to be "unnatural" English, and many teachers felt that "if courses were offered which claimed to teach Basic English, they should in fact teach basic English" (Howatt, 1984, p. 254). The second (more successful) approach in the Vocabulary Control Movement was to use systematic criteria to select the most useful words for language learning. This was partially in reaction to the Direct Method, which gave little guidance on the selection of either content or vocabulary. Several researchers had been working in this area during the first part of the twentieth century, and their efforts merged in what came to be referred to as the Carnegie Report (Palmer, West, & Faucett, 1936). The report recommended the development of a list of vocabulary which would be useful in the production of simple reading materials. Word frequency was an important criterion for the selection of words on this list, but frequency alone suffers from the fact that, apart from the most frequent words, the vocabulary required in any situation depends on the context it is used in.
For example, pencil, valve, and pint may not be particularly frequent words in general English, but they are indispensable in classrooms, automobile repair garages, and British pubs, respectively. Thus, the eventual words on the list were selected through a wide-ranging list of criteria:

1. word frequency
2. structural value (all function words included)
3. universality (words likely to cause offense locally excluded)
4. subject range (no specialist items)
5. definition words (for dictionary-making, etc.)
6. word-building capacity
7. style ("colloquial" or slang words excluded). (Howatt, 1984, p. 256)
The list was eventually published as the General Service List of English Words (GSL) (West, 1953) and included about 2,000 word families. The advantage of the GSL is that the different parts of speech and different meaning senses are listed, which makes the list much more useful than a simple frequency count. The GSL was immensely influential, but inevitably became dated. However, a testament to its usefulness is the fact that there have been several attempts to update it. The latest and best-known iteration is the New General Service List (new-GSL) (Brezina & Gablasova, 2015).
The researchers compiled the new-GSL by first creating word lists based on four diverse corpora, and then selecting the top words from those lists based on frequency and distribution criteria. The procedure produced a final list of 2,494 lemmas. The list accounted for virtually the same amount of language (80.4%) in the huge 12-billion-word enTenTen12 corpus of written English (www.sketchengine.eu/ententen-english-corpus) as the GSL (80.1%), even though the GSL used the more inclusive counting unit of word families. A major feature of this second approach to vocabulary control is the use of frequency information. The practice of counting words to see how frequently they occur has a long history, dating as far back as Hellenic times (DeRocher, Miron, Patten, & Pratt, 1973). In 1864, Thomas Prendergast, objecting to the archaic word lists used in the Grammar-Translation method, compiled a list of the most common English words by relying solely on his intuitions (which proved to be surprisingly accurate) (Zimmerman, 1997, p. 7). However, the first modern frequency list, compiled by counting a large number of words (11 million), was created by Kaeding in Prussia in the 1890s (Howatt, 1984, p. 257). Michael West is probably the best-known scholar to harness the idea of frequency to second language learning. In addition to compiling the GSL, he was active in promoting reading skills through vocabulary management. To improve the readability of his New Method Readers texts, he replaced low-frequency "literary" words like isle, nought, and ere with more frequent items like island, nothing, and before. This corresponded to the ideas of Harold Palmer, with whom he collaborated. A second step was to limit the number of new words occurring in the text. He increased the length of the overall texts compared to others current at the time, and also decreased the number of new words.
This had the effect of dramatically reducing the percentage of new words which a reader would meet in a text. Whereas a reader would be introduced to a new word every 5-20 words in previous texts, in West's readers the reader would meet a new word every 44-56 words on average. This gave readers the chance to improve their reading fluency without constantly having to cope with new words in every sentence, and it also meant that previously met words would be recycled at a higher rate. The readers would presumably also be able to understand more of what they read.
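West's two levers, the proportion of a text covered by known words and the spacing of new words, can be quantified directly. Below is a minimal Python sketch; the tokenized text and word list are invented examples, not West's actual materials:

```python
def coverage_and_new_word_interval(text_tokens, known_words):
    """Compute the lexical coverage (%) of a word list over a text,
    and the average interval in running words between unknown words."""
    unknown = sum(1 for t in text_tokens if t not in known_words)
    coverage = 100 * (len(text_tokens) - unknown) / len(text_tokens)
    # West's readers aimed for roughly one new word every 44-56 running words.
    interval = len(text_tokens) / unknown if unknown else float("inf")
    return coverage, interval

# Invented example: "isle" is the only token outside the learner's list.
tokens = "the isle was quiet and the boy sat on the shore".split()
word_list = {"the", "was", "quiet", "and", "boy", "sat", "on", "shore"}
cov, gap = coverage_and_new_word_interval(tokens, word_list)
# cov is about 90.9 (%); gap is 11.0 (one unknown word in 11 tokens)
```

The same calculation scales up to whole graded readers: a materials writer can check whether a draft text stays within a target new-word interval before publishing it.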
2.3 Vocabulary Trends in the New Century

As we move deeper into the twenty-first century, a number of trends related to vocabulary research and pedagogy have become evident. Perhaps the most significant is the technological revolution and the ever-increasing availability of language on the Internet and on mobile devices. In most parts of the world, learners now have almost unlimited access to second languages, especially English. Social media, gaming, and watching English-language television/movies (with subtitles or captions) are now popular across a range of age groups, and the opportunities for
incidentally learning language and vocabulary while engaging with these sources have never been greater. For example, a number of studies have shown that computer gaming in English can lead to substantial learning gains (Peterson, 2013). There are also numerous internet sites supporting the explicit teaching/learning of vocabulary. For example, Barclay (2017) lists sites that can help in the selection of vocabulary to teach (e.g., Lextutor, www.lextutor.ca/vp/), that highlight (e.g., bold) words in a text so that they are more salient (e.g., AWL Highlighter, www.eapfoundation.com/vocab/academic/highlighter/), that teach affixes (www.affixes.org), and that provide flash cards with scheduled recycling intervals (e.g., Anki, https://apps.ankiweb.net). Nevertheless, it is fair to say that research into how best to harness these pedagogical and recreational resources is still in its infancy, but it is likely to become a major focus of research in the coming decades. In some parts of the world, real-world, non-instructed exposure (often called extramural exposure) is enough that young learners are coming to their L2 English schooling with considerable amounts of English already in place. This is especially the case in northern Europe. In Iceland, for example, Lefever (2010) found that before the start of formal schooling many young children can already express themselves and interact with others in English on the basis of their exposure to naturalistic input from TV, DVDs, and gaming. There is even some evidence that the amount of out-of-class exposure has a greater effect on learners' vocabulary size than length of instruction (Peters, 2018). However, in other parts of the world, learners are starting from scratch, with many struggling to reach 2,000 word families even after many hundreds of hours of instruction (see e.g., Table 1.2).
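The flash-card tools with scheduled recycling intervals mentioned above (e.g., Anki) are built on the idea of expanding review intervals: each successful recall pushes the next review further into the future. The following Python sketch shows the general idea only; the multiplier and reset behavior are illustrative assumptions, not Anki's actual algorithm, which also has learning steps and per-card ease adjustments:

```python
def next_interval(current_days, ease, recalled):
    """Simplified expanding-interval scheduler: a successful recall
    multiplies the interval by an ease factor; a failure resets it.
    (Illustrative only; real tools such as Anki are more elaborate.)"""
    if recalled:
        return max(1, round(current_days * ease))
    return 1  # relearn from a short interval

interval, schedule = 1, []
for _ in range(4):  # four successful reviews in a row
    interval = next_interval(interval, 2.5, recalled=True)
    schedule.append(interval)
# With these illustrative settings, schedule == [2, 5, 12, 30]
```

The pedagogical point is the shape of the schedule, not the exact numbers: recycling is frequent while a word is new and progressively rarer as it becomes established.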
One challenge for vocabulary teaching is how to maximize the benefits of extramural exposure for the learners who are already taking advantage of it (e.g., by developing explicit supplementary materials). A completely different challenge is how to encourage non-involved learners to become more engaged with extramural exposure (e.g., by including more English-language internet activities in instruction) (Schmitt, 2019). One pedagogical approach to increasing the amount of L2 exposure entails teaching some content classes in school or university entirely in the L2, with the dual-focused aims of teaching content and the L2 simultaneously (Snow & Brinton, 2017). This is known by many names, including Content-Based Instruction (CBI) and Content and Language Integrated Learning (CLIL). Adoption of this content-in-L2 approach is increasing around the world, but as yet its efficacy in promoting L2 proficiency is more a case of assumed success than empirically demonstrated effectiveness. Indeed, there are worrying signs that the L2 part of the learning equation is being left behind. Given that CBI/CLIL's momentum shows no sign of slowing down, research into how it facilitates L2 vocabulary learning (and how it does not) is a priority. Corpus analysis started in earnest in the early twentieth century with the Vocabulary Control Movement, which mainly explored the frequency
distributions of vocabulary. Computerization allowed corpus linguistics to become fully established as a field in its own right by around the mid-1990s. Nowadays, powerful personal computers and easy-to-use software (e.g., the free concordancing program AntConc, www.laurenceanthony.net/software/antconc/) allow anyone to do sophisticated corpus analyses, with several internet sites also providing (often free) corpus data (e.g., the COCA, www.english-corpora.org/coca/). This has led to an explosion of corpus research, with much of it focused on formulaic language. This is probably the area of vocabulary studies which has shown the greatest advances in the last two decades, with innumerable studies and a large number of books exploring the description, acquisition, and pedagogy of the various categories of formulaic language (e.g., Wray, 2002, 2008; Schmitt, 2004; Barfield & Gyllstad, 2009; Polio, 2012; Siyanova-Chanturia & Pellicer-Sánchez, 2019). (See Chapter 4 for this discussion.) Another major outcome of corpus research has been the proliferation of word lists. Word lists have a long history, with the GSL being the most influential example. In 1953, it listed the most useful words for general English, but now there are multiple lists of the most frequent English vocabulary (e.g., Leech, Rayson, & Wilson, 2001; Davies & Gardner, 2010; Brezina & Gablasova, 2015). But perhaps the biggest push has been to create word lists of specialized vocabulary. We now have lists of engineering English (Ward, 2009), academic vocabulary in agriculture research articles (Martinez, Beck, & Panza, 2009), and medical academic words (Lei & Liu, 2016), among many others. However, by far the most influential of the specialized word lists has been the Academic Word List (AWL) (Coxhead, 2000).
It has probably done more than anything else to highlight the importance of vocabulary to the average teacher and learner, and to bring vocabulary back into classrooms and textbooks in a principled way. It focuses on academic vocabulary: words that are frequent and important in academic texts, and which help to give those texts a precision of expression and an academic "feel," such as accumulate, coincide, implicit, and significant. This helped to popularize the notion that particular words are useful for particular contexts, e.g., learners engaging with academic texts have a particular need for academic vocabulary. As such, it was largely the stimulus for the widespread development of the specialized word lists mentioned above. Chapter 5 discusses the various categories of vocabulary and the word lists created to exemplify them. Corpus linguists did not stop at analyzing L1 English texts but also began exploring the output of L2 learners with L2 corpora, usually called learner corpora. This has provided insights into the nature of language produced by learners at different levels of proficiency and from different L1 backgrounds. It has been particularly productive in demonstrating the use of formulaic language by L2 learners. The researchers at the Centre for English Corpus Linguistics (https://uclouvain.be/en/research-institutes/ilc/cecl) in Belgium have been leaders in this area, both in producing learner corpora (e.g., the International Corpus of Learner
English (ICLE), Granger, Dagneaux, Meunier, & Paquot, 2009) and in analyzing learner language (e.g., Paquot & Granger, 2012). Another research trend has been the increased use of psycholinguistic techniques to measure vocabulary knowledge beyond the form-meaning link, which has traditionally been measured with paper-and-pencil tests. Examples of techniques borrowed from psycholinguistics include eye-tracking, which can be used to show incidental vocabulary learning from reading as it happens (Pellicer-Sánchez, 2016). Priming can be useful to show implicit lexical knowledge and can be an informative supplement to more traditional tests which show explicit knowledge (e.g., Elgort, 2011; Sonbul & Schmitt, 2013). Reaction-time techniques can indicate the improvement of fluency (automaticity) (e.g., Segalowitz, 2010), or the effects of the L1 on L2 processing (e.g., Wolter & Gyllstad, 2011). Most of these studies have explored knowledge/fluency of individual words, but some have also focused on formulaic sequences (Underwood, Schmitt, & Galpin, 2004; Durrant & Schmitt, 2010; Carrol, Conklin, & Gyllstad, 2016). These psycholinguistic techniques can provide much fuller descriptions of learners' knowledge and acquisition of vocabulary, and will almost certainly increase in importance in the coming years. A related trend is the increased use of automated tools for the analysis of both L1 and L2 output. The best-known suite of tools is Coh-Metrix (Graesser, McNamara, Louwerse, & Cai, 2004), although this has been superseded by the TAALES suite (Kyle & Crossley, 2015). These suites include a vast variety of measurements (484 in TAALES). Some of the measures are what might be expected, such as frequencies from various corpora and words from the AWL. But many more esoteric ones are drawn from psycholinguistic research: e.g., word association responses, concreteness and imageability measures, and typical age-of-acquisition figures.
These suites of measures can be informative if used to answer clearly defined research questions. However, caution is needed to ensure that results that appear to be significant are actually meaningful. It is important that research designs are driven by the research questions rather than by the availability of the tools. A further area of research attempts to investigate the impact of the complexity of vocabulary knowledge on acquisition. In Chapter 3, you will be introduced to Nation's (2013) framework, which identifies numerous types of word knowledge, including knowledge of form, meaning, grammatical constraints, and collocation, and receptive and productive knowledge. To capture the multifaceted nature of vocabulary knowledge, both vocabulary research and measurement have begun moving toward a multidimensional approach to understanding what it means to know a word, tapping into multiple aspects of knowledge to show the "depth" or quality of vocabulary knowledge. This has been most notable in the many recent studies measuring both receptive and productive vocabulary knowledge, or both recognition and recall knowledge, e.g., Laufer and Rozovski-Roitblat (2011), Kremmel and Schmitt (2016), and Peters (2016). There have also been studies which aim to determine whether the various aspects of word knowledge are acquired in any particular sequence. Early efforts included Schmitt (1998) and
Webb (2005, 2007a, 2007b), but a set of more recent studies is much more comprehensive. These provide an initial model of how the various word-knowledge types are related (Gonzalez-Fernandez, 2018; Gonzalez-Fernandez & Schmitt, 2019). Although labor- and time-intensive, continued expansion of the multidimensional approach to acquisition research is desirable, as we can only capture the complex nature of vocabulary knowledge by using multiple measures.
2.4 Historical Overview of Vocabulary Testing

People are naturally interested in their progress when they are studying a foreign language. Teachers are likewise interested in their students' improvement. Since one of the key elements in learning a foreign language is mastering the L2's vocabulary, it is probably safe to assume that there has been interest in testing vocabulary from the earliest times in which foreign languages were formally studied. One of the first modern researchers to concern himself with systematic vocabulary measurement was Ebbinghaus (1885), who provides an early account of a self-assessment method of testing. Self-assessment may be fine for a careful researcher like Ebbinghaus, but there are obvious problems, especially that of people overestimating the vocabulary they know. Institutionalized testing situations require measures which are more verifiable, and this involves testees demonstrating their knowledge of words in some manner. Especially in the United States, this need led to an emphasis on objective testing and the creation of a new field, psychometrics, which attempted to provide accurate measures of human behaviors, such as language learning. Spolsky (1995) believes that the first modern language tests were published by Daniel Starch in 1916. This was the time when psychometrics was beginning to establish itself. Vocabulary was one of the language elements commonly measured in these psychometric tests, and Starch's tests measured vocabulary by having testees match a list of foreign words to their English translations. This is similar to Ebbinghaus's method, except that Ebbinghaus required himself to give the answer (productive knowledge), while Starch's tests only required recognition of the correct answer (receptive knowledge). Standardized objective tests became the norm in the United States from the 1930s, with vocabulary continuing to be one of the components commonly included.
In 1964, this trend culminated in the creation of the Test of English as a Foreign Language (TOEFL), which, similar to other standardized tests of the time, included a separate vocabulary section. It is interesting to note that interest in vocabulary testing did not always stem solely from an interest in vocabulary itself. The relative ease of isolating words and testing them was also attractive. Vocabulary items set in a multiple-choice format tended to behave consistently and predictably, and they were considered relatively easy to write. Words were thus seen as a language unit particularly suited to objective testing, for technical as well as linguistic reasons.
Since the 1970s, the communicative approach to language pedagogy has affected perceptions about how vocabulary should be tested, with the testing of linguistic items in isolation becoming dispreferred. Following this, large standardized language proficiency tests have tended to drop their separate vocabulary sections, and if vocabulary is tested, it is usually with more contextualized formats. For example, the 1998 version of the TOEFL moved to embedding vocabulary items into computerized reading passages, and the current version continues with this approach. An exception to this is the Peabody Picture Vocabulary Test (Dunn & Dunn, 1997), where examinees match a spoken word to one of four pictures. This test was developed for use with young L1 English speakers. Interest in assessing L2 vocabulary has steadily grown, especially since the publication of Paul Nation's landmark book Teaching and Learning Vocabulary in 1990. This has resulted in a steady stream of vocabulary tests being developed for L2 pedagogical use, which have also been adopted by language researchers. For the most part, these tests aim to provide information about the size of learners' vocabulary by measuring individual words in isolation via a variety of formats. The best known and most influential of the pedagogical tests is the Vocabulary Levels Test, which reports a profile of vocabulary size at the 2,000, 3,000, 5,000, and 10,000 frequency levels (as well as a section with academic vocabulary). It was created by Paul Nation and was first made public in Nation (1983b). This initial version was later expanded and substantially updated by Schmitt, Schmitt, and Clapham (2001). The test format has more recently been updated again by several scholars (e.g., McLean & Kramer, 2015; Webb, Sasao, & Ballance, 2017), and has been used as the template for a test of aural vocabulary size (McLean, Kramer, & Beglar, 2015).
It has also been modified to create a test of prompted written productive vocabulary knowledge: the Productive Vocabulary Levels Test (Laufer & Nation, 1999). Several other notable tests of vocabulary size have been developed. The EuroCentres Vocabulary Size Test (Meara, 1990) was a computerized Yes/No test where learners checked "Yes" or "No" depending on whether they thought they knew a series of words or not (thus the alternative term checklist test). Meara went on to create a series of other checklist tests, the most recent version being the V_YesNo (Meara & Miralpeix, 2017). A better-known format, the ubiquitous four-option multiple-choice, was used for the Vocabulary Size Test (VST) (Nation & Beglar, 2007). The VST is gaining in popularity, partly because the format has been used as a template to create a number of translation variants (Vietnamese: Nguyen & Nation, 2011; Persian: Karami, 2012; Russian: Elgort, 2013; and Japanese: Derrah & Rowe, 2015). However, both the original VST and the translations lack sufficient evidence of validity for any set purpose (Schmitt, Nation, & Kremmel, 2019). (See Chapter 9 for more information on test validation.) A new internet-based vocabulary size test will soon become available (Kremmel & Schmitt, in preparation), which has several advantages over traditional paper-and-pencil tests. While most vocabulary tests measure size in one way or another, there have also been some attempts to measure the depth/quality of vocabulary knowledge. The
best known is the Word Associates Format (Read, 1993, 1998). This test gives a target word (e.g., sudden), and examinees are required to select option words which are related to the target either via meaning (quick, surprising) or collocation (change, noise). Although different versions of it have been used in research (e.g., Qian, 2002; Schoonen & Verhallen, 2008), the test is not straightforward to score (Schmitt, Ng, & Garras, 2011), and it is probably still best seen as experimental. Laufer and Goldstein (2004) developed the Computer Adaptive Test of Size and Strength (CATSS), which, instead of merely rating words as being dichotomously known or unknown, provides an indication of the strength of the form-meaning link. There have also been a number of tests focusing on other non-meaning types of vocabulary knowledge. The Test of English Derivations (TED) (Schmitt & Zimmerman, 2002) measures productive knowledge of derivations (e.g., access, accessibility, accessible, accessibly). Likewise, there have been several attempts to create viable tests of collocation knowledge, although it must be said that this is proving difficult to do successfully (e.g., Eyckmans, 2009; Gyllstad, 2009; Revier, 2009). As we reach the year 2020, the field of vocabulary assessment is very active, with new tests appearing every year. The above and other tests will be discussed further in Chapter 9, particularly looking at their various strengths and limitations.
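The Yes/No checklist format discussed earlier usually includes pseudowords to catch learners who overestimate their knowledge. One simple way to score such a test, sketched below in Python, subtracts the false-alarm rate on pseudowords from the hit rate on real words; this hits-minus-false-alarms adjustment is only one of several corrections proposed in the assessment literature, and the items here are invented:

```python
def yes_no_score(claimed, real_words, pseudowords):
    """Score a Yes/No checklist test: the hit rate on real words,
    reduced by the false-alarm rate on pseudowords (one simple
    correction among several proposed in the literature)."""
    hits = len(claimed & real_words) / len(real_words)
    false_alarms = len(claimed & pseudowords) / len(pseudowords)
    return max(0.0, hits - false_alarms)

# Invented items: the learner claims one pseudoword, so the score drops.
real = {"island", "valve", "coincide", "implicit"}
fake = {"blenter", "mordle"}  # non-words used as distractors
score = yes_no_score({"island", "valve", "coincide", "blenter"}, real, fake)
# score == 0.75 - 0.5 == 0.25
```

The design choice here is the key point: without pseudowords, self-report Yes/No data cannot distinguish genuine knowledge from optimistic guessing.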
2.5 Summary

In the over two thousand years of second language instruction, a range of methodologies have appeared. Recent ones have included Grammar-Translation (with explicit grammar teaching and translation as language practice), the Direct Method (emphasizing oral skills), the Reading Method (emphasizing reading and vocabulary control), Audiolingualism (building good language habits through drills), and Communicative Language Teaching (with a focus on fluency over accuracy). A common feature of these methodologies, with the exception of the Reading Method, is that they did not address vocabulary in any principled way.

During the first part of the twentieth century, several scholars were working on ways to lighten students' vocabulary-learning load. One approach was to create an extremely limited vocabulary which could be used to replace all other English words (Basic English). Another approach paid particular attention to vocabulary for reading. These scholars developed principles of presenting common vocabulary first and limiting the number of new words in any text. This line of thinking eventually resulted in the General Service List. Taken together, these approaches were known as the "Vocabulary Control Movement." The work done during this period continues to have a significant impact on current thinking about teaching and learning vocabulary.

The twenty-first century has brought continued advancement to the area of vocabulary studies. The Internet offers virtually unlimited possibilities for input, and evolving technology makes better language pedagogy and analysis possible, particularly with the use of massive language corpora.
The last three decades have seen advancements in vocabulary assessment, with an increasing number of tests being released. While the development of vocabulary size tests has progressed noticeably, tests of depth of knowledge have proven difficult to construct, and much more needs to be done before we have truly robust measures of how well learners know the words in their lexicon.
EXERCISES FOR EXPANSION

1. Think of a language teaching methodology you were taught with. With hindsight, was the vocabulary presented in a principled way? Were you as a student aware of why any particular vocabulary was presented? Was it presented in any particular order? Did it make any difference whether you were aware or not?
2. From the brief descriptions in this chapter, do any of the methodologies seem similar to the way you teach? If so, do you have a more systematic way of dealing with vocabulary than what we attribute to the methodologies? What are your ideas on the selection and presentation of vocabulary?
3. Principles coming out of the Vocabulary Control Movement were mainly targeted at reading. To what extent can they be applied to the other three skills (writing, listening, and speaking)?
FURTHER READING
• For a more detailed description of the history of language teaching: Kelly (1976), Bowen, Madsen, and Hilferty (1985), Celce-Murcia (2014), and Howatt and Widdowson (2004).
• For a more detailed description of the history of vocabulary instruction: Zimmerman (1997).
• For a detailed description of the various methodologies as they appear in the classroom: Larsen-Freeman and Anderson (2011).
• For a complete listing of Basic English: Carter and McCarthy (1988) (including commentary), http://ogden.basic-english.org.
• For the historical development of vocabulary tests: Spolsky (1995), Read (2000), and Read (2007).
3
What Does It Mean to "Know" a Word?
• Is knowing a word just knowing its meaning and form?
• What other kinds of word knowledge are important?
• A word has more meaning than is shown in a dictionary. What about encyclopedic and register meaning?
• Word form is more than just spelling and pronunciation. Is knowledge of word parts important?
• How are words learned incrementally over time, and what is the role of recycling in this?
A comprehensive answer to the question "What does it mean to know a word?" would require a book much thicker than this one. An impressive amount of information must be known and seamlessly manipulated in order to use words fluently, and even finding a framework to explain this complexity is not an easy matter. One could frame the answer in terms of how words are used in context, how they are acquired, or how they move from receptive to productive states. In order to give a useful answer to the question, we will first present several frameworks for conceptualizing vocabulary knowledge. We will then discuss the various components of vocabulary knowledge in more detail.
3.1 Frameworks for Conceptualizing Vocabulary Knowledge

3.1.1 The Word Knowledge Approach

The genesis of the Word Knowledge Approach is usually traced back to an article in 1976 by Jack Richards in TESOL Quarterly, where he listed eight kinds of knowledge that one must have about a word in order to use it well. This list was refined and popularized by Nation (1990). He presented a revised and expanded version in 2013, which is the best specification to date of the range of word-knowledge aspects (Table 3.1). Nation's word-knowledge taxonomy is a good analytical specification of what complete mastery of a word entails. But it must be seen as an ideal range
TABLE 3.1 WHAT IS INVOLVED IN KNOWING A WORD (TYPES OF WORD KNOWLEDGE)

Form
  spoken                  R  What does the word sound like?
                          P  How is the word pronounced?
  written                 R  What does the word look like?
                          P  How is the word written and spelled?
  word parts              R  What parts are recognizable in this word?
                          P  What word parts are needed to express this meaning?

Meaning
  form and meaning        R  What meaning does this word form signal?
                          P  What word form can be used to express this meaning?
  concept and referents   R  What is included in the concept?
                          P  What items can the concept refer to?
  associations            R  What other words does this make us think of?
                          P  What other words could we use instead of this one?

Use
  grammatical functions   R  In what patterns does the word occur?
                          P  In what patterns must we use this word?
  collocations            R  What words or types of words occur with this one?
                          P  What words or types of words must we use with this one?
  constraints on use      R  Where, when, and how often would we expect to meet this word?
  (register, frequency ...)  P  Where, when, and how often can we use this word?

R = receptive element; P = productive element. (Nation, 2013, p. 49)
of knowledge, as even L1 speakers will not necessarily have mastered all word-knowledge aspects for every word they "know." In fact, partial mastery of many of the aspects is probably the normal state for many words, even for very proficient language users. For example, a person may know most of the meaning senses for a word (circle = a round shape) but not know less frequent senses (= group of people with similar interests, e.g., a literary circle). Just as we often come across new words we don't know when reading or listening, the same is true for new meaning senses for words we are already familiar with. Likewise, we are constantly building up and refining intuitions of words and their collocates as we are exposed to more and more language. But at the beginning stages when we meet new words, we gain only some limited impression of a few of the word-knowledge aspects.

The word-knowledge framework has been useful for both pedagogy and research. In pedagogy, its main influence has probably been in helping practitioners to think beyond just meaning and spelling/pronunciation. The framework shows that knowledge of many components is necessary to use a word well, and that these components somehow need to be addressed in instruction and teaching materials. The framework also suggests that vocabulary knowledge is complex and cannot be addressed with a single approach to instruction or learning. Some word-knowledge components are relatively amenable to intentional learning, such as meaning and written form, while the more contextualized aspects, such as collocation and
register, are much more difficult to teach explicitly. They have to be acquired instead through massive exposure to the L2. In research, the framework has encouraged scholars to think creatively about how to measure vocabulary knowledge, and to move beyond tests of the form-meaning link (e.g., Webb, 2005, 2007a, 2007b). With more multi-component studies now beginning to appear, the framework offers a way to conceptualize the acquisition of a more comprehensive version of knowledge than has been considered before. While Nation's specification is good at describing the word-knowledge components, it does not indicate the relationships between the various components, or how those components are learned, so that takes us to the next framework.
3.1.2 The Developmental Approach

There are a number of ways that vocabulary knowledge can be conceptualized. Henriksen (1999) lists three facets of vocabulary knowledge:

1. partial → precise knowledge of word meaning
2. depth of knowledge of the different word-knowledge aspects
3. receptive knowledge → productive knowledge.
It is almost impossible to blend all of these into a single holistic view of vocabulary, but separately, each of the facets provides a useful framework for conceptualization. The first (partial → precise knowledge) concerns overall proficiency with a word, ranging from no knowledge at all to complete mastery. Read (2000) labels this the Developmental Approach, and it is typically measured along a scale. It is undeniable that vocabulary is learned incrementally, and so using a developmental scale to model this would appear sensible. However, the problem lies in operationalizing the developmental process into a workable scale. In fact, we have only a rather vague idea about how vocabulary development advances, so creating a valid scale is quite difficult. The beginning point of the scale seems relatively clear-cut: no knowledge of a word. However, even this is not straightforward. If a person knows the spelling, pronunciation, and morphological rules of a language, then they will already know something about almost any new word they meet. More problematic is the ending point of the scale. It must be something like "full knowledge of a word," but how does one quantify this? There is no test imaginable which can verify that a word can be used accurately, appropriately, and fluently in every possible context. Then there is the question of how many stages of the acquisition process to model in the scale. Vocabulary learning is gradual, built up over many, many meetings with a word (although big jumps in knowledge can occur from focused, intentional learning). Vocabulary learning is probably a continuum, with an uncountable number of small knowledge increments. But this is no good for developing a scale; we must have a reasonable number of stages that are identifiable. Unfortunately, there is currently no principled way of knowing how many
stages an acquisition scale should contain. At a minimum, there must be the beginning "no knowledge" stage, the ending "acceptable mastery" stage, and one stage in between corresponding to receptive, but not productive, knowledge. A three-point scale may be the minimum, but there is no way to determine the maximum, or more importantly the appropriate, number of stages. (See Schmitt (2010) for more on the Developmental Approach and related scales.)

Different researchers have attempted to address these issues in different ways. Wesche and Paribakht (1996) (also see Paribakht & Wesche, 1997) developed the best-known and most widely used developmental scale (the Vocabulary Knowledge Scale - VKS). The VKS was designed to capture initial stages in word learning using a five-stage scale. The scale was based on self-report, although Stages 3-5 also required a demonstration of knowledge through the provision of synonyms, translations, or sentence writing.

1. I don't remember having seen this word before.
2. I have seen this word before, but I don't know what it means.
3. I have seen this word before, and I think it means ________. (synonym or translation)
4. I know this word. It means ________. (synonym or translation)
5. I can use this word in a sentence: ________. (Write a sentence.) (If you do this section, please also do Section 4.)
A four-stage scale was developed by Schmitt and Zimmerman (2002), based on the earlier Test of Academic Lexicon scale (Scarcella & Zimmerman, 1998). Schmitt and Zimmerman opted for a simpler scale utilizing a can-do paradigm, believing that it is easier for learners to say what they are able to achieve with a word rather than making statements about how well they know it. This can-do idea is incorporated into Stages 3 and 4 of the scale, which essentially translate into receptive and productive knowledge, respectively.

1. I don't know the word.
2. I have heard or seen the word before, but I am not sure of the meaning.
3. I understand the word when I hear or see it in a sentence, but I don't know how to use it in my own speaking or writing.
4. I know this word and can use it in my own speaking and writing.

The VKS and Schmitt and Zimmerman scale have been used in a number of research studies, but both inevitably suffer from a number of limitations (Read, 2000; Schmitt, 2010). For example, they describe advancing knowledge of individual words, but cannot be used to estimate overall vocabulary knowledge. They both describe lexical knowledge, yet they have different numbers of stages, and appear to describe different points on the learning continuum. Both have stages which are unverified, and it is impossible to know how accurately learners can judge their own level of knowledge. Also, despite looking like evenly spaced scales, it is highly unlikely that the intervals between stages are equidistant. Both scales mix receptive
and productive elements in ways that are not necessarily straightforward. These and other limitations make it difficult to judge which developmental scale is best to use, or even if any are accurately describing the vocabulary-learning process. For these and other reasons, a relatively limited number of vocabulary studies have followed this method of investigating the Developmental Approach.

An alternative approach to understanding vocabulary development relates to Henriksen's (1999) second facet (depth of word knowledge), which refers to the approach of breaking word knowledge down into separate dimensions/components (Read, 2000). This relates directly to the Word Knowledge Framework. There is a small but growing body of research that seeks to investigate how different word-knowledge aspects develop in relation to one another. Gonzalez-Fernandez and Schmitt (2019) and Gonzalez-Fernandez (2018) tested large numbers of L2 learners (144 Spanish and 170 Chinese, respectively) on four word-knowledge components: form-meaning link, derivatives, multiple meaning senses, and collocations. They tested these components to both recall and recognition levels of mastery (see below for more on these degrees of mastery). All of the components (and recall and recognition mastery) proved to be strongly interrelated, with correlations between 0.76 and 0.89. But this does not mean that they were mastered simultaneously. Unsurprisingly, some aspects were known better than others. Interestingly, the biggest difference in knowledge was not between the different word-knowledge components (e.g., knowledge of collocations vs. knowledge of derivatives), but rather between the recognition and recall levels of mastery. All four word-knowledge components were mastered on recognition tests before any were mastered on the recall tests.
The researchers used an implicational scaling procedure to show the following developmental pathway (from easier to more difficult):

Form-Meaning link meaning recognition > Collocate form recognition > Multiple-Meanings meaning recognition > Derivative form recognition > Collocate form recall > Form-Meaning link form recall > Derivative form recall > Multiple-Meanings meaning recall

Based on another statistical procedure (structural equation modeling), they concluded that the key aspect in conceptualizing vocabulary knowledge is the distinction between recognition and recall levels of mastery. This leads us to the next approach to conceptualizing vocabulary knowledge: receptive vs. productive mastery.
3.1.3 Receptive and Productive Knowledge

While the word-knowledge framework strives for comprehensiveness, many pedagogic and research purposes require a simpler conceptualization of vocabulary knowledge. One of the most common is the distinction between receptive and productive knowledge (sometimes referred to as passive and active mastery), which
is Henriksen's (1999) third facet of vocabulary knowledge. This dichotomy has great ecological validity, as virtually every language teacher will have experience of learners understanding words when listening or reading but not being able to produce them in their speech or writing. Unsurprisingly, studies have generally shown that learners are able to demonstrate more receptive than productive knowledge, but the exact relationship between the two is less than clear. Melka (1997) surveyed several studies which claim the difference is rather small; one estimates that 92 percent of receptive vocabulary is known productively. Other studies suggest that there is a major gap between the two, with only around half to three-quarters of receptive vocabulary being known productively (Laufer & Paribakht, 1998; Fan, 2000; Ozturk, 2015). Laufer (2005) found even more disappointing results: Only 16 percent of receptive vocabulary was known productively at the 5,000 frequency level, and 35 percent at the 2,000 level. The inconsistency of these figures highlights the issue of measurement, where the receptive/productive scores are highly dependent on the types of tests used (Laufer & Goldstein, 2004). If it is believed that receptive and productive mastery lies on a continuum (as Melka (1997) suggests), then tests need to measure where on that continuum the learner's knowledge level sits. Read (2000) notes that the real problem lies in determining the threshold where receptive mastery turns into productive mastery in this incremental process. He poses the essential question: "Is there a certain minimum amount of word knowledge that is required before productive use is possible?" (p. 154). Alternatively, if it is believed that receptive and productive masteries are states (i.e., there is a discrete difference between the two, as Meara (1997) suggests), then the question focuses on which test format is best to tap into these separate constructs.
There has also been inconsistency concerning the receptive/productive terminology. Schmitt (2019) notes that receptive knowledge entails knowing a lexical item well enough to extract communicative value from speech or writing, while productive knowledge involves knowing a lexical item well enough to produce it when it is needed to encode communicative content in speech or writing. That is, receptive/productive knowledge of vocabulary is usage-based, and should presumably be measured with skill-based instruments. However, it is hardly ever measured this way. Words are usually measured in isolation, with either the form or the meaning being tested through a multiple-choice format (where the correct answer needs to be recognized), or through a fill-in-the-blank format (where the answer needs to be recalled and produced). What typically happens is that recognition/recall test formats are used, but the results are interpreted as receptive/productive knowledge (see also Read, 2000). Based on work by Laufer and Goldstein (2004) when they developed the CATSS, Schmitt (2010) outlined a principled way of discussing receptive/productive and recognition/recall knowledge, based on how it is measured. "Word knowledge given" refers to the prompts on the test and what information is given to the
TABLE 3.2 FRAMEWORK FOR DEFINING RECOGNITION AND RECALL KNOWLEDGE

Word knowledge given   Word knowledge tested: Recall                Word knowledge tested: Recognition
Meaning                Form Recall (supply the L2 item)             Form Recognition (select the L2 item)
Form                   Meaning Recall (supply definition /          Meaning Recognition (select definition /
                       L1 translation, etc.)                        L1 translation, etc.)

(Schmitt, 2010, p. 86)
learners, and "word knowledge tested" refers to what the learner needs to do to answer the test (Table 3.2). For example, if a written L2 word is presented (i.e., form is given), and the learner must produce an LI translation of that word (i.e., meaning is elicited), then the test would be considered a "meaning-recall" test according to this framework. If we compare this framework to the skill-based terms receptive and productive knowledge, we can see that "meaning recall" most closely matches "receptive knowledge," because when reading or listening, one must recognize the written or spoken word form and then recall the meaning for that form. (If the meaning cannot be recalled, then it might be possible to infer it from context, but this is not the same as knowing the word.) Likewise, when writing or speaking, one must have the meaning in mind of what one wants to say and then recall/retrieve the word form to represent that meaning, which is closest to "form recall" in this framework.
3.1.4 Size/Breadth of Knowledge and Depth/Quality of Knowledge

Another descriptive framework which has had a long history is that of size/breadth vs. depth/quality of vocabulary knowledge (Anderson & Freebody, 1981). In simple terms, size/breadth (hereafter size) refers to how many words are known. Depth/quality (hereafter depth) of vocabulary knowledge refers to how well those words are known, or as Read (2004, p. 155) comments: "learners need to have more than just a superficial understanding of the meaning [of a word]; they should develop a rich and specific meaning representation as well as knowledge of the word's formal features, syntactic functioning, collocational possibilities, register characteristics, and so on." The size-depth distinction has been widely taken up (e.g., Read, 2004), but it turns out to be less than clear-cut. The concept of size seems uncomplicated, as it is mainly about counting how many words a person knows. But this entails measurement, and so some kind of vocabulary test needs to be used. This means
that a decision needs to be made concerning the criterion that must be met to consider a word as "known." This could be anything from matching an L1 meaning to the target L2 word form on a multiple-choice test to being required to write an appropriate sentence containing the target word. In effect, this means that every size test is also a measure of depth (Schmitt, 2014). Defining and measuring depth is much more complicated. Schmitt (2014, p. 922) reviewed research into depth of vocabulary knowledge and identified numerous ways of conceptualizing it, including:

1. Receptive versus productive mastery
2. Knowledge of multiple word-knowledge components
3. Knowledge of polysemous meaning senses
4. Knowledge of derivative forms
5. Knowledge of collocation
6. The ability to use lexical items fluently
7. The degree and kind of lexical organization (word associations).
This diverse and incomplete set of conceptualizations of depth strongly suggests that there can be no single overall conceptualization which covers everything, i.e., the construct of depth is rather loose and inherently fuzzy. In fact, one anonymous reviewer of Schmitt's article commented that depth is "about the wooliest, least definable, and least operationalisable construct in the entirety of cognitive science past or present" (p. 950). However, Gonzalez-Fernandez and Schmitt's (2019) and Gonzalez-Fernandez's (2018) findings that knowledge of various word-knowledge aspects is strongly correlated suggest that the various types of depth develop in some parallel manner. The strong correlations also suggest that true overall depth must be seen as the combined interrelationships between word-knowledge aspects, even though this is probably impossible to measure. Given the difficulties in defining and measuring depth, Read (2004) and Milton (2009) suggest that it may be time to disregard the general notion of depth altogether and concentrate on more specific measures of the quality of vocabulary knowledge that are tuned more finely to specific research questions. Schmitt (2014) agrees with Read and Milton that the construct of depth is probably too vague to be useful in research, but concludes that the way one views the size-depth relationship should depend on one's purpose. While depth is probably too imprecise for research, the notion of depth is useful for thinking about instruction and learning. The message about the need for a large vocabulary size to be able to function well in an L2 is becoming generally accepted (e.g., Nation, 2006; Schmitt, 2008). However, that message by itself is insufficient, as learners need to know words well in order to use them productively, appropriately, and fluently.
The size-depth distinction is thus useful when talking to practitioners (as in this book) to drive home the need for rich, sustained instruction and input in order to develop knowledge beyond the simple memorization of form-meaning links.
3.2 Types of Word Knowledge

When we ask teachers and students what it means to teach a word, the responses commonly relate to meaning. But as we have seen above, vocabulary knowledge is a rich multifaceted construct. We feel the framework which most comprehensively covers the rich tapestry of knowledge is the word-knowledge framework, and so we will use it as a basis for the rest of the chapter. As meaning is one of the most prominent types of word knowledge, we shall start with it.
3.2.1 Meaning - Representing Concepts and Referents

Most of us equate the meanings of words with definitions in dictionaries. However, when one studies meanings in more detail, a whole host of interesting issues appears. Philosophical and psychological discussion about meaning can become quite complex and obscure, but at the most basic level, meaning consists of the relationship between a word and its referent (the person, thing, action, condition, or case it refers to in the real or an imagined world). This relationship is not inherent; rather, it is arbitrary until formalized by the people using the word (Drum & Konopak, 1987, p. 73). The spotted animal with a very long neck in Africa could have been called a golf, a glisten, or a glabnab; only consensus within the English-speaking community that the label for this animal should be giraffe gives this particular word any meaning. However, there are exceptions where words clearly have an intrinsic connection with their referents, and one of them is the class of onomatopoeic words. These attempt to mimic the sounds they represent: boom, chirp, and whoosh. Even here, the connection is not absolute, as different languages render these sounds in different ways: e.g., the sound of a rooster is rendered cock-a-doodle-do (English), cucuricu (Spanish), kukuliku (Swedish), and kokikoko (Japanese). Unfortunately, the relationship between a word and its referent is not usually a tidy and direct one. In some cases, the referent is a single, unique entity which the word can precisely represent, usually as a "proper noun" (Abraham Lincoln, Eiffel Tower, Brazil). But more often, it is really a class or category like cat, love, or uniform. There are many different kinds of uniforms, and so the single word uniform cannot exactly describe each one. Rather it represents our concept of what a uniform generally is like.
We know that it is a standardized form of dress, but would be quite open to differences in color and insignia, for example. In fact, our concept of a uniform depends to a large extent on our exposure to uniforms of various types. Thus words are usually labels for concepts (our idea of what a uniform is), which themselves encapsulate the extent of our personal experience of the actual world reality (all possible uniforms) (Hirtle, 1994). So for most words, we can more accurately speak of meaning as the relationship between a word and its concept, rather than its referent.
To describe the meaning of a word, then, we need to describe the concept it represents. The traditional view is that words can be defined by isolating the attributes that are essential to the relevant concept, and that taken together are sufficient to describe it. This might be seen as the "fixed meaning" view (Aitchison, 2012), and it works relatively well when the referent is unique, such as with proper nouns, e.g., Sydney Opera House, Mother Teresa, and Egypt. In these cases it is not difficult to describe the attributes of a single unique entity. The approach is also suitable for technical vocabulary. These are terms specific to a field that have been given precise definitions so that practitioners can use them confidently without misunderstanding. Habeas corpus and bail are examples from the area of law, and pi and harmonic dissonance from engineering. These terms are often called jargon and are essential for working in a particular field. Because they have been precisely defined by a field, they may be thought to have fixed meanings (Benson & Greaves, 1981). But the majority of words do not have one-to-one relationships with a single referent and none other. As we saw above, words normally have a meaning relationship with more open-ended concepts instead. This fact brings with it problems in definition. While it is relatively easy to precisely define a single case, as in proper nouns, it is less simple to define a category. Let us take cat to illuminate this point. Since the concept must encompass a wide variety of cats, a description of any one cat would be insufficient. Instead, we must determine the characteristics which describe the category of cats (these can be called semantic features). These semantic features are most conveniently illustrated by placing them on a semantic grid, and marking relevant features with a (+), inappropriate features with a (-), and questionable features with a (?).
One possible semantic grid for cat is illustrated in Figure 3.1. In essence, many dictionary entries are lexicographers' best efforts to describe the essential semantic features of the concept which a word represents. For example, the entry for cat in the 2003 Cambridge Advanced Learner's Dictionary is: "a small four-legged furry animal with a tail and claws, usually kept as a pet or for catching mice, or any member of the group of similar animals such as the lion." This definition includes many of the features described in Figure 3.1.

[Figure 3.1 Semantic features of cat: a grid marking relevant features (+), inappropriate features (-), and questionable features (?)]
While semantic features may work fairly well for many concepts like cat, other concepts may prove more problematic. For example, take the two words walk and run. Although the state of walking is easy enough to discern, as it becomes progressively faster, when does it turn into running? There is probably no place on the continuum of self-locomotion where walking clearly becomes running. Instead, there is a fuzzy boundary. Aitchison (2012) concludes that most words have some degree of fuzziness in their meaning. If so, how do people handle the fuzziness of the meaning boundaries between words? One way is by contrasting a word and its concept with other words and concepts. We can partially decide whether a very fast walk is still a walk by determining whether it has become a run. Thus, a word's meaning is often partially determined by contrasting it with the meanings of other related words. The study of these meaning relationships, and meaning in general, is called semantics. The categories of meaning relationships between words are called sense relations. The lay person would know some of these as "oppositeness" and "similar meaning," but the field of semantics has generated technical terms to express these relationships more precisely. They are illustrated in Table 3.3. Especially in the case of graded antonyms, the meaning of one word is determined by the others. For example, the absolute temperature of a night in Mexico City might be quite different from a night in Montreal, but both might be referred to as cool. Cool does not refer to any particular temperature in these cases, but rather stems from people's perceptions of temperature. Thus cool may denote differing
TABLE 3.3  SENSE RELATIONS

Sense relation     | Word                                | Attribute                   | Examples
-------------------|-------------------------------------|-----------------------------|------------------------------
synonymy           | synonym                             | similarity                  | huge-gigantic; rich-wealthy
ungraded antonymy  | ungraded antonym                    | exclusive oppositeness      | alive-dead; pass-fail
graded antonymy    | graded antonym                      | oppositeness on a continuum | big-little; hot-cold
hyponymy           | hyponym / superordinate (hyperonym) | more general category       | vehicle-car; fruit-apple
                   | coordinate                          | same level of generality    | car-truck; apple-orange
                   | subordinate                         | more specific category      | car-Ford; apple-crab apple
meronymy           | meronym                             | whole-part                  | bicycle-wheels, handle, seat
absolute temperatures, but linguistically it will always occur between cold and warm. (See Saeed, 2016, for a fuller discussion of semantics.)

For any concept, some semantic features will be more essential/salient than others. The most basic meaning elements might be referred to as the denotation of a word meaning, the kind of information that dictionaries try to capture in their definitions. But there is usually a lot of other information a person might know about a word. A number of commentators have made this distinction between some type of basic, fundamental meaning of a word and all of the other personal and cultural background knowledge which might be known. This distinction has been formulated with various terminology, but we will follow Katz and Fodor's (1963) terms core meaning and encyclopedic knowledge. For the word bachelor, the core features of the concept might be defined as +human, +male, +adult, and -married. Encyclopedic knowledge consists of the other things one knows about bachelors: e.g., they are often young, date women, and have exciting lifestyles. This encyclopedic knowledge might not be essential, but it is an important component of meaning. It can become especially significant in "fuzzy" cases. For example, is a divorced middle-aged man with several children still a bachelor? He meets the core criteria, but one might not classify him as a bachelor without considering his lifestyle, which is connected to encyclopedic knowledge.

It is perhaps most useful to think of core meaning as the common meaning shared by members of a society. The fact that people can define words in isolation proves that some meaning information is attached to a word by societal convention that is not dependent on context. Although this information may well include a great deal of encyclopedic knowledge, it will almost certainly entail aspects of the basic, underlying core meaning, without which it would be impossible to connect it with the represented concept.
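To make the core-feature idea concrete, the sketch below (our illustration, not from the text) encodes Katz and Fodor's bachelor features as a small Python dictionary; the feature names and the sample entities are invented for the purpose:

```python
# A minimal sketch of core meaning as binary semantic features, using
# Katz and Fodor's bachelor example: +human, +male, +adult, -married.
# The feature names and the sample entity are illustrative assumptions.

CORE_BACHELOR = {"human": True, "male": True, "adult": True, "married": False}

def meets_core_criteria(entity: dict, core: dict) -> bool:
    """True if the entity matches every core feature of the concept."""
    return all(entity.get(feature) == value for feature, value in core.items())

# A divorced middle-aged man with children meets the core criteria...
divorced_man = {"human": True, "male": True, "adult": True,
                "married": False, "divorced": True, "children": 3}
print(meets_core_criteria(divorced_man, CORE_BACHELOR))  # True
```

Note how the check says nothing about lifestyle or personal history: everything beyond the four core features is exactly the encyclopedic knowledge that a feature list cannot capture.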
Encyclopedic knowledge, on the other hand, is idiosyncratic to each individual person, depending on their experience and personal beliefs. It may be communal to a certain extent, but it will almost certainly vary to some degree from person to person. Using the bachelor example above, everyone must agree that the person is not currently married (core meaning aspect), but there might be considerable disagreement on whether a male who is unmarried but living with his partner can still be considered a bachelor (encyclopedic knowledge aspect). While the number of core features will be limited, the amount of encyclopedic knowledge one can know about a word is open-ended.

From their review of a variety of research, Anderson and Nagy (1989) agree that word meanings cannot usually be contained by either a definition or a series of semantic features. Although these may give some sense of a word's meaning (basically core meaning), context plays a large part in filling in the other information necessary to make use of that word. They illustrate this with sentences that are uninterpretable without context, like "The haystack was important
when the cloth ripped," showing that words in a sentence cannot always be decoded from strictly intrinsic meaning properties. With the clue "parachutes," the reader is able to find a context which is congruent with the sentence to be deciphered. Thus, the core meaning of a word is sometimes not enough; listeners or readers need to be able to use their available encyclopedic knowledge. Context can allow this to happen.

It seems that one way context exerts its influence on encyclopedic knowledge is via schemas (other terms used for this idea are schemata, frames, and scripts). A schema is knowledge of how things in a specific area of the real world behave and are organized. A schema can either be activated by a word itself in isolation, or by the context it is embedded in. When a particular schema is activated, say a skydiving schema, all the encyclopedic knowledge related to this area becomes available, even before the other words related to the schema are encountered. Once the context has activated a certain schema, the schema constrains how each word's core meaning can be extended into figurative meaning (e.g., jump could be extended to the meaning sense of a person jumping out of an airplane but not to the sense of attacking someone suddenly). If there is not enough context to activate a schema, then the mind must hypothesize a probable one.

All of the extra encyclopedic knowledge can feed into the "unspoken" meaning aspects referred to as connotation. An example which illustrates this distinction is the word skinny. In essence it means "very thin," which is the denotation. Using only the denotation, we might assume that many people would be happy to be described as skinny. But skinny also carries the connotation of "so thin as to be unhealthy or unattractive." Of course, this extra meaning information constrains the contexts in which skinny can be appropriately used.
Thus, we can use skinny to speak of starving children, but it is unlikely to be of use in describing the next-door neighbor. Likewise, the word get would be suitable for general conversation, but the word procure (with its more precise meaning and business overtones) would be more suitable for formal business communication. This important, yet implicit, extra meaning information colors words and constrains the contexts in which they can be appropriately used. Mastering this information is essential for choosing the right word for the right context, and will be discussed in more detail in Section 3.2.6.
3.2.2 Form - Spoken Form

Although many people would consider meaning the most important aspect of learning a word, it is clear that knowledge of form is also a key component of both vocabulary knowledge and language processing in general. This includes both phonological form (spoken form) and orthographical form (written form). Let us consider phonological form first.

Adequate phonological knowledge of a word involves being able to separate out and understand its acoustic representation from a continuous flow of speech, as
well as being able to pronounce the word clearly enough in connected speech for other people to do the same when we speak. Being able to manage these verbal input/output processes actually requires a detailed knowledge not only of the acoustic characteristics of the word as a whole, but also of its parts. First, we need to know the individual phonemes which make up a word. Second, we must know how these phonemes sound when tied together in the sequence particular to that word. Third, we need to know how the word is divided up into syllables, at least in English. If the word is polysyllabic, the syllables will not be pronounced with an equal amount of emphasis; rather one or more will be stressed. This stressing can be accomplished by altering the pitch, volume, or length of the syllable, as well as the features of the vowel. Syllables can be unstressed as well, typically by reducing the vowel to what is called a schwa (ə) sound (the second o in bottom) or by losing its sound altogether (the second a in pleasant is virtually unspoken).

Not to minimize the problem of achieving comprehensible pronunciation, but the greater challenge for most language learners seems to lie in the act of listening. This is because learners have limited control over the rate of input, unlike reading, where they can read more slowly or even reread whole passages. Understanding words in continuous speech involves two problems in particular: first isolating the sound groups which represent individual words from the speech stream, and then using those phonological representations to access lexical knowledge about the corresponding words.

Segmenting the natural flow of continuous speech into the individual component sound groups which represent words is no trivial task. As opposed to written discourse, spoken language does not have clear word boundaries.
In fact, the words blend together in speech to such an extent that if one does not actually know a language, it is very difficult to pick out any individual words at all. At times, even L1 speakers parse (segment) the speech stream in the wrong place, causing a mishearing or "slip-of-the-ear." Examples are how big is it? heard as how bigoted? and analogy heard as an allergy.

Anne Cutler and her colleagues have researched the complexities of speech segmentation and have found that, for English, stress patterns are the key. For example, Cutler and Butterfield (1992) looked at natural slip-of-the-ear data and found that erroneous word boundaries were inserted much more often before strong syllables (containing full vowels) than weak syllables (containing central or reduced vowels), while boundaries were deleted more often before weak syllables than strong syllables. Additionally, when the speakers placed boundaries before strong syllables, a lexical content word followed; when placed before a weak syllable, a function word followed.

There is also evidence that the mind assumes that words do not begin with weak syllables. When presented with ambiguous strings of syllables, people used stress to determine which reading to take. If the first syllable was strong [letəs], subjects tended to choose one-word readings (lettuce) over two-word readings (let us); if the second syllable was strong [in vests], two-word readings were normally chosen (in vests) over one-word readings (invests). Also, when people listened to words in
which normally unstressed syllables were stressed, or vice versa, the words were difficult to recognize (Cutler & Clifton, 1984). All of this evidence indicates the importance of stress, and suggests that the mind assumes strong stress indicates the beginning of a new word, and weak syllables do not begin content words. In fact, there is good reason why people assume strong syllables are word-initial. Cutler and Carter's (1987) analysis of 33,000 entries in a computer-readable English dictionary showed that 73 percent of them had strong initial syllables. They calculated that there is about a three-to-one chance of a strong syllable being the onset of a new content word, while weak syllables are likely to be grammatical words.

It should be noted, however, that the discussion on stress has dealt with English as the target language. Other languages which do not feature stress, such as syllable-timed languages like Japanese, require other strategies for determining where individual words begin.

Once the individual phonological representations have been parsed from the speech stream, how are they used to access the corresponding lexical words? A variety of theories have tried to explain how these processes work, but Aitchison (2012) reports that interactive activation models are now gaining favor. These models suggest that when someone hears a word, any perceived sounds will link with ("activate") words that have similar sounds in about the same place. Words with meanings that are possible in the particular context are further activated, while words with impossible meanings are suppressed. This parallel activation and suppression goes on as one hears more of the word and more of the meaning context, until one word reaches an activation threshold and is selected. Aitchison gives the example of bracelet. Maybe it is misheard initially as blace . . . , and so words like blame and blade might be activated more strongly than bracelet, but as one hears more (. .
. let) and the meaning context rules out most other possibilities, bracelet will be recognized as the best fit.

An important aspect of phonological processing is that it is very fast, and it needs to be. Tauroza and Allison (1990) found that the average speech rate across a range of speech types (conversation, interview, lecture, radio) was 170 words per minute (240 syllables per minute). L1 English speakers are able to recognize words in about 200 milliseconds (msec = 1/1,000 of a second), which is usually before the offset (end) of the word (Marslen-Wilson & Tyler, 1980). In contrast to this, conclusive identification of a word can sometimes occur only after several subsequent words have already been heard (Grosjean, 1985). This is because in connected speech it is difficult to know whether subsequent sounds are part of a longer word or the beginning of a new word. This is especially true of unstressed syllables. For example, relying strictly on phonological information, it is impossible to know where to parse the sound string /ðəlɛdʒɪsleɪ/. It could be part of the sentence "The ledge is laden with flowers" or "The legislature is on recess." So recognition speed is potentially fast, but is constrained by the parsing process. Of course, context usually comes into play to disambiguate such strings.
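The two listening problems described above, segmenting the speech stream and then accessing lexical entries, can be caricatured in a short sketch. This is our illustration under strong simplifying assumptions: syllables arrive pre-labeled for stress (a real listener must judge vowel quality acoustically), and the lexicon and context sets are invented toy data:

```python
# Illustrative sketch (not from the text) of (1) stress-based segmentation
# and (2) the activate/suppress idea behind interactive activation models.
# Stress labels, the lexicon, and the context set are toy assumptions.

def segment_by_stress(syllables):
    """Start a new candidate word at every strong ('S') syllable,
    mirroring the finding that strong syllables tend to be word-initial."""
    words, current = [], []
    for syllable, stress in syllables:
        if stress == "S" and current:
            words.append("".join(current))
            current = []
        current.append(syllable)
    if current:
        words.append("".join(current))
    return words

LEXICON = ["blame", "blade", "bracelet", "brace"]

def active_candidates(heard_so_far, context_plausible):
    """Words whose onset matches the input so far; contextually
    plausible candidates suppress the implausible ones."""
    active = [w for w in LEXICON if w.startswith(heard_so_far)]
    supported = [w for w in active if w in context_plausible]
    return supported or active

# Strong first syllable -> one-word parse; weak-strong -> boundary inserted.
print(segment_by_stress([("let", "S"), ("us", "w")]))    # ['letus']
print(segment_by_stress([("in", "w"), ("vests", "S")]))  # ['in', 'vests']

jewelry_context = {"bracelet", "brace"}
# An early mishearing like "bla..." activates blame/blade; as more input
# arrives, the context-supported candidates win out.
print(active_candidates("bla", jewelry_context))    # ['blame', 'blade']
print(active_candidates("brace", jewelry_context))  # ['bracelet', 'brace']
```

Real interactive activation models are, of course, continuous and probabilistic rather than all-or-nothing prefix matches; the sketch only shows the direction of the competition.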
As would be expected, knowledge of spoken form is closely related to knowledge of its complement, written form, with phonological awareness being necessary for reading and using the alphabet. Knowledge of the alphabet (letter names) is necessary for L1 children to separate onsets (initial consonants or clusters) from rimes (vowels and any following consonants), which in turn seems to facilitate word recognition. This combined knowledge facilitates more complex phonological analysis. The ability to analyze onset/rime structure also fits closely with the ability to spell. So the relationship between phonological awareness and orthographical knowledge is close and interrelated, where knowledge of one facilitates the learning of the other (Stahl & Murray, 1994).

Phonological awareness is also important for general vocabulary learning. Goldstein (1983) suggests lower-level L2 learners must rely more heavily on acoustic clues than L1 speakers, since they cannot compensate with L1-like knowledge of semantic and syntactic constraints to predict and decode words. For example, an L1 English speaker will seldom mistake aptitude for attitude because the context would make clear the correct choice even if one did not hear the word clearly. But weaker L2 learners might not have enough language proficiency to adequately understand the context, and so would have to rely solely on a correct hearing of the word. If the word has a number of close-sounding "neighbors," it might be difficult to decide from among the possibilities. Thus phonological similarity between words can affect L2 listeners more seriously than L1 listeners, making phonological awareness critical.

In addition, the ability to vocalize new L2 words when learning them seems to facilitate that learning; for example, Papagno, Valentine, and Baddeley (1991) found that subjects who were prohibited from vocally or subvocally repeating new L2 words from a word list were much less able to learn those items.
For both phonology and orthography, the beginnings of words are particularly salient. We have all experienced the situation where we have tried to remember a word but couldn't, although it was on the "tip-of-our-tongue." Brown and McNeill (1966) induced such a state in their participants by giving them definitions for relatively infrequent words. When this resulted in a "tip-of-the-tongue" situation, the researchers quizzed the participants to find out what they could remember about the word. The participants tended to recall the beginnings of words best, the endings of words next best, and the middles least well. In the case of malapropisms (where a similar-sounding word is mistakenly used for the intended one, e.g., goof for golf), we find a similar phenomenon. The beginnings of the malapropisms were usually very similar to the intended word and the endings somewhat less so. The middles, on the other hand, were much less similar.

Aitchison (2012) reviews this literature and concludes that there is strong evidence for a "bathtub effect." Imagine a person lying in a bathtub with their head well out of the water at one end of the tub and their feet slightly out of the water at the other. This is a visual metaphor for our memory for words. The beginnings of words are the most prominent and are remembered best, with their endings
somewhat less so. Conversely, the middle of a word is not remembered so well in comparison. Although the "bathtub effect" is a robust effect in English, it probably does not hold for some other languages. For example, the ends of Spanish words carry a great deal of grammatical information in the form of inflections; English does not. Thus, one would expect Spanish speakers to pay relatively more attention to the ends of Spanish words than English speakers would to the ends of English words. Consequently, ESL learners from L1s with a different saliency structure from English may well find themselves focusing on the less informative parts of English words.
3.2.3 Form - Written Form

Knowledge of orthographical form is obviously necessary if one wishes to be able to read and write in a language. Although some might consider it a "lower-level" kind of knowledge, research has shown that learners often have more trouble with form than with meaning. After all, most L2 learners already know the meaning of what they want to write and read; it is just that they do not know the L2 forms attached to these meanings. (The same is true for speaking and listening.) In fact, mastering L2 orthographic form is not easy, with psychological research showing the complexity of orthographic decoding.

Results from reading research in particular have been instrumental in showing the importance of orthographical word form. Top-down models of reading (Goodman, 1967) suggested that schemas allowed the skipping of many words in a text because we could guess or predict much of the text's meaning, thus making many of the words redundant. But eye-tracking research has shown that most of the words in a text are fixated upon in reading: about 85 percent of the content words and 35 percent of the function words. In addition, between 5 percent and 20 percent of the content words receive more than one fixation (i.e., we backtrack to read these words again). These figures are averages, and reading a difficult text can alter them so that more words are fixated upon for a longer duration. (See Conklin, Pellicer-Sánchez, & Carrol (2018) for a review of eye-tracking research.)

The physical way the eye moves and fixates also determines what will be picked up when reading. The eye does not move smoothly when reading, but rather brings itself into focus on one point in the text (fixation) and then jumps to the next (saccade). Eye movement is very fast, measured in milliseconds. The average eye fixation during reading is only about 200-250 milliseconds, during which the necessary visual information can be obtained within about the first 50 msec.
The remaining time (at least 150-175 msec) is used to program the physical movement of the next saccade. The actual saccade takes 20-40 msec and moves seven to nine character spaces. Fixations fall on the preferred viewing location, normally about halfway between the beginning and the middle of a word. This has the effect of focusing on the more informative beginnings of words.
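As a rough back-of-the-envelope illustration (ours, not a claim from the eye-tracking literature), combining the midpoints of the figures above with an assumed average of six characters per word (including the space) yields a plausible silent-reading rate:

```python
# Back-of-the-envelope arithmetic combining the averages quoted above:
# ~200-250 ms per fixation, ~20-40 ms per saccade, 7-9 characters per
# saccade. The six-characters-per-word figure is our own rough assumption.

fixation_ms = 225        # midpoint of the 200-250 ms average fixation
saccade_ms = 30          # midpoint of the 20-40 ms saccade
chars_per_saccade = 8    # midpoint of seven to nine character spaces
chars_per_word = 6       # rough English average, counting the space

cycles_per_min = 60_000 / (fixation_ms + saccade_ms)
words_per_min = cycles_per_min * chars_per_saccade / chars_per_word

print(round(cycles_per_min))  # 235 fixation-saccade cycles per minute
print(round(words_per_min))   # 314 -- near typical silent-reading rates
```

The point of the arithmetic is simply that the millisecond-scale figures above scale up to reading speeds in the familiar 250-350 words-per-minute range.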
The eye can see more than a few letters at a time, with the width of the viewing span being about three to four spaces to the left of the fixation and about fifteen spaces to the right. Interestingly, different areas of the viewing span are used for different purposes. The area four to eight spaces to the right of the fixation is used to identify the current word of the fixation. Beyond that, parafoveal vision preprocesses the first three or so letters of the next word in preparation for the next fixation. Beyond parafoveal vision, the length of the next word is perceived by peripheral vision, which helps program the length of the next saccade. If the word to the right of the fixated word is short and can be completely identified, then it may be skipped over during the next saccade. These eye-movement characteristics dictate that the initial part of a word is the most important, both because the preferred viewing location tends toward the beginning of the word and because parafoveal vision previews the beginning of the next word.

So being able to accurately perceive and process the individual letters in words is clearly important, and reading instruction which highlights matching individual letters with their respective sounds is called phonics. This approach allows emerging readers to "sound out" words in a decoding sound-by-sound manner. But fluent reading relies on more than individual letter recognition. More proficient readers recognize whole words (or even phrases), and the ability to do this increases reading speed. In essence, the mind processes parts of words, or whole words, during each fixation without having to rely on letter-by-letter decoding. Reading instruction also needs to promote quick and accurate recognition of words/phrases. Words that are recognized on sight are called sight vocabulary. Readers can shift between these approaches depending on the difficulty of the reading and their reading proficiency.
For example, even very proficient readers sometimes sound out long and difficult unknown words, while beginning readers will eventually shift from decoding to whole-word recognition as they read easy texts. One study comparing the use of the two approaches found that kindergarten students mainly used the first letter of a word when recognizing the five-letter words used in the study, first graders used both the first letter and the word shape, while adults consistently used the first and second letters and word shape in recognizing the words (Rayner & Hagelberg, 1975). (See Pressley & Allington (2015) for an accessible discussion of phonics and approaches for developing sight vocabulary.)

Words that have very similar word forms (e.g., affect/effect, stimulate/simulate) cause problems for both L1 and L2 learners in spoken language. L1 speakers occasionally produce the wrong word (malapropism/slip-of-the-tongue - saying malicious instead of malignant) or mishear words (slip-of-the-ear - hearing Barcelona instead of carcinoma), and when this happens the words often have phonological similarities (Aitchison, 2012). The same kind of problem exists with written language. In fact, Laufer (1997) identifies formal similarity as a difficulty-inducing factor in general. There is plenty of evidence to support Laufer's conclusion. For example, Laufer (1988) studied words with similar forms and found that some similarities were particularly confusing for students, especially words which were
similar except for suffixes (comprehensive/comprehensible) and for vowels (adopt/adapt). (Note that the confusions happen at the less salient middles or endings of words, i.e., the bathtub effect.) Similarly, Bensoussan and Laufer (1984) found that a mis-analysis of word forms that looked transparent but were not sometimes led to misinterpretation. Their learners interpreted outline (which looks like a transparent compound) as "out of line," and discourse (which looks as if it has a prefix) as "without direction." In reading, the most common cause of unsuccessful guessing from context in one study (Huckin & Bloch, 1993) was mistaking unknown words (e.g., optimal) for known words which were similar orthographically (e.g., optional). Even if the context did not support such erroneous guesses, the subjects often persisted with them all the same, supporting Haynes' (1993) assertion that word-shape familiarity can often override contextual information. In production, Schmitt and Zimmerman (2002) found that even advanced English L2 learners had problems producing derivative forms of target words (cohere, coherence, coherent, coherently).

Moreover, it is not only the forms of the words themselves which can lead to problems. Regardless of the word itself, if there are many other words which have a similar form in the L2 (i.e., large orthographic neighborhoods; Grainger & Dijkstra, 1992), it makes confusion more likely. For example, the word poll may not be difficult in itself, but the fact that there are many other similar forms in English can lead to potential confusion (pool, polo, pollen, pole, pall, pill).

Word recognition in reading is the receptive process dealing with written word form, while spelling is the productive side. Teachers and students often complain that the English spelling system is filled with exceptions, fueling a debate about just how consistent it really is.
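Returning to the orthographic-neighborhood point above (poll alongside pool, polo, pole, pall, pill), the idea can be sketched with a one-letter-substitution definition of "neighbor"; the toy lexicon and this deliberately narrow definition are our simplifications of the neighborhood measure:

```python
# A rough sketch of "orthographic neighborhood": here, words of the same
# length differing from the target in exactly one letter. The tiny lexicon
# and this narrow definition are simplifying assumptions for illustration.

def one_letter_neighbors(word: str, lexicon) -> list:
    """Words of the same length differing from `word` in exactly one letter."""
    def differs_by_one(a, b):
        return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1
    return [w for w in lexicon if differs_by_one(word, w)]

LEXICON = ["pool", "polo", "pollen", "pole", "pall", "pill", "poll", "milk"]
print(one_letter_neighbors("poll", LEXICON))
# ['pool', 'polo', 'pole', 'pall', 'pill'] -- a dense neighborhood,
# which is exactly what makes poll confusable for learners.
```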
Some scholars feel that the English spelling system, although it is not optimal, is reasonably systematic, and that even some of its irregularities have a functional purpose (e.g., Stubbs, 1980, Chapter 3). One example of this is that although the different members of a word family may have different pronunciations, their orthographic shape is likely to highlight their relationship, e.g., finite, infinite; Christ, Christmas; crime, criminal (Wallace & Larsen, 1978, p. 364).

On the other hand, some argue that the orthography system in place for English is not as organized and systematic as is commonly assumed, and that it is deficient in the sense that its irregularities cause problems in gaining literacy in the language (e.g., Upward, 1988). In fact, Feigenbaum (1958) states that there are 251 orthographical representations for the 44 sounds of English. From among this abundance, Upward believes that redundant characters are a major problem, and fall into three classes: (1) silent letters (b as in debt), (2) unstressed vowel sounds after stressed syllables (e in chapel; o in atom), and (3) doubled consonants (committee). He suggests adoption of a Cut Spelling System to remedy the problem, which would result in English being spelled as in the following sentence: "An importnt considration behind th Cut Speling systm is that th apearance of words shud not chanje so drasticly that peple uninstructed in th rules of CS find them hard to read" (p. 24).

Such a system shows no signs of being adopted, and so learners will have to cope with the English spelling system as it now stands. A close look at spelling mistakes
reveals that they are not often phonological errors. Alper (1942) studied 1,340 spelling mistakes from 5,000 college English compositions. Most of these made sense phonetically and followed conventional sound/symbol correspondences (e.g., the /r/ sound is represented by the character "r"). So following sound/symbol rules exclusively does not guarantee accurate spelling of the exceptions in English. Some form of visual appraisal is also necessary to tell when a word "looks right." There is some evidence to suggest that lower-ability young L1 readers primarily use a visual-orthographic route to lexical access when reading, but a phonological route to generate spelling, while better readers use both routes in reading and spelling (Barron, 1980). The phonological route is useful for phonologically regular words, while the visual route is useful for the exceptions. Better readers can use the appropriate approach as needed.

Spelling strategies seem to change with maturity, however. Between the second and fifth grades, there appears to be a major change in spelling strategy from reliance on sound/symbol correspondences toward spelling an unknown word by analogy to a known word. For example, in deciding how the vowel should be spelled in sheep, a learner can use knowledge of how sleep is spelled. This change seems to happen after the child has built up enough known words in his or her lexicon to use as models (Marsh et al., 1980).

It has also been claimed that imagery has a part to play. A mental image of a word's orthography can be generated from the visual experience of that word, and the image used to facilitate spelling, especially of familiar words:

Findings indicate that orthographic images can be scanned like real words seen in print, that they include all of the letters in a word's spelling . . . [and] that silent letters may have a special status in these images.
Findings suggest that the presence of orthographic images in memory increases the likelihood that the spellings produced by readers resemble . . . [the correct spelling] rather than phonetic variants. (Ehri, 1980, p. 338)

In the end, reading and spelling cannot be simplistically considered two sides of the same coin. L1 children between 7 and 10 years of age were often found to approach the reading and writing of the same words in different ways (Bryant & Bradley, 1980), and in some cases they could spell out words phonologically which they were not previously able to read (receptive knowledge does not always come before productive knowledge for every word!). It is probably safest, therefore, not to assume that productive knowledge of a word's orthography (spelling) implies the receptive ability to recognize it, or vice versa.

Looking at second language orthographic knowledge from a crosslinguistic perspective, it is clear that a learner's L1 orthographic system plays a strong role in shaping his or her L2 processing. There are three major types of orthographic systems used in languages around the world - logographic, syllabic, and alphabetic. In logographic systems, the grapheme (smallest unit in a writing system) represents a concept, such as in the Chinese writing system (女 = woman). In syllabic
systems, the grapheme represents syllables, such as in the Japanese hiragana (たまご [tamago] = egg). In alphabetic systems like English, the grapheme corresponds to phonemes (the smallest unit of sound which can distinguish two words, e.g., pan and ban). Each of these systems leads to different processing strategies, particularly concerning the relative importance of visual versus phonological processing. It is likely that these strategies are carried over into the L2. Students learning an L2 that is similar in orthographic type to their L1 should have fewer problems with it than if it is different in type. L2 instruction should be individualized to account for these processing differences, particularly giving explicit instruction in the L2 orthographic system. (See Koda (2012) for more on these issues.)

To illustrate such crosslinguistic orthography problems, let us consider Arabic-speaking learners of English. The problems these students often have with English orthography seem to stem from the fact that Arabic is based on tri-consonantal roots, with vowels being of lesser importance. When recognition strategies based on these tri-consonants are transferred into English, there can be an "indifference to vowels" which often results in misrecognized words: moments being confused with monuments (same underlying MMT structure), and pulls for plus (PLS) (Ryan, 1994, 1997). Ryan suggests using a diagnostic test at the beginning of a course to find students who might be prone to these kinds of problems. This might be a useful idea for any L2 student who comes from an orthographic system that is different from English.

Another crosslinguistic factor of importance is how closely the orthographic and phonological systems correspond within each of the different languages. The "orthographic depth" can range from a close sound/symbol relationship (shallow language, e.g., Serbo-Croat) to a much weaker correspondence (deep language, e.g., Hebrew).
Speakers of orthographically shallow languages will tend to generate phonology directly from written text, because the written form is a reliable guide to the spoken form. On the other hand, speakers of orthographically deep languages need to derive phonology from their internal knowledge of the words, because their orthographies are not reliable guides to the words' phonological forms. Both methods probably exist for all languages, but their weight of usage will depend on the depth of orthography of the particular L1 (Frost, Katz, & Bentin, 1987).

Learners may well carry over their L1 strategies to their L2, even if the L2 is quite different in orthographic depth, which would most likely cause problems. This certainly appears to be the case with Spanish speakers (shallow orthography) learning English (deeper orthography) (Suarez & Meara, 1989). Learners in this situation will probably need help in adopting more appropriate strategies, and this implies that L2 teachers need to be knowledgeable enough about the target language to be able to suggest appropriate strategies.
3.2.4 Form - Word Parts (Morphology) Morphology deals with affixes and how they are attached to the base forms of words. Laufer (1997) suggests that if derivational affixes are transparent, then
learning is facilitated. For example, if students know the meaning of -ful, it should not be too difficult to recognize the meaning of new words like useful or careful as long as the base forms are already known. However, a lack of consistency can cause problems even if the affix is transparent. Someone having a special skill is a specialist, a person who is pragmatic is a pragmatist, but a person who acts on stage is an actor not an *actist. This is more likely to cause problems in production than comprehension, although learners do confuse affixes receptively as well, e.g., believing cooker means a person who cooks. Laufer points out a similar problem in word compounding. In what she terms "deceptive transparency," words consisting of apparently meaningful and transparent parts can cause considerable confusion for unwary learners. She gives the example of inconstant, where in does not mean "inside." Yet some students in her experiments (Laufer & Bensoussan, 1982; Bensoussan & Laufer, 1984) interpreted inconstant as "an internal constant." Other confusing words were nevertheless, glossed as "never less," and on the grounds that, glossed as "on the earth." The learner assumption in these cases was that the meaning of the words equalled the sum of the meanings of their parts. While making this assumption can be a useful strategy in many cases, with deceptively transparent words it unfortunately leads to incorrect guesses. Unsurprisingly, in a later experiment Laufer (1989) found that subjects made more errors with deceptively transparent words than with non-deceptively transparent words. So words that look simple to analyze, but are in fact not, are more difficult to learn.
3.2.4.1 Processing of Affixes The manner in which the mental lexicon handles affixes depends partly on what kind of affixes they are. According to Aitchison (2012), inflections generally seem to be added to base forms in the course of speech. The exceptions are words which are most commonly used in their inflected forms, such as peas and lips. These words may be "welded together" and stored as wholes as a result of massive exposure. On the other hand, derivations seem to be stored as single units (resentful), which can be analyzed into their components (resent + ful), if necessary. As for prefixes, if they are obligatory (rejuvenate), they are stored as part of the word. Non-obligatory prefixes (unhappy) probably are as well, or there would be more cases of prefix errors (*dishappy, *nonhappy). (See Aitchison, 2012, for more on compounding and affixation.)
3.2.4.2 Relative Difficulty of Various Affixes
The idea that morphemes might be learned in a particular sequence (thus implying individual degrees of difficulty) began with the "morpheme studies" of Dulay and Burt (1973, 1974). They studied Spanish- and Chinese-speaking children aged six to eight and found a similar ordering in the children's acquisition of inflectional morphemes. These results were broadly confirmed by Larsen-Freeman's (1975) study of L2 adults. But it was difficult to set the order for individual morphemes,
and after reviewing more than a dozen studies, Krashen (1977) hypothesized that the morphemes clustered into the following levels:

Acquired earlier
  -ing
  plural
  copula
  auxiliary
  article
  irregular past
  regular past
  third-person singular
  possessive
Acquired later

The methodology of the various morpheme studies was called into question (see Long & Sato (1984) for a review), but by 1991, Larsen-Freeman and Long had concluded that there was simply too much evidence of some kind of ordering to be ignored. At the same time, results from studies carried out by Pienemann and his colleagues suggested that the underlying basis for the ordering was cognitive processing constraints (e.g., Pienemann, 1984). Bauer and Nation (1993) used linguistic criteria instead of acquisitional criteria to inform a hierarchy of affixes. They focused on the ease or difficulty of understanding affixed words when encountered in written texts. Their linguistic criteria resulted in the following seven levels:

Level 1. Each form is a different word. Each derivative is counted as a separate type.
Level 2. Inflectional suffixes. Base words and their inflections are considered part of the same word family. Affixes include the plural, third-person singular present tense, past tense, past participle, -ing, comparative, superlative, and possessive.
Level 3. The most frequent and regular derivational affixes. The affixes include -able, -er, -ish, -less, -ly, -ness, -th, -y, non-, and un-.
Level 4. Frequent, orthographically regular affixes. The affixes are -al, -ation, -ess, -ful, -ism, -ist, -ity, -ize, -ment, -ous, and in-, all with restricted uses.
Level 5. Regular but infrequent affixes. These affixes are not general enough to add greatly to the number of words that can be understood. They include -age, -al, -ally, -an, -ance, -ant, -ary, -atory, -dom, -eer, -en, -ence, -ent, -ery, -ese, -esque, -ette, -hood, -ian, -ite, -let, -ling, -ly, -most, -ory, -ship, -ward, -ways, -wise, anti-, ante-, arch-, bi-, circum-, counter-, en-, ex-, fore-, hyper-, inter-, mid-, mis-, neo-, post-, pro-, semi-, sub-, and un-.
Level 6. Frequent but irregular affixes. These affixes cause major problems in segmentation. Some of these affixes are already listed above; those can be considered the transparent cases, while these are the opaque cases. They include -able, -ee, -ic, -ify, -ion, -ist, -ition, -ive, -th, -y, pre-, and re-.
Level 7. Classical roots and affixes. Bauer and Nation do not deal with these roots and affixes, except to suggest that they should be explicitly taught to learners, and to note that many frequent English prefixes belong here, such as ab-, ad-, com-, de-, dis-, ex-, and sub-.

Bauer and Nation succeeded in creating a hierarchy of affixes that is widely used in the discussion of vocabulary, but their linguistically based hierarchy does not necessarily equate to how difficult learners find the various affixes in practice. Since 1993, there have been several studies which have empirically explored the actual difficulty by measuring learner knowledge of the affixes. Schmitt and Meara (1997) found that inflectional suffixes were generally known better than derivational suffixes by Japanese high school and university students, although -ment was also well known. But Schmitt and Meara's knowledge-based hierarchy of difficulty bore little resemblance to Bauer and Nation's linguistically based one. Also using Japanese high school and university participants, Mochizuki and Aizawa (2000) measured Levels 3-6 in Bauer and Nation's hierarchy (so no inflectional affixes). Their results did not correspond particularly well with Bauer and Nation's hierarchy either. Ward and Chuenjundaeng (2009) investigated Thai university students and found that -tion (Bauer & Nation Level 6) and -er (Level 3) were generally known better than -ment and -ity (both Level 4). Sasao and Webb (2017) used three different measures and developed a hierarchy of the difficulty of 118 affixes based on 417 Japanese university students. Again, we find little correspondence with Bauer and Nation's hierarchy.
Based on these studies, we can conclude that although Bauer and Nation's hierarchy might be a useful resource to discuss the frequency, transparency, and consistency of affixes, it is not a good tool to predict the likelihood of learners actually knowing the various affixes. Although it is difficult to compare the above studies as they have used different methodologies and measures, there does seem to be some consistency in their results. More studies with a wider range of L1s are needed, as L1 almost certainly affects L2 morphological knowledge. Moving beyond the knowledge of affixes themselves, the next issue is the degree to which L2 learners know the words which incorporate affixes. This has typically been operationalized in terms of whether learners know the various members of word families. In terms of productive mastery, it is clear that learners typically have considerable gaps in their knowledge. Schmitt and Meara (1997) found that their learners were able to produce between 49 and 86 percent of the inflectional affixes (-ed, -ing, -s) attached to the target words. They also knew -ment fairly well (51-76%). But they only knew between 1 and 39 percent of the derivational affixes (e.g., -ly, -er, -al). Schmitt and Zimmerman (2002) found that their advanced L2
learners were typically able to produce only two or three of the four main derivatives for the target word families in the AWL. The noun and verb forms were usually the best known, but learners had much more trouble producing the adjective and adverb forms. In terms of receptive mastery, we might expect higher scores. Nation (2016) argues that if learners know one family member (access), they should be able to recognize or work out unfamiliar derivatives as semantically related words when they are encountered in a meaningful context (accesses, accessibility, accessible, accessibly, etc.). But this may be overly optimistic (especially for beginners), as there are several reasons why L2 learners might struggle with word families even receptively. Many semantically related words in English are not related transparently by affixation: e.g., if *stealer were the agent form of steal, learners may well make the connection by analogy with words like farm → farmer and swim → swimmer. But the agent forms are thief, rustler, burglar, plagiarist, etc., depending on what is being stolen. None of these words have any formal similarities with steal that could help the learner. Also, there may be several affixes that carry out the same function, e.g., the agent form can be indicated by -er (runner), -ist (typist), -an (European), -ant (assistant), -ee (employee), and -ician (politician), among others. Moreover, the spellings of the affixes are not always consistent: player vs. actor. This derivational complexity surely makes relating the various word-family members to each other less transparent, which would cause difficulties for learners (Naseeb & Ibrahim, 2017). A learner's L1 plays a role in determining the degree of difficulty, with similarity of L1-L2 morphology decreasing difficulty and dissimilarity increasing it (Laufer, 1997).
This reasoning suggests that learners might not always be able to recognize the various derivative members of a word family consistently and accurately, and some evidence demonstrates this difficulty. McLean (2018) studied Japanese university students for their knowledge of English prefixes and affixes, and found that, despite knowing the target base words, their knowledge of the associated derivational affixes was very much incomplete. One of Ward and Chuenjundaeng's (2009) groups was able to provide accurate translations for less than 10 percent of derivative words containing -ion, -er, -ment, and -ity, while for the other, the percentages varied between 14 and 35 percent. Gonzalez-Fernandez and Schmitt (2019) tested 144 Spanish speakers of English with varying proficiency levels on their knowledge of four word classes (noun, verb, adjective, adverb) for twenty target words. The distribution of the number of derivatives correctly recognized on the multiple-option test was: 0 correctly identified - 12.0% of participants, 1 - 24.9%, 2 - 28.1%, 3 - 20.0%, and 4 - 15.0%. In terms of recognition, the majority of derivative forms were not recognized by most learners (although it must be noted that the words were presented in isolation rather than in context). Overall, although word parts may not seem particularly difficult, we find that they prove to be quite challenging for learners, especially in terms of productive mastery.
3.2.5 Use - Grammatical Knowledge While meaning and word form are word-knowledge components that are easy to think about in isolation (or describe in a dictionary entry), Nation (2013) also identifies several components that are necessary for words to be used appropriately in context. These are listed under "Use" in Table 3.1. Collocations will be covered in the wider discussion of formulaic language in Chapter 4. Here we will discuss grammatical functions and constraints on use. Traditional language instruction has typically divided grammar and vocabulary into separate compartments, but of course in use they are inextricably interrelated. The idea that the two work together and need to be considered in conjunction is captured by the term lexicogrammar. Such an integrated view is supported by corpus evidence which is now showing the extent of lexical patterning in discourse. Hunston, Francis, and Manning (1997) believe that most words can be described in terms of the pattern or patterns that they typically occur in. For example, they found that about twenty verbs have the pattern "VERB by ING":

1. those that mean either "start" or "finish": begin, close, end, finish, finish off, finish up, open, start, start off, start out.
   I would therefore like to finish by thanking you all most sincerely for helping us.
2. those that mean either "respond to" or "compensate for something": atone, compensate, counter, react, reciprocate, reply, respond, retaliate.
   It would retaliate by raising the duty on US imports.
3. those that mean "gaining resources": live, profit.
   Successful businessmen can see the opportunity to profit by investing in new innovations.

Thus, these verbs fall into three meaning groups which take the same syntactical patterning. This highlights the key point that "groups of words that share patterns also tend to share aspects of meaning" (p. 211). Hunston et al.
went on to analyze more than 200 verbs which take the "VERB at NOUN PHRASE" pattern and found that they too fell into recognizable meaning groups, in this case ten. Among these are:

1. verbs meaning "shout" or "make a noise": bark, hiss, scream, yell, blow up.
   All I seem to do is scream at the children and be in a muddle.
2. "make a facial expression or gesture": grin, smile, frown, scowl, wave.
   He turned to wave at the waiting photographers.
3. "look": glance, gaze, look, peer, stare.
   She glanced at her watch.
4. "react": laugh, rage, shudder, grumble, marvel.
Jane shuddered at the thought of being stranded here. Again we see the connection between verbs and their meanings and the patterning that they take. From this kind of evidence, Hunston et al. argue that patterns like these are key elements in language and might even be considered its building blocks. Lexical patterning is one of the most exciting strands of current vocabulary study, and it will be expanded upon in the next chapter with discussion of collocations and formulaic language in general. However, from a more traditional view, two of the most obvious aspects of grammar are morphology and word class. (Others include notions such as countability and valency.) We have already covered morphology in our discussion of word parts above. (Note how word-knowledge components are interrelated and are difficult to pigeonhole into single neat categories.) Let us now look at word class. Word class (alternatively part of speech) describes the category of grammatical behavior of a word. There are a number of potential word classes, but the majority of language research has concentrated on the four major categories of noun, verb, adjective, and adverb. The results from a number of studies suggest that certain word classes are easier to learn than others. In an early study, Morgan and Bonham (1944) looked at these classes and found that nouns were clearly the quickest to be learned, with adverbs being generally the most difficult part of speech. The subjects in Phillips' (1981) study learned nouns better than verbs or adjectives, but the difference decreased with the increase in the learners' proficiency. For subjects learning Russian-English pairs of words, pairs in which the Russian word was a noun or an adjective were easier to learn than pairs in which the item was a verb or an adverb (Rodgers, 1969). 
More recently, Schmitt and Zimmerman (2002) found that their advanced learners knew (form recall) the noun and verb members of academic word families better than adjective or adverb members. Thus, it would appear that nouns are the easiest word class, adverbs the most difficult, with adjectives and verbs occupying the middle ground. (Even though the adverb form may seem simple, with -ly being the dominant suffix, the adjective form must usually also be known in order to attach -ly to it.) Regardless of whether any particular word class is easier or more difficult than others, there does not seem to be any doubt that word class is involved in the learning and storage of vocabulary. Let us look at the psycholinguistic evidence for this statement. When malapropisms are made, the errors almost always retain the word class of the intended target word.

I looked in the calendar (catalog).
The tumour was not malicious (malignant).
It's a good way to contemplate (compensate).
(Aitchison, 2012, p. 120)

Similarly, "tip-of-the-tongue" guesses also tend to retain word class. This suggests that words from the same word class are closely linked, with nouns having the strongest affinity. In contrast, words from different word classes are relatively
loosely linked. Certain aphasics (people who have lost some language ability because of brain damage) retain their use of nouns but are largely unable to utilize verbs, indicating at the very least that nouns and verbs are stored somewhat differently. It is obvious that most L1 speakers possess and can utilize knowledge of a word's part of speech, even if they are not able to explicitly name the word class. (Interestingly, Alderson, Clapham, & Steel (1997) found it is quite common for L1 speakers to be unable to explicitly name a word's part of speech.) But is the same true of L2 speakers? The common assumption seems to be that word-class knowledge easily transfers from the L1 to the L2. Odlin and Natalicio (1982) established that this is not necessarily true. They found that intermediate and advanced ESL students did not always know the word class of words they knew the meaning of. They claim that "acquisition of the semantic content of target language words does not always entail acquisition of the target language grammatical classification of those words" (p. 35). However, on the positive side, both intermediate and advanced L2 students were able to identify the word classes by name about 75 percent of the time, albeit for target words of a relatively high frequency. The conclusion is that non-beginner L2 learners are likely to know the word class of at least the more frequent words, but there is also the possibility that they will not know it even if they know a word's meaning. There is also some evidence that more-advanced learners seem to recognize the value of knowing word class. In a survey of Japanese learners of English, 55 percent of junior high school students indicated that using part of speech to help remember words is a helpful strategy. For high school students, this percentage increased to 67 percent, while 85 percent of university and adult learners rated it as helpful (Schmitt, 1997).
3.2.6 Use - Register/Formality/Stylistic Constraints What makes a word the best choice in a particular context? While it must be the correct word class, and be appropriate phraseologically, there is often something additional that makes a particular word the best selection. This is often connected to the idea of connotation introduced in Section 3.2.1. The way that important, and often implicit, connotation meaning colors a word and constrains how we use it is referred to in several different ways (stylistic constraints, appropriacy, formality), but we will use the term register. This describes the stylistic variations that make each word more or less appropriate for certain language situations or language purposes. Because implicit meaning information can be of several different types, register is a somewhat broad category. Nevertheless, there have been several attempts to describe the different types of register variation. Chiu (1972) (in a study made famous by a better-known paper by Richards, 1976) suggests six areas where there can be such variation, although some of these were mentioned as far back as 1939 by Collinson. Temporal variation covers the continuum of how old-fashioned
or contemporary words are. Language is a living thing ever in flux, in which words are constantly falling out of use, while others are being created to take their place. Proficient language users sense this, and words which are archaic or becoming so gain a register marker in people's minds to signal the out-of-use status. On the other end of the spectrum, it is possible for words to have a current or cutting-edge feel, as selfie (taking a photograph of oneself with a smartphone) has for many people at the time of this writing. Sometimes words change their temporal marking with new uses. Wireless was current in the early 1900s as the exciting new means of radio communication, then fell out of use as the term radio displaced it. It then acquired a renewed currency with the high-tech meaning connected to cordless computing, which has now become so established that it has largely lost any sense of newness. Geographical variation refers to the way that a common language differs according to where it is spoken. The variation can be divided among countries which speak the same language, in which case the variations are called language varieties (Indian English, Australian English). If the divisions are within a country, they are known as language dialects. Perhaps the most noticeable indicator of a person's home region is their phonological "accent," but geographical variation also refers to the lexical choices they make. For example, where Norbert grew up in Idaho, the small storage space on the passenger side of a car is called a jockey box, but in Minnesota, where Diane is from, it is called a glove compartment. In normal situations, such lexical choices are probably not consciously manipulated, and are only noticeable when one is exposed to the spoken or written discourse of someone outside one's immediate language community. The third type of variation is social variation.
It is said that people in privileged classes typically use a somewhat different vocabulary from people in less privileged classes. The amount of social variation will probably differ from country to country depending on the rigidity of the social class system and on perceptions of the desirability of any particular variety. Richards (1976) gives the example that members of privileged classes refer to a female as a lady, where otherwise she is referred to as a woman (although this example may be out of date). Other examples (in British English) include lunch (privileged) vs. dinner (less privileged) and settee (privileged) vs. sofa/couch (less privileged). Social role variation covers the role of power or social relationship between interlocutors (people engaging in discourse), which directly affects the level of formality each uses. If one is speaking to a social superior, someone it is desirable to impress, or a stranger, polite deference is usually partially indicated by using more formal words (as well as more indirect syntactical structures) than one would use if addressing one's peers or friends. As everyone interacts with numerous people of varying relative power status, this implies that social role variation is routinely and consciously manipulated. Contrasting social role variation with geographical variation, we see that the amount of conscious control a person has is very likely to vary with type of register. Some register types seem to be largely unconscious and therefore likely to be less responsive to deliberate change in any particular situation
(geographical variation), while others are obviously more amenable to conscious control (variation stemming from social role). The topic being discussed can also affect the type of language used. This field of discourse variation stems from the fact that many fields have a genre, or expected style of discourse, which determines appropriate language use. This often concerns syntax (e.g., using passive voice constructions in academic discourse), but it also involves word-choice constraints. In academic discourse, we has traditionally been preferred to I, even in cases of a single author, presumably because of a greater sense of objectivity. In addition, each field has its own technical vocabulary or jargon, whose use is expected, and whose non-use can be marked (salient because it is not the expected norm). Gregory (1967) suggests that every field has a set of technical words restricted to people familiar with that field. They also use non-technical words which are usable in many fields but with different register ramifications in each one. Chiu's final register area is mode of discourse variation; that is, some words are more appropriate to written discourse than oral discourse, as the former is normally more organized and formal than the latter. For example, yeah is the eighth most frequent word in the Cambridge and Nottingham Corpus of Discourse in English (CANCODE) corpus of spoken English, while it is rather infrequent in written texts (McCarthy & Carter, 1997). Halliday (1978) developed a different description of the components of register variation. His influential framework divides register into three basic components: field, tenor, and mode. He uses them in an attempt to capture how vocabulary selection is constrained by the complex interactions between "the content of the message, its sender and receiver, its situation and purpose, and how it is communicated" (McCarthy, 1990, p. 61).
Field covers the content and purpose of a message, such as an owner's manual explaining how to operate an appliance. Tenor refers to the relationship between interlocutors, which is very similar to the social role variation discussed above. Mode describes the channel of communication, that is, whether the message is spoken or written, and how it is physically transferred, for example, via telephone, novel, or drum. Halliday's description of register suggests that we can view register competency as (a) knowledge of the various kinds of register marking that a word may have, and (b) knowing how to apply that knowledge of a word's register marking to achieve the effect one desires linguistically. To formulate this in a slightly different way, we could say the following: For every familiar word, language users know varying amounts of (1) the above kinds of register information for the word and what the normal applications of the word are, and (2) what the effects are of using the word (with its register marking) in a number of different situations. People choose to use words with a certain type of register marking with the purpose of conforming to or diverging from their interlocutor's expectations. Most of the time, one would choose to use words with the kind of register marking one's interlocutor expects, because this is the way to maintain communication and build solidarity. Benson and Greaves (1981) partially explain this by stating that, in order to communicate, we must work within a mutually understood
field of discourse. Choice of lexis gives an indication of this field (e.g., academic discourse or a car repair manual) by utilizing both lexical items that are particular to the field and more general words which have acquired a technical meaning in the field. If this flow of expected specialized vocabulary stops or is changed, then communication breaks down; if the flow is maintained, communication continues. Thus, maintaining register of the field of discourse variation type is an important support for continuing communication. Any individual word can carry a number of different kinds of register marking. For example, mosey is not only old-fashioned, but is also restricted mostly to rural American usage. Different words also carry different levels (strengths) of register marking. Some are very highly marked (pissed, anon), where others carry little, if any, marking at all. The amount of register marking is connected to the lexical specificity of the word (Cruse, 1977, 1986, pp. 153-155): More specific words tend to have more register marking, less specific words less register marking. Let us look at this in terms of synonyms, for example guffaw, chuckle, giggle, laugh, jeer, snigger. In this set, laugh is the most essential, because it is the most frequent and because the others require its use in a definition (chuckle = laugh quietly) (Carter, 1982). Being the most basic word, laugh naturally has the least amount of register marking, because it is widely used in a variety of contexts. As one moves away from the most basic, frequent, usable item, words acquire greater and greater amounts of register marking. Giggle would be less likely to be used with adult males, for instance. Another illustrative example is glass, which is a neutral, general term. If we become more specific by speaking of its subordinates, like flute, goblet, or juice glass, we start to gain register marking.
Likewise, going in a superordinate direction, toward an item like drinking vessel, also increases the marking, as the situations where this term would be naturally used are more restricted. The hyponyms offspring - child - infant work in a similar way, with child being the most neutral. If we think of register as stylistic constraints, we can visualize how this works. The core word of the group can be used in the greatest number of situations (glass could be used for any meal), but as hyponyms and near synonyms have increasingly greater levels of register information attached to them, the possible situations where they can be used appropriately decrease accordingly. For instance, flute has the sense of being suitable for more formal occasions and for only particular kinds of alcohol. It would probably not be suitable for the average breakfast or lunch. Thus, specific situations require specific vocabulary; and the register information attached to words allows language users to select the best word for each situation. Greater register marking may also serve an interactional purpose; Robinson (1988) believes more specific or marked words show liking, "immediacy," and willingness to continue conversation.

1. Tom let me drive his new car/Aston Martin.
2. A: I thought the film was good tonight.
   B: Yes it was nice/fantastic. (adapted from Robinson, 1988)
These examples illustrate how the more marked option (the second choice in each pair) may project greater involvement and interest in the topic at hand. (See McCarthy, 1984, for further discussion.) The upshot of this is that register is a complex set of information which is affected by a number of different factors, among them: what subject field is being discussed, who the interlocutors are and what their social relationship is to the speaker or writer, whether the discourse is spoken or written, and what purpose the speaker or writer has in mind. If the speaker or writer is competent, then she or he will judge the situation and select the word from a group of known hyponyms or near synonyms believed to have the desired effect. When a person is not concerned with register considerations, as in an informal conversation, then the choice will tend toward more common, less specific words. Register becomes more salient, however, when there is a specific purpose to be achieved by the communication - in an interview or when writing an academic thesis, for example. But in all of these cases, there are lexical choices affected by register constraints, whether they are conscious or not. Some register aspects may almost always be totally unconscious, such as geographical variation, but these still carry register information, and may become more noticeable to a person as she or he gains more exposure to a different norm of discourse (such as by living in a foreign country that uses the same language).
3.3 Applications to Teaching

In this chapter, we have presented a number of ways of conceptualizing vocabulary knowledge. All are instructive, although they highlight different facets of lexical knowledge. Each leads to practical teaching implications. The Developmental Approach (partial → precise knowledge) indicates that vocabulary knowledge is incremental in nature and cannot be seen as a simple knows/doesn't know dichotomy. The Word Knowledge Approach also emphasizes the incremental nature of vocabulary acquisition, as it is impossible to master all of the word-knowledge components simultaneously. Although they are learned in parallel, some will typically be mastered later than others (Gonzalez-Fernandez & Schmitt, 2019). The word-knowledge framework also emphasizes that there is more to vocabulary knowledge than just form and meaning. Although the form-meaning link is the obvious place to start vocabulary learning, it cannot end there. Appropriate use of vocabulary will also require knowledge of the more contextual types of word knowledge, including grammatical knowledge, morphology, and register. This can be seen as adding to the depth of vocabulary knowledge. Considering vocabulary in terms of receptive and productive mastery emphasizes its value for real-world communication in terms of listening/reading and speaking/writing. Teachers need to be clear about their students' needs. If they are only listening/reading, then teaching vocabulary to a receptive level of mastery is sufficient. But if students need to be able to speak and write, then a productive level of
mastery is required. Research suggests that it is a challenge for students to reach a productive level, and it certainly cannot be assumed that receptive mastery will magically develop into productive knowledge without sustained study and effort. In fact, Schmitt (2019) suggests that it is much more difficult to take students from receptive to productive mastery than it is to take students from zero knowledge to receptive mastery. The incremental and multidimensional nature of vocabulary learning means that words need to be learned over time. This privileges the value of both sustained study and recycling. With so much to learn about every word, it is not surprising that it takes a great number of exposures to truly achieve mastery. Some of these exposures can usefully come from explicit study, especially initially. Explicit study can "jump start" learning, by putting a preliminary understanding of the form-meaning link into place. This fragile initial understanding can then be consolidated by further study. It also provides the foundation to gain more value from incidental exposure, where learners see the word in a variety of contexts. In essence, vocabulary learning is largely about maximizing recycling. It leads to more precise knowledge of meaning, gives exposure to more word-knowledge components, and helps move knowledge along the receptive–productive continuum. This means that both teachers and learners need to think in terms of the long term, because multiple recyclings of large numbers of words are only possible over extended periods of time. While some form-meaning knowledge of quite a large number of words (size) can be learned shallowly by short-term cramming (e.g., by using word lists), only longer-term exposure to words in a rich variety of contexts will lead to the depth required to use them appropriately. An understanding of the individual word-knowledge components also yields useful pedagogical insights.
Starting with meaning, the first implication is that a useful distinction can be made between proper nouns and words which represent categories of things. When teaching the meaning of a proper noun, it may be sufficient to merely exemplify its referent in some way, such as with a picture of the Eiffel Tower, or an explanation of it. Since the referent is a single, unique entity in the case of proper nouns, there should be little problem in delineating what the word represents. In addition, since the single referent is usually fixed and unchanging, one exemplification may be enough to adequately define that word. On the other hand, words which represent categories usually require more information to give students an adequate understanding of their meaning. Teachers often define these words by giving a list of semantic features for that category, or by listing the subordinates of a superordinate term. In either case, it is usually necessary to give negative examples of what a category isn't as well as positive information of what it is (Carroll, 1964). For example, when a student asks a teacher what sprint means, the teacher might well try to explain by going down a list of semantic features: sprint involves moving quickly, either by one's own locomotion or in a mechanical vehicle, often at the end of a race. But if the explanation stopped there,
students may have the impression that a runner could sprint for a whole marathon. So teachers need to give information of what sprint is not (it does not usually describe a long, sustained, endurance type of effort), so that students can begin to understand the word's meaning limitations. In this way, the semantic boundary between sprint and run can start to form. For the superordinate vehicles, positive examples could be its subordinates cars, buses, trams, and trucks, but not horses or skateboards. The negative examples help to show that typical vehicles are mechanical means of transport that have motors. As we can see, understanding the notions of semantic features and sense relations is important to teachers because they are typical means of defining new words. As such, teachers also need to recognize their limitations. A common way of defining words is giving synonyms and antonyms, for example eavesdrop = "listen" or shallow = "not deep." These are perfectly good methods of giving the initial impression of a word's meaning, but students will need more examples and exposure to these words in order to master the extent and limitations of a word's "fuzzy" meaning. At a later point in their study, students can be made aware that very few words are completely synonymous or exact opposites, and that the definitions they were given initially are only inexact representations of the word's true meaning. In addition, once synonyms have been learned, exercises like the following can be used to start to differentiate the nuances of meaning.
In English there are adjectives which are normal and adjectives which are extreme. For example, good is a normal adjective and wonderful is much more extreme. Here is a list of extreme adjectives. Use a dictionary to find the normal adjectives for each extreme adjective.

     Normal    Extreme
 1.  hot       boiling
 2.  ______    enormous
 3.  ______    delicious
 4.  ______    tiny
 5.  ______    exhausted
 6.  ______    freezing
 7.  ______    awful
 8.  ______    filthy
 9.  ______    ancient
10.  ______    wonderful

(Redman & Ellis, 1989, p. 38)
Teachers will commonly concentrate on core meaning aspects when they first define a word, as these capture the essence of the word's meaning. When dealing with aspects that are more encyclopedic in nature, they should be aware that these can vary, and that students, particularly in mixed-culture classrooms, may have vastly different ideas about them. For example, all students should agree that food is something which is eaten for nourishment (core meaning), but in different cultures this may represent very different edible substances and may be attached to quite different eating rituals. There is sometimes much more to a word than denotative meaning, implying that teachers need to consider how to incorporate register information into their vocabulary teaching. They cannot assume that dictionaries will adequately address this information, as it is sometimes quite difficult to explain, and so dictionaries are not necessarily a good source (Hartmann, 1981). Teaching register requires teachers to be "tuned in" to register in the first place, and not just be satisfied with teaching meaning alone. Not all meaning senses and register information can be taught together in a single instance, since students are unlikely to be able to absorb everything during an initial exposure to a word. Teaching register information effectively requires that learners receive multiple exposures to words in different contexts so that they identify register patterns and learn alternative words when a different register is required. Because some words have multiple register marking (as in the example of mosey) and others have few, if any (walk), teachers must determine whether a target word has register constraints and, if so, whether or when to teach them. If the word has register marking which would stigmatize students if used in certain situations, then students need to be made aware of this.
Examples of this include "swear" or taboo words (damn) or words which can be considered offensive to a particular gender or race (broad to refer to a woman; the Hawaiian term haole to refer to non-native Hawaiians). But even slang words can be offensive to some people. McCarthy and O'Dell (2012, p. 198) give an indication of this in their explanation of (British) slang, with a sensible caveat about its use:

Slang
Slang is extremely colloquial language. Slang helps to make speech vivid, colourful, and interesting but it can easily be used inappropriately. Slang is mainly used in speech but it is also often found in the popular press and in literature. Slang changes very quickly, so some words go out of fashion, but you may meet them in novels and films. Some slang expressions may cause offence to some people. Here are some examples you may hear or read.
Expressions for money: bread, dosh, readies
Expressions for food and drinks: nosh, grub, cuppa (cup of tea)
People obsessed with computers and other equipment: nerd, anorak
Jobs: quack (doctor), the old bill/the bill (the police), squaddie (soldier of low rank)

Language help
If you are interested in slang, you can find more examples in films or in the tabloid press but it is probably safest to keep it in your passive rather than in your active vocabulary.
Beyond this, when should a teacher highlight register? Because register is inherently related to context, the answer must depend on the situation each teacher finds themselves in. If a teacher is working in the USA, and her students are reading a transportation book which was written in Britain, then it is probably worth pointing out that lorry is the British English term for truck. Of course, if the teacher were working in the UK, this would probably be unnecessary. If the teacher is teaching academic English for the purpose of writing university essays, then it is useful for students to know that acquire may be more appropriate than get, because it has a more formal and academic register tone. As a general rule, the stronger the register marking, the more necessary it is for students to know, because strongly marked words can be used appropriately in relatively fewer contexts. A congruent idea stemming from the fact that register and context are interrelated is that register is best taught in context. In addition to explaining register marking, it is particularly useful to describe the contexts the word would typically be used in, and give some examples. For instance, anon is an old-fashioned word mainly occurring nowadays in Shakespeare's work (I come anon [soon]: Romeo and Juliet). This contextualization gives students a much better idea of how the word is used appropriately and where to expect it than the contextless explanation of "old-fashioned word." The notion of register also provides some guidance in determining which words to teach. First, as field-specific vocabulary is important to maintain communication in that field (Benson & Greaves, 1981), learners will need to learn technical vocabulary if they want to be proficient in their specific fields (see Chapter 5 for more on technical vocabulary). Second, register is connected with the pragmatic issue of getting things done with language.
If students have specific language purposes, they may need vocabulary with certain register marking in order to achieve them. For example, if students will find themselves in a power-inferior position with interlocutors from whom they desire something, then words with a polite register marking need to be taught. Pragmatic language commonly occurs in formulas (strings of words that are commonly used to achieve some purpose, e.g., requests: Would you please ___?), so these strings with their own register marking may also be required in addition to individual words. Third, because some types of register (e.g., geographical variation) are particular to certain speech communities, a student may use a word in a different speech community without any awareness of its effect, so teachers need to watch for those words that may inhibit a student from smoothly integrating with a new speech community and help the student find alternatives which are more appropriate in the new environment. Fourth, teachers may choose to teach different words depending on whether the focus of the lesson is on written language or speech. Fifth, when teaching beginners, it is useful to teach frequent words without much register marking, because these will be of the most all-around use to the students. As students' needs change and they need to function in a wider range of specific situations, they will need words with more register marking.
Moving our attention to form, the importance of recognizing written word form is clear for fluent reading. Since research has shown that readers do fixate on most words, it is an advantage to have as large a vocabulary as possible to recognize any word that happens to come up. However, being able to recognize a word is not enough; it needs to be recognized quickly in order to facilitate fluent reading. In fact, there seems to be a threshold reading speed under which comprehension is quite difficult. This is because slow reading, where words are decoded individually in a word-by-word manner, makes it difficult to grasp the meaning and overall organization of the connected discourse. Above the threshold speed, the flow and logical progression of ideas can be appreciated. Anderson (1999) suggests that reading at a speed of about 200 words per minute with a minimum of 70 percent comprehension is necessary, with anything lower adversely affecting the ability to extract meaning from the text. This compares with a reading rate of between 250 and 300 words per minute for fluent L1 readers for most texts (Grabe, 2009, p. 289). To reach these sorts of reading speeds, the words in one's vocabulary need to be mastered at the "sight vocabulary" level. To build up the speed of recognition, both reading and vocabulary specialists recommend the use of timed exercises, such as those below, which focus on both individual words and phrases (e.g., Anderson, 1999; Folse, 2004; Mikulecky, 1990). These types of exercises are easy to make by drawing on keywords or phrases from word lists you are focusing on in class or words from the texts your students are reading. The distractor words/phrases should be similar in form to the keywords/phrases, but also of a high enough frequency that your students already know them. This is because the exercises are meant to focus on the recognition speed of words your students know, rather than on dealing with unknown words.
Underline the keyword in each line as quickly as possible.

Keyword: close    class    cloze    close    clash    crash
Keyword: bake     bike     book     boot     beak     bake
Keyword: watch    watch    waste    wasp     washed   worst
Keyword: catch    can't    cash     catch    cat      chance

Underline the key phrase when it occurs. Key phrase: buy a book

read a book      buy a book      buy books       read a paper     sell a book      book buyer
by a book        see a book      buy a magazine  buy a book       bring a book     buy two books
read two books   sell a paper    book seller     by the book      buy some books   buy a book
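Exercise lines like those above can be generated semi-automatically. The sketch below follows the guidance in the text (distractors should be similar in form to the keyword and drawn from words students already know); the similarity rule and the sample word list are simplifying assumptions for illustration, not part of the original materials.

```python
import random

def recognition_line(keyword, known_words, n_distractors=4, seed=0):
    """Build one timed-recognition line: the keyword hidden among
    form-similar distractors drawn from words students already know."""
    def similar(w):
        # Crude form-similarity: same first letter, length within 1 character.
        return (w != keyword and w[0] == keyword[0]
                and abs(len(w) - len(keyword)) <= 1)
    pool = [w for w in known_words if similar(w)]
    rng = random.Random(seed)
    line = rng.sample(pool, min(n_distractors, len(pool))) + [keyword]
    rng.shuffle(line)  # hide the keyword at a random position
    return line

known = ["class", "cloze", "clash", "crash", "clock", "close"]
print(recognition_line("close", known))
```

A real exercise generator would of course use a proper similarity measure (e.g., edit distance) and a frequency list to filter for known words, but the principle is the same.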
Such recognition exercises should be a staple activity for beginners who are at the decoding stage of reading, but they are also important for learners at higher levels who often still read at speeds below the thresholds required for fluent reading. They can be done for a short period at the beginning or end of every class, and have the advantage of focusing students' attention on the importance of both vocabulary and reading. When teaching vocabulary and reading, the teacher may sometimes have to decide which will receive the major focus for a particular classroom segment. If the pedagogical aim is faster reading speed, then students can be encouraged to quickly guess or skip over unknown words in the text, as stopping to ponder them would slow down the reading. In this case, the students could be encouraged to reread the text later with the purpose of looking up the unknown words in order to learn them and facilitate future reading. The texts used in fluency exercises need to be graded for reading level, as those with a small number of (or no) unknown words are most appropriate. If the classroom segment has a focus on vocabulary learning, the teacher can either preview vocabulary in a pre-reading exercise or allow students to stop and look up unknown words while reading, even though this breaks up the reading process. The attention given to spelling for English LI learners is rarely matched in L2 teaching materials. Regular quizzes that focus on spelling are a simple way to focus learners' attention on spelling at any level of proficiency. Peer-to-peer spelling tests also require the word reader to pay attention to pronunciation. Although students obviously must master the sound/symbol correspondences of a language, it seems that developing a mental "image" of words is also important in phonologically ambiguous cases. 
Words with redundant characters falling into one of three categories that Upward (1988) isolates as being problematical for learners would make good candidates for such imaging. Teachers can encourage their students to imagine these words visually while they are learning to spell them. In this way they can build intuitions of whether a word "looks right" when they are spelling it. In addition, since increasingly proficient learners also use analogy to spell unknown words (Marsh et al., 1980), a valuable side benefit of having a larger vocabulary is that a person has more words to use as models. (See Shemesh & Waller (2000) for numerous spelling drills.) Given the importance of initial letters in the word-recognition process, spelling errors at the beginning of words are particularly confusing for the reader, who, even if they believe a word is misspelled, will usually assume the beginning is correct. (The same is true of spell-checkers in computer word processing programs.) This indicates that students need to be especially careful to get the initial part of the word correct in their writing. The teacher can also draw students' attention to the orthographical similarities between members of a word family, even if they are phonologically different. For example, the orthographical forms make it easy to point out to students that crime is the base form of criminal, even though this might not be so obvious when
spoken. Such grouping of related words is a main principle in vocabulary teaching and learning (although see the issue of cross-association in Chapter 7). The idea of grouping orthographically similar words can be maximally exploited by working with lemmas or word families instead of single words. Instead of just teaching indicate, for instance, it can be useful to show that it is just a part of a wider cluster of words: indicate, indicated, indicating, indicates, indication, indicative, and indicator. Because research shows that students often do not master the derivative forms of a base word, extra attention to these forms is typically warranted, particularly for academic writing. In return for the extra investment in time, students should be better able to use the correct form in any context, rather than being limited to contexts where only the verb form indicate will work. Finally, teachers should always be aware of the effect of the learners' L1 orthographical system when learning an L2. Second language learners will think and perceive orthography in ways dictated by their L1, and if it is different in kind from the L2 being taught, then explicit instruction in the L2 system will probably be necessary. The perception still exists that orthography is a "lower-level" kind of knowledge that is easily and surely acquired, but if the L1 and L2 differ, this may be far from the truth. Ryan's (1997) suggestion that students be tested for potential problems with orthography seems a sensible one, and her test for intermediate students and above can be found as part of her paper. Perhaps the most obvious implication from phonological research is the need for students to be attuned to word stress if they are to successfully parse natural, connected English speech. This means teachers need to highlight this stress information when dealing with vocabulary.
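The grouping of derivative forms around a base word can be sketched with a crude suffix-stripping heuristic. The suffix list below is a small illustrative assumption; real morphological families (e.g., the Bauer and Nation affix levels mentioned later) are far richer, but the sketch shows how orthographic similarity lets the indicate family be collected automatically.

```python
def family_key(word, suffixes=("ation", "ative", "ating", "ated", "ator", "ates", "ate")):
    """Reduce a word to a crude family key by stripping one known suffix."""
    for suf in sorted(suffixes, key=len, reverse=True):
        # Require a reasonably long remaining stem to avoid over-stripping.
        if word.endswith(suf) and len(word) > len(suf) + 2:
            return word[: -len(suf)]
    return word

words = ["indicate", "indicated", "indicating", "indicates",
         "indication", "indicative", "indicator"]
families = {}
for w in words:
    families.setdefault(family_key(w), []).append(w)
print(families)  # all seven forms share the key 'indic'
```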
Once students have a sense of the rhythm of the English language, they can fall back on the strategy of assuming strong stress indicates new words if they find themselves unable to parse a verbal string of discourse. Once the string is parsed, knowing the stress patterns of the individual words should help in decoding them. The teacher can give stress and other phonological information by pronouncing a word in isolation, but it should be realized that words can sound somewhat different when spoken together. Because of this, it is probably advantageous to also pronounce the word in the larger context of a phrase or sentence. There are several advantages for the students if this is done. First, it allows them to hear the more natural intonation which comes from speaking in connected speech. Students will usually hear the word in connected speech in the real world, so it is important that they be exposed to connected pronunciation in the classroom. Second, this gives students practice in parsing out the word from connected speech in a situation where they have the advantage of knowing that it will occur. Third, the teacher can use this opportunity to give a context that helps illustrate the meaning of the word, thus allowing the example to do double duty. Dictation activities are a good candidate for diagnosing students' ability to comprehend connected speech. Field (2008b) and Lynch (2009) offer a number of suggestions for developing this skill.
Given the bathtub effect, teachers can anticipate that learners will have the most trouble remembering the middles of words. It is not clear what should be done about this, but teachers can at least make their students aware of the phenomenon, and let them know that if they are having trouble remembering the form of a word, their impressions are likely to be the most accurate about its beginning. Research has shown that L2 learners often have difficulties with the forms of various members of a word family. Inflections do not seem to be much of a problem, but derivative forms are often not well known. There are now multiple studies showing that knowledge of a base form does not mean that all of the derivative forms will be mastered, either receptively, or, especially, productively. This indicates that teachers should consider giving a higher profile to derivative forms in their instruction. If affixes are transparent and behave as would be expected, then their acquisition should be facilitated, as Laufer (1997) claims. But affixes that are not regular can clearly cause problems. Although it does not correspond very well to the order of acquisition, the Bauer and Nation (1993) affix list is still a helpful guide to the relative linguistic difficulty of affixes. Nation (1990, p. 48) suggests that, in general, exceptions should not be introduced until any rule or regularity in patterning has been acquired. This implies that the most regular affixes in the first levels should be taught initially, and only after students are comfortable with them should the more irregular affixes be focused upon. Another approach is to use the acquisition tables in McLean (2018, Table 1) and Sasao & Webb (2017, Appendix 1) to see which affixes were generally better known (and so probably better taught earlier) and which lesser known (and so better taught after the "easier" affixes). Nation (1990, pp.
168-174) includes teaching word parts as one of three major strategies which can help students become independent vocabulary learners (guessing from context and memory techniques are the other two), and this is definitely worth explicit attention from the teacher. He illustrates a number of exercises which focus on morphology, including a form of Bingo game where students build complete words from a given prefix or stem. Another good reason for focusing on suffixes in particular is that they facilitate the learning of lemmas and word families. As mentioned above, teachers should consider having students work with lemmas or word families instead of just single words; an understanding of suffixes makes this possible. Using word parts can be a very useful strategy, but it occasionally has pitfalls (e.g., discourse does not mean "without direction"). When students use word parts as an initial word-guessing strategy, they must be careful to check the surrounding context to see if their guess makes sense. Haynes (1993) found that students sometimes made an incorrect guess about what an unknown word meant in a text, and then stuck with that meaning even though it made no sense in the context. For this reason, Clarke and Nation (1980) suggest that word parts might best be used as a confirmatory strategy to verify guesses made from context.
3.4 Summary

What does it mean to "know" a word? This chapter has shown that being able to use words well requires going well beyond learning a word's form and meaning. Vocabulary knowledge is complex and multidimensional. In addition to meaning and form, it includes other word-knowledge components, such as word parts, grammatical characteristics, collocation, and register. Even meaning is not always straightforward, as it includes encyclopedic and register information in addition to denotative meaning. All of these word-knowledge components can be known to greater or lesser degrees, which means that a word will never be just "known" or "unknown." Vocabulary acquisition is incremental, as more word-knowledge components are developed over time, and each of the components becomes relatively better mastered. The implication is that vocabulary learning is a long-term process, and a great deal of recycling is necessary for acquisition, both from explicit instruction and from incidental exposure outside the classroom. The next chapter will discuss two of the major insights from corpus research into vocabulary. The first is that some words occur more often and are more important than others. This concept of frequency is a key concept which informs all vocabulary research and pedagogy. The second is formulaic language. Up until now, the book has focused on single words, but a great deal of language consists of multi-word units. The next chapter will discuss these in detail, including one of the main categories of formulaic language: collocation.
EXERCISES FOR EXPANSION

1. The high correlations between several word-knowledge aspects in Gonzalez-Fernandez and Schmitt (2019) suggest that different kinds of word knowledge are interrelated. One example of this is that more frequent words tend to have an informal register (ask) while less frequent words tend to be more formal (invite). Another is that knowledge of derivational suffixes is connected to knowledge of word class, since these suffixes change a word's part of speech. Can you think of any other examples of such interrelatedness?

2. Make a list of the semantic features for both cat and dog. Which features are similar? Which features distinguish the concepts? Is it difficult to find features which belong exclusively to one or the other category? Which of the features would more likely relate to core meaning, and which would relate to encyclopedic knowledge?

3. This chapter suggests that the most frequent word in a set of synonyms or hyponyms has the least amount of register marking. Look in a thesaurus and choose a set of words. Decide which word in the set is the most common. Does it have the most general meaning and is it indeed the most neutral in terms of register? Do the less frequent words have more register
marking? In what ways is their use constrained by register? What extra meaning information do they convey in addition to their denotation?

4. The following are items from Ann Ryan's (1997) Word Discrimination Test. The learners must find any mistakes in the sentences, underline the word that is wrong, and write the correct word above it. One-third of the sentences are already correct.

   He won the rice by running very fast.
   We had delicious hot soap for dinner.
   How much do you earn each month?
   Step making so much noise.
   There was a horse and two cows on the farm.
   Can you get some broad if you are going to the baker's?

   How useful do you think a test like this is for discovering language-learning problems that are mainly orthographically based? What type and level of student would it be most appropriate for?

6. Keep track of any tip-of-the-tongue experiences. What can you remember about the form of the word you cannot quite retrieve? Does the "bathtub effect" hold, especially if the language you are using is not English?

7. The following sentences come from argumentative essays written by L2 English students. What do the derivation errors (underlined) by relatively advanced learners tell us about the difficulty of mastering derivatives? Are there similar errors in your students' writing? Are the derivation errors common or infrequent? The examples are paraphrased extracts from the ICLE (https://uclouvain.be/en/research-institutes/ilc/cecl/icle.html).

a)
Most important of all was the conscious that it was humankind's resourcefulness which allowed the achievements and skills we have now. (consciousness)
b) For those who have some mentally distractions, they can easily get rid of them by dreaming. (mental)
c) The line between ethic and not ethic is subtle and it can be easily crossed. (ethical)
d) Widespreading its use, the values transmitted by television have changed too. (Widespread [in])
FURTHER READING
• For more on word-knowledge frameworks: Henriksen (1999), Zimmerman (2009), Nation (2013), Webb and Nation (2017), and Gonzalez-Fernandez and Schmitt (2019).
• These articles provide a more detailed discussion of size vs. depth of knowledge: Schmitt (2014) and Read (2004).
• These books provide a detailed discussion of meaning and semantics. The first provides a psycholinguistic perspective, the second a full treatment of semantics, and the third a concise treatment: Aitchison (2012), Saeed (2016), and Cowie (2009).
• These sources give different perspectives on register: Richards (1976) and Halliday (1978).
• The Encyclopedia of Applied Linguistics provides state-of-the-art overviews of over 1,100 applied linguistic topics. Useful discussions on word form include:
  Spoken form: Guion-Anderson (2012), Fraser (2012), Derwing (2012), and Goodwin (2012).
  Written form: Grabe and Stoller (2012), Koda (2012), Bassetti (2012), and Snider (2012).
• These sources discuss the acquisition of word parts, and the gaps that L2 learners have in their derivative knowledge: Nagy, Diakidoy, and Anderson (1993), Schmitt and Zimmerman (2002), and McLean (2018).
4
Corpus Insights: Frequency and Formulaic Language
• "Language" is such a big subject that it is almost impossible to get my head around it. Is there a way to conveniently obtain language data so that I can study particular words or phrases? • Some words are relatively common while others are relatively rare. What difference does this make? • Many words seem to "fit together" in combinations, e.g., sweep a floor. What am I to make of this? • There seem to be a lot of different types of "phrases" in language. Are they important? Or is language mainly single words and grammar?
The vast improvement of corpora has been one of the most significant developments in vocabulary studies from the latter part of the twentieth century until the present time. Corpora or corpuses (singular: corpus) are simply large collections or databases of language, incorporating stretches of discourse ranging from a few words to entire books. The exciting thing about corpora is that they allow researchers, teachers, and learners to use great amounts of real data in their study of language instead of having to rely on intuitions and made-up examples. Insights from corpus research have revolutionized the way we view language, particularly words and their relationships with each other in context. In particular, two kinds of word knowledge in Nation's (2013) list under the heading of usage (frequency and collocation) have been studied almost exclusively through corpus evidence. Because research into large databases of language is necessary in order to make any meaningful statements about these two lexical aspects, we will discuss them in this chapter. But there is much more to lexical patterning than just collocations, and so we will also cover the wider range of formulaic language.
4.1 Corpora and Their Development Some of the earliest corpora began appearing in the first third of the 1900s, the products of thousands of painstaking and tedious hours of manual labor. Extracts from numerous books, magazines, newspapers, and other written sources were selected and combined to create these corpora. This large investment of time meant
that corpora of one million words were considered huge at the time. Even in the early age of computers, one-million-word corpora were considered to be on the large side, because the written texts still had to be manually typed in. Two good examples of corpora at this point of development are the Brown University Corpus (Kucera & Francis, 1967) focusing on American English, and its counterpart in Europe, the Lancaster-Oslo/Bergen Corpus (LOB) (Hofland & Johansson, 1982) focusing on British English. Decades before these two efforts, Thorndike and Lorge (1944) combined several existing corpora to build an 18-million-word corpus, which was colossal at the time. It was when texts could be quickly scanned into computers that technology finally revolutionized the field. With the bottleneck of manually typing and entering texts eliminated, the creation of immensely larger corpora was possible. "Third-generation" (Moon, 1997) corpora containing hundreds of millions of words began appearing. Three important examples are the COBUILD Bank of English Corpus (now accessible as Collins Wordbanks Online), the British National Corpus (BNC), and the Corpus of Contemporary American English (COCA, Davies, 2008-). To give an example of the size of these corpora, the COCA had 560 million words in early 2019. But this is far surpassed by a new generation of mega-corpora which are built by crawling text from the Internet. This automated text gathering has allowed the compilation of exponentially larger corpora: e.g., the iWeb Corpus (14 billion words) and enTenTen15 (15 billion words). These corpora have reached the size where their sheer number of words allows them to be reasonably accurate representations of the English language in general. This is partly because their larger size means that more infrequent words are included. To get some idea of what these numbers mean, let us consider how many words people are exposed to in real life.
It is actually quite difficult to quantify, but let us assume that a person reads for three hours per day, which would add up to around 54,000 words (reading 300 words per minute (wpm)). Research shows that men and women speak about 16,000 words per day (Mehl et al., 2007). If we assume that people listen for around two hours per day (to television, radio, etc.), that would equal around 21,600 words (at a speech rate of 180 wpm); combined with the 16,000 spoken words, this gives about 37,600 words of spoken language per day. This would total roughly 90,000 words per day (54,000 + 37,600). (Of course, a person may read or listen for more hours than this, but one seldom reads or listens nonstop without breaks, interruptions, or daydreaming.) At this rate, that person would be exposed to 2.7 million words per month, meaning that the COCA would represent over seventeen years of exposure. The enTenTen15 corpus would represent 463 years! These kinds of figures mean that the larger current corpora can equal or exceed, at least in numerical terms, the amount of language an average person might be exposed to in daily life. Numerical size is not everything in corpus design however. There is also the important question of what goes into the corpus. A corpus which consisted of only automotive repair manuals would contain only a very specific kind of language,
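The back-of-the-envelope arithmetic above can be reproduced in a few lines of Python. The rates used (300 wpm reading, 180 wpm speech, 16,000 spoken words per day) are the chapter's working assumptions, not measured values:

```python
# Rough daily exposure to language, using the chapter's assumed rates.
READING_WPM = 300   # assumed reading speed (words per minute)
SPEECH_WPM = 180    # assumed speech rate (words per minute)

reading = 3 * 60 * READING_WPM       # 3 hours of reading  -> 54,000 words
listening = 2 * 60 * SPEECH_WPM      # 2 hours of listening -> 21,600 words
speaking = 16_000                    # Mehl et al. (2007): words spoken per day

spoken_total = listening + speaking  # 37,600 words of spoken language
daily = reading + spoken_total       # 91,600, i.e. "roughly 90,000" per day

monthly = 90_000 * 30                # ~2.7 million words per month
yearly = monthly * 12
print(560_000_000 / yearly)          # COCA       -> ~17 years of exposure
print(15_000_000_000 / yearly)       # enTenTen15 -> ~463 years
```

Note how quickly the multiplication scales: even generous daily exposure amounts to only tens of millions of words per year, which is why a 15-billion-word corpus dwarfs a lifetime of individual language experience.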
even if it grew to billions of words. It could be extremely representative of the kind of language appearing in such manuals, but it would not represent language in general use to any great extent. Although there are corpora concentrating on specific domains of use, such as air traffic control (Godfrey, 1994), business (Wolverhampton Business English Corpus), and university student academic writing (British Academic Written English Corpus (BAWE)), the largest corpora are designed to represent language as a whole, including all topics and spheres of use. To be truly representative of such global language, a corpus must be balanced to include all the different genres of a language (sermons, lectures, newspaper reports, novels, etc.) in proportions similar to those of their real-world occurrence. This is probably unachievable, because no one knows exactly what those percentages are, and they will not match any individual person's experience in any case. Corpus linguists do their best to incorporate large amounts of language from a wide range of genres, on the assumption that this diversity will eventually lead to a sample of language representative of the whole. There are other issues in balancing a corpus as well. With a worldwide language like English, corpus developers must consider what proportions, if any, to include of the various international varieties (North American, British, Australian, Indian, etc.). But a more important issue is that of written versus spoken discourse. It is technically much easier to work with written text and this has led to most corpora having a distinct bias towards written discourse. Spoken discourse must first be recorded, then manually transcribed, and finally entered into the computer before it can be used. This has inevitably led to smaller percentages of spoken data compared to written (e.g., approximately 11 percent for the BNC).
However, technology may eventually eliminate this imbalance by automating the input of spoken data. Computer programs for automatically transcribing spoken discourse into a written form are being refined, and in the future, we may find corpora with spoken and written ratios approaching that of real-world language use. (Note that these ratios are yet to be determined.) There is a great deal of interest in purely spoken corpora, spurred on by the realization that spoken discourse exhibits quite different behavior from written discourse. Early spoken corpora included the 500,000-word Oral Vocabulary of the Australian Worker (OVAW) Corpus (Schonell et al., 1956), the 250,000-word corpus from which the Davis-Howes Count of Spoken English (Howes, 1966) was taken, and the 5-million-word CANCODE, which led to a grammar of the spoken English language (Carter & McCarthy, 2006). More recent corpora are larger, with the spoken component of the COCA totalling 118 million words. By carefully considering the above issues, corpus linguists have succeeded in developing modern corpora which are arguably reasonably representative. Still, it must be remembered that no corpus is perfect, and that each will contain quirks which are not typical of language as it is generally used in the world. Thus, one must maintain a critical eye and a certain healthy skepticism when using this and other language tools.
4.2 Frequency Once a corpus has been compiled, it needs to be analyzed to be of any value. Computers have revolutionized this aspect, with powerful programs that can explore corpora and isolate more aspects of language behavior than ever before. The three major kinds of information these programs provide about language are how frequently various words occur, which words tend to co-occur, and how the structure of language is organized. The last two aspects are related to formulaic language, and will be expanded upon later in the chapter. But let us look at frequency first. To derive frequency information, computer programs simply count the number of occurrences of a word (or lemma or word family) in a corpus and show the results on the screen in a matter of seconds. Table 4.1 illustrates frequency lists from three different corpora. The first is from the COCA, and indicates the fifty most frequent words in the English language, from a predominantly written perspective. The second list comes from the CANCODE, and represents the most frequent words in spoken English discourse. The third list shows which words are most frequent in the specialized genre of automotive repair manuals (Milton & Hales, 1997). Word counts like these have provided some very useful insights into the way the vocabulary of English works. One of the most important is that the most frequent words cover an inordinate percentage of word occurrences in language. As we can see from the frequency lists in Table 4.1, the is the most frequent word in general and spoken English, making up approximately 6 percent of all word tokens (occurrences). Of course, the reason the very most frequent words are so common is that they are function words, and occur in all contexts. But high-frequency content words are important too: Nation (2013) estimates that the most frequent 2,000 word families derived from general corpora make up about 80 percent of average discourse.
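The counting step described above, and the notion of token coverage, can be sketched in a few lines. This is a toy text rather than a real corpus, and simple regex tokenization stands in for the lemmatization or word-family grouping that real corpus tools perform:

```python
from collections import Counter
import re

# A toy "corpus"; real corpora run to millions or billions of words.
text = """The cat sat on the mat. The dog and the cat ran.
A corpus is just a large collection of text like this."""

# Tokenize on letters/apostrophes; corpus tools typically also group
# inflected forms into lemmas or word families before counting.
tokens = re.findall(r"[a-z']+", text.lower())
freq = Counter(tokens)

# The most frequent words, analogous to the lists in Table 4.1.
print(freq.most_common(3))   # 'the' tops the list, as in real corpora

# Coverage: the share of all running words (tokens) that the top types account for.
total = len(tokens)
top_five = sum(count for _, count in freq.most_common(5))
print(f"Top 5 types cover {top_five / total:.0%} of tokens")
```

Even in this tiny sample, a handful of function words accounts for a disproportionate share of the tokens, which is the pattern that scales up to the "the = ~6 percent" and "2,000 families = ~80 percent" figures in the text.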
Function words will make up about half of this coverage, with content words making up the other half, as Figure 4.1 (based on lemmas) shows. Table 4.1 and Figure 4.1 illustrate some important points. First, at the very highest frequency levels (left side of Figure 4.1), the coverage gains increase dramatically, but as we move toward lower-frequency levels (toward the right of the figure), the gains taper off. This shows that frequency is a very good indicator of vocabulary usefulness at the higher-frequency levels, but as frequency decreases, it becomes less and less predictive of the vocabulary one may find in any particular text. The coverage gains gradually diminish as frequency decreases, but there is no single point where coverage abruptly drops. Rather the curve is quite smooth. This means there is no obvious place where high-frequency vocabulary ends and low-frequency vocabulary begins. The traditional cut-point has been at 2,000 words. This figure partly comes from the General Service List (West, 1953), which includes about this many headwords, and research by Schonell et al. (1956) which showed
TABLE 4.1 THE MOST FREQUENT GENERAL, SPOKEN, AND AUTOMOTIVE WORDS

      General English (COCA)   Spoken English (CANCODE)   Car manuals (AUTOHALL)
  1.  the                      the                        and
  2.  be                       I                          the
  3.  and                      you                        to
  4.  of                       and                        of
  5.  a                        to                         in
  6.  in                       it                         is
  7.  to                       a                          or
  8.  have                     yeah                       with
  9.  to                       that                       remove
 10.  it                       of                         a
 11.  I                        in                         replace
 12.  that                     was                        for
 13.  for                      is                         oil
 14.  you                      it's                       be
 15.  he                       know                       valve
 16.  with                     no                         check
 17.  on                       oh                         engine
 18.  do                       so                         from
 19.  say                      but                        if
 20.  this                     on                         on
 21.  they                     they                       gear
 22.  at                       well                       install
 23.  but                      what                       rear
 24.  we                       yes                        when
 25.  his                      have                       not
 26.  from                     we                         bearing
 27.  that                     he                         assembly
 28.  not                      do                         it
 29.  n't                      got                        cylinder
 30.  by                       that's                     brake
 31.  she                      for                        as
 32.  or                       this                       that
 33.  as                       just                       at
 34.  what                     all                        by
 35.  go                       there                      clutch
 36.  their                    like                       shaft
 37.  can                      one                        piston
 38.  who                      be                         front
 39.  get                      right                      system
 40.  if                       not                        air
 41.  would                    don't                      switch
 42.  her                      she                        pressure
 43.  all                      think                      transmission
 44.  my                       if                         rod
 45.  make                     with                       removal
 46.  about                    then                       side
 47.  know                     at                         note
 48.  will                     about                      out
 49.  as                       are                        seal
 50.  up                       as                         ring
that 2,000 families covered around 99 percent of the spoken language they studied. The 2,000 figure was reinforced by being the beginning level of the influential Vocabulary Levels Test (Nation, 1983b) and a key level in Nation's often-used RANGE frequency analysis software program. More recently, Schmitt and Schmitt (2014) have argued for 3,000 being a better cut-point for pedagogical purposes. Low-frequency vocabulary has been characterized in various ways, ranging from anything beyond 2,000 word families all the way up to all of the word families beyond the 10,000 frequency level. However, we can see from Figure 4.1 that the coverage gains at the 10,000 level become very slight indeed (i.e., the 8,000-10,000 band only added 0.93% additional coverage). Therefore, Schmitt and Schmitt (2014) recommend that the low-frequency threshold be set at 9,000, based on Nation's (2006) calculations that 8,000-9,000 word families are necessary for wide and independent reading, and thus anything beyond this is not crucial for language users. Most discussion on vocabulary frequency has used only the two categories of high and low frequency. But this leaves 6,000 word families in the middle, between the 3,000
Figure 4.1 Coverage provided by all lemmas vs. coverage provided by content lemmas only across the frequency continuum
Source: Kremmel, B. (2016). Word families and frequency bands in vocabulary tests: Challenging conventions. TESOL Quarterly, 50(4), 981. © 2016 TESOL International Association. Reproduced with permission of the Licensor through PLSclear.
[Figure: cumulative coverage (0-100%) plotted against word-frequency rank (0-10,000), with two curves: coverage of function + content words, and coverage of content words only.]