Cognition and Language Growth
STUDIES ON LANGUAGE ACQUISITION

This series will focus on both first language acquisition and second/foreign language learning. It will include studies on language acquisition in educational settings, first/second/foreign language loss, and early bilingualism. High quality dissertations and other individual works will be considered for publication, and also collections of papers from international workshops and conferences. The primary goal of the series is to draw international attention to current research in The Netherlands on language acquisition.

Editors of SOLA:
Guus Extra, Tilburg University
Ton van der Geest, Groningen University
Peter Jordens, Nijmegen University

Also published in this series:
1. Guus Extra and Ton Vallen (eds.), Ethnic Minorities and Dutch as a Second Language
2. Bert Weltens, Kees de Bot and Theo van Els (eds.), Language Attrition in Progress
Sascha W. Felix
Cognition and Language Growth
1987 FORIS PUBLICATIONS Dordrecht - Holland/Providence - U.S.A.
Published by:
Foris Publications Holland
P.O. Box 509
3300 AM Dordrecht, The Netherlands

Sole distributor for the U.S.A. and Canada:
Foris Publications USA, Inc.
P.O. Box 5904
Providence RI 02903
U.S.A.

Sole distributor for Japan:
Toppan Company, Ltd.
Shufunotomo Bldg.
1-6, Kanda Surugadai
Chiyoda-ku
Tokyo 101, Japan
CIP-DATA
Felix, Sascha W. Cognition and Language Growth / Sascha W. Felix. - Dordrecht [etc.]: Foris. - (Studies on Language Acquisition ; 3) With ref. ISBN 90-6765-272-5 SISO 803.3 UDC 159.953:800.7 Subject heading: language acquisition / psycholinguistics.
ISBN 90 6765 272 5 © 1986 Foris Publications - Dordrecht No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission from the copyright owner. Printed in The Netherlands by ICG Printing, Dordrecht.
Contents

Preface

Chapter 1  Some Issues in Cognition and Language Development
1. Preliminary remarks
2. The cognitivist vs. universalist hypothesis
3. The psychological reality of linguistic descriptions
4. Mental representations vs. cognitive processes
5. Perceptual strategies and cognitive processing
6. On the basis of scientific idealizations
Notes

Chapter 2  Two Problems of Language Acquisition: On the interaction of Universal Grammar and language growth
1. Introduction
2. The logical problem of language acquisition
3. The developmental problem of language acquisition
4. The maturational schedule of Universal Grammar
5. Conclusion
Notes

Chapter 3  Competing Cognitive Systems
1. The problem
2. Acquisitional deficits
2.1 Biological aspects of adult language learning
2.2 Environmental aspects of tutored language acquisition
3. Cognitive principles in language development
4. Cognitive competitors
5. Some implications
6. Some empirical evidence
7. Two models of language learning
Notes

References

Index
Preface
This volume contains a collection of three fairly independent studies which explore some aspects of the relationship between cognition and language development; more specifically, they examine the question of how the structure of the human mind shapes and determines the process of acquiring either a first or a second language. Some of the material discussed in these studies has been previously presented at two meetings of the European-North American Workshop on Language Acquisition.

In recent years there has been what I believe to be a major shift in perspective in the area of language acquisition studies, especially in second language acquisition research. In the early days most discussions in the field centered around problems of data collection and methodology. It was felt that so little was known about acquisition processes in general that researchers should primarily be concerned with collecting and analyzing a solid body of data which would provide a rough idea about the types of utterances and structures children produce in the course of language development. During the past years this attitude has changed. Most students of language acquisition have come to realize that data cannot be appropriately collected or analyzed without some theoretical perspective as a guideline. Consequently, acquisition studies have become more and more theory-oriented. Observations of real-time acquisition are being interpreted within the wider context of what is currently known about human cognition, information processing and the mental representation of linguistic knowledge.

This shift from data collection to theory construction has not always come about smoothly. Many researchers felt that language acquisition and linguistic theory are concerned with intrinsically different issues (namely, the former with performance, the latter with competence) so that there is relatively little of interest that linguistic theory has to say to the developmental psycholinguist. Similarly, theoretical linguists have frequently been overly skeptical with respect to what acquisition studies can contribute to a deeper understanding of the structure of the human mind. As a consequence, there has been a long-standing resistance to looking at acquisition from the perspective of linguistic theory, as well as to considering acquisition data in the context of problems that
appear to be the proper domain of linguistic theory. One of the major purposes of this volume is to help to overcome this mutual skepticism. Chapter I examines some of the standard arguments against applying insights from linguistic theory to acquisition research and attempts to outline a possible interface between the two fields. Chapter II aims at showing how certain developmental facts can be explained by the interaction of the child's linguistic experience with principles that are commonly attributed to the human mind. Chapter III, finally, extends this approach to some problems concerning the relationship between first and second language acquisition. The theoretical framework which will guide much of the discussion in this volume is the one developed by Chomsky and his students during the past 25 years. This framework, I believe, may best be termed 'the generative enterprise' (Chomsky 1982). It is a research program, a specific way of looking at certain fundamental problems concerning the nature of language, rather than a petrified body of doctrines held to be true on a priori grounds. Whether or not the generative approach ultimately turns out to be correct must remain uncertain, as is the case with any theory in a mature science. The best that can be said is that, at the present state of our knowledge, this approach appears to be highly promising in that it has led to the discovery of many non-trivial properties of natural languages. It thus places man's linguistic abilities into the wider context of human cognitive psychology, thereby giving rise to what amounts to a coherent theory of the human mind. The basic assumption in Chomsky's theory is that the human mind has an essentially modular structure. That is, rather than being a homogeneous multi-purpose mechanism, the human mind is viewed as consisting of a finite set of independent, though interacting, subsystems (modules) which are designed to handle specific mental tasks. One such subsystem or module is the language faculty, an ensemble of abstract principles, innately specified, which guides and controls the child's grammar construction. There are various conceptual and empirical reasons which lend some credibility to a modular view of human cognition (cf. Bower 1973; Carey 1980; Fodor 1984). Chomsky's primary motivation for assuming a language-specific mental faculty is that it seems totally unclear how a child could ever induce the full range of formal properties characteristic of natural languages from the evidence that the environment commonly provides. In other words, the linguistic knowledge ultimately acquired by every normal child is heavily underdetermined by the available evidence. It thus appears that language acquisition can only be successful if, in addition to environmental evidence, the child has some a priori knowledge about the properties of natural languages, i.e. if he has some experience-independent information on
what may count as a possible linguistic structure. Since every child (ignoring pathological cases) eventually acquires the language of his speech community, one may reasonably assume that this necessary a priori information is provided by the specific mental organization humans are born with (see also Chapter II.2). If this reasoning is correct, then we may assume that the child approaches the task of language acquisition with a set of innately specified principles which interact with his linguistic experience in such a way as to yield eventually full knowledge of the language he is exposed to. This set of innately specified principles, i.e. the initial mental state that the child is born with, is commonly referred to as Universal Grammar. Universal Grammar (UG) thus specifies those properties that characterize and delimit the class of natural languages, and can therefore be regarded as a theory of the child's initial mental state as it relates to language.

What exactly UG 'looks' like is an entirely empirical question. There have been many specific proposals during the past 20 years of research, some of which were, as we know by now, incorrect. At the present state of our knowledge, most generative grammarians would agree, I believe, that we are still very far from having an even moderately comprehensive view of what exactly UG looks like. Nevertheless, at least we know with some amount of certainty what UG presumably does not look like. For example, it was believed for some time that UG consisted essentially of a set of rules plus constraints on the application of these rules. At present, there are good reasons to believe that such a view of UG is false. All the available evidence suggests that UG incorporates above all principles which constrain abstract representations rather than the application of rules. As in many other sciences (e.g. biology), what we know is relatively little compared to what we do not know. Scientific progress is slow, but what is truly exciting, at least for those involved in the 'generative enterprise', is the sight of an avenue that leads to a better understanding of the enormous complexity of human mental organization and conveys the flavor of some of the deeper principles underlying human linguistic abilities.

To many researchers in the field of language acquisition, particularly second language acquisition, the modular view of human cognition in general and the idea of a task-specific language faculty appear highly implausible, to say the least. I believe that there are primarily two reasons for the skepticism with which many psycholinguists view the generative approach; one is historical, the other in part technical. It seems that, in the United States more so than in Europe, the present generation of linguists and psycholinguists has been educated in a psychological and cultural tradition which is heavily influenced by empiricist doctrine. As a consequence, many people - scientists as well as
non-scientists - take empiricism to be the most natural and, in fact, only reasonable way of looking at things. Generative grammar has questioned the philosophical foundations of the empiricist tradition and it is not to be expected that people will easily give up attitudes and convictions that had remained unchallenged for more than a lifetime.

On the more technical level, the plausibility of the generative approach as well as of the arguments that are advanced in support of it depend very much on one's conception of language. If one believes that language is a fairly simple system which encodes ideas and intentions in a relatively straightforward way, then Chomsky's ideas do not sound particularly convincing. It is not until one gets familiar with the more technical literature that the enormous formal complexity of natural languages becomes fully transparent; and it is this formal complexity that the plausibility of Chomsky's view crucially depends on. On the other hand, if one believes - for whatever reason - that the generative approach is misguided, there is not much point in reading the technical literature, which will be disappointing to anyone who is primarily interested in the comprehensive description of individual languages. This dilemma becomes even more severe for acquisition researchers. Most of the data commonly discussed in the psycholinguistic literature come from fairly early acquisition stages and relate to comparatively simple structures for which it does not seem imperative to postulate a task-specific cognitive module. Such structures, it seems, can be equally well acquired with some general-purpose mechanism relying on nothing but inductive generalizations. However, it is frequently overlooked that a theory which can only account for the acquisition of simple early structures, but fails in principle to explain more complex linguistic knowledge, loses much of its persuasiveness.

It is these conceptual problems that have motivated the discussion in Chapter I. I have attempted to examine some of the more popular arguments that have been advanced to question the plausibility of the modular view, and have tried to make the conceptual background of the generative approach more transparent. It seems to me that much of what is frequently proposed as a more plausible alternative results from a number of fundamental misunderstandings about the goal of the 'generative enterprise'. Apart from more technical details and arguments of plausibility, I believe that it is also the generative style of scientific inquiry that has aroused the suspicion, if not the outright hostility, of many researchers in the psycholinguistic field. Much of Chapter I is devoted to a clarification and justification of this style.

Many basic assumptions and leading ideas in linguistic theory have been motivated by considerations that relate to language acquisition. The relevant facts and observations are frequently epitomized as the
'logical problem of language acquisition'. There is another set of observations which may be summarized as the 'developmental problem of language acquisition'. These observations relate to regularities which occur in real-time acquisition. One such regularity is that language acquisition proceeds in stages, i.e. the child uses a certain set of structures for a more or less extended period of time and then 'suddenly' decides to replace these by some other set of structures. One may ask for the cognitive mechanism that initiates this transition from one stage to the next; i.e. the crucial question appears to be: what makes the child realize that his present grammar is somehow wrong and thus forces him to restructure it? It is my impression that many researchers including those sympathetic to, and influenced by, the generative approach feel that these problems fall outside the domain of explanations that linguistic theory can be responsible for. More specifically, it is felt that such developmental questions should be dealt with by a theory of cognitive processing (Slobin 1973) or by a theory of perception (White 1982). In other words, many researchers believe the logical problem and the developmental problem of language acquisition should be handled by different theories. Chapter II of the present volume challenges this view. I will suggest that the problem of how children find out that their present grammar needs to be restructured can also be successfully handled by the theory of grammar. In particular, I will argue that the child's restructuring of his grammar is triggered by principles of Universal Grammar. The crucial (non-standard) assumption is that the principles of Universal Grammar, though being innately specified and thus characterizing the child's initial mental state, are not fully available from the very beginning, but rather emerge in a specific temporal order which is specified by a genetically determined maturational schedule. Chapter III extends the modular view to a particular problem concerning the relationship between first and second language acquisition. I think it must be acknowledged that many do not consider second language research to be a respectable occupation for someone interested in theoretical issues. The reason is partly, I believe, that in his influential book 'Biological Foundations of Language' Lenneberg argues that the ability for what he calls 'primary acquisition' somehow declines around the age of puberty. It has thus been widely assumed that second language acquisition proceeds via cognitive mechanisms essentially different from those that operate in first language acquisition, so that L2 learning cannot provide insights into the language-specific module. There is, however, some evidence to suggest that the Lenneberg hypothesis - at least in its strong version - may not be correct. Second language researchers have collected a large body of data which seem to indicate that second language learners - children as well as adults - approach the
learning task in ways that are frequently very similar to those found in first language acquisition. These findings suggest that principles and mechanisms of the language faculty may also be available to the adult, at least to a certain extent. If this is correct, however, then it appears to be largely mysterious why adult L2 learners commonly fail to achieve full native speaker competence, while children are regularly successful learners. The Competition Model proposed in Chapter III deals with this problem and attempts to provide a solution which is based on the idea that in adult language acquisition two different cognitive modules compete in the processing of linguistic input data. More specifically, it is argued that adults tend to utilize a general problem-solving module for the acquisition of language, which, however, is not fully available to children because during the prepubescent period it lacks the ability to carry out formal operations. In other words, while the child's acquisition process is fully controlled by the language faculty, the operation of the language faculty is impeded by the general problem-solving module in the case of adult acquisition. It thus seems that second language acquisition, too, may provide crucial information about the functioning and the structure of the language faculty and how it interacts with other cognitive modules.

If the ideas expressed in this volume are not totally misguided, then it appears that both linguistic theory and developmental psycholinguistics pursue a common goal, namely to enhance our knowledge about the structure of the human mind and its role in language acquisition. In particular, it appears that acquisition studies, which have often been viewed as contributing little or nothing to the question of how linguistic knowledge is represented in the mind, may, in fact, provide invaluable information about those problems and issues which are sometimes taken to be the exclusive domain of linguistic theory.

Part of the material that is being presented in chapters II and III has been previously published in two articles called "More evidence on competing cognitive systems" and "Maturational aspects of Universal Grammar", which appeared in Second Language Research 1:47-72 (1985) and in the volume 'Interlanguage' edited by Alan Davies et al. (1984), respectively. I gratefully acknowledge the editors' and publishers' permission to reprint this material in this book.

I am greatly indebted to many friends and colleagues who have generously given their time and energy discussing the issues and ideas developed in this volume. Above all I wish to thank Gisbert Fanselow, whose challenging intellect and inspiring criticism have been more than influential on my thinking. I am equally grateful to Alison d'Anglejan, Marina Burt, Heidi Dulay, Lila Gleitman, Angela Hahn, Jürgen Meisel, Manfred Pienemann, Tom Roeper, John Schumann, Raj Singh, Peter Staudacher,
Arnim von Stechow, Lydia White, and Helmut Zobl, who have read and commented on earlier versions of different parts of the manuscript. I also wish to acknowledge my gratitude to C.L. Baker, Carlota Smith, Robert Bley-Vroman, Georgette Ioup and the students of the Linguistics Department of UT at Austin, with whom I was able to talk about my ideas in numerous stimulating discussions. I owe special thanks to David Lightfoot and Jan Koster, who did much to deepen my understanding of the conceptual framework of current generative grammar. Finally, my gratitude goes to Luise Haller and Gabi Neszt, who patiently typed and retyped a seemingly never-ending flow of revised manuscripts and heroically struggled with the blessings of computerized word-processing.

Sascha W. Felix
April 1985
Chapter 1
Some Issues in Cognition and Language Development
1. PRELIMINARY REMARKS
Research on first and second language acquisition, it seems, has reached a point in its own history where scientific efforts need no longer be restricted to merely describing observed learner behavior in a systematic and coherent way; rather, there appears to be a growing awareness among researchers that efforts should be directed towards finding explanations for why language learners proceed the way they do. After more than 20 years of intensive research, we are equipped with a sufficiently solid data base(1) which should allow us to fruitfully explore possibilities of constructing an adequate theory of (second) language acquisition (or rather, of certain interesting aspects thereof) which specifies some general principles for explaining a significant and interesting(2) range of observed phenomena. As Meisel (1982:1) remarks: 'Theoretical understanding of observed behavior cannot be achieved merely by describing data ... The ultimate aim of research is the explanation of the observed facts; one way of interpreting this claim is to require that a theory should explain the causes of a phenomenon under discussion and that it should make predictions about future occurrences of this phenomenon.'
It goes without saying that, at the present state of the art, we are still far from having an even moderately adequate theory or subtheory of (second) language acquisition, even though many stimulating proposals have been advanced during the past years (see McNeill 1970; Slobin 1973; Dulay & Burt 1974c; Schumann 1975; Krashen 1977a-b; Schlesinger 1977; papers in Snow & Ferguson 1977; Hatch 1978; Pinker 1982; Roeper 1978; Karmiloff-Smith 1979; Bates 1979; Wode 1981; Felix 1982; Wanner & Gleitman 1983). One of the reasons is, I think, that in many cases it has not yet been made sufficiently explicit what the crucial issues and the non-trivial problems are, so that currently much of the field is dominated by theoretical confusion and a tendency towards conceptual disintegration. Since, however, it seems that acquisition research has largely overcome the period in which methodological problems of data
analysis absorbed most of the field's intellectual capacities, it may be warranted to explore, in some depth, the major theoretical considerations and proposals that presently influence discussions in (second) language acquisition research, trying to specify, with a sufficient degree of precision, those fundamental problems that call for an explanation.

Past attempts to construct a theoretical framework within which there is some hope of explaining, in a non-trivial way, phenomena of (second) language acquisition have led, as I see it, to the pursuit of two major approaches. One of these focuses primarily on the effect that certain social, contextual and situational factors may have on the way in which learners acquire and use the target language. The other approach appeals to terms such as 'cognition', 'cognitive principles', 'cognitive mechanisms' and the like; this approach attempts to explain language acquisition (or at least some interesting aspects thereof) in terms of a set of very general mental structures or cognitive abilities which are assumed to constitute a significant part of man's biological endowment(3).

In this chapter I will restrict my attention to problems concerning the relationship between cognition and (second) language acquisition. I will have little or nothing to say about those proposals that attempt to relate the learner's linguistic behavior to social and/or situational factors. It seems to me that with respect to the question of how cognitive principles/structures/mechanisms can be shown to determine language acquisition and use, there is some confusion about what an adequate theory should account for, i.e. what exactly the problem is that needs to be explained. On a somewhat superficial level one can see that 'cognition' is frequently used quite loosely, as some kind of cover term for a large variety of heterogeneous and possibly unrelated phenomena(4). On a deeper level, it is my impression that many current discussions about language development and cognition suffer from a lack of theoretical clarity and terminological precision. Some seemingly conflicting views are not really conflicting, but either focus on different areas of personal interest, or are mere notational variants of one another. However, I believe that without the necessary degree of theoretical and conceptual explicitness there will be little hope for (second) language acquisition research to successfully construct an adequate theory, so that the field may run the risk of becoming fossilized at the stage of mere data collection and analysis. In particular, it seems to me that above all L2 research has thus far failed to explicitly state what it conceives of as the fundamental problem(s) of language learning, so that there is even much disagreement among researchers about what may count as an adequate explanation. It should be clear, however, that, as Hornstein & Lightfoot (1981:9) remark, "the question of what constitutes a valid explanation can arise
only in the context of some problem for which there is no self-evident solution. Hence in any domain there is an intimate relation between the way in which a problem is conceived and the kinds of explanations that one should offer". Consequently, in any theoretical attempt priority should be given to the task of stating explicitly what the problems are for which there is no such self-evident solution. In the following sections I will discuss what I believe to be some of the more fundamental theoretical issues that need such clarification. In particular, I will focus on what seems to me to be a particularly crucial question, namely the relationship between acquisition research, psychology and linguistic theory.
2. THE COGNITIVIST VS. UNIVERSALIST HYPOTHESIS
The wide range of activities within the general domain of (second) language acquisition research has led, quite expectedly, to a substantial amount of diversity with respect to research paradigms and objectives, theoretical stands, and methodological commitments. In order to locate, with sufficient clarity, the merits and perspectives of individual studies within the overall framework of language acquisition research, it has, in part, become customary to identify the theoretical positions that different researchers have adopted in terms of what is frequently referred to as the universalist vs. cognitivist hypothesis (cf. Clahsen 1982a, Nicholas & Meisel 1983)(5). As it seems to me that this terminological distinction entails a number of unjustified implications, let us examine the distinction in some detail.

During the past 15 or 20 years a large number of studies have discovered a partially impressive body of invariant regularities (frequently couched in such terms as developmental sequence, acquisitional stage, or order of acquisition) across different learners, different languages and different acquisition types. Various researchers have focused primarily on collecting, specifying and formalizing such invariant regularities (see e.g. Burt & Dulay 1980; Pienemann 1981; Wode 1981; Felix 1978, 1982; Hahn 1982; Clahsen et al. 1983; Klima & Bellugi 1966; Bloom 1970; Bowerman 1973). On the assumption that such regularities are not accidental, but rather reflect some deeper properties of human mental structures, the ultimate objective of such work is to specify a set of universal principles which will not only describe post hoc the way a (second) language is acquired, but will rather explain the specific, i.e. non-accidental, aspects of language development in terms of certain innate properties of the human mind. Researchers working within a framework(6) that focuses on the commonalities rather than the differences
between different acquisition types have frequently been said to pursue a universalist approach (see Clahsen 1982b:11ff.). It is fairly clear that searching for universal principles that underlie the language acquisition process requires certain rather obvious and natural idealizations, i.e. a principled abstraction from such real-life phenomena as individual variation among learners, context-specific idiosyncrasies in learning, different social and psychological attitudes, and the like. In other words, the so-called 'universalists' have largely focused on what may be assumed to be the common basis for all (second) language learning, abstracting away from individual or group-specific differences.

Critics of the universalist position (see e.g. Bates et al. 1983) have frequently pointed out that many important aspects of the (second) language acquisition process are unjustifiably being ignored if attention is deliberately restricted to invariant regularities and universal principles underlying a large variety of different acquisition and learner types. Such universal principles, it is argued, obviously cannot account for the acquisition process as a whole; in particular, they cannot explain such demonstrably important phenomena as individual variation, differences reflecting social setting and psychological attitudes, etc.(7). Let us look at this criticism in some detail. On a very superficial level, it is evidently correct. However, it should be clear that, on a deeper level, the crucial objection cannot be that the universalist approach ignores a large number of factors that undoubtedly influence and interact in the process of language learning. This is as obvious as it is irrelevant. At the present state of our knowledge (and maybe even in principle) it would be naive to believe that significant insight can be gained (only) if the total range of intuitively relevant phenomena is being studied at the same time. It seems that the process of language acquisition in all its different manifestations and aspects is much too complex to be properly understood under some global perspective. Any reasonable scientific research will - implicitly or explicitly - start out with certain idealizations, i.e. it will isolate, for the purpose of detailed study, a given subdomain of observable phenomena for which there is some justified hope of arriving at significant insights(8). It is difficult to see how any serious scientific study can be expected to deal with, or even account for, the total range of observable phenomena within a given domain, such as (second) language acquisition. Thus the crucial question must be: is the idealization commonly adopted in the universalist approach legitimate in the sense that it will help to discover some non-trivial properties of the language learning process or 'does the idealization so falsify the real world that it will lead to no significant insight into the nature of the language faculty' (Chomsky 1980:25)? Quite clearly, the question as to whether or not an idealization is
legitimate in this sense is a matter of empirical fact which cannot be decided a priori. It should be noted that a person who claims that the universalist idealization is, in fact, not legitimate, is committed to one of the following two views (see Chomsky 1980:25/26):

a) Individual variation and socio-psychological differences are a necessary prerequisite for (successful) language acquisition. Universal principles as such cannot explain man's ability to learn language(s). Human beings are constituted in such a way that they are incapable of learning a language unless options for individual variation exist and socio-psychological factors can differ.

b) Even though people may be capable of learning language without individual variation and within a uniform socio-psychological environment, the mental properties that determine the specific nature of language learning under such circumstances would not be those that are activated in normal, real-life language acquisition.
It seems difficult to believe that anyone who is familiar with the state of the art in (second) language acquisition research could seriously maintain either one of these two views. In fact, it seems to me that both proposals are highly implausible and can hardly be supported by any available evidence. Consequently, the observation (or criticism) that studies focusing on universal principles of language acquisition are not considering the total reality of the learning situation is - at least at the present point - largely vacuous and meaningless (in fact, naive from the point of view of theory construction).

Let us now turn to a more substantial issue. Researchers who are not commonly associated with, and who, in fact, frequently oppose, the universalist approach have attempted to explain language acquisition in terms of a finite set of underlying cognitive principles (see e.g. Bever 1970, Slobin 1973, Sinclair-de Zwart 1973, Wong-Fillmore 1976, Meisel 1980b, Clahsen 1982b, Bates et al. 1983). More specifically, these researchers assume that there are various general constraints on the way in which the human mind processes and stores information, and it is these (cognitive) constraints that determine exhaustively(9) the nature and course of (first or second) language acquisition. Under this view language acquisition is seen to be just one of many cognitive domains evidencing how human beings, in general, process information and acquire knowledge. Quite clearly, this view represents one version of the widely-held belief that man is biologically equipped with a finite set of multi-purpose learning strategies that enable him to acquire knowledge in a large (or possibly infinite) variety of domains (cf. Inhelder 1979, Putnam 1979). Researchers working under these general assumptions
are frequently identified as pursuing a cognitivist approach (cf. Clahsen 1982a). While the terms 'cognitivism' and 'universalism' are generally regarded as indicating two conflicting, if not mutually exclusive approaches, it is my impression that the cognitivist vs. universalist distinction is highly misleading and reflects some severe misconceptions about the genuine issues that are at stake. If we look at the crucial assumptions in more depth, it becomes evident that both the universalists are cognitivists and the cognitivists are universalists. If we ignore for the moment the physical aspects in the domain of articulation, it would be difficult to deny the fact that language acquisition and use result from mental operations which, by definition and general consensus, incorporate cognitive processes. It is a fairly trivial observation that knowing or acquiring a language is a mental and not, say, a physical achievement (again abstracting away from articulatory skills). Language acquisition is something that happens in the mind, i.e. the attainment of a certain state of knowledge, and whatever is involved in this process is obviously cognitive in nature. It is thus self-evident that any type of universal principle that is proposed by the so-called universalists must be a principle of an essentially cognitive sort, i.e. a principle resulting from, or relating to the structure of the human mind. Consequently, the universalists - irrespective of whether the specific principles proposed are correct or not - appeal to the cognitive basis of language learning and are thus cognitivists under any reasonable definition of the term (as opposed e.g. to behaviorists). In much the same sense cognitivists are obviously universalists. There does not seem to be much disagreement about the fundamental assumption that the human mind/brain does not differ without limit from individual to individual; rather, it is generally acknowledged that there are certain basic and genetically determined properties of the human mind that are fairly homogeneous across our species and that determine and restrict, among other things, the class of humanly accessible knowledge. These properties of the human mind specify the intellectual capacities of man as opposed to other organisms. Hence, it is clear that whatever class of cognitive processes is being proposed by the cognitivists as an explanation for language acquisition and whatever constraints are imposed on these cognitive processes, the relevant proposals must entail some claim of universality if they are to be taken seriously, that is, if they are intended to contain true generalizations about non-accidental properties of human cognition. In this sense, cognitivists do not crucially differ from universalists, in that they, too, make claims about universal principles and constraints that characterize the nature of mental processing(10).
It thus seems to me that the terminological distinction between cognitivism vs. universalism and its conceptual implications lack both theoretical and empirical substance and are highly misleading in terms of research perspectives. Moreover, this distinction obscures the crucial issue that appears to be at stake. The problem cannot be whether the proposed cognitive principles are universal, i.e. specify some significant properties of the human mind (if they were not, they would be largely uninteresting(11)), or whether the proposed universal principles are cognitive in nature (what else should they be?); instead what appears to be at stake is what the exact nature and the inherent properties of the (universal cognitive) principles are, and, on a more general level, what a plausible and empirically well-motivated theory of the structure of the human mind should look like. Are those principles that govern language acquisition the same as those that also determine learning in other domains of knowledge, or should we assume the human mind to consist of various independent, though, of course, interacting cognitive subsystems?

There is another fairly popular distinction which, I believe, is also somewhat misleading and both theoretically and empirically unmotivated; namely the distinction between the reductionist and the interactionist position (see Lewis & Cherry 1977, Hatch 1982). Hatch characterizes the reductionist position in the following way:

'... social and cognitive factors become irrelevant to the study of language acquisition. The model limits the phenomena to be explained and the kinds of explanations that can be given. Since social and cognitive factors are ignored, causality can be ascribed only to the individual's innate language device. That is, language learners acquire a new structure because the language system learned thus far allows for the acquisition of the new form.' (Hatch 1982:1)
If this characterization is to depict, in fact, the basic tenets of the reductionist position, I am not aware of anybody who would seriously hold such a view, because it is obviously untenable. There can be no doubt that social and general psychological factors do interact with many aspects of language acquisition. In view of the numerous examples that can be given for such an interaction, it would be absurd to deny this fact. Since, however, this observation merely states the obvious, it is, without any further qualification, largely trivial. It only becomes interesting to the extent that it is possible to determine the exact nature of this interaction and the principles underlying it. The crucial issue is therefore not whether such external factors influence the acquisition process in some way (of course, they do; this has never been in doubt), but rather, whether these factors are sufficient to account for all aspects
of language acquisition. In other words, is language acquisition in all its different manifestations exclusively a derivative of social and general cognitive factors, or are there aspects which, in principle, cannot be explained in terms of the operation of such external factors? With respect to Hatch's statement that 'the reductionist model limits the phenomena to be explained and the kinds of explanations that can be given' it is difficult to see in what sense this is a specific feature of the approach Hatch is criticizing(12). Any serious scientific model limits the phenomena to be explained. This is nothing unusual, but merely standard scientific procedure. A model is always a model of something, and this something relates to a restricted set of observable phenomena. A model of language acquisition is exactly what it says, namely a model of language acquisition, and not of, say, nuclear physics. In this sense it naturally limits the phenomena to be explained to those that, in some intuitive sense, relate to language acquisition. One might speculate that what Hatch really has in mind is that a model that accounts for a wider range of facts is to be preferred to a model that accounts for a narrower range of phenomena. On a general level, this is certainly correct. But it is correct only to the extent that both models have a comparable degree of explanatory power, i.e. both models must reach at least a similar level of depth in accounting for the observable phenomena that they are meant to be models of. The critical measure of evaluation is the explanatory power of a model, not the number of phenomena it enumerates. It is also obvious that the kinds of explanations that are considered to be valid can only be determined with respect to a particular range of problems, as the previously quoted statement by Hornstein & Lightfoot (1981) has already made clear. Hatch furthermore criticizes the reductionist view that language learners acquire a new structure because the language system learned thus far allows for the acquisition of the new form. It is difficult to understand this criticism in the absence of any evidence to the contrary. Note that whether this view is true or false is simply a matter of empirical fact. Either the acquisition of a given structure crucially presupposes prior acquisition of some other structures, or the emergence of a structure depends only on some other, i.e. non-linguistic factors. Which of these two views is factually correct has to be demonstrated empirically and cannot be decided on a priori grounds(13). With respect to the interactionist model Hatch remarks: 'This model assumes that language, social and cognitive knowledge are interrelated in a unidimensional way. Language is seen as derived from, or strongly influenced by cognitive factors. Or, language may be seen as derived from, or strongly influenced by, social interaction factors. Social and cognitive knowledge are each strongly influenced by language factors.
Language, social, and cognitive knowledge are still discrete, but each influences the others. Research within this model looks at cognitive and perceptual requirements as prerequisites or co-prerequisites of the acquisition of parts of the language system.'
The problem with this characterization is that it proposes as an innocent alternative what is, in fact, the crucial issue. Note again that the assumption that 'language, social and cognitive knowledge are interrelated' is not controversial, but evidently true (while the exact nature of this interrelationship is still of major theoretical interest). The decisive problem is whether language (acquisition) is derived from cognitive factors, or merely influenced (strongly or weakly; again an empirical question) by them. If the former is true, then we can restrict our efforts to the study of systems of cognitive and social knowledge. The properties of language should automatically follow from the interaction of these systems. In other words, the properties of the systems of social and cognitive knowledge would be a sufficient basis for explaining the properties of language (acquisition). If, on the other hand, the structure of language cannot be sufficiently and completely explained in terms of systems of cognitive and social knowledge, then, of course, there must be some further system(s) which somehow account for 'the rest'. In this case, it would be a matter of considerable theoretical consequence what this system and its properties are. It should thus be clear that the alternative 'derived from, or (strongly) influenced by' is far from innocent, but rather captures the central problem. It is one way or the other; it cannot be both. More importantly, which of these alternatives is the correct one is a purely empirical matter which cannot be decided on grounds of 'training, one's personal area of interest, and the degree of ambiguity tolerable to each researcher' (Hatch 1982:3). The fundamental problem with the reductionist vs. interactionist distinction as characterized by Hatch is that it confuses matters of personal preference with statements about empirical facts. Whether someone is primarily interested in the interaction of social/cognitive knowledge and acquisition processes or in the formal properties of linguistic systems cannot seriously be a subject of scientific debate; factual claims about the relationship between language (acquisition) and cognition, however, must be considered in the light of the available empirical evidence and cannot be seen as merely reflecting personal preference and individual inclinations(14).
3. THE PSYCHOLOGICAL REALITY OF LINGUISTIC DESCRIPTIONS
In modern psycholinguistic research the issue of psychological reality has
received much controversial attention (Brown & Hanlon 1970, Bowerman 1974, Clark 1974, Fodor et al. 1974, Johnston & Slobin 1973, Chomsky 1980, Erreich et al. 1980, Wexler & Culicover 1980, Fletcher 1981, White 1981, Clahsen et al. 1983). The basic question seems to be if and to what extent standard linguistic descriptions of utterances produced either by language learners or adult speakers can be attributed the status of representing something that is psychologically real. In other words, to what extent do structural descriptions of actual utterances or possible sentences(15) reflect accurately what is going on in the speaker's mind? If I judge the general feeling among psycholinguists correctly, there is a strong tendency to believe that, by and large, linguistic descriptions are not - at least not to a significant extent - psychologically real(16). It is my impression that those who hold this view tend to believe that the types of linguistic descriptions currently developed and discussed in linguistic theory fail to be psychologically real, in principle, because the kind of empirical evidence on which such descriptions are commonly based (essentially 'grammaticality judgements') is neutral with respect to the question of psychological reality(17). That is, it is not necessarily a specific proposal of present-day linguistic theory for which psychological reality is disclaimed; rather it is the general nature and empirical basis of linguistic descriptions as such that make it difficult to believe that they could represent anything mentally real(18). Under this view linguistic descriptions are, in fact, little more than what the term itself suggests, namely systematic - though largely arbitrary - accounts of the structural organization of (possible) real-life utterances as they can be observed by the linguist. For the purpose of a coherent description these accounts typically use linguistic, i.e. non-psychological(19), notions, such as noun phrase, verb phrase, subject, c-command, government, etc. Neither the standard linguistic categories nor the structural relations expressed by them, it is argued, correspond necessarily to anything real in the human mind. Since it is the aim of psycholinguistic (as opposed to linguistic) research to study the cognitive processes that actually take place in the human mind, linguistic descriptions are little more than a partially useful tool to specify, in a uniform way, what one is talking about. The genuine objective of psycholinguistic research is to deal with psychologically real phenomena, i.e. those processes and mechanisms that can, in fact, be shown to exist in the human mind(20).

It seems to me that many discussions concerning the relationship between linguistic descriptions and psychological reality suffer quite significantly from a serious lack of understanding of the conceptual and philosophical basis of linguistic theory. More specifically, various crucial distinctions between mental processing and mental representation
are frequently overlooked - or, at least, not consistently observed. It thus seems fruitful to review some of the standard arguments against the psychological reality of linguistic descriptions. A reasonable first step to approach the problem may be to ask: what is a linguistic description? On a somewhat pretheoretical level, the answer could be: any consistent and coherent account of the elements constituting an utterance and of the relationship holding between these elements. Such a description may be considered to be adequate to the extent that it correctly accounts for the full range of observations that a linguist makes with respect to the utterances he is studying. Such observationally adequate descriptions have a long tradition in the history of linguistics and constitute one of the major goals of what is sometimes called 'taxonomic' or 'descriptive linguistics', i.e. a discipline which largely focuses on the segmentation and classification of real speech utterances. The theoretical power of such descriptions can be considerably increased, if, apart from accounting for the structural properties of actually observed utterances, they also succeed in specifying, in a principled way, the class of possible utterances (for a detailed discussion of a largely descriptive perspective of linguistic theory, see Emons 1982). Even though certain traditional linguists such as Sapir, Hockett and Jakobson were very much concerned with questions of the psychological reality of linguistic analyses, many proponents of taxonomic linguistics deliberately refrained from any type of claim to the effect that structural descriptions of linguistic data are psychologically real in the sense that they are models of something that exists or happens in the mind(21). What such versions of taxonomic linguistics have essentially been concerned with is 'the description and analysis of the way in which a language operates and is used by a given set of speakers at a given time . . . guided by such principles as consistency, exhaustiveness, and economy' (Robins 1964:7). Since those taxonomic linguists have never claimed that the descriptions they propose relate to any cognitive entities or mechanisms, the issue of psychological reality simply does not arise in their domain. Consequently, we may assume that such taxonomic descriptions are not psychologically real, or, if they are, this is purely accidental. As a reasonable alternative to attempts to provide linguistic descriptions which exclusively aim at a systematic analysis of the structure of actually produced utterances, one might conceive of linguistics as a science that 'is concerned with describing what it is that a human being knows when he knows how to speak a language' (Jackendoff 1977:1). In other words, one might view the linguist's effort as a principled attempt to describe the intuitive(22) knowledge that a member of a given speech community has about his language. Note that distinguishing between
describing the structure of observed utterances and describing a person's (intuitive) knowledge of his language is not merely a matter of terminological sophistication. Aiming at a specification of the notion 'knowledge of a language' involves a commitment to a highly specific research program guided by conceptual and theoretical principles that are radically different from those of descriptive linguistics. Adopting this perspective for a moment, a linguistic description is not viewed as simply an account of the structural properties of observed speech, but rather as a formalized statement (hypothetical, of course) about a certain type of mental representation that constitutes a person's knowledge (or rather a certain part thereof) of his language. If we assume, as I think we have to, that the notion 'knowledge' refers to a specific mental state of an organism, it is difficult to avoid the conclusion that 'knowledge' constitutes a psychological reality. In fact, 'knowledge' is merely a terminological equivalent to 'mental state' so that 'knowledge' is psychologically real by definition, if the notion 'psychological' refers to properties of the (human) mind(23).

It is fairly obvious that, if we discuss the psychological status of linguistic descriptions, we have to consider those descriptions which are proposed under the explicit claim that they represent a speaker-hearer's linguistic knowledge, and not those that are merely intended to specify some structural properties of elements in real-life utterances. If it is argued that, nevertheless, linguistic descriptions that aim at the specification of a certain type of knowledge are not psychologically real, or rather that their psychological reality must be independently proved (i.e. on the basis of evidence that is essentially different from the evidence that motivated them in the first place), then, it seems to me, such arguments obviously relate not only to strictly empirical facts, but also to important conceptual questions.

There is, of course, one fairly trivial aspect under which it is evidently true that linguistic descriptions of the type outlined above are not psychologically real. Linguistic descriptions, which are essentially manifestations of paper-and-pencil work, are, quite obviously, nothing more than models, i.e. graphical representations of a speaker's knowledge of his language. Of course, they are not the knowledge itself, but merely a formalized hypothesis about what this knowledge is like(24). It seems that considerations such as these have motivated e.g. Clahsen et al.'s (1983) preference for the term 'psychological plausibility' rather than psychological reality: 'We use the term 'plausibility', since it is, of course, not yet clear that the representations and operations of a proposed grammar correspond to any mental reality; they are merely required to be consistent with [a] psycholinguistic performance model' (p.78)(25).
How, then, can we be really sure that a proposed model is actually correct, i.e. accurately describes what is mentally represented in a person's brain? I believe that the answer is fairly straightforward: we can't. All the evidence that is used to construct a certain model or theory is quite clearly non-demonstrative in nature. A theory is always underdetermined by its evidence and there is no way in which one can prove, in any absolute sense of the word, whether it is correct/true or not (cf. Lakatos 1970, Popper 1970). A model or theory is assumed to be correct (or true) insofar as it can adequately account for an interesting range of observable phenomena using all the relevant evidence available. This is nothing unusual, but simply standard procedure in all rational scientific inquiry. Suppose a physicist (or chemist) constructs a model of, say, the hydrogen atom. The model asserts that the atom consists of a single positively charged proton located in the center and a single negatively charged electron which moves around the proton in a spherically symmetrical space. There are certain forces that hold the proton and the electron together, guaranteeing that the electron does not somehow 'fly away'. How can a physicist who constructs such a model know that his model is actually correct, i.e. accurately describes what is going on inside the atom? Again, the answer is: he can't. He has no way of knowing in any absolute sense that his model correctly describes physical reality. It could still be that the forces inside the atom are not those specified in his model but rather 'that little men [inside the atom] are turning gears and cranks' (Chomsky 1980:75) without the physicist knowing or ever being able to know it. Any interesting theory or model is based on non-demonstrative evidence and therefore cannot be subject to final proof. The empirical justification for a given model of the atomic structure of hydrogen is that it can account, in an interesting way, for the observed behavior of the gas under various physical or chemical conditions. One might, of course, question whether or not the model correctly accounts for the facts, but it does not make sense to require additional justification for the model's physical/chemical reality apart from the empirical evidence on which it is based.

A more serious and substantive criticism concerning the psychological reality of linguistic descriptions relates to the fact that a specific linguistic analysis proposed on the basis of a certain range of observations might be rejected as not being psychologically real in some empirically well-motivated sense. Note that under this interpretation it is not argued that linguistic descriptions as such are not psychologically real, but rather that a specific proposal concerning the nature of a person's tacit knowledge of his language lacks psychological reality. Such an objection is, however, tantamount to simply saying that the proposed analysis is false in terms of the empirical evidence available. It is presumably true
that most of the linguistic descriptions that have been proposed in the past are not - at least not fully - psychologically real in the sense that they are at best rough approximations of what is actually the true nature of mental representations. As research progresses, additional and more sophisticated empirical evidence is taken into consideration in order to show that original proposals were false and more adequate proposals can be made. For the sake of illustration, the reader may recall that in the early days of generative grammar it was assumed e.g. that the input to the semantic component consisted exclusively of deep structure configurations, i.e. all the information necessary for a correct semantic interpretation of a sentence has to be present on the level of deep structure (cf. Katz & Postal 1964). At the time when those proposals were made (i.e. in the early 60's), it was, of course, assumed that the corresponding descriptions are psychologically real in the sense that with respect to the mental representation of language semantic interpretations are, in fact, derived from deep structures only. At the present state of our knowledge we have good reason to believe that such descriptions are inadequate because they miss certain important generalizations and make empirically incorrect claims about what it is to know a language; as a consequence they are obviously not psychologically real(26). Extensive empirical research (cf. Jackendoff 1972; Chomsky 1973, 1981) has shown that in order to account for a crucial range of observable phenomena it must be assumed that semantic interpretations also need input from structural configurations on the level of surface structure. Consider two of the classical examples which show that surface structure configurations need to be considered for a correct semantic interpretation of sentences: (1)
a. beavers build dams
b. dams are built by beavers

(2) a. he_i will review the books which Bill_j wrote (*i=j)
    b. which books that Bill_i wrote will he_i review?
Assuming for the moment that (1b) is derived from (1a) by some kind of transformation, it is obvious that, nevertheless, the two sentences have significantly different meanings. (1a) asserts that it is a property of beavers to build dams, a presumably true statement. In contrast, (1b) asserts that it is a property of dams to be built by beavers, an obviously false statement. In order to give (1b) the correct semantic interpretation we, therefore, have to consider its transformationally derived structure. Analogously he and Bill in (2a) cannot be coreferential, i.e. he must refer
to someone different from Bill. In contrast, he and Bill in (2b) can be coreferential. This shows that, in order to state the coreferentiality facts in (2) correctly, we have to consider the structural configuration of the sentence after the wh-movement transformation has applied. A further example of an assumption that eventually proved to be incorrect can be illustrated by sentences such as (3) and (4): (3)
John's refusing to come
(4)
John's refusal to come
Originally it was assumed (cf. Lees 1961) that both of these constructions were derived - via application of a nominalization transformation - from an underlying sentence such as (5): (5)
John refuses to come
With more empirical evidence (cf. Chomsky 1970, Jackendoff 1977) coming in, it became clear that this assumption was false. In order to account for certain systematic differences in the syntactic behavior of gerunds such as refusing vs. nominals such as refusal it was necessary to assume that, while (3) may be transformationally derived from (5), (4) is essentially base-generated, where the relation between refusal and refuse is accounted for by a lexical rule. Examples such as these show little more than the fact that linguistics, just as any other science, is making progress in a very natural way. As further evidence accumulates, original proposals about the mental representation of linguistic knowledge prove to be incorrect. New and more adequate proposals are advanced, again with a fair chance that they, too, will eventually have to be modified or replaced by even better proposals. This is nothing unusual in scientific inquiry. The same is true, to return to our previous analogy, of models concerning the atomic structure of elements. For example, Rutherford's model of the structure of atoms was proved to be incorrect because it could not account for certain types of behavior of the electron within the atomic shell. Rutherford's assumptions were replaced by a more adequate model developed by Bohr and Sommerfeld which, in turn, was shown to be inadequate because it assumed that the electron's orbit can be specified in a unique way. Heisenberg, subsequently, developed a new model which was meant to overcome the inadequacies of earlier models(27). This kind of scientific progress is nothing extraordinary. Note, however, that at the time a natural scientist develops a model or theory to account for certain observable behavior, he assumes, of course, that the underlying princi-
ples as specified in his model are, in fact, true. Presumably Rutherford would have been amazed, had anybody asked him to provide additional evidence to show that what his model proposes is, in fact, physically real. Even if certain assumptions are later disconfirmed by new evidence, there is no reason to doubt that, in principle, physical models are meant to represent something that is physically real. It is difficult to see why similar scientific standards should not hold for the science of language. Linguistic descriptions that aim at specifying a person's knowledge of his language are, in principle, psychological. The observation that individual proposals have been shown to be incorrect, i.e. psychologically unreal, is as true as it is irrelevant. It thus seems to me that the argument that linguistic descriptions cannot be psychologically real, in principle, because assumptions and proposals have been disconfirmed, is without much force or substance. There is a further version of the argument that linguistic descriptions are not psychologically real which, as far as I can see, has a number of interesting implications. The main purpose of psycholinguistic research, so the argument runs, is to determine the nature and properties of the cognitive processes that permit a learner to acquire and/or use a (first or second) language in real-life situations. Barring certain inherent limitations, an adequate method to tap these cognitive processes seems to be one that is roughly in line with current psycholinguistic experiments which measure and evaluate subjects' real-time reactions in response to a given set of verbal material. There is good reason to believe (cf. Fodor et al. 1974), the argument continues, that current linguistic descriptions are, in principle, inadequate as models of real-time cognitive processes. Since the cognitive processes involved in language acquisition and use must obviously be something psychologically real and since linguistic descriptions fail as a model of these processes, it follows that linguistic descriptions do not represent anything psychologically real(28). The argument as outlined above is undoubtedly correct in terms of some of the fundamental observations it appeals to; however, the conclusion seems to be unwarranted, since it misconceives the basic objective of linguistic theorizing. Current proposals about the principles that specify the class of humanly accessible languages (i.e. of what is a possible language) and the nature of mental representations have nothing to say about the real-time cognitive processes that are involved in the acquisition or use of language. It needs to be emphasized that these proposals were never intended to be models of real-time cognitive processes. What linguistic descriptions are meant to be, is a principled account of a person's tacit knowledge of his language, not of how he uses or acquires this knowledge in real time. It does not make much sense to blame linguistic descriptions for not being something that they were
never meant to be. Linguistic theory develops hypotheses about a speaker's competence, i.e. his tacit linguistic knowledge in terms of a certain mental state that is attributed to him. These hypotheses do not incorporate any claims or predictions about linguistic performance, i.e. how a person uses his knowledge in real-time situations (see also White 1981). Assuming for a moment that mental states and cognitive processes are equally real in principle, a serious issue concerns the question: what is relevant evidence for a theory of mental states and what is relevant evidence for a theory of mental processes? Some psycholinguists seem to think (cf. Clahsen 1982) that, since a person's knowledge of his language must somehow enter into the mental processes involved in speech perception and speech production, it is reasonable to assume that psycholinguistic experiments that provide information on the properties of cognitive processes should at the same time reveal (at least, in principle) properties of a speaker's underlying linguistic knowledge (i.e. of a certain type of mental state). This is, in fact, a very plausible (if not necessarily true (29)) assumption. The crucial (and entirely empirical) question is, however: what is the relation between specific mental states and those cognitive processes that draw upon them, i.e. how exactly does a person's competence enter into his performance? There can be little doubt that, at the present state of the art, we know very little about this problem (for some very specific proposals, see Berwick & Weinberg 1984). What little we do know suggests that the relation between mental states and mental processes is far from being straightforward. It is particularly unclear which elements of a speaker's mental representations (i.e. those that relate to his linguistic knowledge) enter into which cognitive process(es) of speech perception/production in what way. As Fodor et al. (1974:264,274) note: ' . . . the way subjects respond to sentences depends on the properties of linguistic representations. Most of the studies . . . show that linguistic structures engage psychological processes in some way, but they do not seek to explore the character of the interactions . . . they tell us very little about which mechanisms interact with which features.'
These considerations suggest that evidence from psycholinguistic experimentation might be extremely difficult to evaluate with respect to the question of how a speaker's knowledge of his language is represented in the brain, i.e. which theory of grammar is true(30). While it is obvious that a person's knowledge must enter into his cognitive processing somehow, the problem is that it is difficult to know how. As Berwick & Weinberg (1983) show, many researchers have assumed serial computation in sentence processing, i.e. grammatical processes are carried out one after the other. This assumption led to certain incompatibilities
between psycholinguistic results (in particular, reaction time testing) and certain claims of generative grammar. Consequently, some researchers, such as Bresnan (1978), chose to modify linguistic theory in order to accommodate the theory of grammar with the available psycholinguistic evidence. This is a plausible, but by no means logically necessary, step to take. Berwick & Weinberg (1983) argue that one could equally well change one's assumptions about the brain's computational capacities, allowing e.g. a certain degree of parallel computation, in which case reaction time data do not mirror computational complexity in any straightforward way. Despite these difficulties some researchers seem to think that mental processes must be attributed some kind of 'cognitive primacy' and that mental representations are only psychologically real to the extent that they are a function of mental processes. Thus Clahsen et al. (1982:78) argue that '. . . all grammatical structures and rules have to be evaluated in terms of what is known about mental representations and about processes of speech perception and production . . . no statement of the grammar must be incompatible with insights from psychology and psycholinguistics with respect to the mental processes that occur in actual language use.'(31)
If this statement argues for a primacy of mental processes vis-à-vis mental representations, it would be interesting to know what kind of evidence (empirical or conceptual) this view is based on. Of course, it might be true that mental representations are merely a function of cognitive processes, or conversely, that cognitive processes are exhaustively derivable from mental representations; but I do not know of any evidence suggesting that either alternative is true. One could object, of course, that all the previous arguments only go through, if the assumption of an underlying tacit knowledge which mediates a person's linguistic behavior is, in fact, correct. Conceivably, one might want to argue that this assumption is false and that all that exists are real-time cognitive processes triggered by certain factors and phenomena of a person's experience(32). Tacit knowledge does not enter anywhere in the production/perception of speech. It should be clear that this is a very serious issue which is not restricted to the discussion of language and its acquisition, but rather touches a fundamental problem in cognitive psychology, in general. Is it necessary to assume mental representations for any type of a person's behavior, in order to explain it, or is it possible to reach such an explanation by considering only cognitive processes and the situational context in which they arise? In other words: do mental representations exist? Since much of the preceding discussion depends on how we answer
this question, it is necessary to review the relevant evidence in some detail. I will turn to this issue in the following section.
4. MENTAL REPRESENTATIONS VS. COGNITIVE PROCESSES
I believe that some of the major theoretical and conceptual problems in contemporary language acquisition research result from a growing neglect of what seem to me to be fundamental distinctions. In point of plausibility one might want to distinguish a specification of what the learner tacitly knows of his language at various stages of development from a theory of acquisition proper stating the principles through which the learner acquires this knowledge (see e.g. Clahsen et al. 1982:93ff.). Furthermore, one might want to distinguish between principles of language acquisition and principles of language use, the latter determining how the learner uses the knowledge he has acquired up to a certain point. It is at least conceivable that these three domains (acquisition, knowledge, use) are governed by quite distinct, though interacting cognitive systems or principles. Of course, this does not have to be true; it might turn out that the principles that govern the three domains under discussion fall together on some level of abstraction (though the question as to which remains). In other words, whether we have to assume one uniform system of principles or various distinct systems of principles cannot be decided a priori, but is strictly a matter of empirical fact. While we are still very far from being able to answer these questions with a fair amount of certainty, there is, I think, some indication that, once the relevant assumptions are made sufficiently explicit to be empirically testable, the proposal that one and the same set of principles governs all three domains loses much of its initial plausibility. It is for example easy to think of certain communication strategies in language use which give the learner a fair chance of successfully getting his message across despite his limited competence (say, random selection from a finite repertoire of structures; cf. Felix 1982, Felix & Simmet 1982, Hahn 1982), but which are hopelessly implausible candidates as principles of acquisition. Furthermore, one might suspect that the level of abstraction on which such different principles might fall together (e.g. on the level of memory limitations) is bound to incorporate concepts of such generality that the range of observable phenomena can no longer be specified in any meaningful way(33). Failure to observe the distinctions indicated above has, it seems to me, led to considerable conceptual confusion in contemporary acquisition research. Frequently, studies that obviously deal for example with production strategies of (L1 or L2) learners lead suddenly and without
further justification to claims about language acquisition and vice versa. I believe that the frequent lack of conceptual precision in modern acquisition research has both methodological and theoretical origins. Let us turn to the methodological problems first. It is, no doubt, extremely difficult to determine in any mechanical way a speaker's knowledge of his language (though one might consider this fact a particular intellectual challenge in our field)(34). Since this knowledge is not directly observable, it seems that the only data available are those taken from observations of people's verbal behavior, i.e. the utterances a speaker produces in a given situation. However, the problem is that the actual utterances which a speaker produces do not reflect his knowledge of the language in a straightforward way; rather, a wide variety of different factors - his general beliefs, attitudes, motivations, as well as limitations on his memory capacity, etc. - enter into real-life performance. We may thus assume that in actual speech a large number of external factors and cognitive systems interact, the knowledge of the language being one of them. The methodological problem of how to determine a speaker's knowledge of his language is by no means new and has been widely discussed in the literature. As Clahsen et al. (1982:5) correctly observe: 'In certain individual cases it may, in fact, be difficult to determine whether a given structural phenomenon reflects a hypothesis of the learner about the structure of the target language, or whether his underlying grammatical knowledge is masked by strategies of language use.'(35)
Because of the difficulties of directly inferring a person's tacit knowledge from his actual behavior, studies within the field of theoretical linguistics have systematically relied on grammaticality judgements which are, however, not without their individual problems.(36) While it is notoriously difficult to determine the linguistic knowledge of an adult native speaker (though certainly not more difficult, in principle, than obtaining significant insights in many other sciences), it may become tantalizing to find appropriate methods of tapping the linguistic knowledge of someone who is still in the process of acquiring the language. It seems to me that the real problem does not so much relate to the fact that a learner's knowledge of his language is continually changing, but rather, that the number of factors and systems interacting with the learner's linguistic knowledge is presumably even greater than in the case of the adult speaker. In particular, the significant discrepancy between what the learner wants to say and what he is able to say on the basis of his acquired knowledge will lead to a substantial body of communication strategies designed to overcome the learner's linguistic limitations. Consequently, in order to arrive at a fairly plausible descrip-
tion of what the learner knows at a given point in time, one will have to abstract away from an even greater number of factors. Furthermore, for obvious reasons, grammaticality judgements which have become a standard and fairly reliable experimental device in linguistic theory cannot - at least not easily - be obtained from learners during their early stages of development. It is my impression that these methodological difficulties are the primary motivation for many researchers to abandon altogether the goal of specifying the learner's knowledge and the way in which this knowledge is acquired. In reaction to this quite a number of researchers have turned to the study of the learner's speech performance speculating on the kinds of strategies that he might have employed to produce his utterances. It seems to me that this shift involves a number of problems with respect to the question of how language is acquired. There is, of course, nothing wrong with studying strategies or principles of speech production and perception(37), however, it should be clear that such studies do not automatically tell us anything about language acquisition. As noted before, we cannot assume without empirical justification that the principles of speech perception and/or production are the same as those governing language acquisition (see Bever 1970), i.e. the attainment of a certain type of knowledge. In other words, we cannot simply transfer insights from one area to another. Studies of speech perception and production (of learners), while covering an important research area in itself, cannot be taken as an appropriate replacement for studies of language acquisition. Even if processes of perception and production were fully understood, both the logical and empirical problem of language acquisition might still remain unsolved. Ignoring the methodological problems, it seems to me that many psycholinguists have abandoned the distinction between (tacit) knowledge of a language and language use, because they feel that such a distinction is either empirically unmotivated or will become irrelevant once the mechanisms and strategies of language use are fully understood. They believe that what is commonly called 'competence' will eventually turn out to be totally explicable in terms of performance regularities. This, of course, is a very strong claim which needs to be examined with much care. Note, first of all, that such a view might very well turn out to be true. However, there is no reason to believe that it must be true. If there were a theory that succeeds in exhaustively explaining significant properties of speaker's competence in terms of performance strategies, we would, in fact, have compelling evidence to abandon the distinction. To my knowledge, no such theory exists. I believe that this 'behavioral' view is particularly strong in studies that are being carried out in the spirit of contemporary psychology(38). Since
it is verbal behavior, i.e. speech production, that is most readily observable and amenable to precise testing procedures, many psychologists consider the investigation of a learner's verbal behavior as the only reasonable and manageable object of inquiry. This attitude is motivated by the fact that many traditional psychologists are quite accustomed to restricting their attention and efforts to the description of human behavior rather than the mental principles underlying this behavior. As Fodor et al. (1974:3) observe: 'In these various respects, much of the domain of what is traditionally called 'cognitive psychology' turns out to be concerned with the problem of how organisms apply their concepts. The issues which emerge when one attempts to explore this domain are exceedingly complicated. To see what they are like, consider some ways in which appeals to concepts can enter into psychological explanations in even a relatively transparent case where the behavior of an organism is controlled by what it knows; consider the case of a man writing his name. One way of analyzing what happens when a man writes his name would be the following: we give a detailed description of the instantaneous positions occupied by the man's fingers, hand, and arm during the course of writing. We thus seek to represent the man's behavior as a temporally ordered sequence of postures. Such a description could presumably be given equally well in terms of the behavior of the peripheral musculature controlling these postures. Indeed, whatever we take to be the effector mechanisms of the organism, it should always be possible, at least in principle, to provide a description of the actions of the organism as a temporally ordered series of states of such mechanisms. We might even consider identifying the action 'writing one's name' with running through some specified series of the effector states. Many psychologists appear to think of the subject matter of their discipline in some such way.' (my emphasis)
It seems to me that this attitude of many professional psychologists also pervades much of contemporary research in language acquisition. If, however, there is evidence suggesting that we can make a principled distinction between what an organism does and what it can do, then psychological and psycholinguistic research might be well-advised to adopt a perspective that goes somehow beyond a mere description of how a person behaves and acts under certain circumstances. I think that there is good reason to believe that a person's behavior (linguistic or otherwise) can be considered as an overt manifestation of certain underlying mental states, as an application of concepts or ideas that determine the limits within which an individual can act. If this is correct, then psychology (and psycholinguistics) would certainly be the appropriate field(s) to study and specify the concepts underlying observable behavior instead of restricting their attention to a mere description of the observable behavior itself. It is a curious fact that many psychologists still view their discipline as a 'behavioral science', which is "as if natural science
were to be designated 'the science of meter readings' "(Chomsky 1968:65). In order to give a revealing and unified account of an organism's behavior in a particular domain, it might at least be worth while to study the organism's underlying mental states with respect to that domain(39). Without specifying the underlying concepts, principles, and ideas, there is, I think, little hope of arriving at an explanatory theory of behavior. As indicated above, the crucial problem is, however, that many psychologists and psycholinguists seem to believe that it is both empirically and theoretically unnecessary to postulate underlying mental states/representations to explain an organism's behavior. What type of evidence do we have to assume the existence of such underlying mental states? Let us first take a look at Fodor et al.'s example concerning the notion 'writing one's name'. One might suggest that when we say that someone knows how to write his name, we do not intend to say that he knows how to perform a certain sequence of movements or postures, rather, we wish to indicate that he has some general underlying concept 'writing one's name' that guides and determines all overt behavior of actually writing down his name. This underlying concept or mental state is the unifying force behind a large (presumably infinite) variety of different behaviors. Someone can write his name in printed letters, on a piece of paper, on a blackboard, or he can use a chisel to carve it on a piece of stone. In terms of the overt, observable behavior his actions demonstrate a wide variety of muscular movements and effector states; the fact that we call all these highly different types of behavior 'writing one's name' is obviously due to our intuition about the existence of some underlying concepts or knowledge that somehow unify an infinite range of different actions. In other words, a person's knowledge (or mental state) defines the possible range of behaviors in a particular domain, i.e. what the person is able to do rather than what he actually does in a given situation. The assumption of underlying mental, representations is thus motivated by the attempt to explain how an indefinitely large number of different actions can still count as realizations of the same type of behavior. It is difficult to see how behavior can be described and explained in some satisfactory way without taking into consideration both an organism's behavioral potentialities and its underlying tacit knowledge with respect to a particular domain(40). I believe that part of the problem arises from a fundamental misunderstanding of the term 'tacit knowledge'(41). The general idea underlying this term has to do with the difference between the two notions 'knowing how' and 'knowing that'. What makes things somewhat complicated is that this distinction is frequently used with quite a variety of different meanings.
First, there seems to be an obvious difference between what we know how to do ('knowing how') and what we know how to explain ('knowing that'). This might be the case with a large number of everyday skills and abilities. In other words, the capacity to perform a certain skill does not necessarily entail the ability to explain how the skill is (best) performed. This is most evident in the case of children, who, in general, are unable to (adequately) explain many of the skills that they are, nevertheless, able to perform. The same may also be true for adults; someone can be, say, an excellent tennis player, but be totally unable to explain how he plays. But even in those cases where a person is able to both perform and explain a certain skill these two abilities are not mutually dependent. As Fodor (1981:70) notes: ' . . . there are cases where we know how to do X and can give an account of what we do when we do X, but where it seems clear that the ability to give the account is logically and psychologically independent of the abilities involved in X-ing. Thus, if I think about it, I can tell you which fingers I put where when I type 'Afghanistan' or tie my shoes. So, I know how to do it and I know that when I do it I do it that way. But what I do when I do it has, I imagine, very little or nothing to do with what I do when I explain how I do it. What suggests that this is so is that I don't have to think when I type 'Afghanistan' or tie my shoes, but I do have to think when I try to explain how I type 'Afghanistan' or tie my shoes.'
It is thus clear that in this sense 'knowing how' is not only distinct from 'knowing that', but more importantly, with respect to a certain skill there may be 'knowing how' without 'knowing that' (and presumably also vice versa): i.e. a person may know how to X, but be unable to answer the question: 'how does one X?'. If the term 'knowledge' is understood as 'knowing that' in the sense of 'being able to give an account of', then it is clear that knowledge is not a necessary prerequisite for certain types of behavior. It is these considerations that have frequently been taken as conclusive evidence that, at least in certain cases, the ability to perform does not presuppose any underlying knowledge. Quite clearly the term 'tacit knowledge' cannot refer to the notion 'knowing that' in the sense outlined above. What, then, is the justification for assuming some type of special 'knowing that' which has nothing to do with the ability to explain and which is therefore specified as 'tacit'? Note, first of all, that there is an obvious difference between whether a person who knows how to X is able to answer the question 'how does one X?' and whether the question 'how does one X?' can, in principle, be answered at all. To take up Fodor's example again, whether or not a person who knows how to tie his shoes is able to answer the question 'how does one tie one's shoes?' is obviously independent of the fact that there is a way to explain how one ties one's shoes. One might give an
account of the various movements and their interactions that occur when a person ties his shoes. Since there is such an account, i.e. since the question 'how does one tie one's shoes?' can be answered (irrespective of whether or not an individual person can answer this question), we might suggest that the act of performing X involves employing a system of rules which, in fact, do nothing more than specify how to X. In other words, such a system of rules is taken to be the answer to the question 'how does one X?'. If this line of reasoning is correct, then one might want to say: 'if an agent regularly employs rules in the integration of behavior, then if the agent is unable to report these rules, then it is necessarily true that the agent has tacit knowledge of them' (Fodor 1981:74). In other words, tacit knowledge is the mental representation of a system of rules which a person regularly employs to produce a certain type of behavior without being aware of them. So far we have merely given a definition of what we mean by the term 'tacit knowledge'. The question still remains: what is the justification for attributing 'tacit knowledge' (i.e. mental representations of rule systems) to an organism who is able to regularly produce a certain type of behavior? The fundamental consideration is that while it is possible that an organism knows how to X without being able to answer the question 'how does one X?', an organism certainly cannot know how to X, if an answer to the question 'how does one X?' does not exist, not even in principle. In other words, the ability to X (i.e. knowledge how to X) presupposes the existence of a principled answer to the question 'how does one X?'. If there is no such answer, in principle, then we cannot speak in any meaningful way about 'the ability to X' or 'knowledge how to X'. Now, assuming that psychology is the science that is essentially concerned with the description and explanation of an organism's behavior, it is certainly reasonable to expect that an adequate psychological theory must somehow provide an answer to the question: how do organisms produce certain types of behavior? To put it differently, if there is an answer to the question 'how does one X?', an adequate psychological theory concerned with X-ing should, among other things, contain this answer. Of course, one might think - at least in principle - of different types of answers that a psychological theory could contain. Suppose that an answer is given in terms of a system of rules which exhaustively account for the behavior in question (e.g. X-ing). As an analogy, one might visualize a machine which contains this system of rules as a program of instructions, and, as a consequence, produces the same and only the same behavior that a given organism has been observed to produce. Under these conditions, I think, it would be reasonable to assert that the specification of the rule system is an adequate psychological account of
the behavior under investigation. In other words, a psychological theory explains a certain type of behavior to the extent that it specifies - among other things - those rules that would regularly and consistently produce the behavior. If this is what psychological theories are about, then it is clear that the rules in question are psychological rules that are formed as hypotheses about mental/cognitive structures (assuming that these are the ultimate concern of psychology). I believe that this is as much as we can do in scientific inquiry, and I believe that the line of reasoning I have suggested is quite standard in other branches of science. There is little reason to suppose that the framework in which the general behavior of an organism and its underlying principles can be fruitfully explored should be inapplicable when it comes to language or verbal behavior. In other words, one important aspect of linguistic research is to determine a speaker-hearer's tacit knowledge of his language and how he puts this knowledge to use. In a similar way, it would seem to be a fruitful perspective for psycholinguistic research to determine a learner's knowledge at various stages of development, how he acquired this knowledge and how he puts it to use. A reasonable approach to the problem of language acquisition may be to ask, first of all, how a person's tacit knowledge of his language can be made explicit; i.e. what does a person actually know when we say he knows his language? It is self-evident that much depends on how this question is answered. Any adequate learning theory has, of course, to be formulated in such a way that it will, in principle, be capable of accounting for the attainment of exactly the type of knowledge that we know the adult speaker to possess. Apart from the fact that even on a pre-theoretical level it does not make much sense to study learning without taking into consideration what, in fact, has to be learnt, it is fairly obvious that the principles of an adequate learning theory have to somehow take into consideration the properties of the object of learning. That is, a learning theory is adequate only to the extent that it can plausibly account for the learning of what is eventually learned. Even though all this seems to be highly trivial, it is my impression that many psycholinguists tend to vastly underestimate what, in fact, constitutes knowledge of a language and what therefore has to be eventually learned. Looking at some of the psycholinguistic literature that has appeared over the past 15 years, one can easily get the idea that what the learner has to acquire is little more than a few word order rules, permutations, morphemes, negative placement, question words, etc. plus a few rules governing discourse. While these domains certainly represent central aspects of all languages, it can easily be shown that knowing a language also involves knowledge of an essentially different nature. More specifically, knowing a language involves knowledge of some very deep and abstract principles for which, to
introduce a further complication, it is difficult to imagine how they can be induced from direct observation of the surface structure of linguistic forms (see Hornstein & Lightfoot 1981). In order to illustrate some such principles and to give some idea of what it means to know a language, I will present a few very simple examples which are all taken from the standard linguistic literature. Consider the following sentences: (6)
a. Bill stole a car.
b. What did Bill steal?

(7) a. Mary believes that John stole a car.
    b. What does Mary believe that John stole?

(8) a. Bill said that Mary believes that John stole a car.
    b. What did Bill say that Mary believes that John stole?
It is clear that the (b)-sentences are questions asking for the constituent a car in the corresponding (a)-sentences. We may thus assume that there is, in English, a fairly simple rule of question formation (ignoring irrelevant details for the moment) by which a certain type of constituent can be replaced by an appropriate question word which is then moved to sentence-initial position. Note that this rule is very general in that it also operates across clause boundaries, as is demonstrated by (7b) and (8b). From the preceding considerations it follows that knowing English also involves (tacit) knowledge of this rule and it is not difficult to imagine how a child exposed to such utterances could induce the relevant rule from his linguistic experience. The crucial observation is, however, that there are very severe constraints on the application of this rule, as can be demonstrated by the following examples:
(9)  a. Mary believes the claim that John stole a car.
     b. *What does Mary believe the claim that John stole?

(10) a. John stole a bicycle and a car.
     b. *What did John steal a bicycle and?

(11) a. The car that belongs to Mary was stolen by John.
     b. *Who was the car that belongs to stolen by John?

(12) a. That John will marry Mary is obvious.
     b. *Who is that John will marry obvious?
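As a rough illustration of how a constraint of this kind might be stated explicitly, the following sketch may be helpful. It is my own toy fragment, not an analysis proposed in the text or in the literature cited there: the bracketed structures, the node labels CLAUSE, CNP, COORD, REL and SUBJCL, and the island list are simplifying assumptions. The sketch merely checks whether a wh-phrase may be fronted from a given position: fronting is allowed across ordinary clause boundaries, as in (7b) and (8b), but blocked whenever the path down to the questioned constituent crosses an 'island' node, as in (9b) - (12b). The discussion of these examples continues below.

# Toy sketch of question formation with an island constraint (illustrative only).
# CLAUSE, CNP (complex noun phrase), COORD (coordination), REL (relative clause)
# and SUBJCL (sentential subject) are simplifying assumptions, not an analysis.

ISLANDS = {'CNP', 'COORD', 'REL', 'SUBJCL'}

def path_to(tree, target, path=()):
    """Return the labels of the nodes dominating `target`, or None if absent."""
    if tree == target:
        return path
    if isinstance(tree, tuple):
        label, *children = tree
        for child in children:
            found = path_to(child, target, path + (label,))
            if found is not None:
                return found
    return None

def extractable(tree, target):
    """A wh-phrase can be fronted from the position of `target` unless the
    path down to it crosses one of the island nodes."""
    path = path_to(tree, target)
    return path is not None and not any(label in ISLANDS for label in path)

# (7a) Mary believes that John stole a car           -> (7b) is grammatical
s7 = ('CLAUSE', 'Mary', 'believes',
      ('CLAUSE', 'that', 'John', 'stole', 'a car'))
# (9a) Mary believes the claim that John stole a car -> (9b) is out
s9 = ('CLAUSE', 'Mary', 'believes',
      ('CNP', 'the claim', ('CLAUSE', 'that', 'John', 'stole', 'a car')))
# (10a) John stole a bicycle and a car               -> (10b) is out
s10 = ('CLAUSE', 'John', 'stole', ('COORD', 'a bicycle', 'and', 'a car'))

print(extractable(s7, 'a car'))    # True:  movement across a clause boundary
print(extractable(s9, 'a car'))    # False: blocked by the complex noun phrase
print(extractable(s10, 'a car'))   # False: blocked by the coordinate structure

Nothing depends on these particular encodings; they merely make the idea of an extraction constraint concrete.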
The (b)-sentences in (9) - (12) are formed by exactly the same question rule as the (b)-sentences in (6) - (8), namely by replacing a constituent ('a car' in (9) and (10); 'Mary' in (11) and (12)) by an appropriate question word which is moved to sentence-initial position. Of course, every native speaker of English knows that the sentences (9b), (10b), (11b) and (12b) are ungrammatical, while (6b), (7b) and (8b) are fully grammatical. It should be clear that any adequate theory of language acquisition must, in principle, be able to specify mechanisms that allow for the development of exactly this type of knowledge. One of the questions that linguistic theory is interested in is: how can we account for this knowledge of a speaker of English, i.e. on what basis does a speaker of English know that (9b) - (12b) are ungrammatical? There have been various proposals to explain the ungrammaticality of the sentences exemplified above. Without going into details, I will merely mention that such proposals focus on very general and abstract principles that constrain either the application of rules (such as the question rule) or the way in which a sentence can be interpreted. At a very intuitive level it seems to be clear that what makes the above sentences ungrammatical is the fact that a question word has been moved out of a certain type of structure, e.g. a complex noun phrase in (9b) and a relative clause in (11b). It thus appears that certain structures constitute some kind of 'island' that does not allow the extraction of e.g. wh-words. It is now assumed that what the speaker of English has to know is not a collection of individual structures (in which case it would be difficult to explain how he could acquire knowledge of these structures, in case the collection is infinite), but rather a principle which defines the potentially infinite class of structural configurations from which wh-words cannot be extracted. It is obviously an empirical question to determine what the crucial common properties of such island structures are. Once a principle which specifies these properties is found, we then say that this principle explains the ungrammaticality of the sentences under discussion. Since every native speaker knows that the sentences (9b), (10b), (11b), and (12b) are ungrammatical, it is plausible to assume that such principles (once they are properly formulated) constitute a significant part of a speaker's knowledge of his language. In other words, knowing a language is not merely knowing words, rules, inflectional morphemes and the like, but knowing a language also involves (tacit) knowledge of such principles. In order to illustrate the highly abstract and formal nature of these principles, let us look at a somewhat different case. Consider the following sentences:
(13) a. John loves the dog that bit him.
     b. The dog that bit him is loved by John.
     c. The dog that bit John is loved by him.
     d. He loves the dog that bit John.
The crucial aspect of these sentences has to do with the problem of coreferentiality. Two noun phrases (NPs) are said to be coreferential, if they refer to the same individual. Every native speaker of English knows that, in this sense, John and him in (13a-c) may be coreferential, while in (13d) he and John cannot be coreferential, but must refer to two different individuals. How can we account for these facts? Lasnik (1976) and Reinhart (1976) have proposed the Noncoreference Principle which roughly asserts the following: (14)
Noncoreference Principle(42)
Given two noun phrases NP(1) and NP(2) in a sentence, NP(1) and NP(2) are noncoreferential, if all of the following conditions hold:
a) NP(1) precedes NP(2)
b) NP(1) commands NP(2)
c) NP(2) is not a pronoun
(NP(1) commands NP(2) if NP(1) is not in a subordinate clause from which NP(2) is excluded.)
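To make the three conditions concrete, here is a small sketch of how the principle could be checked over toy phrase structures. It is my own illustration, not Lasnik's or Reinhart's formalization: the bracketed encodings, the pronoun list and the helper names are simplifying assumptions chosen only to reproduce the judgements for (13a) and (13d).

# Toy check of the Noncoreference Principle in (14) -- illustrative only.
# A clause is ('S', child, ...), a noun phrase is ('NP', word, ...); plain
# strings are words. These encodings are deliberately simplified assumptions.

PRONOUNS = {'he', 'him', 'she', 'her', 'it', 'they', 'them'}

def subtrees(tree, label, out=None):
    """Collect every subtree whose category label is `label`."""
    if out is None:
        out = []
    if isinstance(tree, tuple):
        if tree[0] == label:
            out.append(tree)
        for child in tree[1:]:
            subtrees(child, label, out)
    return out

def dominates(node, target):
    """True if the node `target` occurs inside `node` (checked by identity)."""
    if node is target:
        return True
    if isinstance(node, tuple):
        return any(dominates(child, target) for child in node[1:])
    return False

def np_positions(tree, counter=None, out=None):
    """Pair every NP subtree with the index of its first word (for precedence)."""
    if counter is None:
        counter, out = [0], []
    if isinstance(tree, tuple):
        if tree[0] == 'NP':
            out.append((tree, counter[0]))
        for child in tree[1:]:
            np_positions(child, counter, out)
    else:
        counter[0] += 1
    return out

def commands(np1, np2, tree):
    """NP1 commands NP2 unless some subordinate clause contains NP1 but not NP2
    (the definition given under (14))."""
    for clause in subtrees(tree, 'S'):
        if clause is tree:
            continue                      # only subordinate clauses count
        if dominates(clause, np1) and not dominates(clause, np2):
            return False
    return True

def noncoreferential(np1, np2, tree):
    """All three conditions of (14): precedence, command, NP2 not a pronoun."""
    pos = {id(np): i for np, i in np_positions(tree)}
    return (pos[id(np1)] < pos[id(np2)]
            and commands(np1, np2, tree)
            and np2[1] not in PRONOUNS)   # single-word NPs assumed here

# (13a) John loves the dog that bit him  -> coreference possible
john, him = ('NP', 'John'), ('NP', 'him')
s13a = ('S', john, 'loves', ('NP', 'the', 'dog', ('S', 'that', 'bit', him)))
print(noncoreferential(john, him, s13a))     # False: condition (c) fails

# (13d) He loves the dog that bit John   -> must be disjoint in reference
he, john2 = ('NP', 'he'), ('NP', 'John')
s13d = ('S', he, 'loves', ('NP', 'the', 'dog', ('S', 'that', 'bit', john2)))
print(noncoreferential(he, john2, s13d))     # True: all three conditions hold

Nothing hinges on this particular encoding; the point is only that the conditions in (14) are explicit enough to be checked mechanically.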
As the reader can easily check, the Noncoreference Principle correctly explains the coreference facts in (13a-d). Note that this principle is not restricted to pronominalization, as can be seen in the following sentences in which no pronouns appear: (15)
a. People who have voted for Reagan seem to like Reagan.
b. Reagan said that Reagan will lose the next elections.
Here again, the two Reagans in (15a) may be coreferential, whereas in (15b) Reagan in the embedded sentence must necessarily refer to someone different from Reagan in the matrix sentence. This is correctly predicted by the Noncoreference Principle. The same principle also accounts for coreference between two nonidentical nouns in cases such as (16a) and (16b). (16)
a. People who have worked for Reagan seem to like the president.
b. Reagan said that the president will lose the next elections.
While coreference between Reagan and the president is possible in (16a), it is excluded in (16b). The discovery and specification of such principles are significant for a number of reasons. First, these principles give some fairly precise idea of what it means to know a language and what kind of abilities the acquisition process must ultimately lead to. Since every speaker of English is able to make judgements about coreferentiality in sentences such as (13) - (16), it is clear that the notion 'knowing English' implies knowledge of something like the Noncoreference Principle(43). Secondly, the high degree of abstractness of these principles indicates the general nature of some of the requirements that theories of language and language acquisition have to meet. Whatever cognitive mechanism is proposed in the domain of language acquisition and use, it must be designed in such a way that it can handle the formal properties of these principles, i.e. it must, in principle, be able to account for the fact that something like the Noncoreference Principle may constitute part of an adult speaker's knowledge of his language. The abstract nature of the principles of natural languages thus constitutes a crucial test of adequacy for any proposed cognitive process. Finally, it is difficult to see how the child could induce a principle such as the Noncoreference Principle from his/her actual linguistic experience. There appears to be very little in real-life utterances that could plausibly be assumed to lead the child to hypothesizing the existence of such abstract principles. It thus does not seem unnatural to suppose that these principles - or at least some of their essential properties - need not be learned, but rather constitute part of the initial mental state the child is born with (for more extensive discussion of this issue, see Baker 1979a, Hornstein & Lightfoot 1981, White 1982). I believe that, for obvious reasons, these considerations suggest a number of significant consequences for language acquisition research. If psycholinguists deliberately continue to restrict their attention to the analysis of only fairly superficial phenomena such as word order, neg-placement, morphemes, interrogative transformations and the like, there is, I feel, not much hope of successfully constructing a theory of language acquisition that will offer explanations at a sufficiently deep level(44). A more promising perspective may therefore be to study the way in which the abstract principles constraining the structural properties of natural languages emerge in the individual and interact with his/her linguistic experience (for such an approach, see e.g. Matthei 1978, Adjemian & Liceras 1982, Mazurkewich 1981, Singh 1981). I therefore wish to argue that neglecting the abstract principles of natural languages will eventually prevent any meaningful and deeper insights into what it means to acquire knowledge of a language. Studies of the
verbal behavior of language learners should take into consideration what is presently known about the properties of natural languages. It does not make much sense to propose principles and mechanisms of language acquisition, if it is totally unclear how these principles and mechanisms could ever lead to full knowledge of a language in the sense illustrated above. It is frequently argued that, while principles such as those discussed above may, in fact, look quite convincing when it comes to an account of the adult speaker's linguistic knowledge, they are not very useful for studies of language acquisition for the simple reason that children, i.e. language learners, do not commonly use complex sentences of the type illustrated above during the earlier stages of the learning process. Since developmental psycholinguistics, so the argument runs, has been primarily concerned with the early stages of language development, considerations about the abstract principles of language simply do not apply. These observations are, of course, as true as they are irrelevant. Any adequate theory of language acquisition must, of course, be able to account for the learning process in all its phases and its entire complexity, at least in principle. A theory that can only account for the earlier stages of the learning process, but is, in principle, unable to explain later developments, simply does not qualify as an adequate theory. To illustrate this point, let us suppose that someone merely looks at those developmental stages where learners rarely produce utterances longer than two or three words. On the basis of such data it may even be possible to propose a very simple model which merely asserts that learners store in their memory a finite repertoire of structural configurations which then serve as a basis for their utterances. While such a model may give an adequate account of early speech productions, it is patently inadequate as a theory of language acquisition, because, for obvious reasons, it cannot account for any of the later developments. Similarly, we may assume that during the earliest stages children do not use recursive rules, so that the model of early learning need not contain such devices. However, a theory of language acquisition which does not allow for recursiveness is, of course, hopelessly inadequate. It is, quite clearly, conceivable that the learning mechanisms holding for the early stages are of a totally different nature from those holding for the later stages, i.e. we might assume that we need different learning theories for different developmental stages. I do not think that this is a very plausible assumption, but if this is supposed to be true, then, of course, it must be demonstrated. As indicated before, one of the crucial questions obviously concerns the relationship between a speaker's competence, i.e. his knowledge of the language, and his performance, i.e. his use of this knowledge.
Presently, very little is known about this relationship, and some people are overly pessimistic as to whether much will be known in the near future. This pessimism seems to derive from two considerations. One is that we are just beginning to get some idea of what an adequate theory of competence must look like. Most of the constraints that have been proposed in the past are still little more than rough approximations of the principles that actually seem to govern a speaker's knowledge of his language. Even though there are numerous reasonable and empirically well-motivated proposals, there is good reason to believe that many of these proposals still need major revisions(45). The second reason why some people are somewhat doubtful about current possibilities for constructing an adequate theory of language use has to do with the enormous complexity of performance phenomena. Language use seems to involve the interaction of so many different systems and mechanisms, most of which are still very badly understood, that it seems hopelessly premature - at the present state of the art - to develop a theory of performance which will meet criteria of explanatory adequacy, i.e. one that goes beyond a mere listing of a large number of individual observations(46). Despite these difficulties there have been a number of stimulating proposals in the past concerning the relationship between competence and performance. The relevant issues in connection with these proposals will partially bring us back to the problem of psychological reality which we have already discussed in the preceding section. For the past 25 years linguists working within the framework of generative grammar have constructed rule systems and specified general principles governing these rule systems in an attempt to characterize a speaker's tacit knowledge of his language. Even though most of the rules and principles as well as their interaction are still poorly understood, there is little doubt that some such rules and principles must exist(47). Given some moderately adequate account of a speaker's linguistic competence, it seems to be a reasonable guess that the rules and principles which represent a speaker's knowledge of his language might - at least in part - be the same as those that determine his use of his knowledge, i.e. his performance in real-life situations. Note, however, that while the assumption that competence (knowledge of a language) and performance (language use) are governed by the same rules and principles may be plausible, it is by no means necessarily true. It remains a matter of empirical fact whether or not the assumption is correct. There have been various psycholinguistic experiments testing the assumption that the principles and rules which specify the nature of mental representations of language are the same as those that govern the cognitive processes that operate when a person is actually speaking.
Many of these experiments have been reviewed by Fodor et al. (1974); for the sake of brevity I will only mention some of the major results. Let us first look at a particularly clear case which concerns grammatical operations commonly called transformations. Linguistic theory assumes - as a formal universal - the existence of transformations which specify the relationship between minimally different phrase structures. Thus, in some earlier version of generative grammar it was assumed that there is a grammatical operation, the passive transformation, which, roughly speaking, converts an active sentence into a passive sentence. Similarly, there is a question transformation and a negation transformation which convert a declarative sentence into an interrogative and negative sentence respectively. In order to account for the full range of empirical facts we have to assume that more than one of these transformations can apply, since we find negative questions, negative passives, interrogative passives and, if all three transformations are applied, negative passivized questions. It should be borne in mind that transformations in linguistic theory are not merely descriptive devices, but have to be viewed as a hypothesis about certain properties of a speaker's competence, i.e. about his tacit linguistic knowledge. Quite naturally this hypothesis might prove to be wrong, and recent studies in linguistic theory (Bresnan 1979, Emonds 1976, Koster 1976, Chomsky 1981) have, in fact, produced evidence to suggest that many structural relationships which in earlier versions of the theory were described in terms of transformations are more adequately represented by other devices (e.g. interpretive rules). However, it is important to bear in mind that the assumption of transformations as a computational device in the theory of grammar, be it right or wrong, is not a hypothesis about real-time cognitive processes as they occur in speech production; rather, it is a hypothesis about how linguistic knowledge is represented in the human mind and how as such it interacts with other cognitive systems in actual performance. Nevertheless, one might hypothesize with some plausibility that grammatical operations, such as transformations, also represent real-time cognitive processes as they occur in normal speech production. That is, one might assume that what actually goes on in the speaker's mind when he produces a passive sentence, can be described in terms of a real-time cognitive process having the same formal properties as a grammatical transformation. If this turns out to be true, we can conclude that - at least in a particular domain - transformations represent both grammatical relations on the level of competence and cognitive operations in actual sentence processing. This hypothesis has been tested in a number of experiments. One such experiment was carried out by Miller & McKean (1964). I will not
describe here the details of the experimental design; for our purposes it is sufficient to state the general rationale behind this experiment. If transformations are an adequate model of the cognitive processes that occur in sentence production, then it should be the case that the more transformations a subject has to apply, the longer it will take him to convert one type of structure into another (see, however, Berwick & Weinberg's (1983) argument that this is only true under certain specific assumptions about the brain's computational capacity - assumptions that are by no means logically necessary). Thus, converting a declarative sentence(48) into either a passive, an interrogative, or a negative sentence should require approximately the same length of time, because, in each case, only one transformation has to be applied. If, however, a declarative sentence has to be converted into a passivized question, a negative question or a negative passive, then this should take longer, since, in this case, two transformations need to be carried out. Going one step further, converting a declarative sentence into a negative passivized question should again take longer because, in this case, three transformations need to be applied. Measuring the time that subjects needed to convert sentences into other sentences, the results of the Miller and McKean experiment showed quite clearly that, in many cases, the transformational relationship between sentences did, in fact, seem to reflect certain aspects of the cognitive mechanisms involved in sentence processing. That is, the larger the number of transformations that separate the source from the target sentence, the longer it takes the subject to carry out the task. However, some other results of the Miller and McKean experiment suggest that things are much more complicated. Thus, the negative transformation proved to be easier to process than the passive transformation, although the theory of grammar does not specify any properties of these transformations that could account for this difference. Furthermore, pairs of sentences which include a declarative sentence as the source seem easier to process than pairs which involve two derived sentences, as for example converting a negative sentence into a passive sentence. On the surface these results seem to suggest that transformations are, in fact, an adequate model of real-time cognitive processes in speech production. However, there are numerous other studies which seem to point in just the opposite direction, as e.g. the experiments by Clifton, Kurtz & Jenkins (1965) and Clifton & Odom (1966). The experimental design in these studies is somewhat complex and does not directly bear on the issues to be discussed here, so we do not have to go into a detailed description of the experiments themselves. The crucial result of these studies is the following: while in some cases subjects do, in fact, seem to process sentence structures via executing individual transformations, there are also numerous other instances where this is obviously not the case. In particular,
when interrogation and negation interact either on declarative sentences or on passivized sentences, subjects apparently proceed in a way that does not reflect the transformational history of the relevant sentences. Results such as these have frequently been cited as compelling evidence against the 'psychological reality' of transformations, leading to the claim that transformations are merely convenient descriptive devices that have no mental correlates. It seems to me that this conclusion is totally unwarranted and shows some serious misunderstanding. Assuming that the results of the above-mentioned experiments are correct, then they do, in fact, suggest that people do not process sentences via the application of individual transformations; i.e. transformations as a part of the speaker's knowledge of his language need not enter the real-time production process in any direct or straightforward manner. What the above mentioned experiments do show is that competence and performance do not necessarily follow the same principles, i.e. the formal principles and properties of mental representations need not directly or uniquely be carried over into the cognitive processes of actual speech production. Quite obviously, the claim that transformations are an adequate model of certain aspects of a speaker's linguistic knowledge may still be wrong, but experiments of the type described above are not the crucial test for deciding this issue, because, as Berwick & Weinberg (1983:34) observe 'the way in which a grammar may be embedded as part of a model of language use is less than straightforward.'(49) The finding that a person's (tacit) knowledge of his language need not enter into real-time sentence-processing in any direct or straightforward way is not very surprising, but rather something that one might have expected. Assuming that in real-time speech production and comprehension many different cognitive factors and systems interact in ways that are still poorly understood, it would be rather astonishing to find that the principles of competence are directly carried over into sentence processing. All this becomes much more evident, when we look at other domains of human knowledge, e.g. mathematical computation. Suppose a person knows the principle(s) of mathematical multiplication, i.e. he knows that multiplication is a special case of addition (namely the addition of identical summands). Knowledge of this principle enables a person to carry out multiplication over an infinite domain of numbers. When specific multiplication is carried out in a real-time process, the application of this knowledge may appear in very different forms. As far as the multiplication of single digits is concerned, I would assume that many people have the results simply stored in their memory. In order to multiply, say, 5 times 7, these people would not carry out any computational process at all, but would merely retrieve the result from a finite mental repertoire. For more complex multiplications, say, 80 times 125, certain
properties of the principle of multiplication may well enter into the actual computational process. One might multiply 8 times 125 and then attach a 0 to the resulting number; or one might multiply 100 times 125 and then subtract 20 times 125, because for some people 20 times 125 appears to be computationally easier than 80 times 125. Due to certain limitations on a person's memory capacity, higher-order multiplications, in general, require paper-and-pencil work. In this case a person may only be able to perform a purely mechanical procedure of multiplication without understanding the underlying principle. These considerations show quite clearly that what a person does (cognitively) in actual computation does not necessarily tell us very much about his underlying knowledge. Some people may only know a mechanical paper-and-pencil procedure and thus be unable to carry out any multiplication, if, for some reason or other, they have 'forgotten' the procedure. It is likewise conceivable that a person can only 'carry out' those multiplications whose results he has stored in a finite mental repertoire. Someone who knows the underlying principle of mathematical multiplication, however, may carry out multiplications over an infinite range of numbers, and he may also 'invent' new procedures derived from this underlying principle. Nevertheless, in real-time mental processing he may utilize the same procedures as someone who is not aware of the underlying principle at all. It thus seems clear that the relationship between a person's knowledge and his actual use of this knowledge is anything but straightforward. In a similar way, one might assume that in real-time speech production certain standard expressions such as nice to meet you, have a good day, enjoy your dinner and the like are not mentally computed, but rather drawn from a mentally stored finite repertoire in much the same way as single lexical items. Nevertheless, in terms of the speaker's tacit knowledge these expressions are subject to the same principles that also govern sentences which may be assumed to be individually constructed in each instance of real-life language use. That is, the speaker knows that e.g. nice to meet you is an abbreviated form of it is nice to meet you, and that due to certain principles of Case Theory (Vergnaud 1979) *(it is) nice me to meet you is ungrammatical. In the light of these considerations it is fairly obvious that the relationship between competence and performance is extremely complex in terms of the basic underlying principles. The problem is that, in most cases, we do not know in any principled way exactly which cognitive systems are involved and how these interact. Experiments of the type mentioned above suggest that possibly some of the rules that specify a speaker's knowledge of his language enter directly into processes of speech production and comprehension. There is some evidence that this may be the case
for certain types of phrase structure rules (see Fodor et al. 1974). On the other hand, there is apparently no complete one-to-one correspondence between principles of competence and mechanisms of performance. In other (and maybe the majority of) cases there seem to be principles operating in sentence processing that are of a totally different nature from those that characterize mental representations. This is surely not a very promising state of affairs, but there is not very much else that can be said with any degree of certainty. It is frequently argued that one of the major weaknesses of linguistic theory is that it is exclusively concerned with competence and that matters of performance and acquisition are either totally ignored or at least considered as something more or less marginal and uninteresting. While it is certainly true that most work in linguistic theory has focused on the specification of rules and principles underlying a speaker's (tacit) knowledge of his language, it is definitely incorrect to say that other aspects of linguistic reality have not been given due consideration. Rather, the opposite seems to be true. During the past ten or fifteen years, there have been major revisions and reformulations of generative grammar that are directly and primarily motivated by considerations concerning the relationship between competence, performance and acquisition. It seems to be self-evident that a person's knowledge of his language, i.e. his competence, is one of the cognitive systems that necessarily interact with sentence-processing mechanisms. After all, one cannot use a language without knowing it. Consequently, a theory of a speaker's mental representations of his linguistic knowledge must be so designed as to make real-time sentence-processing possible, i.e. the principles of mental representations must allow for the interaction with processing mechanisms in a plausible way(50). As far as I can see, there are two major types of considerations concerning processing mechanisms which have motivated recent revisions in linguistic theory. One relates to language acquisition directly. A theory of grammar that claims to represent a speaker's knowledge of his language needs to be constructed in such a way that the class of possible grammars (i.e. the class of grammars that qualify as grammars of natural languages) it specifies is learnable, since the acquisition process must, in principle, be able to lead to the type of knowledge specified by the theory. In other words, if a theory proposes principles and rules that are simply too complex and varied to be learnable by any child under known conditions of time, exposure and age, then it must be assumed that such a theory, even if descriptively adequate, is empirically false. Linguistic theorists soon realized that original formulations of transformational grammars were much too powerful to meet the criterion of learnability
(Peters & Ritchie 1973, Wexler & Culicover 1981). Most of all, there were far too many different types of transformations, largely unconstrained with respect to their general properties(51). In other words, the class of possible grammars was much too large, so that it seemed impossible for a normal child to discover, under known conditions of language learning, the grammar of the language he is exposed to. That is to say, in the absence of any severe constraints on the class of possible grammars, language acquisition would require more time and more linguistic experience than it, in fact, does. Such earlier versions of transformational grammar, while being, in many cases, descriptively adequate, did not meet the criterion of explanatory adequacy, since they could not explain how the type of linguistic knowledge they specified could also be acquired by a normal child. Consequently, much effort in linguistic theory was devoted to the task of constraining the class of possible grammars (cf. Chomsky & Lasnik 1977), and thereby enhancing the degree of explanatory adequacy of the theory. It was argued that the more restrictions are placed on the options for a transformational grammar, the easier it would be for the child to select the correct grammar from the data he is exposed to. The logic of this argument can be illustrated by a very simple example. Suppose a person is handed a deck of cards, knowing that the deck contains only aces, kings, queens, and jacks of the same suit. The chance of correctly guessing the picture of any drawn card is obviously much greater than it would be if the deck contained all colours and pictures. In other words, much work in linguistic theory was guided by considerations of what is called the 'logical problem of language acquisition' (see Chapter II.2). The second consideration that led to major revisions in linguistic theory concerns adult speech production and comprehension. If a theory of competence proposes rules and mechanisms that, by any reasonable standards, appear to be too complex to be processible in speech production and recognition, it is again warranted to assume that the theory does not meet criteria of explanatory adequacy. Considerations of this sort led, for example, Joan Bresnan (1978) to propose some crucial revisions in linguistic theory. Bresnan has attempted to formulate restrictions that are motivated by the question of how grammars can be realized in actual speech. Within our context, it does not seem necessary to illustrate Bresnan's approach in detail (see Bresnan 1978); it may suffice to state Bresnan's general line of argument. She argues that when two descriptively adequate theories of grammar are proposed, one which attributes a heavy computational load to the transformational component and one in which a large amount of information is contained in the lexicon, it makes more sense, in terms of grammar processing, to assume that the second theory is the correct one, since 'it is easier for us to look something up than it is to compute it' (Bresnan 1978:4)(52).
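The card-deck logic can be put into rough numbers. The following sketch is purely illustrative - it is not part of the original argument, and the sizes chosen for the hypothesis spaces are arbitrary assumptions - but it indicates why shrinking the class of candidate grammars reduces the amount of evidence a learner needs in order to single out the correct one:

    import math

    def evidence_needed(num_candidates: int) -> int:
        """Minimum number of binary (yes/no) pieces of evidence that could,
        even in the best case, single out one candidate among the others."""
        return math.ceil(math.log2(num_candidates))

    # A deck restricted to ace, king, queen and jack of one suit corresponds
    # to a small hypothesis space; a full deck to a much larger one.
    for label, size in [("restricted deck (4 cards)", 4),
                        ("full deck (52 cards)", 52),
                        ("weakly constrained grammars (10^6)", 10 ** 6)]:
        print(f"{label:38s} needs at least {evidence_needed(size)} observations")

Nothing hinges on the particular numbers; the point is only that the more severely the class of possible grammars is constrained in advance, the less the learner has to extract from his linguistic experience.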
Note that all of the above arguments deal essentially with problems concerning the choice between two or more descriptively adequate theories. It should be clear that considerations of language acquisition and speech processing have, to a large extent, influenced discussions in linguistic theory during the past 20 years. In fact, the notion of 'explanatory adequacy', which plays a central role in generative grammar, relates to phenomena which lie outside the domain of competence proper. I have argued so far that a speaker's knowledge of his language is one crucial system that interacts with performance strategies and sentence processing mechanisms in actual speech. This view implies the claim that little of interest can be said about sentence processing, unless one also takes into consideration what it is that is being processed, namely the language user's competence. In other words, the principles of competence and the formal properties of mental representations are one factor that constrains the possible mechanisms and strategies in language performance. Conceivably, one might suggest that the opposite could also hold. That is, the constraints on the possible form of mental representations, i.e. the language user's competence, are merely instances of more general constraints that characterize the nature of sentence processing mechanisms, e.g. perceptual strategies, memory limitations, etc. Under this view the formal properties of natural languages directly and exhaustively reflect constraints that have to do with principles of cognitive processing in general. In other words, natural languages have the properties they do only because of certain general principles of cognitive processing. If this view is correct, it would be sufficient to construct a theory of cognitive processing and its underlying regularities; the properties of competence would then directly follow from these regularities. Whether or not this view can be empirically justified seems to be an interesting theoretical issue to which I will turn in the next section.
5. PERCEPTUAL STRATEGIES AND COGNITIVE PROCESSING
There seems to be a fairly popular view - widely held among both psycholinguists and psychologists - which asserts that both the acquisition of language and the structural properties of natural languages can be explained in terms of perceptual strategies and mechanisms of cognitive processing (Bever & Langendoen 1971, Clark & Haviland 1974)(53). Under this view, the processes involved in language acquisition and the rules specifying a person's knowledge of his language can be directly and exhaustively derived from the regularities and constraints that characterize the nature of human perceptual strategies and mental processes. In
other words, the human mind is structured in such a way that it imposes certain constraints on perception and cognitive processing in the sense that these constraints determine what a human being can potentially perceive and process in his brain. These cognitive constraints, it is then argued, provide a sufficient basis for specifying the class of human grammars, i.e. they determine in a unique way what counts as a possible natural language. Certain versions of this view are frequently referred to as 'functional explanations of natural languages'. In at least one aspect some version of this view is obviously true. Assuming that language knowledge and use involve mental representations and cognitive processes, it must be logically true that any constraint on what the human mind can potentially perceive and process must also be a constraint on the class of possible grammars(54). Suppose that someone were to propose a theory of natural languages which asserts that knowledge of a grammar means knowing an infinite repertoire of individual structures. It is clear that knowing and storing an infinite number of structures would require human memory to have an infinite storage capacity, i.e. human memory would have to be infinite. Since this is clearly not true, such a theory of language must be false, as it is based on invalid assumptions about mental processing capacities. To illustrate this point, let us draw an analogy to human physical properties. One might for example want to study the structure of bodily organs, say the heart or the lungs, in terms of their cellular structure. A rational way to proceed would be to study the specific properties of the cardiac or pulmonary cells and determine how they interact. It is intuitively clear that any statement about the cellular structure of the heart or the lungs must be compatible with what cellular biology has determined about the general properties of human cells; i.e. constraints on the general structural properties of human cells are necessarily also constraints on the cellular structure of specific bodily organs such as the heart or the lungs. A theory which attributes to, say, cardiac cells properties which human cells in general are known not to have, is obviously false. However, it is common knowledge in biology that the cells of the heart or the lungs have specific properties which, though being compatible with, do not uniquely follow from, the general properties of human cells. It thus seems to me that any suggestion to the effect that the structure of natural languages and the processes involved in their acquisition must be compatible with the general properties of mental processes and representations is obviously true. While there is no doubt that the structure of natural languages and their acquisition must necessarily conform to those constraints that determine what is humanly perceivable and processible, the crucial issue is whether or not these perceptual and cognitive constraints can explain linguistic knowledge exhaustively. In other
words, are the perceptual and cognitive constraints not only a necessary, but also a sufficient condition for specifying the properties of natural languages? Alternatively, one might suggest that there are further specific properties which must be taken into account in order to adequately characterize what it is to know a language. It is clear that this issue cannot be decided a priori, but is strictly a matter of empirical fact. Let us explore the issue concerning the relationship between perceptual strategies and the structure of natural languages in some more detail. For illustration I will use an example taken from Chomsky & Lasnik (1977). I will assume without further discussion that the theory of grammar contains what is commonly referred to as surface filters (for an empirical justification of this assumption see Perlmutter (1971), Kayne (1975), den Besten (1976), Chomsky (1981)). The assumption of surface filters implies that the base component and the transformational component of the grammar generate as output certain ungrammatical strings. This output is again input to the surface filters which mark the corresponding sentences as ungrammatical. In other words, surface filters serve to distinguish between certain types of grammatical vs. ungrammatical strings at the level of surface structure. Consider the following sentences:
(17) a. the man that I saw
     b. the man who I saw
     c. the man I saw
     d. *the man who that I saw
These data suggest that there must be some rules in the grammar of English which account for the fact that, in relative clauses of the above type, either that or who may appear in clause-initial position (for a more precise account of English relativization and complementizers see Bresnan (1979) and Chomsky (1977b)). Furthermore, while either one may appear in the context indicated, the presence of both of them as in (17d) leads to an ungrammatical string (we will not be concerned here with the filter that accounts for this ungrammaticality; for details see Chomsky (1973, 1977b)). Although the occurrence of both that and who is ungrammatical, it is possible to delete either one or both of them as in (17c). In contrast, consider the following structures:
(18) a. The man that met you is my friend.
     b. The man who met you is my friend.
     c. The man met you is my friend.
     d. The man who that met you is my friend.
The structural properties of the sentences in (17) and (18) converge to the extent that either that or who may appear, whereas the simultaneous occurrence of both who and that leads to an ungrammatical sentence. The crucial case, however, is (18c). While the sentence (17c) with that or who deleted is fully grammatical, the corresponding sentence (18c) is ungrammatical. Chomsky & Lasnik (1977) have proposed an explanation of such facts in terms of surface filters. The relevant filter for the above case can be roughly stated as follows:
(19) *[NP NP TENSE VP]
This filter asserts that a clause consisting of an NP followed by TENSE and VP which is itself a noun phrase will be marked as ungrammatical. This filter seems to correctly account for the ungrammaticality of (18c). The substring the man met you is obviously a clause consisting of an NP, the man, followed by the verb phrase met you where the verb itself is tensed (namely, past tense) and not infinitival as e.g. in the man to be fired is my friend. The entire phrase the man met you is also a noun phrase, namely the noun phrase that is followed by the verb phrase is my friend. The filter (19) is not applicable in the case of (17c), because here the string the man I saw is not of the form NP TENSE VP, and the phrase I saw, while being NP TENSE VP, does not belong to the category noun phrase, but is rather a sentence. Filter (19) receives further empirical justification from the fact that it is able to explain certain ungrammaticalities not only in the domain of relative clauses, but also in other types of embedded sentences. Consider the following sentences:
(20) a. I think that he left.
     b. I think he left.
     c. That he left is a surprise.
     d. *He left is a surprise.
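Stated mechanically, a surface filter of this kind is nothing more than a predicate over labelled bracketings. The following sketch is not part of Chomsky & Lasnik's proposal; it is a minimal illustration whose hand-built toy structures (with an abstract TENSE element and an S' layer for the complementizer) are simplifying assumptions made only for the example:

    def violates_filter_19(node):
        """Filter (19): *[NP NP TENSE VP].  An NP that consists directly of an
        NP, a tensed element and a VP (a bare tensed clause used as a noun
        phrase) is marked ungrammatical; the check is applied recursively."""
        if isinstance(node, str):                      # a bare word
            return False
        label, children = node
        child_labels = [c[0] for c in children if not isinstance(c, str)]
        if label == "NP" and child_labels == ["NP", "TENSE", "VP"]:
            return True
        return any(violates_filter_19(c) for c in children)

    # (18a) 'the man that met you is my friend' -- relative clause with overt 'that'
    sentence_18a = ("S", [("NP", [("NP", ["the", "man"]),
                                  ("S'", [("COMP", ["that"]),
                                          ("TENSE", ["PAST"]),
                                          ("VP", ["met", "you"])])]),
                          ("VP", ["is", "my", "friend"])])

    # (18c) 'the man met you is my friend' -- the subject NP is a bare tensed clause
    sentence_18c = ("S", [("NP", [("NP", ["the", "man"]),
                                  ("TENSE", ["PAST"]),
                                  ("VP", ["met", "you"])]),
                          ("VP", ["is", "my", "friend"])])

    print(violates_filter_19(sentence_18a))   # False -- not marked by the filter
    print(violates_filter_19(sentence_18c))   # True  -- marked as ungrammatical

The sketch does no parsing of its own; it merely inspects structures that the rest of the grammar is assumed to have generated, which is all that a surface filter is meant to do.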
(20a) does not meet the structural description of the filter, since the embedded clause he left is preceded by the complementizer that so that the sentence is marked as grammatical. In (20b) the complementizer that is deleted; however, for reasons that will not concern us here, the embedded sentence he left is not an NP. (20c) is grammatical, because the subject NP that he left not only consists of NP, TENSE and VP, but also contains the overt complementizer that. (20d), however, is ungrammatical because the subject noun phrase meets the structural description of the filter. That is, it is a tensed sentence without an overt complementizer. In addition to the empirical evidence provided by the sentences in (17),
(18), and (20), one might suggest that the assumption of a surface filter such as (19) receives further support from the fact that it seems to correspond to a fairly plausible perceptual strategy. This perceptual strategy could be formulated in the following way (see Chomsky & Lasnik 1977): (21)
In analyzing a construction C, given an utterance-initial structure that can stand as an independent clause, take it to be a main clause of C.
The perceptual strategy (21) asserts the following: if a hearer perceives at the beginning of an utterance a structure such as the man met you as in (18) or he left as in (20d) which can function as an independent clause, then this structure will be analyzed as being the main clause of the relevant construction. In other words, if a given string of words at the beginning of a construction meets the structural description of an independent clause, then this string will be perceived to be the main clause(55). Since (21) seems to be a fairly plausible perceptual strategy, one might, in fact, propose (21) as a functional explanation of the above set of linguistic data (functional in the sense that certain regularities in the way human beings process linguistic data impose restrictions on what constitutes a grammatical sentence). There are, however, a number of serious problems with such a proposal(56). Suppose that the filter (19) is not a language universal, but rather a specific property of the language under analysis, in this case English. It is clear then that (19) has to be learned; that is, a child acquiring English as a first or second language has to be confronted with specific evidence from his linguistic environment in order to know that (19) holds. If we assume that the filter (19) is a function of a more basic perceptual strategy such as (21), then we have to conclude that speakers of English use perceptual strategies different from those used by speakers of, say, Japanese where the filter (19) does not hold. In other words, we would have to postulate that speakers of English develop different perceptual mechanisms from speakers of Japanese, i.e. speakers of different languages perceive the world in different ways - at least with respect to the properties expressed by the filter (19). This is not a very plausible view, to begin with; but even if it were, it is clear that the perceptual strategy under discussion would then have to be a function of the specific properties of the language, and not the other way around, i.e. speakers would develop the perceptual strategy (21), if and only if the filter (19) held for their language. This, again, is tantamount to saying that the specific properties of (21) depend on those of (19), rather than vice versa. It thus seems that a functional explanation which derives (19) from the properties of (21) has no empirical justification,
if (19) is a specific property of certain languages; unless, of course, it can be shown on independent grounds that English and Japanese people differ genetically with respect to those properties of their perceptual apparatus that are relevant for (19) and (21). I do not see any reason to believe that this is so. Let us examine, for the sake of the argument, the other alternative, namely that the surface filter (19) is a true language universal(57), in which case it would be a necessary property of all natural languages. If this is so, then the surface filter (19) does not need to be learned, but is rather part of the child's genetically-determined language faculty; in technical terminology, it is part of Universal Grammar. If this, however, is the case, then the child obeys the surface filter, not because of perceptual considerations, but rather, because his genetic program does not leave him any other choice. A language violating (19) would simply not be a humanly accessible language, so that perceptual mechanisms become irrelevant in this case. In a similar way birds grow wings instead of arms, not because they are useful for flying - although they doubtlessly are -, but rather, because the growth of wings is determined by the bird's genetic program. Suppose, however, that someone rejects the notion of 'Universal Grammar' as a specification of a genetically determined language faculty altogether, arguing instead that the universal properties of natural languages are a function of more general properties of man's (genetically-determined) perceptual system(s). A person proposing such an idea would necessarily be committed to the view that the properties of the perceptual system determine the universal constraints on natural languages in a unique and exhaustive way. That is to say, whatever structure is (relatively) difficult to perceive or to process will invariably (tend to) be ungrammatical as a universal property of natural languages; conversely, structures that are (relatively) easy to perceive or to process will be universally grammatical. Under the premises stated above, this view is logically necessary, because if a given perceptual strategy such as (21) still left open options for the grammaticality or ungrammaticality of a given string, then the child could not uniquely determine on the basis of perceptual considerations whether or not the string is grammatical. The child would therefore have to resort to other types of information, e.g. those provided by Universal Grammar. It should be noted that the question of whether or not the properties of the human perceptual system determine exhaustively the structural properties of natural languages is a purely empirical issue which cannot be decided a priori. It is certainly plausible to assume that various cognitive systems interact in the process of speech production and comprehension so that surface filters such as (19) may, in fact, facilitate perception and cognitive processing to a significant extent. The claim, however, that the
universal properties of natural languages can be sufficiently explained in terms of perceptual strategies, needs to be supported by empirical evidence indicating that structures difficult or near-impossible to process perceptually, will invariably be ungrammatical. It may well be that this view turns out to be correct; but I do not see any a priori justification for insisting that it must be correct. In fact, I believe that it is fairly easy to show that the claim is empirically false. There are numerous examples demonstrating that fully grammatical sentences seriously violate principles of perceptual easiness and that, in certain cases, children acquiring a language consistently choose rules that are more difficult to process than other rules that would equally well account for the data to which the child is exposed. Consider, first of all, the following well-known example (taken from Bever (1970)): (22)
The horse raced past the barn fell.
People presented with this example will, in general, tend to judge it as ungrammatical. Due to the structural ambiguity of raced (past tense or perfect participle) people tend to analyze the string the horse raced past the barn as a main clause, thus following the perceptual strategy (21). However, the grammaticality of (22) becomes immediately apparent when the sentence is identified as a word-by-word analogue to the following structure: (23)
The ball thrown past the barn fell (on the pasture).
There are numerous other examples of so-called garden-path sentences:
(24) I gave the girl the dog bit some flowers.
(25) The old train the young.
(26) While Mary was eating fresh spaghetti were delivered to her home.
(27) Without her contributions to this volume will be incomplete.
Sentences such as (22)-(27) are obviously difficult to process perceptually, but subjects realize immediately that the sentences are fully grammatical, once the perceptual difficulties are resolved. In other words, difficulties in cognitive processing do not, in principle, obviate correct grammaticality judgements, given appropriate conditions. It seems to be clear from these examples that English does not have surface filters which will guarantee
the success of perceptual strategies such as (21), though, in many cases, perceptual strategies may be facilitated by surface filters. There have been various proposals suggesting that not only the structure of natural languages, but also the way in which children acquire a language can be sufficiently explained in terms of the nature and the regularities of the cognitive processes involved in speech production and comprehension (see e.g. Clark & Haviland 1974). In particular, it has been argued that what is difficult and complex to process cognitively will be acquired relatively late, while structures which are easy to process will be learned early. Personally, I suspect that this issue - interesting as it is - cannot be solved conclusively on the basis of presently available evidence, but it might turn out to be fruitful to see what kind of evidence could help to make the correct choice between competing theories. It seems to me that at the present state of the art theories of language and language acquisition which crucially rely on some complexity-measure with respect to cognitive processes are facing both a conceptual and an empirical problem - provided they aim at explanatory adequacy. At the conceptual level, the major problem consists in determining on very general grounds what is cognitively more complex or less complex; i.e. one needs to find some domain-independent measure of cognitive complexity. Note that this measure of cognitive complexity must be domain-independent in the sense that it cannot be derived from properties of the domain to be processed, if cognitive complexity is to explain the properties of that domain. In this case, the argument would be circular. Illustrating the problem with an example from language acquisition, one could imagine the following situation: given two constructions A and B, we observe that a child acquires A before he acquires B. According to the view under discussion, it might be reasonable to assume that B is acquired after A, because its structural properties are more difficult to process than the structural properties of A. The problem now is to determine on independent grounds in what sense or by what criterion the properties of B are cognitively more complex than those of A. Obviously, we cannot simply assume (without further empirical justification) that B is cognitively more complex than A, because then the argument would be circular: structures are cognitively less complex if acquired early and structures are acquired early if cognitively less complex. There have been various proposals which have sought to solve this problem by defining cognitive complexity in terms of linguistic complexity(58), e.g. in terms of the derivational history of a given structure. However, such proposals raise a host of new problems. First, by taking linguistic complexity as a measure of cognitive complexity, we obviously cannot solve the circularity problem. If cognitive complexity exhaustively and uniquely reflects linguistic complexity, then, of course, the relevant
acquisition facts follow directly from the linguistic complexity of the structures involved, i.e. the attempt to derive facts of language (acquisition) from more general cognitive phenomena becomes largely vacuous. Note, however, that it is exactly the aim of the so-called 'cognitivist approach' to reduce at least a significant number of linguistic and acquisitional regularities to more general constraints of cognitive processing. It seems that this aim cannot be achieved in principle, if cognitive complexity is taken to be a total function of linguistic complexity. Furthermore, linguistic complexity - at least in its traditional sense - is a notion that belongs to the domain of competence, i.e. a speaker's knowledge of his language. As such it does not tell us very much about language performance in a straightforward way, i.e. how the speaker uses his linguistic knowledge under real-life conditions. Consequently, if we define cognitive complexity - a notion that obviously refers to the actual processing of information, hence to the domain of performance - in terms of linguistic complexity, then this view implies a strong isomorphism between competence and performance. As I have tried to show in section 4, however, there is good reason to believe that such an isomorphism does not exist, i.e. it does not seem to be the case that the cognitive processes involved in performance and the mental representations specifying a speaker's competence follow the same principles. In other words, within the range of problems under discussion 'linguistic complexity' is only meaningful, if - in contrast to traditional usage - it is meant to refer to phenomena of performance, i.e. language use and actual processing. While it is perfectly legitimate to redefine even traditional terminology, the fundamental problem cannot be solved in this way, because now we end up at exactly the point at which we started, namely trying to find out how linguistic structures are processed. The problem is then that, on the one hand, we need an independent measure of cognitive complexity to prevent the argument from becoming circular or vacuous, and, on the other hand, it seems difficult to imagine how such a measure could be found without reference to a particular cognitive domain. While it is presumably true that different cognitive domains interact in actual language use, it is difficult to see any compelling evidence for the assumption that the structure and acquisition of natural languages can be exhaustively reduced to general properties of human cognition; in fact, there seems to be plenty of counter-evidence to this assumption, i.e. we know of many properties of grammars which seem to have no analogues in other cognitive domains. I will discuss some of these properties as we proceed. Let us now turn to the empirical aspect of the problem. In certain fairly simple and general cases one might hope to find a (language-)independent measure of cognitive complexity that seems to be reasonably feasible. By
any intuitively plausible standards of cognitive complexity, it does not seem totally unrealistic to assume that cognitive complexity somehow relates to the kind of information to be processed. One might thus suggest that rules or structures that incorporate more information - in terms of how they have to be specified - will be cognitively more complex while rules or structures that incorporate less information will be cognitively less complex. If this assumption is correct, then it should be true that a structure or rule involving less information will be acquired earlier and more readily than a structure that involves more information. By way of some very simple examples, I will try to show that, even in some extremely fundamental cases, such a proposal proves to be empirically incorrect. During the past 20 years much empirical research in linguistic theory has been concerned with determining and specifying universal properties of natural languages. In studying the deeper regularities of individual languages linguists have discovered a number of highly abstract constraints and conditions on grammatical rules that appear to delimit the class of humanly possible languages. While in many cases specific proposals about universal conditions or constraints are still highly controversial, there appears to be at least one very general property of grammatical rules whose universal status is hardly in doubt. As a universal property, rules of grammar in natural languages are always structure-dependent. What this term means will become clear as we proceed. Suppose that a child acquiring English is presented with sentences such as the following:
(28) a. Mary is coming tonight.
     b. Is Mary coming tonight?
(29) a. John has stolen a car.
     b. Has John stolen a car?
(30) a. John can play the piano very well.
     b. Can John play the piano very well?
(31) a. Sue will marry Tom.
     b. Will Sue marry Tom?
Let us assume that the child somehow knows that the (b)-sentences are questions corresponding to the respective (a)-sentences. The child now faces the problem of determining the correct rule of English question formation that will convert the (a)-sentences into the (b)-sentences. Given empirical evidence of the type illustrated by the sentences in (28)-(31), it
seems to be a fair guess that the child might form a hypothesis such as the following: HYPOTHESIS A:
English questions are formed by a rule that looks for the first verbal element in a sentence and preposes it to sentence-initial position.
We will refer to rules such as the one specified in HYPOTHESIS A as structure-independent rules. The notion 'structure-independent' captures the observation that the rule does not consider the internal structural configuration of the sentence, but merely counts its elements from left to right. Nevertheless, this rule provides a fully adequate description of question formation in sentences such as (28)-(31). Note that such a rule could easily be computerized. All that needs to be done is to feed the computer with appropriate sentences marking each word as to the syntactic category to which it belongs. The machine can then start at the beginning of the sentence, look for the first verbal element and prepose it to sentence-initial position. In other words, the machine has to have access to only two types of information (irrelevant details aside): a) the syntactic category of each element and b) the linear order in which these elements occur. A child who would hypothesize a rule of question formation such as the one specified in HYPOTHESIS A should then be expected to produce questions of the type exemplified in (32)-(34) (in each case the preposed element is the first verbal element of the source sentence (a)):
(32) a. The man who will get the job is my brother.
     b. *Will the man who get the job is my brother?
(33) a. The woman walking down the street ran into a car.
     b. *Walking the woman down the street ran into a car?
(34) a. That Sue will marry Tom is obvious.
     b. *Will that Sue marry Tom is obvious?
Sentences (32b), (33b) and (34b) are obviously ungrammatical. The reason is that the rule specified in HYPOTHESIS A is clearly inadequate. In order to arrive at a more adequate solution, the child would have to form a hypothesis such as the following: HYPOTHESIS B:
English questions are formed by a rule that looks for the first verbal element following the topmost noun phrase in a sentence and preposes it to sentence-initial position.
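Since HYPOTHESIS A is said to be easy to computerize, a brief sketch may make the contrast between the two hypotheses concrete. It is an illustrative toy only: the small list of 'verbal elements' is an arbitrary assumption, and the structure-dependent rule is simply handed the topmost noun phrase in advance, because identifying that phrase is precisely the part that requires a full structural analysis:

    VERBAL = {"is", "has", "can", "will"}   # toy inventory of verbal elements (assumption)

    def hypothesis_a(words):
        """Structure-independent rule: prepose the first verbal element,
        counting words from left to right and ignoring constituent structure."""
        i = next(k for k, w in enumerate(words) if w.lower() in VERBAL)
        return [words[i]] + words[:i] + words[i + 1:]

    def hypothesis_b(subject_np, rest):
        """Structure-dependent rule: prepose the first verbal element that
        follows the topmost (subject) noun phrase, which is given in advance."""
        i = next(k for k, w in enumerate(rest) if w.lower() in VERBAL)
        return [rest[i]] + subject_np + rest[:i] + rest[i + 1:]

    print(" ".join(hypothesis_a("Mary is coming tonight".split())))
    # -> is Mary coming tonight                      (the correct question, cf. (28b))

    print(" ".join(hypothesis_a("the man who will get the job is my brother".split())))
    # -> will the man who get the job is my brother  (the error illustrated in (32b))

    print(" ".join(hypothesis_b("the man who will get the job".split(),
                                "is my brother".split())))
    # -> is the man who will get the job my brother  (the correct question)

The point of the examples in (32)-(34) is precisely that children apparently never entertain the first, computationally cheaper procedure.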
A rule of the type specified in HYPOTHESIS B will be called a structure-dependent rule. That is, the rule considers not only the syntactic category of each individual word (as well as the linear order in which it occurs), but, in addition to that, the structural configuration of the entire sentence. In particular, the rule has to have access to syntactic variables such as noun phrase, verb phrase, etc. Consequently, an adequate rule of English question formation that prevents the generation of ungrammatical sentences such as (32b)-(34b) has to know which elements constitute the topmost noun phrase in the sentences under discussion. Thus in a sentence such as (34a) both Sue and that Sue will marry Tom are noun phrases, but only the topmost noun phrase that Sue will marry Tom will be considered by the question rule(59) (this very general constraint is commonly referred to as the 'A over A Principle', cf. Ross (1967)). It is fairly clear that such a rule is much more difficult to process or to computerize than the rule in HYPOTHESIS A. The crucial difficulty is, of course, that, in order to apply the rule correctly, the computer must be able to carry out a syntactic analysis over an infinite range of syntactic categories, because otherwise there would be no way of determining which is the topmost noun phrase. These data are interesting for various reasons. First of all, it seems that by any reasonable measure of cognitive complexity, a structure-dependent rule should be more difficult to process than a structure-independent rule. Notice that processing a structure-dependent rule such as the one specified in HYPOTHESIS B involves the following information: (a) the linear order of the elements in a sentence, (b) the syntactic categories of the elements involved, (c) the internal structural configuration of the sentence. In contrast, processing a structure-independent rule such as HYPOTHESIS A requires only information of the type (a) and (b). Consequently, structure-independent rules should be easier to process(60). If cognitive complexity guided the acquisition process, then children should consistently choose structure-independent rules; in particular, in those cases where both a structure-dependent and a structure-independent rule are equally compatible with the empirical evidence. However, from what we know about language acquisition, it seems that children do not choose structure-independent rules at all. Errors of the type illustrated in (32b)-(34b) do not seem to occur in language acquisition. All the available evidence suggests that, whenever both a structure-independent and a structure-dependent rule account for a given set of data, the child will consistently choose the structure-dependent rule, even though such a rule involves a higher degree of processing complexity. Consequently, in
forming hypotheses about the grammatical rules of the language to be acquired children do not seem to consistently follow principles of cognitive complexity. In other words, a theory which proposes that the acquisition process can be sufficiently explained in terms of cognitive complexity makes obviously false predictions and is thus inadequate. The child's consistent preference for structure-dependent rules seems to converge with our intuitive knowledge about what is a humanly possible language. Even if a certain set of empirical evidence is consistent with the assumption of a structure-independent rule, we seem to know a priori that structure-independent rules do not occur in natural languages. Adults, too, seem to have very strong intuitions about certain fundamental properties of grammatical rules. Consider the following case: It is widely known that the languages of the world use a variety of very different formal devices to form interrogative structures, i.e. to convert a declarative sentence into a question. Some languages, such as Finnish, Japanese or Chinese, use particles which are either attached to the end of a sentence or to different constituents within the sentence. The majority of Romance languages use prosodic devices to mark questions. German, Dutch and many of the Scandinavian languages use subject-verb inversion. English has the somewhat 'exotic' device of do-support in certain cases, and Chinese can also form questions by juxtaposing a declarative and a negative verb within the verb phrase (you have not have book — do you have a book?). In view of this wide variety of formal devices one might ask whether or not there are any principled constraints on the types of rules that a language can potentially use, say, for the formation of questions. Let us imagine a language 'Quenglish' which is identical to English except that it has a different rule of question formation. Suppose that in Quenglish questions are formed by preposing the fourth element of a given declarative sentence to sentence-initial position. Applying this rule to the following (a)-sentences, we obtain the corresponding questions in (b) (which, of course, are all grammatical in Quenglish, but not in English):
(35) a. My best friend is a good swimmer.
     b. Is my best friend a good swimmer?
(36) a. Daddy likes to eat pancakes.
     b. Eat Daddy likes to pancakes?
(37) a. John who is my brother will not come.
     b. My John who is brother will not come?
(38) a. John stole a car.
     b. Car John stole a?
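Stated as a procedure, the Quenglish rule is almost trivially simple, which is precisely what makes the example instructive. The sketch below is illustrative only and assumes nothing beyond splitting a sentence into words:

    def quenglish_question(sentence: str) -> str:
        """Quenglish question formation: prepose the fourth word of the
        declarative sentence, paying no attention to constituent structure."""
        words = sentence.split()
        return " ".join([words[3]] + words[:3] + words[4:])

    print(quenglish_question("My best friend is a good swimmer"))
    # -> is My best friend a good swimmer   (happens to be well formed, cf. (35b))
    print(quenglish_question("Daddy likes to eat pancakes"))
    # -> eat Daddy likes to pancakes        (cf. (36b))
    print(quenglish_question("John stole a car"))
    # -> car John stole a                   (cf. (38b))

A rule of this kind would be easier to execute than anything English actually uses; that it is nevertheless intuitively excluded is the point taken up immediately below.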
Looking at these constructions, it is intuitively clear that a language such as Quenglish cannot exist, as a matter of principle. Even those who do not know very much about the structural diversities of natural languages seem to have very strong intuitions that a rule of question formation such as the one hypothesized for Quenglish is simply not possible in natural languages. This judgement obviously does not derive from a person's experience with a large number of different languages, but seems to be based on principled intuitions about what may or may not be a possible rule of grammar. Note that there is no logical reason why such a rule should not exist, since the mechanism of the rule is, in fact, extremely simple; however, we seem to know intuitively that a language such as Quenglish does not belong to the class of possible human languages. Our intuitions about the rule of question formation in Quenglish can be made explicit in a very natural way. The hypothesized rule is obviously a structure-independent rule in that it is blind to the internal syntactic structure of the sentence and merely counts the elements (or classes of elements) of the sentence from left to right. This property of the rule seems to be responsible for our intuition that a language such as Quenglish cannot be a natural language. Apparently we know a priori, i.e. independent of our experience with languages, that however much rules of grammar may vary from one language to another, they must be structure-dependent. In other words, the non-existence of Quenglish is not a historical accident; rather, a language having properties of the type attributed to Quenglish simply does not qualify as a humanly possible language. It seems to be clear that our intuitions with respect to Quenglish are not fundamentally guided by considerations of cognitive processing or cognitive complexity. If the degree of cognitive complexity were our criterion in determining if a given language, say Quenglish, belongs to the class of humanly possible languages, then we would have to conclude that Quenglish might, in fact, exist, since, in terms of cognitive complexity, the rule under discussion should be relatively easy to process. However, it seems that, in judging whether or not Quenglish is a possible natural language, we disregard questions of cognitive processing and consider principles of a totally different nature. While the structure-dependent properties of grammatical rules provide an example where principles of natural languages are in conflict with principles of cognitive complexity, there are other cases in which both principles seem to somehow converge. Let us consider such a case. There is a (possibly universal) principle of natural languages which is
commonly referred to as the Subjacency Constraint or the Binary Principle (cf. Chomsky 1973, Wexler & Culicover 1980). Informally stated, this principle asserts that no transformation (of a certain type) may move a constituent over more than one cyclic node, where both NP (noun phrase) and S (sentence) are considered to be cyclic nodes(61). The Subjacency Constraint can be illustrated, for example, by certain restrictions in the application of the rule of Extraposition. Consider first the following sentences:
(39) a. That Sue will marry Tom is obvious.
     b. It is obvious that Sue will marry Tom.
(40) a. That John didn't win the race surprises me.
     b. It surprises me that John didn't win the race.
The (b)-sentences are derived from the (a)-sentences by a rule of Extraposition. This rule moves the embedded sentences that Sue will marry Tom and that John didn't win the race to the end of the entire construction, leaving in their place the pronoun it. Let us now consider the sentences (41a)-(41e) which are derived from a structure of the type illustrated in the tree diagram (42):
(41) a. That (the fact) that Tom will marry Sue surprised you is obvious.
     b. It is obvious that (the fact) that Tom will marry Sue surprised you.
     c. That it surprised you that Tom will marry Sue is obvious.
     d. It is obvious that it surprised you that Tom will marry Sue.
     e. *That it surprised you is obvious that Tom will marry Sue.
(42) [tree diagram, not reproduced: S0 is the entire sentence ending in is obvious; S1 is its subject clause (the fact) that Tom will marry Sue surprised you; S2 is the innermost clause that Tom will marry Sue]
Sentence (41a) may be considered as the base structure to which no Extraposition has been applied. (41b) is derived from (41a) by applying Extraposition to the subject NP (marked as S1 in the tree diagram) of is obvious, namely that (the fact) that Tom will marry Sue surprised you. The entire structure dominated by S1 is extraposed to the end of the sentence, i.e. following is obvious, leaving behind the pronoun it. (41c) is also derived from (41a) by Extraposition which in this case has, however, applied to a different sentence structure. The embedded sentence S2 that Tom will marry Sue is moved to the end of the next highest sentence S1, namely surprised you, leaving behind as a dummy-subject of surprised you the pronoun it. (41d) is then derived from (41c) by a further application of Extraposition. In this case, the subject of is obvious in (41c), namely that it surprised you that Tom will marry Sue, is moved to the end of the entire construction, leaving behind, again, the pronoun it. The crucial case, however, is (41e) which is derived from (41a) by moving the sentence S2 that Tom will marry Sue to the end of the entire construction. In this case, however, the embedded sentence S2 is moved over two S-nodes, namely S1 and S0, thus violating the Subjacency Constraint. (41e) is clearly ungrammatical. Its ungrammaticality is predicted by the Subjacency Constraint which does not permit a constituent such as Tom will marry Sue to be moved over two cyclic nodes (in this case, two S-nodes). A further case where ungrammaticalities can be explained in terms of subjacency violations can be seen in the following structures:
(43) a. Mary believes that John stole a car.
     b. What does Mary believe that John stole?
     c. Mary believes the claim that John stole a car.
     d. *What does Mary believe the claim that John stole?
(44) a. Mary suggested that John should buy a car.
     b. What did Mary suggest that John should buy?
     c. Mary made the suggestion that John should buy a car.
     d. *What did Mary make the suggestion that John should buy?
Here again, the (d)-sentences are ungrammatical, because the Subjacency Constraint is violated. To be more precise, the constituent what, which refers to the object NP (= the car) of the embedded sentence, has been moved over more than one cyclic node to sentence-initial position. The first cyclic node over which what has been moved is the NP dominating the claim and the suggestion respectively. The second cyclic node is the topmost S-node of the entire construction. Apart from these two examples, there is a large number of seemingly very heterogeneous constructions which can all be accounted for by the Subjacency Principle (for further details and examples see Chomsky (1973, 1977b)).
Let us consider some of the general implications that the Subjacency Constraint might have with respect to the relationship between mental representations and cognitive processing. As mentioned before, the constraint specifies that, with respect to certain transformational rules(62), a constituent cannot be moved over two cyclic nodes. In more general terms, the constraint asserts that a constituent may not be moved too far away from its original position; i.e. subjacency imposes certain restrictions on the permissible distance between a constituent's original and derived position. Given these facts, it might be tempting to translate the Subjacency Constraint into notions of cognitive processing. In order to correctly interpret or process a given structure, both the original and the derived position of a constituent must be somehow 'remembered'. Consequently, it could be argued that the greater the distance between the original and the derived position, the more memory or processing capacity will be needed. One might thus interpret the Subjacency Constraint as a plausible device to reduce the cognitive load in sentence processing.
While it may reasonably be maintained that the Subjacency Constraint facilitates cognitive processing, the crucial question is whether or not memory or processing limitations necessarily require the existence of the Subjacency Constraint as a universal principle of natural languages. It is easy to see that this is obviously not the case. Note that subjacency is a very specific constraint which does not preclude rules of unbounded movement (i.e. indefinite distance between a constituent's original and derived position) in general. There are numerous well-known cases of unbounded movement in which a constituent may be moved over an indefinite number of S-nodes(63). For reasons that need not concern us here(64), subjacency does not apply in constructions such as the following:

(45)  a. Mary says that Bill heard that the police believed that John had stolen a car.
      b. What does Mary say that Bill heard that the police said that John stole?
(45b) is fully grammatical, even though the constituent what has been moved over three cyclic nodes, namely S-nodes. As these examples demonstrate, subjacency crucially involves notions such as 'cyclic node', and it is difficult to see why only NP and S, but not, say, PrepP or VP, should affect cognitive complexity or memory load.
To summarize, the Subjacency Constraint may, in fact, facilitate cognitive processing in certain cases, but it does not guarantee easy processing over the full range of relevant constructions. There are obviously numerous constructions that are relatively difficult to process, and yet fully grammatical, while others that are easy to process turn out to be ungrammatical. There is thus no compelling reason to believe that the Subjacency Constraint can be functionally explained in terms of limitations of cognitive processing.
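To make the formal content of the constraint concrete, the following short sketch renders the counting of cyclic nodes in executable form. It is an illustrative addition, not part of the original discussion: the tree encoding, the node labels and all function names are assumptions chosen purely for exposition.

```python
# Illustrative sketch only: a toy rendering of the Subjacency Constraint.
# The tree encoding and all names are hypothetical, chosen for exposition.

CYCLIC = {"NP", "S"}  # the cyclic (bounding) nodes assumed in the text

class Node:
    def __init__(self, label, children=(), words=None):
        self.label = label
        self.words = words            # rough gloss of the dominated material
        self.parent = None
        self.children = list(children)
        for child in self.children:
            child.parent = self

def cyclic_nodes_crossed(moved, target):
    """Cyclic nodes on the path from the moved constituent up to and
    including the clause to whose edge it is moved - the informal way the
    text counts how many nodes a single movement step crosses."""
    crossed, node = [], moved.parent
    while node is not None:
        if node.label in CYCLIC:
            crossed.append(node)
        if node is target:
            break
        node = node.parent
    return crossed

def violates_subjacency(moved, target):
    # A single movement step may cross at most one cyclic node.
    return len(cyclic_nodes_crossed(moved, target)) > 1

# Skeleton of (41a): [S0 [S1 [S2 that Tom will marry Sue] surprised you] is obvious]
s2 = Node("S", words="that Tom will marry Sue")
s1 = Node("S", [s2], words="... surprised you")
s0 = Node("S", [s1], words="... is obvious")

print(violates_subjacency(s1, s0))  # False: as in (41b), only S0 is crossed
print(violates_subjacency(s2, s1))  # False: as in (41c), only S1 is crossed
print(violates_subjacency(s2, s0))  # True: as in (41e), S1 and S0 are crossed

# (45b) remains grammatical because, as note 64 explains, the wh-word moves
# in successive steps (COMP to COMP), each crossing at most one S.
```

The sketch merely restates the definition; as argued above, nothing in it follows from memory or processing limitations as such.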
Apart from these theoretical considerations, there are various psycholinguistic studies which seem to indicate that linguistic abilities and cognitive abilities are phenomena sui generis which cannot simply be regarded as functions of each other. Hatch (1982) reports on some such studies which all lend support to the claim that a child's linguistic development does not necessarily reflect his intellectual development, but rather may, in certain cases, follow its own independent path(65). Hatch (1982) mentions a case study by Yamada who examined a 16-year-old girl's linguistic and cognitive abilities. Various Piagetian-type tests showed that the girl's intellectual abilities were those of children in the 2 to 7 year range. She had problems with numerical and time concepts, was unable to tell her birthday or age, and failed to demonstrate common knowledge abilities, such as naming the days of the week. Despite these cognitive deficiencies, the girl had acquired fairly complex syntactic structures such as various types of embedded sentences, co-ordination, etc. In other words, 'this subject's linguistic abilities place her well beyond that of the normal pre-schooler while her cognitive capacities still fall within that age range' (Hatch 1982:6). Yamada furthermore reports on individuals with Turner's syndrome (a sex chromosomal anomaly) who show severe deficiencies in visual perception, temporal sequencing abilities and other cognitive capacities, while their linguistic abilities are judged to be normal or even superior. It should be noted, however, that it is primarily syntactic knowledge that does not correlate with cognitive abilities, whereas semantic knowledge seems to depend quite strongly on conceptual maturity.
A counter-example to the claim that language development necessarily requires and depends on social interaction is given by Hatch in connection with a study conducted by Blank et al.: 'John, the three-year-old subject, acquired language without acquiring social interaction skills ... While John's language represents an adequate description of the physical world (he can describe objects and events accurately), it is inappropriate in a communicative sense. He does not follow conversational themes in a coherent manner and seems basically uninterested in the content of his partner's speech. He appears incapable of understanding non-verbal communication as well ... He has never interacted in games of peek-a-boo and has not acquired social greeting terms (hi, bye-bye)' (Hatch 1982:7).
I believe that the empirical data from these and other psycholinguistic case studies as well as the theoretical considerations outlined above suggest quite strongly that language and cognition, while obviously interacting in many interesting ways, are highly distinct phenomena whose deeper properties cannot be explained in terms of each other. As Hatch (1982:6) correctly observes: ' . . . a strong version of cognitive development as causal or prerequisite of language development, at least in the area of syntax, may not be tenable'.
6. ON THE BASIS OF SCIENTIFIC IDEALIZATIONS
In recent years a good deal of controversy has surrounded the question of what constitutes a reasonable and legitimate object of inquiry for psycholinguistic research. Various researchers have argued that psycholinguistics should be concerned with 'real' language as it is actually used in real-life social and interactional contexts. These arguments are primarily directed at scholars who, in their own work, have frequently abstracted away from certain real-life factors and have relied heavily on strongly idealized versions of a given set of phenomena. Since such idealizations are frequently attacked in current psycholinguistic (and sociolinguistic) work(66), it seems worth-while to examine the basis for such criticism more closely. Idealizations seem to belong to the standard repertoire of procedures in many branches of science(67). In order to arrive at theories of sufficient explanatory power, the natural scientist for example will, quite generally, isolate a specific and well-defined subset of observed phenomena which he assumes to constitute a coherent and independent system; he will then proceed to study the properties of the components of this system and will develop models and theories which are intended to account for the specific structure of the system under analysis. By restricting his attention to a single separate system of observable phenomena, the scientist will necessarily look at an idealized subdomain of nature. He will abstract away from the influence of other systems of natural phenomena which, under conditions of the real world, obviously interact with the systems he has chosen for analysis (see Lakatos 1970, Kuhn 1962). Obviously the motivation for such an approach is that there is no hope of constructing, in a single effort, a theory that will explain all aspects of nature (or, for that matter, of the entire world). Consequently, it is a reasonable and well-established procedure in the natural sciences to study the properties of idealized systems or subsystems of nature. The plausibility and empirical validity of theories derived from such studies are by no means vitiated by the fact that the idealized (sub)systems which are
subjected to scientific analysis do not, of course, exist as such in the real world, but are rather the result of well-defined abstractions which deliberately ignore (the influence of) certain aspects of reality. It would certainly come as a surprise to the natural scientist if someone were to argue against a given theory by pointing out that it fails to consider or to account for phenomena which, in the real world, interact with the specific system the theory attempts to explain.
In order to illustrate the role of idealizations in the natural sciences, let us look at a particularly suggestive example, namely the theory of ideal gases (or, more correctly, the kinetic theory of gases; cf. Christen (1981)). Gases have the property of being highly compressible and, conversely, of expanding under the influence of increasing temperature; in other words, gases may assume different states. In order to determine the specific state of a certain quantity of a gaseous substance, three variables need to be known: temperature, pressure and volume. It was found that the relationship between these three variables, which can be expressed by the equation p · V = R · T (for details, see Christen (1981)), is approximately constant for all gases. Gaseous substances which strictly follow this equation are called ideal gases. The crucial point here is that ideal gases do not exist in reality. At room temperature hydrogen and helium come fairly close; but most real gases deviate from the ideal states specified by the equation to a significant extent. The reason is that the theory of ideal gases disregards the inherent attractive forces of the gas particles as well as their intrinsic volume. These two factors explain why, even under normal temperature conditions, real gases behave differently from ideal gases.
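To see exactly where the two neglected factors would re-enter, the ideal-gas equation can be compared with the standard van der Waals correction. The correction is an illustrative addition, not mentioned in the text; in it the parameter a stands for the attractive forces between the particles and b for their intrinsic volume, so that setting both to zero recovers the idealized equation.

```latex
% Ideal-gas law for one mole, as assumed above:
p \cdot V = R \cdot T
% Van der Waals correction (illustrative addition, not from the text):
\left( p + \frac{a}{V^{2}} \right)\left( V - b \right) = R \cdot T
```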
The crucial significance of this example is to show that, in certain cases, the natural scientist develops a theory of something that does not as such exist in reality. In developing such a theory, the scientist may abstract away from certain significant properties of a system - in this case, the attractive forces of gas particles - which determine much of the behavior of the substance under real-world conditions. The justification for such an abstraction is that it permits him to study the deeper regularities and more significant principles that govern the behavior of substances under various conditions. The example of the theory of ideal gases is by no means an exceptional case. Any textbook on chemistry, physics, biology, etc. provides numerous illustrations of how the natural scientist achieves significant insights and arrives at explanatory theories by studying idealized systems of natural phenomena. Thus, certain chemical elements such as silicon, germanium or tin do not naturally occur in their pure form, but only in certain compounds. Nevertheless, the chemist will proceed to study the properties of the elements in isolation and explain their behavior in compounds on the basis of these properties. In the case of astatine (At), a
halogen element, chemists were able to determine its properties long before it was actually discovered through radioactive decay.
It seems that the function and status of idealizations in scientific inquiry can be examined under at least two aspects: (a) their motivation and (b) their justification. I believe that the motivation both for a specific idealization and for idealizations in general is straightforward. Without idealizations, i.e. without studying systems in isolation abstracting away from their interactions with other systems, science would hardly progress towards constructing explanatory theories. If all observable phenomena as well as the way they interact were examined at the same time, all that could be achieved is a (possibly structured) list of observations, but no systematic and principled explanation of why phenomena are the way they are. By studying the properties of independent systems we may have a fair chance of arriving at an understanding of the deeper principles and regularities that govern such systems.
How a specific idealization can be justified is a much more serious issue. I believe that usually two types of considerations enter into the problem of justifying an idealization. The first is whether or not the idealization proposed is legitimate. That is, does the idealization misrepresent the real world in such a way that it will prevent significant insights into the nature of the phenomena under analysis, or does the idealization open the possibility for discovering real and fundamental properties of the system in question? This is a question which, of course, cannot be answered a priori, but must be examined for each specific idealization proposed. The second, and more practical consideration in justifying a given idealization is whether or not it meets the requirements underlying its original motivation. In other words, a specific idealization must be both explicit and strict enough to delimit a feasible domain for scientific inquiry. An idealization incorporating a large collection of heterogeneous phenomena which simply do not seem to be analyzable in a single effort may be plausible and legitimate, but is clearly useless with respect to the specification of a research program.
While the use of idealization is a well-established procedure in most scientific disciplines, it can be observed that, in the domain of linguistics and language acquisition research, idealizations, once they are made explicit, arouse storms of protest and criticism. One of the apparently most controversial idealizations in the study of language is that 'linguistic theory is concerned primarily with an ideal speaker-listener, in a completely homogeneous speech community, who knows its language perfectly and is unaffected by such grammatically irrelevant conditions as memory limitations, distractions, shifts of attention, and interest, and errors (random or characteristic) in applying his knowledge of the language in
actual performance' (Chomsky 1965:3). The reason for arguing against this idealization can certainly not be that it abstracts away from phenomena that are obviously part of language reality. This is as evident as it is irrelevant. Idealizations as such are a necessary procedure in scientific inquiry; to reject the use of idealizations altogether would place linguistics outside of science. The crucial question, of course, is whether or not this idealization is legitimate, i.e. does it misrepresent linguistic reality in such a way as to prevent significant insights into fundamental properties of natural language? This question has extensively been discussed in Chomsky (1980:10-22), so it seems unnecessary to restate the issue here in all its complexity. However, it appears that some of the aspects under which this idealization has been criticized deserve some detailed examination. Various types of criticism characteristic of much of the general attitude in our field are stated in Labov (1971). I will therefore focus on Labov's arguments in some detail and will examine what insights and proposals these arguments might eventually lead to.
It is frequently claimed in the current linguistic and psycholinguistic literature that the idealization under discussion has redefined the field of general linguistics in such a way that only the study of competence is an acceptable undertaking, while everything else is excluded from the field. In other words, the idealization is interpreted as implying that anyone who wants to do general linguistics properly should restrict his attention to the study of linguistic competence and should not be concerned with anything outside of this domain, particularly not with problems of performance. In Labov's words: 'Linguistics has thus been defined in such a way as to exclude the study of social behavior or the study of speech' (p. 155), '... linguists did take the somewhat unusual step of redefining their field so that the everyday use of language in the community would be placed outside of linguistics proper' (p. 212).
It is difficult to see how this criticism can be evaluated. Idealizations do not, of course, define or redefine a field such as linguistics, biology, chemistry etc. in any meaningful sense of the word. The implication of a given idealization is merely that it will lead to significant insights into fundamental properties of the phenomena under investigation. Whether or not studies outside of this idealization or under different idealizations can achieve the same level of explanatory power is an entirely open question. That this is, in fact, the case must be shown by those who advocate a different approach. There is no ethical value attached to what a 'real' linguist should do or should not do. Anyone who studies something that relates to language may, of course, call himself a linguist, if he so wishes. To debate this issue is not very interesting and does not lead to any meaningful proposals. All the idealization implies is that, under a certain range of abstractions, linguistics may progress to a stage where theories of considerable explanatory power can be proposed.
Strangely enough, Labov is much more rigorous about what linguistics must be than the position he criticizes. '... the object of linguistics must (emphasis added) ultimately be the instrument of communication used by the speech community; and if we are not talking about that language, there is something trivial in our proceeding' (p. 156). Labov's statement seems to suggest that there is something unethical about the study of competence and that the 'real' linguist is somehow morally obliged to study language in its social context. It is difficult to imagine that a biologist or a chemist could be led to a similar statement about what proper biology or proper chemistry must be. The crucial question is not what a scientist must do, but rather what he can do with some hope of success, given the limitations under which he has to work. While there is no doubt that the study of language in its social context is an important and interesting domain, I do not see any reason why this must be the ultimate object of linguistics. Labov even asserts that 'there is something trivial in our proceeding', if we restrict our attention to the study of competence. Unfortunately, he does not give any argument in support of why this is so, and, in fact, I do not see any justification for such a claim. Looking at the proposals that linguists working under the idealization in question have made, one might, of course, argue that these proposals are misguided, incorrect or that the evidence they are based on is inconclusive, but I do not see how anybody can seriously maintain that they are trivial. In fact, it appears to me that it is Labov who has to show that studies of the type he advocates will produce non-trivial results. That is, he has to demonstrate that studies of 'language in social context' can show more than merely that there are differences in the way people use their language. He will have to propose principles of an equally abstract and general nature that explain why languages are used in the way they are.
In order to emphasize the privileged status of sociolinguistic studies of a specific type, Labov states that 'language is a form of social behavior ... very few people spend much time talking to themselves. It is questionable whether sentences that communicate nothing to anyone are a part of language' (p. 152). Statements such as these are quite common in the literature; however, I believe that to the extent that they make sense at all, they are wrong on all grounds. There is no doubt that language has many functions, among others the function of communicating a message in a given social context. But it is difficult to see how this fact can reasonably lead to the assertion that language is social behavior, i.e. exists only by virtue of its social function. A major function of our legs e.g. is that they enable us to walk. But no one would seriously maintain that our legs do not exist apart from our ability to walk; i.e. that our legs can only be studied in terms of the specific movements we execute when we are walking. It seems to me that this is confusing structure and function. Any
textbook on human anatomy sharply distinguishes between the leg's structure (to be specified in terms of its muscles, bones, tendons, etc.) and its function in actual movement. Although function and structure interact, there is no reason to believe that they are actually the same, or that one is a complete derivative of the other (exceptions may be on an evolutionary level; but this is not the issue). Labov does not present any argument why, in the case of language, things should be essentially different.
At a more specific level, one might wonder if Labov is actually serious about his statement that sentences that communicate nothing are not a part of language at all. Suppose I am sitting in a faculty meeting being totally bored after five hours of useless discussions on a trivial point. Suppose furthermore that, out of sheer boredom, I start scribbling things on a piece of paper in front of me. I might be drawing little boxes and circles, but I might just as well scribble down a sentence, say 'if the sun doesn't shine tomorrow, I am going to kill myself'. Quite obviously, in writing this sentence I do not want to communicate anything to anybody; I am merely reacting to my emotional state hoping that the meeting will soon come to an end. I am also not socially interacting with anybody by virtue of scribbling down the sentence. Does Labov seriously mean to say that the sentence given above is not a sentence of English? The same is true if I sit at home, writing something in my personal diary. What I am writing down are merely my private thoughts which I do not want anybody to know. Still I am using a specific language in that context, say English or French or some other language, and it can hardly be maintained that the sentences I write down are not part of that language.
A further criticism advanced by Labov concerns the reliability of grammaticality judgements that are commonly used in studies of linguistic competence. Labov claims that linguists have generally worked under the assumption that grammaticality judgements are homogeneous, i.e. that all native speakers of English, or any other language, will always agree on the grammaticality or ungrammaticality of a given sentence. On the basis of various experiments Labov shows that this is, in fact, not the case; rather, with respect to certain constructions, grammaticality judgements may differ among speakers to a significant extent. Consequently, Labov argues, if people are inconsistent with respect to their grammaticality judgements, such judgements cannot be used to determine the properties of natural languages.
The observation that people are frequently uncertain whether a given sentence is grammatical or ungrammatical is certainly correct. Quite obviously, the individual act of giving a grammaticality judgement is an act of performance and may therefore be affected by a large number of different factors. Labov himself suggests that 'misunderstandings, mistakes and different conceptions of grammaticality may have contributed
to the failure to achieve categorical judgements' (p. 161). The crucial problem, however, is whether or not speakers of a language are, in principle, able to determine whether a given string is a sentence of their language. It is difficult to believe that anyone could seriously maintain that this is not the case, i.e. that people are in principle unable to judge grammaticality, or that it is a matter of accident whether they agree on a judgement or not. The fact that there are some specific cases in which people are uncertain does not affect their principled ability to judge the grammaticality of a given sentence. To give an analogy, no one would seriously deny that human beings have spatial vision, i.e. that they are, in principle, able to visually locate a given object in space, to determine its relative distance from the observer, etc. The fact that there are specific cases, e.g. optical illusions, in which this ability fails to achieve correct results, would not lead any biologist or psychologist to the conclusion that humans do not have spatial vision. In other words, if certain abilities fail to be successful under certain well-defined conditions, this, of course, cannot be taken as evidence that the ability itself does not exist. Labov's argument would only be relevant if he could show that speakers of a given language are, in principle, unable to determine the grammaticality of sentences.
In fact, all linguistic descriptions that I am familiar with, including Labov's, work implicitly with grammaticality judgements. Even such a simple statement as 'English verbs take an s-morpheme in the 3rd pers.sing.pres.' implies a 'homogeneous' grammaticality judgement, namely that speakers of a certain variety of English will judge he says, he plays, he sings as grammatical and he say, he play, he sing as ungrammatical. Even Labov could not have carried out his analyses if he believed that grammaticality judgements are, in principle, impossible. In his chapter on negative concord (p. 189-192) he is consistently relying on grammaticality judgements whose usefulness he questioned a few pages earlier. Thus, on p. 190 he specifies a rule of Negative Attraction followed by the statement: 'Not only is this rule obligatory for all dialects, but sentences where it has not applied, such as *anybody doesn't sit there are un-English in a very striking way'. It is difficult to see how this statement can be interpreted as anything other than a grammaticality judgement. Consequently, his observation that people frequently differ in their grammaticality judgements is not very insightful and the arguments based on it are without much force or substance.
In a number of very impressive empirical studies dealing with certain phonological deletion, negation, plural and past tense formation, Labov (1971:177-192) observes that, in actual speech, people may use alternative forms in a given domain. In order to account for this type of variation in real-time speech behavior, Labov proposes so-called variable rules whose
essential property is that they have one input, but two or more outputs. In other words, a person has various options as to which form he can use in a specific speech context. The variable rule is a formal representation of these options. Thus in words such as act, bold, first etc. the speaker has the option of either deleting the final dental or maintaining the full consonant cluster. While these facts can be adequately accounted for by conventional categorical rules (see Labov 1971:178), Labov furthermore observes that certain options are chosen more frequently than others, a generalization that conventional rules cannot capture, in principle. He therefore suggests that the different options expressed by the variable rule be assigned 'a quantity φ representing the proportions of cases in which the rule applies out of all those cases in which it might do so' (p. 179). That is, an essential property of the variable rule is that frequency values are attached to the various options the rule incorporates.
It should be clear that what Labov is, in fact, dealing with, is performance, i.e. the use of language in social context. Let me emphasize again that there is nothing morally, methodologically or theoretically wrong with studying performance phenomena; however, Labov insists that these variable rules are not only rules of performance, but also represent 'an important aspect of their [i.e. the speakers'] competence or langue' (p. 185). In other words, Labov proposes variable rules as rules of competence. I believe that this is where some serious problems arise.
Note that a variable rule specifies, in the simplest case, two options for realizing a given form, where each option is assigned a quantity value φ which determines its frequency of occurrence with respect to the other option. Suppose that a given variable rule VR has two outputs A and B. That is, A and B are different realizations of a given linguistic form. Suppose furthermore that empirical studies of the type advocated by Labov have determined that in actual speech people consistently use A in 30% of the cases and B in 70% of the cases. B is thus used more frequently than A, where the relative frequency is specified in terms of percentage values. As a description of language performance, this rule is fully acceptable. Note, however, the empirical consequences of the claim that a rule such as VR is a rule of competence. As a rule of competence it specifies the speaker's knowledge of his language in terms of what is acceptable vs. non-acceptable speech behavior. In order to be a competent member of the speech community, the speaker must observe the frequency distribution of A and B, i.e. the speaker must somehow make sure (unconsciously, of course) that he does, in fact, use A in 30% of the cases and B in 70% of the cases. What kind of mechanism could guarantee that the frequency distribution is, in fact, observed? It seems that a speaker can only achieve the correct frequency distribution in his speech (for a given time unit), if he
continuously keeps track of all the relevant utterances he produces during a given period of time. That is, he must store in his memory all his past utterances (at least those that relate to VR) in order to make sure that he finally ends up with the correct 30% vs. 70% distribution. In other words, the speaker must constantly record mentally how often he has already chosen e.g. A in order to decide if he can still 'afford' to use A in his next utterance or if he must now use B in order to guarantee the correct frequency distribution in his speech. It seems to me that the assumption that this is what actually happens in real-life speech is somewhat bizarre and without much empirical support, particularly since there appears to be a large number of variable rules in a given language. Therefore, I do not see any possibility to avoid the conclusion that variable rules are rules of performance, a fact that does not at all affect their plausibility or empirical validity.
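The difference between the two readings can be made concrete in a small sketch. It is an illustrative addition: the class and function names are invented, and the 30% and 70% figures simply follow the hypothetical rule VR introduced above. Under the performance reading the frequency values merely summarize observed usage; under the competence reading they could only be guaranteed by exactly the kind of bookkeeping just described.

```python
import random

# Illustrative sketch only: two readings of a variable rule VR with
# outputs A (30%) and B (70%). All names are hypothetical.

def describe_performance(utterances):
    """Performance reading: the frequency values summarize a corpus of
    observed choices after the fact."""
    total = len(utterances)
    return {variant: utterances.count(variant) / total for variant in ("A", "B")}

class CompetenceReadingOfVR:
    """Competence reading: to guarantee the 30%/70% distribution the
    speaker would have to remember every relevant past utterance and
    check, before each new one, whether he can still 'afford' A."""

    def __init__(self, target_share_of_a=0.3):
        self.target_share_of_a = target_share_of_a
        self.history = []  # record of all past relevant choices

    def next_variant(self):
        total = len(self.history)
        used_a = self.history.count("A")
        # choose A only if its running share is still below the target
        variant = "A" if total == 0 or used_a / total < self.target_share_of_a else "B"
        self.history.append(variant)
        return variant

# Performance reading: a summary of what happened to be said.
corpus = [random.choice("AAABBBBBBB") for _ in range(1000)]
print(describe_performance(corpus))          # roughly {'A': 0.3, 'B': 0.7}

# Competence reading: the distribution comes out right, but only because
# the entire history of relevant utterances is kept in memory.
speaker = CompetenceReadingOfVR()
sample = [speaker.next_variant() for _ in range(100)]
print(sample.count("A"), sample.count("B"))  # 30 and 70
```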
Let me finally consider a point which, I feel, is crucial in evaluating theories, approaches, and methodologies. As mentioned before, both the motivation and the justification for a given idealization derive from considerations of how significant insights into the fundamental properties of a given object can be achieved; i.e. of how we can succeed in constructing explanatory theories in a given domain. Granting that the study of competence entails severe constraints on which aspects of language reality are being subjected to scientific analysis, this highly restricted view has led to the discovery of some very deep and abstract principles which appear to govern natural languages. In other words, the idealization is essentially justified by the results that it has helped to produce. While there is nothing a priori wrong with the study of language performance, those who favor such activities must show that both the general approach and the specific studies that are being presented will likewise lead to the discovery of principles of equal generality and abstractness. That is, they have to demonstrate that studies of language performance, as they are presently carried out, will reveal properties of human nature with respect to language use. It appears that the requirements explanatory theories must meet are extremely difficult to achieve under the general approach of sociolinguistic studies as advocated by Labov(68). While the observations that Labov has made with respect to how people use language in social contexts are fascinating and impressive, they are, in fact, little more than observations. All these observations show is that people systematically differ in the way they use language and that some of these differences appear to correlate with certain social variables. Such observations are interesting, to be sure, but what do they show about what? In order to explain people's verbal behavior, we have to ask: what are the underlying principles of various sorts of language use? Why do variable rules have the
frequency distribution that they have? What are the principles underlying the relationship between social variables and certain linguistic forms? Why do social variables affect only specific linguistic forms and not others? Why are only certain linguistic forms subject to variable rules, while others can be expressed by categorical rules? In essence, what we need is an explanatory theory that specifies the universal principles underlying the use of language in social contexts, and the interaction of cognitive and social systems in actual speech. The problem with such a goal is that there seem to be too many interacting systems and factors - each of them still poorly understood in itself - that need to be considered, if a theory of sufficient explanatory power is to be constructed. It seems to me that a reasonable way to proceed within the context of this problem would be to isolate a certain subdomain of performance and to study the properties of this subdomain in depth, i.e. to adopt a procedure that is standard practice in other fields of scientific inquiry. But this is something that Labov is apparently unwilling to do, because he feels that, by adopting an idealization, he might lose something of the world's reality.
Of course, one might want to argue that listing observations in a systematic way is all we have to do. One might claim that there is, in fact, nothing to explain; things are the way they are, and there is no particular reason why they are this way; everything is mere accident. While one cannot exclude this possibility a priori, it still does not seem very plausible. If, however, it turned out to be the case that the observable aspects of language performance are merely accidental phenomena which are not governed or constrained by deeper principles - something which, I feel, is very unlikely - then performance would not be a very interesting object for scientific investigations. After all, by generally accepted standards, science seems to be about the deeper properties and principles of the observable world. It should be noted that most sciences, including biology, chemistry, physics etc. originally started out by merely recording systematic observations. A crucial advancement in scientific endeavors was made when scientists began to devise methods and procedures that permitted them to discover the deeper principles underlying observable behavior and its regularities. Most of the natural sciences took this step centuries ago, and it seems that recent progress in linguistic theory has shown that such deep and abstract principles can also be found in the domain of language. There is thus no principled reason why normal scientific standards and procedures should not be applied in the study of language.
NOTES

1. In the early days of first and second language acquisition it was frequently argued that the observation of just a few learners obviously did not provide a sufficient basis for making any generalized statements about how language is acquired. While it is true that even today many questions must remain unanswered because appropriate empirical data are lacking, it still seems to me that acquisition researchers have accumulated a sufficiently solid data-base to justify the claim that the field has overcome its purely exploratory and descriptive stage of development.
2. In order to avoid a common misunderstanding, I wish to emphasize that I am using the term 'interesting' (and its counterpart 'trivial') in a very narrow and strictly technical sense. In accordance with current usage in linguistic theory a phenomenon is said to be interesting if its origin and underlying principles and regularities are not self-evident.
3. It should be clear that these two are neither conflicting nor mutually exclusive approaches, but rather focus on different phenomena of language learning in what appears to be a complementary way.
4. One of the problems seems to be that different researchers have widely divergent notions of 'cognition'. As Scovel (1978:129) quoting Lamendella observes: 'there are two kinds of researchers who deal with cognition, those who define it erroneously and those who do not define it at all'.
5. For a similar terminological distinction with respect to first language acquisition see e.g. Clark & Haviland (1974) or Bates et al. (1983); for a critical evaluation see Wexler & Culicover (1980:393ff.).
6. In order to avoid a wide-spread misunderstanding, it should be made clear that adopting this framework does not imply the assumption that all aspects of language acquisition can be reduced to properties of the human mind. It merely implies that at least some aspects cannot be explained in any other way (i.e. in a way that does not make reference to properties of the human mind).
7. Clahsen et al. (1983), in particular, argue that an adequate theory of (second) language acquisition has to be bi-dimensional in nature, where one dimension captures invariant regularities of the developmental process and the other dimension accounts for systematic variations that relate to different learner-types.
8. Note that even those researchers who verbally oppose the 'universalist' approach frequently adopt it in their own practical work. Quite generally, either a certain structural domain or a well-defined set of external factors is examined more or less in isolation; i.e. one abstracts away from all other potentially relevant phenomena. Of course, there is nothing wrong with such an approach. It is simply standard scientific procedure to select a well-defined subsystem for closer scrutiny.
9. The crucial notion here is 'exhaustively'. The problem is not whether or not there are some areas which reflect the operation of non-linguistic cognitive structures; obviously there are. The crucial question is whether those structures can account for all aspects of language acquisition, or whether there is a 'rest' of linguistic phenomena which, in principle, cannot be explained in terms of non-linguistic mechanisms.
10. Slobin's (1973) Operating Principles may be a case in point. Irrespective of whether or not the specific principles proposed by Slobin and others are correct, the notion 'operating principle' implies, of course, a claim of universality. The theoretical appeal of this notion derives from the fact that, once the complete set of operating principles is found, these will exhaustively characterize the human mind's information-processing capacity in a manner that is, in principle, homogeneous across our species.
11. They are uninteresting in the sense that they would merely enumerate certain accidental phenomena in the process of language acquisition.
12. If it is not taken to be a specific feature of the reductionist approach, then, again, this observation is largely meaningless.
13. It is, of course, conceivable that both linguistic and specific non-linguistic factors are necessary for the emergence of a given new form. But, again, this needs to be demonstrated empirically.
14. For such a view of science see Schumann (1982).
15. Within the context of this section, I am using the alternative 'utterance' or 'sentence' in a completely innocent way. At a more serious level, it is clear that whether a description focuses on utterances or sentences has much to do with the individual researcher's theoretical aims; see e.g. Lyons (1977).
16. Clahsen et al. (1983) use the term 'psychologically plausible'. If the term 'plausible' is more than just an expression of usual scientific modesty, the basic problem, of course, remains, unless criteria for determining plausibility are known. Since such criteria must obviously reflect some empirical statement as to what is (psychologically) potentially real, 'plausibility' and 'reality' seem to be merely different personal assessments of essentially the same issue.
17. A linguistic description is only accepted as representing something psychologically real, if, apart from the grammaticality judgements on which it is based, it can be confirmed by some kind of psycholinguistic experimentation.
18. There seems to be a strong feeling among many researchers that the abstractness of linguistic principles and the enormous complexity of grammatical processes, as they are currently proposed by linguistic theory, make it - intuitively - difficult to imagine that all these things could actually exist in the mind. This feeling, however, appears to derive from the idea that the structure of human biology should be simple and straightforward. Whoever has looked at a structural description of polypeptides of amino-acids should also find it difficult to imagine that these structures could actually exist in the human body; yet nobody seriously doubts that they do. There is no a priori reason to believe that the human mind should have a simpler structure than the human body.
19. It is quite common in current psycholinguistic research to distinguish between linguistic and psychological/cognitive phenomena. I find it difficult to conceive of linguistic knowledge as anything but a specific property/state of the human mind (ignoring articulatory skills), particularly as I am not aware of any explicit statement in the literature as to what this distinction is based on.
20. See Clahsen et al. (1983:78) who state that '... alle grammatischen Strukturen und Regeln sind an dem zu messen, was über mentale Repräsentationen und über Prozesse des Sprachverstehens und der Sprachproduktion bekannt ist'.
21. One might, of course, argue (as Chomsky (1965) has done) that also traditional taxonomic analyses imply some claim of psychological reality. The evaluation of conflicting descriptions in taxonomic linguistics seems to be guided to a large extent by some intuitive notion of what most plausibly constitutes the 'real' knowledge of a speaker.
22. For a detailed discussion of the term 'intuitive' see section 4 of this chapter and Fodor (1982).
23. It appears that the psychological reality of 'knowledge' is also acknowledged by those researchers who emphasize the importance of 'cognitive processing' for language acquisition; see also Clahsen et al. (1983:74ff.).
24. It might appear superfluous to emphasize the self-evident fact that a description is not the same as the object which it describes. However, it is my impression that the frequent distinction between linguistic vs. cognitive phenomena is exactly due to the feeling that what appears on a piece of paper certainly cannot be the 'real thing'. See also note 18.
25. This is an English translation of the corresponding passage in Clahsen et al. (1983). The German original is: 'Wir sprechen von Plausibilität, da damit noch nicht erwiesen ist, daß Repräsentationen und Operationen der Grammatik in der vorgeschlagenen Form einer
mentalen Realität entsprechen; sie sollen lediglich widerspruchsfrei sein im Vergleich zu den psycholinguistischen Performanzmodellen'.
26. It may be a curious fact that researchers such as Clahsen et al. (1983), who emphasize the importance of psychological reality in evaluating grammatical proposals, simply ignore the evidence indicating that the Aspects-model is inadequate. They state: 'Es gilt also festzuhalten, daß wir davon ausgehen, daß eine Grammatik Sätze mittels deren Ableitungsgeschichte zu erklären hat, was impliziert, daß zwischen Oberflächenstruktur und zugrundeliegender Struktur zu unterscheiden ist.'
27. For a detailed account of the history of atomic models, see Christen (1981).
28. I believe that it is this line of reasoning that is, in large part, responsible for the frequent 'linguistic vs. cognitive phenomena' distinction in current psycholinguistic work.
29. The assumption is necessarily true in the sense that, if mental states are defined as mentally represented rule schemata that determine and mediate overt behavior, then it logically follows that overt behavior (at least of a certain kind; see Fodor (1982)) draws upon these mental states.
30. Statements such as this one have frequently been interpreted as suggesting that standard psycholinguistic experimentation is irrelevant for the study of language acquisition. This is a total misinterpretation. What this statement asserts is simply that it is unclear what exactly psycholinguistic experimentation tells us about a speaker's linguistic competence.
31. This is a translation of the corresponding passage in Clahsen et al. (1983). The German original is: '... alle grammatischen Strukturen und Regeln sind an dem zu messen, was über mentale Repräsentationen und über Prozesse des Sprachverstehens und der Sprachproduktion bekannt ist. Psychologische 'Plausibilität' heißt für uns folglich, daß keine Aussage der Grammatik im Widerspruch stehen darf zu Erkenntnissen der Psychologie und der Psycholinguistik über die mentalen Prozesse, die beim Sprachgebrauch ablaufen.'
32. This is a view which Fodor (1981) calls 'logical behaviorism'.
33. The concepts and principles at such a level might fail to capture the specific properties of the phenomena under discussion. For example, memory limitations might constrain the class of possible phenomena in such a way that the class still contains phenomena which, in fact, do not occur.
34. For reasons which are not entirely clear to me, many linguists and psycholinguists seem to think that finding a purely mechanical procedure for reducing a set of data to a coherent description should be the primary object of research in our field.
35. This is a translation of the corresponding passage in Clahsen et al. (1983). The German original is: 'Im Einzelfall kann es aber durchaus problematisch sein, wenn entschieden werden soll, ob ein bestimmtes strukturelles Phänomen ein Indiz ist für die Lernerhypothese über die Struktur der Zielsprache, oder ob das zugrunde liegende Wissen in diesem Fall verdeckt wird durch Verwendungsstrategien.'
36. I would like to think of grammaticality judgements as an essentially experimental procedure which has all the basic advantages and problems that experiments in all other branches of science are known to have. Note that the general approach of using grammaticality judgements is highly theory-dependent. A fairly sophisticated version of the theory will make very specific predictions about what is a grammatical string in a given language. Grammaticality judgements are little more than a testing procedure to find out whether the predictions made by the theory are correct or not. In other words, only those structures will be tested in grammaticality judgements about which the theory makes some interesting claims. To what extent such structures actually occur in real-life communication is largely irrelevant (see e.g. the discussion on parasitic gaps in Chomsky (1982)). That is, those structures are taken to be relevant data that appear to provide some deeper insights into the nature of those phenomena that the theory is a theory about.
37. Quite frequently researchers interested in speech production tend to defend themselves
against alleged accusations from linguists by ascribing to the latter the view that speech production/perception is not a field worthy of scientific investigation, since only competence is 'the real thing'. I am not aware of anyone holding such a view. Of course, speech production/perception is an interesting area. The crucial problem is whether or not it is possible to construct an adequate theory of performance without making reference to competence, and to what extent current theories of performance qualify as explanatory theories.
38. See e.g. Bates et al. (1983) who state: 'The separation of grammar from meaning seems counter-intuitive to many non-linguists who cannot conceive of grammar as anything other than a tool for mapping meanings into sound' (p. 11), '... as psycholinguists, with a particular interest in language acquisition by children, we have been more attracted to the functionalist approach' (p. 16). It is certainly strange for a rational scientist to decide between competing theories on a priori grounds and on the basis of personal preferences.
39. This statement should not be taken as suggesting that every type of behavior in every organism reflects underlying mental states. For an illuminating discussion of this problem see Fodor's (1982) 'Why paramecia don't have mental representations'.
40. For a detailed criticism of some alternative empiricist explanations see Fodor (1981).
41. In the early days of generative grammar the most widely-used term was 'intuitive knowledge'. More recently, authors have also used 'implicit knowledge'. It seems, however, that by now 'tacit knowledge' has become standard usage.
42. In a more recent version of the theory (see Chomsky 1981) the coreferentiality facts are accounted for by Principles B and C of the Binding Theory which state that pronouns have to be A-free in their governing category while R-expressions such as John must be free everywhere. Him in (13b) is not a possible A-binder for John, since it does not c-command John.
43. Assuming, of course, that the Noncoreference Principle is a true principle of mental representations. If it turns out that the Noncoreference Principle is empirically false or follows from a more general principle, then, of course, it would no longer be considered as characterizing a speaker's knowledge. This is largely trivial, since it applies to all principles proposed in the course of scientific investigations.
44. In particular, it will not be able to answer some of the more fundamental questions, such as those connected with what is commonly referred to as the logical problem of language acquisition or those that derive from the poverty-of-stimulus issue (see Chapter II.2).
45. See e.g. the different conceptions of linguistic theory as advanced by Chomsky (1981), Bresnan (1978), Gazdar (1979) or Koster (1982).
46. This pessimism that some people have expressed with respect to the possibility of constructing an explanatorily adequate theory of language performance has frequently been taken as an assertion that performance data are somehow irrelevant and that studies of performance should not be considered a legitimate activity in linguistic research. This is, of course, an error. The pessimism merely derives from the fact that most of the proposals that have been advanced in the past do not meet the criterion of explanatory adequacy.
47. This is, of course, not a logical necessity; however, as far as we know, it is difficult to think of language in any other way than as rule-governed behavior. Even though the exact nature of such rules may be a matter of considerable controversy, there does not seem to be any way to account for linguistic competence without the notion 'rule'.
48. Note that this view of the nature of transformation is not quite correct. In contrast to Harris who viewed transformations as operating on sentences, there is currently near-total agreement that transformations operate on abstract structures (P-markers) and not on sentences. For a review of some of the arguments dealing with this issue see Fodor et al. (1974:108ff.).
49. The problem is one of 'relevance of data'. Of course, no one seriously claims that psycholinguistic data are generally without value, but only that they may be irrelevant with respect to the specification of competence in certain domains. In contrast, some people seem to insist that psycholinguistic data must be relevant; in fact, that they are the only truly relevant data when it comes to claims about psychology. Here psycholinguistic data are assigned an a priori privileged status for which there is little if any justification.
50. These considerations relate to questions of explanatory adequacy in the construction of theories; i.e. given two descriptively adequate theories, how are we to decide which is the correct one?
51. Earlier versions of generative grammar assumed that transformations are extrinsically ordered. However, there were no principled constraints on the types of orders that were permitted; i.e. any order was possible. The problem with such a conception of grammar is that it is totally unclear on the basis of what type of experience the child could ever find out the correct order. Hence it was argued that, while a grammar using extrinsic ordering may be descriptively adequate, it is simply unlearnable for the child and therefore does not meet the criterion of explanatory adequacy. Such a (theory of) grammar must thus be false.
52. For similar approaches see Gazdar (1979), Fodor (1978).
53. It seems that this view is implicit in a large body of current studies on language acquisition, even though relatively few authors state this view explicitly.
54. That is, constraints on processing constitute a necessary, but not sufficient, condition for constraints on the properties of natural languages.
55. If the rest of the construction is not compatible with such an analysis, the entire construction will be perceived as ungrammatical.
56. One of the problems with this analysis is discussed in Baker (1979b). The filter (19) is formulated in such a way that it specifies the set of ungrammatical sentences. In order to learn this filter, the child would have to know which are ungrammatical sentences in the language he is exposed to; i.e. the child would need negative evidence (for an alternative view see Marzurkewich (1982)). Baker has therefore reformulated filter (19) as a filter stating the grammatical cases.
57. It is clear that (19) is not a language universal, given such constructions as Japanese kare ga uchi o deta no wa hen-na koto desu (that he left the house is strange). This sentence is fully grammatical even though kare ga uchi o deta (he left the house) meets the structural description of (19).
58. It seems to me that this approach is implicit in Slobin's work and in many studies conducted in his spirit (see Slobin 1973, Hatch 1982, Clahsen et al. 1983).
59. Applying the rule stated in HYPOTHESIS B to (34a) will still yield an ungrammatical *is that Sue will marry John obvious? This ungrammaticality is handled by a principle which distinguishes sentential subjects from sentential complements (for details see Ross (1967), Chomsky (1973, 1981)).
60. In a comment on an earlier version of this chapter, Hatch (1982:10) questions the relevance of the examples I am giving. Hatch states: 'The examples Felix gives for the separation of linguistic and cognitive systems are curious. I find it difficult to believe that anyone working in the area of cognition would claim that structure independent rules are any more typical of cognitive processes than of linguistic processes. Try giving the rule that English questions are formed by a rule that looks for the first verbal element in a sentence and preposing it to the sentence-initial position to a computer (along with oral language data for it to operate on). Those of you who have been following the work of researchers in Artificial Intelligence know that such a rule is much more difficult than one that operates on classes. One of the basic cognitive processes, one stressed in all kindergarten, is the process of classification and rules that work on classes. It is not clear to me, then, why such examples show us anything about the relationship (or non-relationship) of language and cognitive processes.'
It seems to me that Hatch misses completely what is crucial about the examples I have given (possibly because she misunderstands the notion 'structure-dependence'). The difference between a structure-dependent and a structure-independent rule is, of course, not that one operates on classes and the other does not. Quite obviously, both types of rules operate on classes ('verbal element' evidently refers to a class of elements sharing certain properties). The crucial difference is that structure-dependent rules operate on abstract syntactic variables, such as NP (while structure-independent rules do not). In order to determine whether or not a given string counts as an instance of a specific variable, it is necessary to know the syntactic structure of the entire sentence. In the absence of any specific evidence I cannot believe that Hatch seriously wants to argue that knowing the syntactic structure of a sentence is cognitively easier or involves fewer cognitive processes than not knowing it. (A small computational sketch of this point is given at the end of these notes.)

61. Within the domain of subjacency, languages appear to differ with respect to whether S or S' counts as a cyclic node (see Chomsky 1981, Rizzi 1982).

62. In current versions of linguistic theory all movement rules are subject to the Subjacency Constraint. In fact, this constraint is the crucial criterion to determine whether a given grammatical phenomenon is to be described in terms of movement or in terms of construal.

63. I ignore here the Chomsky-Bresnan controversy over unbounded movement; see Chomsky (1977) and Bresnan (1978).

64. It is assumed that in sentences such as (45b) the wh-word first moves to the deepest COMP, then to the next higher COMP, and so forth, until it finally reaches sentence-initial position. In other words, wh-words can only cross more than one S-boundary if they can move through the various COMP-positions. This phenomenon is commonly referred to as the COMP-to-COMP Escape Hatch. The assumption correctly predicts that sentences should be ungrammatical if a COMP position is already filled so that the wh-word cannot pass through it on its way to sentence-initial position, cf. John doesn't know where Bill met Tom vs. *who doesn't John know where Bill met?

65. Unfortunately, the draft of the paper that Evelyn Hatch sent me does not contain any detailed bibliographical references for the studies she mentions. I can therefore only rely on her manuscript and have not had any access to the original studies.

66. See e.g. Mayerthaler (1980) and Clément (1978).

67. In fact, scientific work seems to be largely impossible without idealizations. Since real-life phenomena never occur in complete isolation, without idealizations one would have to study the entire world or nothing at all.

68. Of course, someone might be fully satisfied with merely recording observations and not care about explanatory theories. At a personal level this attitude is acceptable; however, as a guideline for scientific inquiry it reveals an amazing lack of curiosity.
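The point about structure-dependence in note 60 can be made concrete with a small illustrative sketch. The sentence, the tags, and the two functions below are invented for the illustration (they are not taken from Hatch's work or from the analyses discussed here); the sketch only shows that both rules operate on the word class AUX, but that the correct rule additionally refers to an abstract constituent, and can therefore only be stated once the subject NP of the sentence has been identified.

# Toy illustration (hypothetical data and functions, for exposition only).
# Both rules operate on the class AUX; the difference is whether the rule
# may also refer to an abstract constituent such as the subject NP.

def front_first_aux(tagged_sentence):
    """Structure-independent rule: prepose the linearly first AUX."""
    words = [w for w, _ in tagged_sentence]
    for i, (word, tag) in enumerate(tagged_sentence):
        if tag == "AUX":
            return [word] + words[:i] + words[i + 1:]
    return words

def front_main_clause_aux(subject_np, aux, predicate):
    """Structure-dependent rule: prepose the AUX that follows the subject NP.
    The rule can only be stated if the subject NP has already been identified,
    i.e. if the constituent structure of the sentence is known."""
    return [aux] + subject_np + predicate

# "The man who is tall is happy."
sentence = [("the", "DET"), ("man", "N"), ("who", "REL"), ("is", "AUX"),
            ("tall", "ADJ"), ("is", "AUX"), ("happy", "ADJ")]

print(" ".join(front_first_aux(sentence)))
# -> "is the man who tall is happy"  (ungrammatical: the rule grabbed the AUX
#    inside the relative clause, because linear order is all it can see)

print(" ".join(front_main_clause_aux(
    ["the", "man", "who", "is", "tall"], "is", ["happy"])))
# -> "is the man who is tall happy"  (the intended yes/no question)

The 'easy' linear rule needs no parse at all, yet it yields the wrong output; the correct rule is stateable only over a parsed sentence, which is precisely the sense in which structure-dependence cannot be the cognitively cheaper option.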
Chapter 2
Two Problems of Language Acquisition: On the Interaction of Universal Grammar and Language Growth

1. INTRODUCTION
In a discussion on methodology in the social sciences Popper (1969:104) makes the following observation: (1) 'If it is at all reasonable to say that science or knowledge begins at some point, something like the following seems to be true: gaining knowledge does not begin with recording observations or collecting data or facts, but rather it begins with stating problems. No knowledge without problems and similarly no problem without knowledge.'
If we take Popper's view seriously, it somehow doesn't make sense to characterize the field of developmental psycholinguistics as the discipline concerned with how language(s) is/are acquired or develop(s). It appears that such a definition, though frequently advanced, is much too global and unspecific, delimiting research goals in a way that does not go much beyond the mere collection of observations and facts. Consequently, a more promising and ultimately more insightful approach to language acquisition might be to identify a fairly small set of fundamental, but highly specific problems which open a perspective for constructing truly explanatory theories in the field. While there is little doubt that much of what is considered to be a fundamental problem depends on the individual researcher's perspective and personal preference, I believe that there are at least two problems which are fundamental in the sense that they meet certain criteria which make them a particular challenge to scientific curiosity. That is, these two problems relate to issues that can be stated with sufficient clarity and precision, and furthermore provide an evaluation metric for acquisition research in the sense that any approach which aims at theoretical adequacy should, in principle, be able to offer a solution to these problems. The first of these problems is generally referred to as the logical problem of language acquisition and has, roughly speaking, to do with the question of why children can learn natural languages at all, i.e. what is it that makes language acquisition possible? The second problem relates to the question of why natural languages are acquired the way
they are, i.e. how can the regularities that have been observed in realtime acquisition processes be explained? I will call this problem the developmental problem of language acquisition. In the past the logical problem has been almost exclusively the concern of theoretical linguists; in particular it has been a major motivating force in work on generative grammar (cf. Baker 1979a, Wexler & Culicover 1980, White 1982, Lightfoot 1982, Chomsky 1975, 1980). In contrast, psycholinguists and language acquisition researchers have primarily dealt with the developmental problem (see Slobin 1973, Wode 1981, Wanner & Gleitman 1983, Clahsen et al. 1983, Felix 1982). Although this difference in perspective is, I believe, primarily a matter of historical accident(2), it has led to a separation of issues which, at a certain level of abstraction, can be viewed as relating to essentially the same fundamental question. Thus it may seem reasonable to look at both the logical and the developmental problem under a common perspective. This is essentially the approach I wish to take in this chapter. First, I will briefly characterize some of the basic issues commonly associated with the logical and the developmental problem of language acquisition. I will then argue that while the theory of generative grammar has offered a principled solution to the logical problem, no such principled solution is currently available for the developmental problem. I will subsequently speculate on how insights from linguistic theory can be fruitfully exploited in assessing developmental phenomena, and finally propose some ideas on how the biological structure of Universal Grammar may interact with aspects of language development. If this approach is not misguided, it appears that there is a fairly straightforward way in which studies of language acquisition may provide insight into the properties of Universal Grammar and in which grammatical studies may be relevant for psycholinguistic considerations (3).
2. THE LOGICAL PROBLEM OF LANGUAGE ACQUISITION
As a first approximation the logical problem of language acquisition can be viewed as relating to the following question: what is it that enables humans to acquire natural languages? What makes this question interesting is the fact that, upon closer examination, some of the most popular and apparently obvious answers turn out to be incorrect or at least not fully adequate. Thus many people might be tempted to argue that children learn languages simply because they grow up in an environment in which they are regularly exposed to the use of language and to verbal interaction. Note, however, that exposure to speech is a necessary, but by no means sufficient condition.
Dogs and cats are also regularly exposed to speech, but they do not acquire natural languages. Consequently, exposure alone cannot adequately account for the child's ability to learn the language of his/her environment. It is intuitively plausible to assume that the ability to acquire natural languages crucially reflects specific properties of the human mind. That is to say, what distinguishes humans from other organisms with respect to language acquisition is that the former are equipped with systems of mental structures which the latter do not possess (4). We may therefore suspect that any adequate solution to the logical problem of language acquisition will make crucial reference to those properties of the human mind which permit language acquisition to take place within a relatively short time-span and under conditions which are typical of child development. One of the most intriguing aspects of the logical problem concerns the fact that it is far from obvious what these mental properties are in detail. This becomes immediately apparent once we try to make explicit what exactly it is that the child has to acquire when we say he acquires (knowledge of) his language. To look at the problem from a slightly different perspective: what does a person actually know when he is said to know his language, and what does a learning mechanism have to look like that makes acquisition of exactly this type of knowledge possible? Before we examine these questions in detail, it should be made clear that the term 'language' is commonly used as a cover term for a wide variety of cognitive systems and social skills which may interact in various ways: e.g. systems of belief, social attitudes, mechanisms of verbal processing, knowledge of the world and many more. In what follows I will restrict my attention to one particular system of knowledge, namely knowledge of the grammar, i.e. of the formal-syntactic properties of a language. What is particularly interesting about grammatical knowledge is that, once we gain some deeper understanding of what this knowledge actually consists of, it becomes totally unclear how children could ever acquire it given the type of evidence they usually have access to. Linguistic research during the past 25 years has shown that the grammars of natural languages are governed by formal principles of an almost unbelievable complexity. This insight forces the conclusion that the child's linguistic experience is evidently insufficient to fully determine the knowledge eventually attained; i.e. the child acquires knowledge for which there is no direct evidence in the utterances he/she is exposed to. In standard terminology: the child's knowledge is underdetermined by the available evidence. If this view is correct, the following question arises: if the knowledge the child actually acquires cannot be fully derived from
his/her linguistic experience in any discernible way, where does this knowledge come from? It is precisely this question that is known as the logical problem of language acquisition. Let us consider in more detail in what sense it can be said that the child's knowledge is underdetermined by the available evidence. Note, first of all, that what the child eventually attains is the mature speaker's ability to produce and understand an infinite number of sentences and structures. On the other hand, the child's input, i.e. the sentences he hears, is evidently finite; consequently, the knowledge actually acquired comprises structures which the child has never heard, i.e. for which there is no evidence in the child's linguistic experience. It is in this sense that we can say that the child's knowledge is underdetermined by the available evidence. Notice, furthermore, the empirical consequences of the observation that the mature speaker can produce and understand an infinite number of sentences/structures of his language. Since the brain, where knowledge is somehow stored, is itself finite, grammatical knowledge cannot simply consist in knowing sentences or structures; rather we may assume that what the mature speaker actually knows is a finite set of rules and principles whose properties are such that they specify an infinite class of sentences. If this is correct, then the child's task consists essentially in discovering these rules and principles on the basis of the finite set of sentences that constitutes his linguistic experience. Surprisingly, there is good reason to believe that in view of the conditions under which normal acquisition takes place, this is, in principle, an impossible task. Let us view the rules and principles of grammar as generalizations over a certain set of structures; i.e. given these structures one could formulate a generalization which correctly specifies their properties. There are at least three aspects under which it seems almost impossible to imagine how children could ever discover the correct generalizations on the basis of their linguistic experience, i.e. under which it may be said that the child's knowledge is underdetermined by the available evidence: A1 in many cases the correct generalization crucially involves sentences and structures which are so rare and marginal in language use that we may assume that children are not regularly exposed to them. A2 in other cases the correct generalization can only be found if negative evidence is available, i.e. if the learner explicitly knows that a given structure is ungrammatical. However, children do not have systematic access to negative evidence (5).
A3 as far as we know, the rules and principles of natural languages are of a highly complex and abstract nature and do not reflect in any straightforward or unique way the surface properties of individual sentences. That is to say, given a certain set of utterances or sentences, the child might discover the correct generalization, but he/she might also not discover it. There is nothing in the data that forces the child to make the correct generalization. Let us examine in what way the above observations may be crucial. Recall the fundamental problem: how can the child generalize over a finite set of observed sentences in such a way that the resulting generalization will correctly hold for an infinite set of structures. Traditionally, it has been assumed that this can be achieved through a logical procedure commonly known as 'induction' or 'inductive generalization' (see Piattelli-Palmarini 1980, Quine 1972, Stegmuller 1974, Sampson 1979). Roughly speaking, inductive generalization is a mental process by which a regularity valid for a finite set of observed events is extended to an appropriate infinite set of events. Consider the following example: a child observes that the swans in a pond near his home are white and have long necks. Some time later he sees swans in a river which are also white and have long necks. The child may then conclude that all swans including those he has never seen and will never see - are white and have long necks. Note that, in an obvious sense, this conclusion is underdetermined by the available evidence, i.e. the child's experience does not force this conclusion, but would be equally compatible with the idea that some swans are white and some are black. In an analogous way, a child may observe that in the speech he hears finite verb forms are consistently marked for either present or past tense. By inductive generalization he may then conclude that all English finite verb forms are tense-marked. Inductive generalization thus seems to be a viable procedure to acquire knowledge on the basis of limited evidence. The problem with this story is that it does not work. It is fairly easy to show that a learning mechanism that relies exclusively on inductive generalizations could never acquire the type of linguistic knowledge that every mature speaker possesses. There are both empirical and conceptual reasons for rejecting inductive generalizations as an adequate explanation of how a child may attain knowledge for which there is no evidence in his experience. The conceptual problem has most explicitly been stated by Fodor (1975, 1980, 1981) and is directly linked with the observation (A3) stated above. Fodor points out that, logically, any set of observed facts permits more than one, possibly even an infinite number of generalizations. The question is then: how does the learner find out the correct generaliza-
tion(s)? There does not seem to be anything in the evidence itself that permits one to distinguish correct from incorrect generalizations. Let us suppose that our child observes that there is an old man sitting every afternoon at 4 o'clock on a bench near the pond feeding the swans after they have come out of the water. There is nothing in the observation or in the mechanism of inductive generalization to prevent the child from concluding that it is a property of all swans to be fed at 4 o'clock in the afternoon by old men sitting on benches. However, as far as we know, this is the type of conclusion children are not very likely to draw. Why is this so? Apparently the child can distinguish between relevant and irrelevant properties, correct and incorrect generalizations, but the basis for this distinction does not seem to be in the evidence.

The same observation holds for language acquisition. For any finite set of sentences or verbal utterances it is possible to formulate more than one generalization so that, here again, the question arises: how does the child find out the correct generalization for the language in question? While children invariably make the correct choice, there does not seem to be any way of deriving this choice from the generalization itself. Furthermore, if the children's learning process were merely constrained by procedures of inductive generalization, it should be possible that different children formulate totally different generalizations for the same set of observed speech data and therefore arrive at different sets of rules. This, however, would be tantamount to predicting that children exposed to the same speech data will acquire different languages depending on the types of generalizations they happen to formulate; an obviously false prediction. Children raised in the same language community arrive at essentially the same grammar, irrespective of the specific sets of utterances that the individual happens to be exposed to, and irrespective of the types of generalizations that are logically possible for a given set of data. Therefore the problem with inductive generalizations is that, while they permit us to come up with some solution, they do not guarantee that we find the correct solution.

With a few illustrative examples from English I would like to give some empirical substance to the observations (A1)-(A3) stated above. The first example, which relies heavily on material analyzed by Stowell (1981), is intended to show how a prima facie obvious generalization breaks down in the light of certain ungrammatical structures. More significantly, in order to find out that the generalization is incorrect the learner would need negative evidence, i.e. explicit information that a given structure is ungrammatical, and, as we have observed before, children do not seem to have systematic access to this type of information (see Baker 1979a, White 1982).

English that-clauses frequently appear in the same position as noun
phrases, in particular both the subject and the object position of a sentence may be occupied by either an NP or a that-clause:

(1) a. Everybody expected John's victory.
    b. Everybody expected that John would win.

(2) a. John's immediate reaction proved his innocence.
    b. That John reacted immediately proved his innocence.
    c. That John reacted immediately proved that he was innocent.
Given these structures one of the most obvious generalizations would be the assumption that at some level of representation clauses are like NPs. This assumption would lead to the prediction that grammatical processes that apply to NPs will also apply to clauses. For many processes this prediction is, in fact, borne out; e.g. passivization and topicalization may move either NPs or clauses:

(3) a. John's victory was expected by everybody.
    b. That John would win the race was expected by everybody.

(4) a. John's victory, (certainly) everybody expected.
    b. That John would win the race, (certainly) everybody expected.
Note the significance of this type of generalization for the logical problem of language acquisition. Given this specific generalization the child will know that the sentences in (3) and (4) are grammatical irrespective of whether or not he/she has ever been exposed to such structures. In other words, the generalization permits the child to acquire knowledge (i.e. of (3) and (4)) on the basis of much more limited experience (i.e. (1) and (2)).

The problem with this account is that the generalization, though highly plausible, is incorrect. This becomes apparent as soon as we look at a wider range of facts. Note, first of all, that NPs can follow prepositions, while that-clauses cannot:

(5) a. John's innocence is proved by his immediate response.
    b. *John's innocence is proved by that he responded immediately.

(6) a. We counted on John's help.
    b. *We counted on that John would help us.
Furthermore, sentential subjects may not occur after complementizers:

(7) a. Although John's immediate response proved his innocence, ...
    b. *Although that John responded immediately proved his innocence, ...

(8) a. Bill's belief that John's immediate response proved his innocence is certainly naive.
    b. *Bill's belief that that John immediately responded proved his innocence is certainly naive.
Finally, the generalization already breaks down under topicalization. While both that-clauses and infinitivals may appear in object position, only that-clauses may be topicalized:

(9) a. John believes that Mary loves him.
    b. That Mary loves him, John believes.

(10) a. John believes Mary to love him.
     b. *Mary to love him, John believes.
Stowell (1981) has proposed an analysis in which he accounts for these and many other facts of English phrase structure in terms of the Theory of Abstract Case and the θ-Criterion (see Chomsky 1981), which interact in specific ways (6). In particular, Stowell proposes the Case Resistance Principle (7), motivated on independent grounds, which forces a structural description on (2b) and (2c) in which the that-
Let us further assume that the crucial noun phrases have structures as in (26a) and (26b):

(26) [tree diagrams in the original: two structures, (26a) and (26b), for the noun phrase Det + student + with long hair, differing in whether student is exhaustively dominated by an N' node]
Given these assumptions all the relevant facts follow immediately. Student in (26b) cannot be replaced by one, because One-Interpretation applies to N' nodes, but not to N nodes. In contrast, student in (26a) can be replaced by one because it is also dominated by N'. What is interesting about these data with respect to the problem that concerns us here is simply that the correct structure of noun phrases can only be determined if we consider a fairly wide range of somewhat marginal facts relating to One-Interpretation and the relative order of PPs. There is very little reason to suppose that children have regular and systematic access to this type of information.

Let us finally consider some grammatical phenomena which are intended to show that mature speakers have very clear intuitions about constructions that are so marginal and exotic that people hardly if ever use them. Moreover, some of these constructions would presumably be judged as ungrammatical by most people and become relatively acceptable only in contrast to other constructions which are clearly worse. These are, it seems to me, the most convincing cases in support of the claim that children acquire knowledge for which there is no experiential evidence. The structures I would like to consider are commonly referred to as 'parasitic gap constructions' (see Chomsky 1982, Engdahl 1983, Felix 1984). Consider the following contrast, where e stands for an empty
category, and t for the trace of a moved constituent. Indices indicate relations of coreference.

(27) a. *John filed the article_i without reading e_i.
     b. Which article_i did John file t_i without reading e_i?
The ungrammaticality of (27a) is obviously due to the occurrence of an empty category after reading. If we fill the gap with the pronoun it, the sentence becomes fully grammatical. In (27b), however, the empty category after reading is perfectly acceptable, apparently because there is a second gap in the sentence (after file) which is the result of wh-movement. It thus appears that the gap in the without-clause is somehow licensed by the gap in the matrix clause to which it is, in some metaphorical sense, 'parasitic'. What makes this construction interesting in the context of the logical problem of language acquisition is that speakers of English have very precise intuitions about the contrast in (27a) and (27b), even though parasitic gap constructions appear to be highly marginal. That is, constructions with pronouns rather than parasitic gaps such as (27c) are usually felt to be more natural and are preferred in actual language use, so that we may assume that children are predominantly exposed to structures such as (27c) rather than (27b).

(27) c. Which article_i did John file t_i without reading it_i?
Notice, however, that, as Engdahl (1983) observes, there are also cases in which parasitic gaps are somehow obligatory in the sense that they are preferred over a pronoun. Thus the following (b)-sentences seem to be less acceptable than their (a)-counterpart:

(28) a. Which boy_i did Mary's talking to e_i bother t_i most?
     b. ?Which boy_i did Mary's talking to him_i bother t_i most?

(29) a. Which student_i did your attempt to talk to e_i scare t_i to death?
     b. ?Which student_i did your attempt to talk to him_i scare t_i to death?
Another interesting observation about parasitic gaps is that some constructions sound so odd that people tend to reject them at first glance. Consider the following sentence:
(30) a. ?This is the man_i who everyone who knows e_i admires t_i.

Most speakers of English would agree that (30a) is definitely bad, yet it is certainly better than the corresponding sentence (30b) without wh-extraction:

(30) b. *Everyone who knows e_i admires the man_i.
This suggests that people have intuitions about degrees of grammaticality in constructions which are so marginal and odd that nobody would ever really use them. To take an extreme case illustrating the same point, consider the following examples from Kayne (1983) which are even less acceptable, though exhibiting clear differences in grammaticality:

(31) a. ?the person_i that John described t_i without examining any pictures of e_i
     b. *the person_i that John described t_i without any pictures of e_i being on file

(32) a. ?the books_i you should read t_i before it becomes difficult to talk about e_i
     b. *the books_i you should read t_i before talking about e_i becomes difficult
These examples illustrate some of the knowledge that must be attributed to every mature speaker of English and that, somehow or other, every child has to attain in order to become a native speaker of his language. I have tried to demonstrate that, given the highly limited evidence available to the language learner, there does not seem to be any plausible way in which the child could ever discover the full range of rules and principles that correctly characterize his language, if inductive generalizations were the only learning mechanism available. How then can we account for the fact that all children eventually succeed in making the correct generalizations, rejecting many other generalizations which are fully compatible with the available evidence; that is, how can we account for the fact that, after all, children do acquire languages?

It seems that there are, in principle, two possibilities to solve the logical problem of language acquisition. One of these possibilities would be to assume that the child is equipped with a generalization device much more powerful than ordinary mechanisms of induction, having properties which somehow guarantee that the child does discover the correct rules and principles of the language he is exposed to. While it is certainly unwarranted to reject this possibility on a priori grounds, there is no
reason to believe that this is, in fact, the true story. The problem is that no one has ever been able to make any substantive proposals as to what such a powerful generalization device might look like. To be more specific, it is totally unclear what kinds of general properties should be attributed to such a device. In essence, no one has proposed an explicit theory of language acquisition which crucially relies on mechanisms of inductive generalization and yet explains how children acquire the full range of (linguistic) knowledge that must be attributed to the mature speaker.(11) The other possibility would be to assume that some of the rules and principles that characterize a language need not be learnt or discovered, but rather constitute part of the child's biologically determined endowment. That is to say, the child is genetically equipped with a certain amount of a priori knowledge about the properties of natural languages. At first glance, this view may seem bizarre to someone who is struck by the idiosyncracies of individual languages. Since it is largely a matter of coincidence whether or not a child is born, say, into an English-speaking community, one might wonder if there are any properties of English that could plausibly be attributed to a genetically determined representation of knowledge in the child's mind. While this is clearly an empirical issue, the basic idea is something like this: suppose that all natural languages share certain universal properties which thus define and constrain the class of rule systems(12) that children will acquire under normal conditions of language development. Suppose furthermore that it is precisely these universal properties that constitute the child's innate knowledge about the structure of natural language(13). There is one immediate consequence of this view which seems to provide for a principled solution to the logical problem of language acquisition: the kinds of generalizations which the child will consider for a given set of data are heavily constrained by his innate linguistic knowledge, i.e. the child will reject generalizations which are fully compatible with the observed data, if such generalizations are in conflict with the universal principles of natural languages. Note that such a view provides a principled answer to the question of what enables the child to make the correct choice among several possible generalizations, the essence of the logical problem. This is a rather sketchy and simplified account of a view that has been adopted as a research guideline by linguists working within the conceptual framework of generative grammar. The basic assumption that, as a biological property of the human mind, the child is endowed with a rich deductive system of cognitive structures has motivated much work in linguistic theory during the past 25 years. As a consequence of this work, we begin to get an idea of some of the deeper regularities of natural languages which, in turn, helps us to make more and more specific
proposals about the exact nature of the child's innate linguistic knowledge and, more generally, deepen our understanding of the structure of the human mind. I will not explore these ideas in any more detail nor discuss any of the relevant empirical issues currently under focus; the reader is referred to Chomsky (1981), White (1982), and Lightfoot (1982) for a discussion of these matters. What is important in the present context is simply that the view outlined above - whether or not it ultimately turns out to be true - offers a principled solution to the logical problem of language acquisition and thus gives some hope of constructing an explanatory theory of language.
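The shape of this argument can be illustrated with a toy sketch. The 'hypotheses' and strings below are invented for exposition and have nothing to do with the grammar of any natural language; the sketch merely shows that a finite sample leaves several generalizations in play, and that an antecedent restriction on the hypothesis space, which is the role attributed here to Universal Grammar, is what makes the learner's choice determinate.

# Toy illustration (hypothetical grammars and data, for exposition only).
# A finite sample of "sentences" is compatible with several generalizations
# that diverge on unseen strings; an a priori restriction on the hypothesis
# space decides among them where the data cannot.

observed = ["a b", "a a b b"]          # finite input
unseen   = ["a a a b b b", "a b a b"]  # strings the learner was never exposed to

def equal_counts(s):
    """Generalization 1: any string with as many a's as b's."""
    return s.count("a") == s.count("b")

def all_as_then_bs(s):
    """Generalization 2: n a's followed by n b's (a more 'structured' hypothesis)."""
    toks = s.split()
    n = len(toks) // 2
    return toks == ["a"] * n + ["b"] * n

hypotheses = [equal_counts, all_as_then_bs]

# Both hypotheses fit the observed data perfectly ...
print([all(h(s) for s in observed) for h in hypotheses])   # [True, True]
# ... but they disagree about the unseen string "a b a b":
print([h("a b a b") for h in hypotheses])                  # [True, False]

# If the hypothesis space is restricted in advance (the analogue of innate
# constraints), only one candidate is entertained, so limited data suffice.
allowed = [h for h in hypotheses if h is all_as_then_bs]
print([h.__name__ for h in allowed])                       # ['all_as_then_bs']

Nothing in the observed sample itself favours one of the two generalizations; the selection is done by the prior restriction, which is the logical point at issue.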
3. THE DEVELOPMENTAL PROBLEM OF LANGUAGE ACQUISITION
While the logical problem of language acquisition deals with the question of what enables children to acquire language at all, the developmental problem focuses on the question of why children acquire language the way they do. In order to fully appreciate this difference in perspective it is necessary to recall that discussions about the logical problem have typically adopted an idealized view of language acquisition which abstracts away from the temporal dimension of the learning process. Under such an idealization language acquisition is taken to be an instantaneous process in which the only relevant factors to be considered are the child's linguistic experience and his biologically determined cognitive capacities. This idealized view leads to the following question: given what we know about the mature speaker's knowledge of his language and given the experiential evidence normally available to the child, what are the minimal assumptions that we have to make with respect to the child's innate mental structures in order to account for the fact that mature knowledge is attained by every normal child( 14). While the logical problem has commonly abstracted away from the temporal dimension of language acquisition, the developmental problem brings this aspect back into focus. It appears that the developmental problem of language acquisition has been studied primarily under two aspects. Quite expectedly, it was found that many regularities of language development may be seen as reflecting the influence of a large number of external factors which relate to properties of the environment in which the individual learns his language (Snow & Ferguson 1977, Newport 1977, Cross 1977). In addition, there seem to be certain psychological variables, such as memory constraints, emotional and affective factors, attitudes, etc. which -
particularly in second language acquisition - may determine various aspects of the developmental process (Schumann 1975, MacWhinney 1975, Snow 1979). To the extent that it can be shown that specific environmental or psychological factors are causally related to specific regularities of the acquisition process, the developmental problem will, in principle, be solved.

Apart from such environment-dependent aspects, language acquisition research has discovered developmental regularities which appear to be largely invariant across languages, across learners, and across acquisition types so that they resist an immediate interpretation in terms of environmental influence (Klima & Bellugi 1966, McNeill 1970, Bloom 1970, Brown 1973, Wode 1977, Felix 1978b). One of the most striking discoveries of developmental psycholinguistics has been that all instances of language acquisition seem to follow a certain basic structural pattern, commonly referred to as 'developmental sequences' or 'order of acquisition' (Brown 1973, Burt & Dulay 1980, Wode 1981, Felix 1982, Clahsen et al. 1983)(15). The fundamental observation is that children acquire the structural properties of the language they are exposed to in a severely constrained temporal order. A given structure will emerge only after a specific set of other structures has previously been acquired. For illustration see Table I which shows some stages in the acquisition of English and German negation:

Table I: Some Early Stages in the Acquisition of Negation
(Source: Bloom 1970, Klima & Bellugi 1966, Wode 1977, Felix 1978b)

Stage I (Neg + S)
  English: no the sun shining / no Daddy hungry / no see truck
  German:  nein hauen (don't bang) / nein schaffe ich (I can't manage) / nein with spielen Katze (I don't want to play cat)

Stage II (no/nein + VP)
  English: Kathryn no like celery / I no reach it / man no in there
  German:  ich nein schlafen (I don't sleep) / das nein aua (that doesn't hurt) / ich nein hat eins (I don't have one)

Stage III (not/nicht + V; don't + V; V + nicht)
  English: Kathryn not go over here / I don't go sleep / this one don't fits
  German:  ich nicht essen mehr (I don't eat any more) / Eric nicht schlafen (E. doesn't sleep) / Henning brauch nicht Uni (H. doesn't need to go to the university)
The following aspects of the data in Table I seem to be particularly significant with respect to the developmental problem of language acquisition:

1. the acquisition process is highly systematic in the sense that certain properties of this process hold across languages and across learners.
2. language acquisition proceeds in developmental stages, i.e. a given structure is used consistently and regularly for a certain period of time and is then replaced by some other structure.
3. the sequence of stages is ordered; i.e. the structure(s) characterizing any stage x emerge(s) only after the structure(s) characterizing stage x-1 has/have been productively used.
4. children make regular use of constructions which are ungrammatical from the point of view of the adult language, i.e. they produce structures which do not reflect rules of the input language in any direct way.

Because of their largely invariant nature it has been widely assumed that the regularities that manifest themselves in the developmental sequences reflect properties of the human mind, rather than properties of the environment in which the individual grows up. More specifically, the basic developmental pattern illustrated in Table I is seen as something that specific properties of human mental organization impose upon the acquisition process.

Keeping in mind such invariant developmental patterns we can now restate the developmental problem in a somewhat more precise way: why does language acquisition proceed by ordered sequences of stages? Why do developmental sequences look the way they do? With respect to the data in Table I we may ask: why do children use sentence-external negation prior to sentence-internal negation? Why do they use no earlier than not? Why do they place the negator preverbally before they find out the correct postverbal position?

I believe that there are two fundamental aspects to the question of why the developmental process is structured in the way outlined above. Both of these aspects, I believe, need to be answered independently in the sense that a satisfactory answer to one of these aspects does not automatically carry over to an adequate solution of the other. First we may ask where those of the child's constructions come from that do not exist in the adult language. Why is it that children use sentence-external negation, even though there are no such structures in the speech they hear? What motivates the child to create sentences such as Tommy no like milk in the absence of any input forcing such a construction? The second aspect has to do with the fact that, at regular intervals, children move from one
stage to the next, i.e. abandon one type of structure in favor of some other. One may reasonably wonder what it is that causes the child to suddenly identify a structure, e.g. NP no VP as inappropriate, which he has been perfectly happy with for an extended period of time? As far as I know, most serious attempts to explain the developmental aspect of language acquisition have focused on the first question, presumably because most researchers felt that the answer to the second problem is fairly self-evident. However, I will attempt to show that it is the stage-transition phenomenon which, under current conceptions of language acquisition, is one of the most challenging aspects of the developmental problem. As far as I can tell there are two types of traditional answers to the question of why children move from one stage to another. The first has to do with what is commonly referred to as the communicative function of language. The child's linguistic progress, so the argument runs, is propelled by his deep-rooted desire to become a fully adequate member of the society in which he grows up (see Bates 1979). Since the ability to communicate effectively is an indispensible prerequisite for proper social functioning, the child will constantly strive to improve his communicative skills - i.e. move to higher stages of language acquisition - so that eventually he will be accepted as a well-functioning member of his society. I believe that there are a number of problems with this view. If the acquisition of grammar is crucially motivated by each child's individual desire to appropriately function in society, then one should find significant variations in children's ultimate grammatical competence, since individual learners may obviously differ with respect to their inclination to adjust to social needs and requirements. It has sometimes been argued that possibly children do not differ in their desire to meet the linguistic expectations of their speech community. While this may be true, the only evidence to support this view is that children do, in fact, attain full knowledge of their language. It is easy to see that, on this account, the argument becomes circular. Moreover, if it is only his desire for effective communication that instigates acquisitional progress, then, of course, we should expect the child to be satisfied with much less than he actually attains. This fact was first pointed out by Lenneberg (1967, 1969). While aspects of communicative effectiveness might plausibly be addressed to explain the very early stages of language acquisition, they are certainly not a very convincing argument when it comes to later stages. From a point of view of communicative effectiveness, it is difficult to see why a child should learn all the intricate morphological, phonological, and syntactic patterns that are a fundamental property of all natural languages. Phonolo-
gical assimilations, subject-verb agreement, wh-movement and many other rules do not as such serve any obvious communicative purposes.

Most importantly, however, the view that acquisitional progress is a function of the child's desire to communicate effectively seems to be incompatible with the fact that acquisition proceeds by developmental stages. Let us assume, for the sake of the argument, that the child's desire to improve his communicative skills is, in fact, the crucial force behind language development. For illustration, consider once again the acquisition of negation outlined in Table I. Suppose the child has used sentence-external negation for a couple of months, after which he replaces this construction type by sentence-internal negation. Conceivably, one might argue that this transition from Stage I to Stage II reflects the child's growing awareness that, in order to enhance his communicative skills, his language must be more 'adult-like'. But now the question arises why the child continuously used the same original structure for an extended period of time in the first place. Did the child ignore his communicative needs for a couple of months, suddenly realizing that it would be communicatively more useful to produce a different type of structure? Or is it that his linguistic awareness develops in stages? But if this is the case, how can this phenomenon be explained in terms of communication? One might furthermore ask why the child uses sentence-internal negation at all. It is difficult to see in what sense Johnny no like milk is communicatively more effective than no Johnny like milk. And why does the child retain this new construction type again for an extended period of time? Does he suddenly forget all about his communicative needs? That is, why does the child's desire to be communicatively effective come in stages? From these considerations it seems fairly obvious that aspects of communication cannot adequately account for the stage-transition problem of language acquisition.

There is another fairly popular view about children's linguistic progress which underlies much current work in language acquisition. In a very simplified version, the relevant argument goes something like this: on the basis of the available input the child formulates certain hypotheses about the structure of the language he is exposed to, i.e. he constructs a (fragmentary) grammar of that language in his own ways. The child will subsequently compare the utterances generated by his own grammar with those of the adult language and will thus find out that - at least with respect to some structures - his grammar is in conflict with the adult grammar. He will then proceed to revise his grammar in such a way that it will be either identical or at least closer to the adult grammar (see e.g. Brown 1973).

This view of language acquisition as a continuous process of testing and revising hypotheses presupposes a satisfactory answer to a question
which, as far as I know, is still a completely open issue; namely, how does the child find out, in a principled way, if the rules of his own grammar are in conflict with those of the adult language? I believe the reason nobody has given much serious attention to this question is simply that prima facie the answer is so obvious: the child finds out that his grammar is in conflict with the adult grammar by comparing his own utterances with those produced in his environment. But upon closer examination it becomes clear that this answer cannot possibly be correct. The reason is essentially the same that led researchers to reject the view that language acquisition is a process of inductive generalization.

Let us consider again the negation data. How does the child find out that the type of sentence-external negation he uses at Stage I is not a grammatical structure in adult English? One might be inclined to answer that the child will come to know this simply because he never hears anybody in his environment use such a construction. Recall, however, our observations in the previous section, namely that a speaker's knowledge of the grammaticality of a structure cannot possibly be based on whether or not he has been exposed to that structure in actual speech. Since a person's linguistic experience comprises only a (relatively small) subset of the sentences possible in a language, he always has to consider the possibility that the non-occurrence of a given structure in his linguistic environment is merely accidental. The same argument obviously holds also for the child. Suppose the child were to accept as grammatical only those of his own structures that he hears other speakers in his environment use. In this case he could acquire only a finite body of structures, thereby failing to achieve full competence of his language, which encompasses knowledge of an infinite set of structures. Still worse, at least during the early stages the child could not accept any of his structures as grammatical since they are all in conflict with the adult language. Under such circumstances one might expect the child to revise his grammar continuously; however, as we observed before, language development proceeds in stages, so that one might wonder what makes the child ignore the available conflicting evidence during the time within each stage. Finally, it is far from obvious in what sense comparing his own grammar with the adult language should force the child to move from sentence-external to sentence-internal negation rather than the other way round. Both construction types are totally unacceptable from the point of view of the adult language, so that the child will never be exposed to either one.

A somewhat more sophisticated version of the same argument goes something like this: the child will realize that whenever he uses sentence-external negation the adult will use (some kind of) sentence-internal negation. Assuming that the adult version is the correct one, the child
will be led to revise his grammar accordingly. But here again the essential problem remains: if the child does, in fact, find out that in all cases where he himself would use sentence-external negation, the adult uses a different construction type, then, of course, he has good reasons for giving up his negation structure. But how does he find this out? If the child believes that sentence-external negation is a possible option for negating sentences, there is nothing in the input that forces him to give up this belief, since, as noted above, the fact that this construction does not occur in adult speech cannot be considered as crucial evidence. Of course, one might conceive of the child as hypothesizing that sentence-internal rather than sentence-external negation is the correct structure. But where does this hypothesis come from? There is nothing in the data that would force such a hypothesis. At least for some extreme cases one might want to argue that whenever the child says something like me no like book the adult will comment on this utterance by saying oh, you don't like the book. Since the child cannot fail to notice this very specific and situation-bound discrepancy between his own and the adult's utterance, he will conclude that the structure he uses is ungrammatical. However, this view faces essentially the same problems. While the adult may respond by saying oh, you don't like it, he obviously has other reasonable options, e.g. oh, you hate it, oh, you dislike it, you don't want it, and the like. As far as we know, the child does not conclude from the latter cases that like does not have a negative form and therefore should be replaced by hate or dislike. In much the same way the child does not conclude from hearing passives that actives are ungrammatical, nor is the occurrence of contracted forms considered to be evidence against the existence of uncontracted form (see Bellugi 1967). It thus appears that the child is somehow able to distinguish between relevant and irrelevant data; he seems to know that some adult utterances constitute crucial evidence against his own rule system while others do not. The interesting question is, of course: what is the basis for the child's ability to distinguish between relevant and irrelevant data? Obviously there is nothing in the adult's utterances that would force such a distinction. The crucial observation is that the child can only evaluate sentence-internal against sentence-external negation if he already knows that sentence-internal negation is a potential replacement candidate for his earlier construction. Speech input thus helps to either confirm or disconfirm the child's hypothesis about which of the two is the 'better' structure, but it does not explain where this hypothesis comes from in the first place (cf. Fodor(1975,1981) for a similar argument). By the time the child considers sentence-internal negation as the possibly more 'mature' structure, he has already left Stage I and moved on to Stage II, so that we
are left with the original problem: what is it that makes the child move from one stage to the next? These considerations suggest that the view of language acquisition as a hypothesis-testing process misses the crucial point that calls for an explanation, and must therefore be regarded as an inadequate answer to the developmental problem of language acquisition.

As far as I can see, there is only one type of evidence that would tell the child unambiguously that a given structure is ungrammatical; namely, if somebody came along and told him so, i.e. if the child had access to negative evidence. There are two problems with this story. First, there is little reason to suppose that children do have systematic access to negative evidence. Even though adults correct children's utterances once in a while, they do not do so regularly or consistently, in which case it would be a matter of pure chance whether or not the child found out the ungrammaticality of a given construction. More importantly, there are various reports in the literature (see Smith 1973, Brown 1973) showing that children tend to simply ignore adults' statements of correction. That is, even if the child did have regular access to negative evidence, apparently he could not care less.

Among the more specific proposals to explain the temporal aspect of language acquisition one of the best-known approaches is Slobin's (1973) cognitive theory in which the notion of Operating Principle plays the central role. The leading idea is that the human mind is equipped with a set of principles which impose constraints on the way in which language data are being processed. It is these processing constraints, Slobin argues, which determine the specific course of real-time language development. Slobin crucially assumes that developmental regularities of the type illustrated in Table I reflect mechanisms of mental processing rather than principles which organize human knowledge. By way of example let us look at Slobin's (1973:182) Operating Principle D which, in a simplified form, reads: avoid interruption or re-arrangement of linguistic units. This principle captures the intuitive idea that it is easier to mentally process uninterrupted items and structures to which no movement rule has applied. Consequently, the principle predicts that such easily processible structures will be preferred by children and will thus appear in the earlier stages of language development. It is immediately obvious in what way Principle D is capable of accounting for early-stage constructions such as sentence-external negation. Placing the negator inside the sentence between NP and VP would evidently interrupt the sentential structure and thus impose a relatively heavy processing load on the child's cognitive mechanisms. Principle D correctly predicts that children will therefore prefer a construction in which the negator is merely adjoined to a sentential structure otherwise
'untouched'. In much the same way, Principle D explains why children produce uninverted structures such as where I can go instead of the inverted where can I go, or why they initially avoid French ne ... pas or Arabic ma ... š (Omar 1973).

Note first of all that Slobin's Operating Principles - assuming they are descriptively adequate - are designed to account for the fact that children use construction types for which there is no direct model in the adult language. That is, while nothing in the linguistic input which the child receives could conceivably force the use of sentence-external negation, there are specific properties of the child's cognitive processing mechanism - as expressed by Operating Principle D - which, metaphorically speaking, 'convert' adult negation structures into the corresponding early child constructions. In this sense Slobin's theory can be taken to correctly answer the question of where those child constructions come from that have no direct equivalent in the language.

Note, however, that Slobin's principles differ in at least one important way from the type of principle that recent work in generative grammar has proposed as a characterization of Universal Grammar. Principles of UG such as the Projection Principle, the θ-Criterion, the Case Filter, etc. (see Chomsky 1981) must apply to any configuration meeting the relevant structural description. Thus the Case Filter e.g. specifies that any lexical NP must have abstract Case; that is, any NP in any configuration must either be non-lexical or have Case, otherwise the string will be ungrammatical. This strict applicability, however, does not hold for Slobin's principles. An operating principle may apply or it may not apply, and there is nothing in the principle that tells us explicitly under exactly what conditions it does apply. Thus in the acquisition of negation Principle D applies at Stage I but does not apply at Stage II, thus accounting for the difference between sentence-external and sentence-internal position of the negator. That is, we may find out if a given principle applies at some stage X, but there is no principled way of finding out why it applies at X rather than at some other stage. In other words, Slobin's theory does not explain why the application of a given principle is restricted to a specific stage nor what causes the child to move from one stage to the next. Note furthermore that an operating principle is not simply given up at a particular point in the child's development, but may rather reappear in the acquisition of different types of structures. For example, in the acquisition of French children begin with sentence-external negation just like English children. Many stages later, however, Operating Principle D appears again: before they acquire the split negator ne ... pas children will use a single-word negator. It thus seems that Slobin's Operating Principles correctly explain why certain structures occur at all, but they do not explain why these struc-
occur only at specific stages. In other words, these principles cannot account for the stage-transition aspect of language development, i.e. why children suddenly give up a structure and move on to the next stage.

A somewhat different view of language acquisition has been developed within the framework of generative grammar. While generative grammarians, notably Chomsky himself, have tied the conceptual background of their work primarily to issues relating to the logical problem of language acquisition, abstracting away from the temporal dimension of the developmental process (Chomsky 1975, 1980; Wexler & Culicover 1980; Lightfoot 1982), there have been some recent conceptual modifications with respect to how language acquisition should be properly viewed. These modifications have, to some limited extent, re-introduced the temporal aspect of language development and have led to what is called the parameter model of language acquisition. The fundamental idea is something like this: there is a set of abstract principles, technically known as Universal Grammar, which constrains the class of possible human languages (or rather their grammars)(16) and thus constitutes the child's biologically determined a priori knowledge of what is a possible human language; that is, of all logically conceivable grammars or rule systems the class of (possible) human languages constitutes a relatively small subset, and the properties that hold across this subset are specified by Universal Grammar. Natural languages will therefore have certain non-accidental properties by virtue of being members of this subset. However, within the limits imposed by Universal Grammar, languages may select all kinds of different, idiosyncratic properties, so that 'existing languages are a small and in part accidental sample of possible human languages' (Chomsky 1981:1). Given these assumptions one might ask how the child's experience interacts with his innately specified a priori linguistic knowledge in the actual developmental process. One possible answer to this question is the idea that, in a sufficiently structured theory of Universal Grammar, the individual principles will leave open a certain narrowly constrained range of options as to what kinds of structural properties different languages may choose. These options - technically known as parameters - reflect the accidental properties of natural languages and thus need to be determined or fixed by experience. Under this view 'the languages that are determined by fixing their [the parameters'] values one way or another will appear to be quite diverse, since the consequences of one set of choices may be very different from the consequences of another set; yet at the same time, limited evidence, just sufficient to fix the parameters of UG, will determine a grammar that may be very intricate and will in general lack grounding in experience in the sense of an inductive basis' (Chomsky 1981:4).
By way of example(17) let us suppose - counterfactually - that natural languages have either underlying SOV or SVO order but no other order, and that this property is specified by Universal Grammar in some appropriate way that need not concern us here. If a child learning English faces a sentence such as this book I have seen before, he will not conclude that the language he is exposed to has underlying OSV order, since the principles of Universal Grammar rule out the possibility of OSV in natural languages. On the other hand, the child still has to choose between underlying SOV and SVO order for the language to be acquired, and it is this choice that needs to be determined by experience, with possibly far-reaching consequences for other domains of the grammar. If this view is correct, then language acquisition can be seen as a process in which the child successively fixes the various parameters left open by Universal Grammar. According to Chomsky (1980) the notion of parametrization characterizes language development as a strongly maturational process, so that language growth rather than language acquisition would be the appropriate term. Since this view provides a principled solution(18) to the logical problem of language acquisition, it might be fruitful to explore some of its consequences with respect to the temporal aspect of language development, in particular to the stage-transition problem outlined above.

While, to my knowledge, Chomsky himself has not published on these questions in detail, some other authors have expressed very precise ideas about the relationship between parametrization and child language. The most extensive treatment in this area that I know of is White's (1982) dissertation as well as her (1981) paper. It seems that one of the central claims in White's work is that child grammars are possible grammars. That is, child grammars fall within the limits imposed by Universal Grammar in exactly the same way adult grammars do. The class of humanly accessible grammars includes not only all existing mature languages but also all grammars that characterize the child's linguistic knowledge at any stage of his development. This view implies that a child grammar will never violate any principle of Universal Grammar, though, of course, individual parameters may still be undetermined. White is fairly vague about why she believes that child grammars are possible grammars. She seems to think that this view somehow follows from the parameter model:

'... by construing the principles of grammar as part of the a priori endowment of the child, one includes the class of child grammars within the class of possible grammars and this has certain consequences for how language will emerge in the child. It is a reasonable working hypothesis that children's grammar construction is limited by the constraints of Universal Grammar so that they do not have to evaluate the full range of grammars that would be logically possible, were they working from inductive principles alone.' (White 1981:242)
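The logic of parameter fixing can be made concrete with a small toy illustration. The sketch below is not part of any published proposal; the names (POSSIBLE_BASE_ORDERS, fix_word_order_parameter) and the Python format are expository assumptions of mine. It merely encodes the counterfactual example above: Universal Grammar offers only SOV and SVO as candidate settings, so a topicalized OSV string can never be mistaken for evidence about the underlying order.

# Toy sketch (expository only): a two-valued word-order parameter that is
# fixed solely by evidence compatible with the options UG makes available.

POSSIBLE_BASE_ORDERS = {"SOV", "SVO"}      # the counterfactual assumption above

def fix_word_order_parameter(observed_surface_orders):
    """Return the first UG-compatible order encountered, or None."""
    for order in observed_surface_orders:
        if order in POSSIBLE_BASE_ORDERS:
            return order                    # this datum fixes the parameter
    return None                             # parameter remains open

# 'this book I have seen before' surfaces as OSV; it cannot fix the
# parameter, whereas an ordinary SVO declarative can.
print(fix_word_order_parameter(["OSV", "SVO"]))    # -> SVO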
In her dissertation on the acquisition of English and Italian Hyams (1983) is more explicit on this point: 'A parametrized theory of grammar makes certain empirical predictions about the shape of the intermediate grammars. One obvious prediction is that these grammars, though perhaps not fully specified, will not fall outside the limits imposed by UG.' (my emphasis; p. 9)
It seems to me that the claim that child grammars have to fully fall within the limits imposed by Universal Grammar is essentially a matter of empirical fact and thus may be true or not. However, I do not see any compelling reasons for assuming that such a view is a logically necessary consequence, if one accepts the parameter model of language development. In fact, I believe that the parameter model as outlined in Chomsky (1981) is fully compatible with an alternative view, namely that child grammars may and, in fact, do violate principles of UG at various developmental stages. I will explore this view in detail in section 4.

It seems to me that part of the logic behind White's and Hyams' view derives from terminological conventions. It has become customary to refer to the system of mental structures that is biologically determined as the 'initial state', suggesting that this is what the child is born with and can thus use for the evaluation and interpretation of his experience. In contrast, the term 'final state' is used to refer to the essentially stabilized system of knowledge that has grown out of the interaction of experience and initial knowledge. The terms 'initial state' and 'final state' have a definitely temporal flavor suggesting - misleadingly, as I believe - that whatever is part of the initial state must be fully available from the very beginning. If, however, we interpret 'initial state' in a more restrictive manner as that system of knowledge that is biologically rather than experientially determined, then the question of when and how this knowledge becomes available or productive is totally open. That is, the question of what must be attributed to the genetic program is something different from the question of how the genetic program interacts with experience in real time. In order to make this point more explicit let us adopt a strategy which Chomsky has used many times in his attempts to argue for a rationalist approach to linguistic inquiry. The force of an argument will sometimes become more transparent if it is applied to phenomena of physical rather than mental growth. Let us assume that the property of the human organism to grow teeth is somehow genetically determined. That is, there is a set of innately specified principles of some presumably biochemical sort that determine human physical growth, including the property of having teeth. At an appropriate level of abstraction one might view this innate set of principles of physical growth in much the same way as Universal Grammar, which is the set of
principles of mental growth. Following the logical structure of White's and Hyams' argument, these considerations should force us to conclude that, at any stage of their physical development, children must have all the properties specified by the genetic program, hence that they must have teeth. Of course, children are toothless at various stages of their physical development; they are born without teeth and they usually lose their first set of teeth around the age of four or five years. Nevertheless, this observation does not lead us to alter our view with respect to whether or not the property of having teeth is part of the genetic program. At the physical level it is common knowledge that genetic principles may themselves be subject to some kind of temporal development. That is, though the 'tooth-growing' principle is innate in the sense that it is part of the child's initial state, it appears to be somehow latent for a certain period of time and does not operate until some time after birth. That is, the fact that something is innate does not entail its manifestation at the earliest stages of the child's development. In an analogous way the assumption of a parameter model of language acquisition does not logically entail the view that child grammars are fully constrained by UG, though, of course, this may turn out to be the case for empirical reasons. In what follows I will argue that this view faces a number of serious problems, in particular with respect to the stage-transition issue. It thus may be fruitful to explore alternative conceptions about the relationship between innate principles and their interaction with experience.

It appears that a crucial assumption implicit in the parameter model is that children will, in principle, fix parameters correctly, i.e. children will not accidentally select the wrong parameter value. Following the standard assumption that parameter values are subject to a markedness hierarchy, we may conclude that a child will choose the (unmarked) default value unless there is specific evidence for some other (marked) option. Thus the only way for a child to ever make a 'mistake' and then re-fix a parameter would be to choose the default value at some early stage (because the relevant evidence is not yet available) and then select the correct value at some later stage on the basis of new positive evidence. This is the view proposed by Hyams (1983). She argues that with respect to the pro-drop parameter children learning English will first select the default [+]-value and will later re-fix the parameter to its [-]-value as soon as they perceive constructions with e.g. expletive there, it, etc. Apart from this possibility, however, children will always interpret the available evidence correctly as far as the fixation of parameters is concerned. This view also seems to be implicit in White's account. She argues forcefully for the idea that children's grammars are optimal grammars for a given set of data:
'As the child learns the optimal grammar by definition, the only way a child's grammar can be non-optimal is if the linguist has made a mistake. Unlike a linguist a child never makes such a mistake, never chooses a non-optimal grammar because he has no choice in the matter. The principles of grammar available to him and his perception of the data will result in his creating a grammar suitable for that data and that grammar will be optimal.' (White 1981:251)
The problem with this view is that it is far from obvious how to reconcile it with many of the available developmental data. Recall that in the acquisition of negation, for example, children pass through an ordered sequence of stages, from sentence-external to sentence-internal negation, and finally to correct adult structures. That is, children do make 'mistakes' and they continually have to restructure their grammar until they finally achieve adult competence. Consequently, the question arises how the child realizes that a given structure is incorrect and must therefore be replaced by some other structure: the old stage-transition problem. The only answer to this problem from Hyams' version of the parameter model is in terms of 'default value → specified value'. It seems to me that this view does not provide a strong enough mechanism to account for the child's systematic restructuring of his grammar.

A somewhat different solution is offered by White. She begins by arguing quite convincingly that a grammar is obviously always a grammar of something, and this something is a certain set of data. It is self-evident that the child does not construct an optimal grammar as such, but only an optimal grammar for a certain set of data. For example, there is an optimal grammar for the sentences of English, and another optimal grammar for sentences of Chinese; but there is, of course, no optimal grammar for both Chinese and English. As soon as the data change, a one-time optimal grammar will no longer be optimal, but has to be replaced by a new grammar which, again, is optimal for the new set of data. This is White's central idea. Consequently, she argues, while the child constructs an optimal grammar at each stage, he will perceive the linguistic input differently at different stages. The child will construct an optimal grammar for the data he perceives, and as soon as his perception of the data changes he will construct a new, again optimal grammar for the new set of data. The fact that the child moves from one stage to the next is thus a matter of changing perception:

'The child at one stage will construct a grammar for the data of that stage; at another stage he will construct a different grammar. Both will be optimal representations of the child's linguistic knowledge at each stage. Therefore, one must assume that the child's perception of the data changes. Despite apparently similar input data at different stages, the child's intake actually varies (due to maturational factors, increasing memory, etc.). The child's
perception of the data is different from the adult's. The grammar that he comes up with will be optimal for his own perception of the data, i.e. the relevant triggering experience, and the grammar that emerges at any given stage can provide us with a clue as to how he reacts to the data.' (White 1981:247-248)
Note, first of all, that White's view is a fairly specific proposal as to how to solve the stage-transition problem, i.e. to explain why children move from one stage to the next. According to White the temporal aspect of language development has nothing to do with principles of Universal Grammar, parameter-fixing and the like, but is strictly a matter of perception. It is only after perception has changed that Universal Grammar comes into play. I believe that, in the absence of any specific proposal as to what the principles that cause perceptual changes look like, White's view is somehow question-begging. By viewing language development largely as a restructuring process in which the child constructs optimal grammars for differently perceived sets of data, White places the solution to the developmental problem, in particular the stage-transition aspect, outside the domain of the theory of grammar; more specifically, into the domain of the theory of perception.

Note that White's theory of perceptual changes essentially amounts to claiming that there is some kind of perceptual mechanism that converts an adult utterance into the corresponding child structure. It is only after this conversion that the principles of Universal Grammar (as well as any other mechanisms of structural analysis) begin to operate. That is, when an adult says something like Johnny doesn't like ice-cream the child will perceive this utterance as no Johnny like ice-cream, and it is this latter structure that is the input to the language acquisition device. While obviously not impossible in principle, it is difficult to see - considering what is generally known about perceptual mechanisms - how this type of structure-conversion should work in detail. Note furthermore that such a powerful and necessarily unconstrained perceptual mechanism makes Universal Grammar largely superfluous. If perception can shape the input data in such a way that the child will be forced to choose one specific generalization (out of many logically possible generalizations), there is not very much left for Universal Grammar to do. Perceptual principles will always shape the input data in such a way that the child cannot possibly miss the correct solution. Such a view, however, comes close to the idea that language and language acquisition are merely perceptual phenomena and nothing else, a view which Chomsky and other generative grammarians have convincingly argued against. It should be clear that White's view about the role of perceptual changes in language acquisition is a logical consequence of her original
claim that child grammars are possible grammars, i.e. child grammars are fully constrained by UG. If the principles of Universal Grammar are fully operative on any kind of child grammar, then, of course, they cannot be responsible for the restructuring process, i.e. for the replacement of one grammar by another. Let us suppose, for the sake of the argument, that there is, in fact, a set of principles that determine the child's specific perception of the speech input. The crucial question is whether or not it can be shown that these principles are language-independent and must therefore be specified in a theory of (auditory) perception, rather than in the theory of grammar. There is little doubt that some such perceptual principles do exist. For example, there seems to be a principle of salience (cf. Smith 1973) which captures the fact that signals with certain acoustic properties are more easily perceivable than others. This principle is truly perceptual in the sense that it operates not only in language acquisition but also in many other kinds of auditory perception. It is thus a principle of the theory of perception which interacts with, but does not belong to, the theory of grammar. It seems doubtful that those principles that determine the child's early perception of linguistic structures along the lines suggested by White are principles of the theory of perception in this sense. The reason is simply that the relevant principles must make crucial reference to grammatical facts. They must be able to distinguish between no and not, between different syntactic positions, between syntactic categories such as S or VP, etc. If this is the case, however, then these principles are not principles of the theory of perception, but rather principles of the theory of grammar (unless, of course, one assumes that the properties of language can be reduced to general properties of perception). Consequently they cannot simply be dismissed from considerations concerning the interaction of Universal Grammar and developmental phenomena.

It seems that White's attempt to remove the stage-transition problem from the domain of Universal Grammar by assigning it to the level of perception does not actually buy us very much. In dealing with the child's (linguistic) knowledge at different developmental stages and how this knowledge changes over time, we are concerned with genuinely linguistic matters and thus with aspects of the theory of grammar. Psycholinguistic evidence furthermore suggests (cf. Smith 1973) that people's grammatical knowledge determines their linguistic perception, rather than the other way around. Consequently, White's view, too, fails to give us a satisfactory answer to the original question: what is it that makes the child realize that a given structure, which has served him nicely for an extended period of time, is inadequate, so that he will move to the next developmental stage? In the following section I will explore an alternative view which shares
most of the fundamental assumptions of generative grammar, but at the same time avoids the problems inherent in White's approach.
4. THE MATURATIONAL SCHEDULE OF UNIVERSAL GRAMMAR
In the preceding sections I have argued that there are two fundamental problems with which any adequate explanatory theory of language (acquisition) has to deal: the logical problem of language acquisition, which focuses on the question of why children are able to acquire language at all, and the developmental problem of language acquisition, which relates to the question of why children acquire language the way they do. I have tried to demonstrate that one of the most interesting phenomena within the developmental problem concerns the stage-transition aspect of language acquisition: how do children manage to realize, in a principled way, that structures they have used productively for an extended period of time are ungrammatical in terms of the adult grammar? This insight is obviously a prerequisite to the formulation of a new set of rules and therefore to the emergence of a new developmental stage.

Of course, it is logically conceivable that two different and unrelated sets of principles need to be invoked to provide an adequate answer to each of the problems outlined above. That is, one might think of one set of principles (= Universal Grammar) responding to the logical problem of language acquisition and a further independent set of principles which explains why children pass through an ordered sequence of developmental stages of the type illustrated and discussed in section 3. It seems to me that, in the past, most researchers have implicitly assumed that such a view is, in fact, the correct one. Discussions on the conceptual basis of Universal Grammar have usually focused on the logical problem (see Chomsky 1965, 1975, 1980; Hornstein & Lightfoot 1981; Lightfoot 1982), while psycholinguists have, for the most part, tried to specify a different set of principles that were meant to account for the developmental aspect of language acquisition (see Slobin 1973; Clark & Haviland 1974; Marantz 1982; Clahsen 1982). Under a somewhat different approach one might easily conceive of the possibility of handling both problem domains with the same set of principles. Note, first of all, that it is entirely a matter of empirical fact whether one or two sets of principles are needed to cope with the logical and the developmental aspect of language acquisition. On the other hand, it would clearly be preferable, on a priori grounds, to find a solution in which both aspects are adequately accounted for by one and the same set of principles. The reason for this a priori preference is simply the following: if we can show that the principles of Universal Grammar are
structured in such a way that developmental regularities will also follow from them, we thereby attribute more structure but less material to the child's genetic program. It is this option that I wish to explore in the present section.

The leading idea behind the proposal I wish to make is that the principles of Universal Grammar are themselves subject to an innately specified developmental process(19). While I retain the traditional idea that Universal Grammar constitutes the child's genetically determined a priori knowledge about the structure of humanly possible languages, I will suggest that during the developmental process the various principles of Universal Grammar will successively emerge and thus become operative in a specific temporal order. That is, although the set of universal principles is fully and exhaustively specified by the child's genetic program, each of these principles is somehow 'latent' up to a specific point in time after which it will start to operate and thus constrain the child's knowledge of what may be a humanly accessible language. Most crucially, I will suggest that the sequence in which the various principles of Universal Grammar emerge does not reflect any externally determined factors in the child's linguistic experience, but rather is itself an inherent part of the genetic program. Under this view the unfolding of the genetic program that characterizes the child's language faculty resembles very closely the genetic program determining the child's physical growth. While it is uncontroversial that the crucial physical properties of human organisms are genetically determined, it is, at the same time, clear that many of these properties do not emerge until a specific point in the child's bodily development. In this sense I wish to suggest that the genetic program which determines the child's language-related mental growth incorporates two parts: a set of parametric constraints on possible mental representations and a maturational schedule for the emergence of these constraints.

Let us assume, in the spirit of this suggestion, that there is a set of principles P₁, P₂, P₃ ... Pₙ - technically known as Universal Grammar - which constrains the class of humanly accessible languages. Let us furthermore assume that the relevant genetic program contains a maturational schedule which specifies a sequence of points in time t₁, t₂, t₃ ... tₙ, where j > i implies that tⱼ is later in real time than tᵢ. Under an appropriate idealization of this view(20) I will assume that there is a one-to-one correspondence between the principles of Universal Grammar and the sequence of points in time, such that, for any i, tᵢ specifies the point in time at which the constraint expressed by Pᵢ emerges and thus becomes available for the construction of the child's grammar. This view crucially implies that at any stage tᵢ the child's grammar construction will be constrained by the set of principles bearing indices ≤ i, while at the
same time the structures which the child uses may violate principles bearing indices greater than i. Ignoring details for the moment, the fundamental idea behind this view is that the various principles of Universal Grammar will successively emerge according to a specific maturational schedule, so that at any developmental stage the child's grammar construction will be guided (or rather constrained) by a proper subset of universal principles, i.e. those that the maturational schedule has already made operative, while at the same time all those principles which are still 'latent' at that stage may be violated. If this view is correct, in principle, then it becomes fairly obvious why and under what conditions the child will be forced to restructure his grammar and thus pass to the next higher stage. What makes the child realize that his grammar needs to be restructured is the emergence of a new principle, which his present grammar violates. Suppose that at some stage the child's grammar violates a principle Pᵢ. According to the innately specified maturational schedule, Pᵢ will then emerge at some specific point in time, making the child realize that his grammar is in conflict with a constraint of Universal Grammar. He will therefore have to restructure his grammar in such a way that it will henceforth meet the constraints expressed by Pᵢ. That is to say, the observable sequence of developmental stages reflects the maturational schedule that brings in the principles of Universal Grammar one after the other(21). What makes the child realize that something is wrong with his present grammar is neither a matter of perception nor of specific input data, but rather the emergence of a new principle of Universal Grammar, and it is this newly available principle which also tells the child which input data are relevant and which are not.

Whether or not this view of language development is in fact true has to be regarded as an entirely empirical question. If the basic idea is correct, then it should be possible to show that at some early stage the child's grammar violates a specific principle of Universal Grammar, while the immediately following stage instantiates a restructuring process in which the child reorganizes his grammar in accordance with the principle as it has just become operative. In other words, we should find evidence in developmental data for the maturation of principles of Universal Grammar. I will now look at some fairly well-known developmental data and try to show that the stage-transition aspect can, in fact, be regarded as reflecting the emergence of various principles of Universal Grammar. Note, however, that in this attempt we are facing a problem of a predominantly technical nature. Most of the principles of Universal Grammar that have been proposed in the literature relate to fairly complex structures which children do not use until relatively late in their
development, while the majority of developmental studies have focused on the earlier stages of language acquisition. It is therefore far from obvious how some of the better established principles of Universal Grammar can be taken to account for the development of such early structures. In order to obtain a more precise understanding of how Universal Grammar and language development interact, it would be necessary to either have more detailed accounts of later developmental stages or to identify those principles of Universal Grammar that are relevant also for structures of very limited complexity. In view of this problem the following account will be both highly selective and tentative. However, it seems to me that some of the basic notions in linguistic theory are already reflected in early child grammar and I will therefore try to demonstrate how the corresponding principles might account for the stage-transition aspect of language development.

Let us first of all look at some developmental data from the very earliest stages of language acquisition. It is customary in the psycholinguistic literature to refer to the early stage in which children produce the first multi-word utterances as the two-word stage or three-word stage, since children's sentences do not usually contain more than two or three words. Some such utterances taken from Miller & Ervin (1964), Bloom (1970, 1973), Brown & Fraser (1963), Braine (1963) are the following:
(33) allgone shoe
(34) allgone outside
(35) bye bye man
(36) night-night hot
(37) night-night boat
(38) more book
(39) more write
(40) other milk
(41) no dirty
(42) no down
(43) see ball
(44) want baby
(45) want up
(46) want bye bye car
(47) there ball
(48) there high
(49) here more paper
(50) hat off
(51) off bib
(52) that yellow
What is immediately striking about these utterances is that, from the point of view of the adult grammar, they do not seem to exhibit consistently any of the familiar syntactic patterns based on common grammatical notions such as subject, object, NP, VP, etc. It appears that in forming these sentences children use construction principles of a fundamentally different nature from those of the adult language. It was Bloom (1970, 1973) who first suggested that children's early two-word and three-word utterances are not formed on the basis of syntactic, but rather on the basis of conceptual relations; i.e. the child's first grammar maps directly from semantic relations to their surface expressions without any
intervening grammatical level. One of the crucial observations is that in the earliest two-word stage every utterance obligatorily contains a function word such as more, another, no, here, bye bye, allgone, want, etc. which expresses an inherently relational concept; children combine any one of these relational words with items out of a class of different words denoting objects, actions, properties, and the like. The resulting phrase or sentence expresses the scope of the relations with respect to the non-relational word: 'Their [ = the children's] early syntax, then, would appear to depend on the linguistic induction that such function words combine with other words in a direct and linear relation, and the meaning relation between the two words derives from the meaning of the function word. There is a linear one-to-one correspondence between the meaning of such words and the relational meanings between such words combined with other words.' (Bloom 1973:116)
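Bloom's analysis can be pictured schematically as follows. The sketch is purely expository and not taken from Bloom; the word list and the function name are my own assumptions. It represents each early utterance as a relational function word plus a non-relational complement, which is all the structure the child's first grammar is claimed to contain.

# Expository sketch of the 'relational word + complement' analysis of
# early two-word utterances (names and word list assumed for illustration).

RELATIONAL_WORDS = {"more", "no", "here", "there", "allgone", "want", "other"}

def analyse(utterance):
    """Split an early utterance into (relational word, complement)."""
    words = utterance.split()
    if words and words[0] in RELATIONAL_WORDS:
        return words[0], " ".join(words[1:])
    return None, utterance                  # no relational word identified

for u in ["more book", "no dirty", "want up", "allgone shoe"]:
    print(u, "->", analyse(u))
# 'more book' -> ('more', 'book'): the meaning relation is contributed
# entirely by the relational word, as in Bloom's description.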
In a comprehensive review of the then available literature Brown (1973:178) concludes that 'learning to express a small set of semantic relations and their combination constitutes the principal work of Stage I'. Some such semantic relations that develop at Stage I are: agent-action, entity-locative, possessor-possession, etc. The child's earliest construction principles are thus semantic (or perhaps conceptual) in nature. The child forms semantic rather than syntactic categories, and the way in which words are combined to form more complex utterances is guided by semantic, not grammatical rules. To my knowledge this view is fairly uncontroversial among psycholinguists, at least as far as the factual observations are concerned (see also Brown 1973, Bowerman 1973, Schlesinger 1974). I will therefore assume that it is essentially correct, i.e. the child's grammar(22) during the two-word and three-word stage is based on a small set of semantic categories which enter into combinations that express semantic relations. We may formalize this view in something like the following way (for ease of exposition let us focus on utterances with function words as described by Bloom): children form a lexical category R containing only words which inherently express a relation (no, more, here, want, etc.). R is thus semantically defined. Since R is the obligatory constituent in the relevant multi-word structures, we may assume that R forms the head of any construction in which it occurs. That is, R can take a certain range of complements which may be categories of different types that are again semantically defined (objects, actions, events, etc.). If this approach is correct then there will be a phrasal category RP (Relation Phrase) such that RP is some projection of R. (Whether or not RP is the maximal
projection of R in the sense that there are 'intermediate' projections of R dominated by RP is an empirical question which need not concern us here.) For a sentence such as here more paper we thus obtain the following structure (given here in labeled bracketing):

[RP [R here] [RP [R more] paper]]
In a similar way we may look at other semantic relations, e.g. agent-action or entity-locative, as being projections of a semantically defined head category. Let us assume that this view of early child grammar is correct, in essence. The interesting question is, of course, how things develop from here. Bloom is somewhat vague about this point. While she explicitly argues that 'cognitive categories do not develop in a one-to-one correspondence with mental linguistic categories' (1973:121), she does not make any specific proposal as to exactly how syntactic categories and grammatical relations evolve from a rule system based on semantic notions. Nevertheless, she does assume that children will have to replace the notion of semantic relation by the concept of grammatical relation in order to acquire full competence of the adult language. A different approach is pursued by Schlesinger (1974). Since children start out with semantic relations, Schlesinger argues, and it is not conceivable how they could discover the concept of grammatical relation, we are forced to conclude that natural languages (= adult grammars) are not really syntactic objects, but rather that what appear to be grammatical relations are reducible to semantic facts. I will not discuss Schlesinger's view any further, because I feel that it is difficult to reconcile with what is currently known about the formal properties of natural languages. While one might possibly argue that even later stages in child language development are semantically-based, I do not see how the results of syntactic research during the past three decades can be plausibly accommodated with the view of language as a purely semantic object (for a detailed critique see also Marantz (1982)). We are thus facing the following situation. While the earliest child grammars(22) are based on semantic categories and exclusively express semantic relations, adult grammars crucially contain syntactic categories
and encode grammatical relations. Consequently the child has to discover at some stage that his original semantically-based grammar(22) is somehow misconceived, and that formal syntactic categories and grammatical relations are what needs to be encoded in his grammar. The question is: how does the child find this out? It seems to me that this question cannot adequately be answered by exclusively appealing to the input that the child receives. Note first of all that many of the child's utterances are perfectly well-formed from the adult point of view or could at least be interpreted in terms of grammatical relations. If Bloom's story is correct, however, then a sentence such as want baby does not represent a VO relation, but rather an R + Complement relation, where want belongs to the same lexical category as more, allgone, here, etc., but to a different category from write or eat. Thus the child's grammar will permit strings such as want up or want hot which are ungrammatical as a VO relation. As a consequence the child might discover that some of his utterances also occur in adult speech, while others do not. However, this discovery, as such, should not affect the child's grammar construction, since, for reasons outlined in section 2, the non-occurrence of child structures in adult speech cannot be taken as disconfirming evidence for the child rules. Consequently there is nothing in the input that tells the child, in a principled way, that he is on the wrong track.

In the spirit of the ideas outlined above I would like to suggest that it is a principle of Universal Grammar emerging at the end of the two-word/three-word stage that tells the child that his present grammar is somehow misconceived. Once the principle has emerged in accordance with the maturational schedule, the child's grammar will violate a principle of Universal Grammar, and this is what forces the child to give up his original constructions and restructure his grammar. That is, restructuring is a process that brings the child's rule system in line with Universal Grammar. Notice first of all that, in a very intuitive sense, early child grammars of the type outlined above do not appear to be possible grammars in White's sense. If it is an essential, i.e. biologically necessary, property of human languages that they are syntactic rather than purely semantic objects, then semantically-based child grammars do not qualify as possible grammars, in an obvious sense. The principle (or possibly set of principles) which, I would like to suggest, forces the child to restructure his grammar at the end of the two-word/three-word stage is the X-bar schema. If it is true that the X-bar schema specifies constraints on the class of humanly accessible grammars, then this schema will require any natural language to have - among other things - the following properties:
a) there are only four lexical categories that may enter into the projection of phrasal categories, namely N, V, A, and P. That is, any phrasal category must have one of these and only these lexical categories as its head.
b) in the unmarked case, complements occur uniformly either to the right or to the left of the head (for details see Stowell 1981). That is, if in a language the complements of P occur to the right, then the complements of all other lexical categories (N, V, A) will occur to the right. For left expansions the same holds respectively. In other words, constituents in a sentence cannot occur in random order (a schematic sketch of both properties is given below).
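To make the two properties just listed more concrete, here is a minimal sketch. The representation of a grammar as a list of (head category, complement side) pairs, and the function name, are expository assumptions of mine rather than part of the theory; the sketch simply checks property (a), that every phrase is projected from N, V, A, or P, and property (b), that complements uniformly appear on one side of the head.

# Minimal sketch of the two X-bar properties (a) and (b) listed above.
# A grammar is represented as a list of (head_category, complement_side) pairs.

PERMITTED_HEADS = {"N", "V", "A", "P"}              # property (a)

def respects_xbar_schema(grammar):
    heads_ok = all(cat in PERMITTED_HEADS for cat, _ in grammar)
    sides = {side for _, side in grammar}
    uniform_order = len(sides) <= 1                 # property (b)
    return heads_ok and uniform_order

# The early child grammar projects an RP from the relational category R
# and orders complements freely; the adult-type grammar does neither.
child_grammar = [("R", "right"), ("R", "left")]
adult_grammar = [("V", "right"), ("P", "right"), ("N", "right"), ("A", "right")]
print(respects_xbar_schema(child_grammar))          # False
print(respects_xbar_schema(adult_grammar))          # True

Once the schema becomes operative according to the maturational schedule, a grammar of the first kind is flagged as a violation, which is exactly the restructuring trigger discussed in the next paragraph.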
Let us assume that as part of the genetic program's maturational schedule the X-bar schema emerges some months after the onset of the two-word/three-word stage. Until that point in time the child has used a semantically-based grammar of the type outlined above. As soon as the X-bar schema becomes operative, the child will realize that his current grammar is in conflict with constraints specifying the class of humanly accessible languages. First, the child's grammar contains - among other things - a lexical category R which enters into a phrasal projection RP. Such a projection, however, is ruled out by the X-bar schema, since only N, V, A, and P may be heads of phrasal categories. Furthermore the X-bar schema requires a certain canonical order within each phrasal category, whereas the order in the child's grammar is free. This conflict between the child's grammar and the structural requirements imposed by principles of Universal Grammar initiates a process of restructuring through which the child brings his rule system in line with a new set of constraints. In the case under discussion the child will abandon the rule(s) of RP-expansion and replace it/them by rules operating on 'permissible' phrasal categories such as NP, VP, AP, and PP(23). This restructuring process will lead to a new developmental stage in which sentences encode grammatical relations as specified by the X-bar schema rather than purely semantic relations.

I would now like to turn to a set of data for which I hope to show how a number of successively emerging principles of Universal Grammar can explain an observable order of developmental stages. The phenomena I wish to examine concern the acquisition of word order in German. One interesting aspect of German word order is that main clauses have SVO, while embedded clauses have SOV order. For reasons which are extensively discussed in the literature (see Ross 1970, Maling 1972, Koster 1975, Kohrt 1976, Bartsch et al. 1977, Thiersch 1978), it is generally assumed that the base-generated order corresponds to the one in embedded clauses, i.e. SOV. A transformational rule, commonly called Verb-Second (V2), applies to main clauses and moves
the finite verb into a 'second' position, yielding SVO (for details see Lenerz 1977, Thiersch 1978, Safir 1982, Sternefeld 1982). I will assume without further discussion that this view is essentially correct. The question then arises: how do children learning German manage to acquire these facts? One of the crucial observations is that during the earliest developmental stage in which sentences contain subjects, verbs, and objects, children use a large variety of different word orders. There is some disagreement in the literature as to how extensive this variation is: Park (1974) presents evidence suggesting that all logically possible orders do, in fact, occur, while Clahsen (1982b) argues that, even though there is no fixed linear order of constituents, verb-initial patterns do not occur. Since it is difficult to decide between Clahsen's and Park's evidence without any further data, we have to leave the question unsettled at this point. What is crucial in the present context, however, is merely that, in simple declarative sentences, children use patterns other than SOV and SVO. Considering the input children receive, this variable word order does not seem surprising, since on the surface German does, in fact, exhibit (nearly) all logically possible patterns:
(53) SVO - main clauses
(54) SOV - embedded clauses
(55) VSO - yes/no questions
(56) VOS - marginal in yes/no questions
(57) OSV - object relative clauses
(58) OVS - object topicalization
If the child merely responded to the various surface word orders that occur in adult speech, he might conclude that German has random constituent order, and this is exactly what children seem to be doing during the earliest stage. The interesting aspect in this case is the child's further development, which I will describe relying on Clahsen's (1982b) and Clahsen & Muysken's (1983) account. According to Clahsen the prominent feature of Stage II is that the children stop producing free constituent order and randomly use either SOV or SVO with finite verb forms, and SOV with non-finite verb forms. At Stage III finite verb forms stabilize to SVO, while non-finite verb forms continue to occur only with SOV. This is essentially the correct adult pattern, where complex verbals are 'split':
(59) Hans trinkt ein Bier
'John drinks a beer'
(60) Hans hat ein Bier getrunken
'John has a beer drunken'
Clahsen furthermore observes that children never make word order mistakes in embedded clauses, where they always place the verbal element(s) correctly in final position, so that the relevant rules seem to be firmly established by the time embedded clauses emerge. Under a somewhat simplified perspective we are thus facing the following developmental situation: children learning German begin with variable constituent order; at the subsequent stage they restrict word orders to SOV and SVO, which they partially use randomly. At the final stage they discover that SOV rather than SVO is the underlying word order, and simultaneously learn the rule of V-Second for main clauses. Once these rules are learned, we do not find any errors either in embedded clauses or in split verbals in main clauses.

Given this development, two questions call for an explanation: what makes the child select SOV and SVO as the correct orders in simple declarative sentences (transition from Stage I to Stage II), and why does he subsequently choose SOV rather than SVO as the underlying word order? As in the previous case I do not believe that these questions can be adequately answered by appealing to the child's input alone. As far as the word orders that the child actually hears are concerned, SOV and SVO have no privileged status; consequently there is nothing in the data that should force the child to restrict word orders to SOV and SVO at Stage II. In much the same way the input itself does not tell the child that SOV rather than SVO is the underlying word order (transition from Stage II to Stage III). In fact, logically the child has two options: either SVO is the underlying order, in which case a rule of Verb-final applies in embedded clauses and in split verbals; or SOV is the underlying order, with a rule of Verb-Second applying in main clauses. As the developmental data indicate, children consistently choose the latter option. I would like to suggest that the restriction of word orders at Stage II reflects the maturational emergence of the X-bar schema (or rather one specific aspect thereof), while the choice of underlying SOV over SVO can be explained in terms of the notion of government and of Emonds' (1976) Structure Preserving Principle, which emerges some time after the X-bar schema has become operative. As already outlined in the previous case, the X-bar schema incorporates - among other things - two restrictions: the first limits the choice of lexical categories that may enter categorial projections (i.e. N, V, A, and P), while the second concerns possible word orders. It is this latter constraint of the X-bar schema which, I will suggest, explains the transition from Stage I to Stage II. Recall that the X-bar schema constrains constituent order through the notion of 'maximal projection'. Each maximal projection constitutes some kind of a 'closed' domain consisting of a head and its complement, such that complements appear either
to the left or to the right of their lexical heads. In other words, the order of lexical heads and their complements is constrained in some specific way, so that base rules generating random word order are excluded in principle. It thus follows from the X-bar schema that, in the case of the VP as the maximal projection of the verb, the object must occur either directly to the left or directly to the right of the verbal head and cannot be separated from the verb by material which does not belong to the VP. I would therefore like to suggest that, with the maturational emergence of the X-bar schema, the child will know that the base-generated order of constituents must be in some way constrained. To put it differently, the child receiving input from a language such as German, which permits a wide range of different surface orders, will be forced to conclude that some of the orders are base-generated, while others are the result of movement. More specifically, the X-bar schema will 'tell' the child that base-generated word orders can be only those in which the object is directly adjacent to the verb(24). That is, all orders in which the subject separates the object from its verb must be derived via movement. Returning to the six different word orders that the child used at Stage I, the maturational emergence of the X-bar schema will thus rule out VSO and OSV as base-generated strings. We may also conjecture that, due to its marginal status, the child will not consider VOS as a possible base-generated word order. Consequently, the X-bar schema constrains the number of possible underlying word orders to three combinations of subject, verb, and object: SVO, SOV, and OVS. As the developmental data indicate, the child does, in fact, limit his use of word orders to SVO and SOV at Stage II. Under the present reasoning we would also expect OVS to occur, which, however, is not the case. At present I have no explanation for this fact; one might, however, speculate on why the child also rejects OVS as an underlying word order. Since both SVO and SOV show subject-initial position, whereas OVS has the subject in final position, the child may be led to conclude, on the basis of some kind of majority consideration, that the subject-initial position is the correct one. We may also conjecture that, by some principle of Universal Grammar, OVS is simply not a possible base-generated word order of human languages(25). Whatever the correct solution may be, it is clear that the X-bar schema constrains the number of possible underlying word orders in natural languages. Since the developmental data indicate that children move from a stage of nearly random word order to a stage of more restricted word order, we may account for this fact by assuming that the X-bar schema, as a principle of Universal Grammar, emerges at some time after the onset of language development and thus tells the child that languages simply do not permit random constituent order in the phrase-structure component.
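The way the adjacency requirement narrows down the child's hypothesis space can be shown with a small worked example. This is a toy sketch of the reasoning above, not an algorithm from the acquisition literature; the function name and the explicit 'marginality' filter are my own expository assumptions.

# Worked example: filter the six Stage I surface orders by the requirement
# that the object be adjacent to the verb, then set aside VOS because of
# its marginal status in the input.

SURFACE_ORDERS = ["SVO", "SOV", "VSO", "VOS", "OSV", "OVS"]

def object_adjacent_to_verb(order):
    return abs(order.index("V") - order.index("O")) == 1

candidates = [o for o in SURFACE_ORDERS if object_adjacent_to_verb(o)]
print(candidates)                        # ['SVO', 'SOV', 'VOS', 'OVS']

candidates = [o for o in candidates if o != "VOS"]     # marginal in the input
print(candidates)                        # ['SVO', 'SOV', 'OVS']
# Children in fact restrict themselves to SVO and SOV; the absence of OVS
# is the residue left open in the discussion above.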
Let us look at the same facts from a somewhat different perspective. It is well-known that in natural languages word order and morphological case correlate in a specific way. Languages with relatively fixed word order such as English, French, Italian, etc. usually do not have a well-developed system of morphological case-marking, whereas languages with an overt case system such as German, Russian, Japanese, etc. generally show relatively unconstrained word order combinations. Let us assume that this distribution is non-accidental. Suppose furthermore that abstract Case can only be assigned under some very strict condition of adjacency in the spirit of Stowell (1981). If a language does not simultaneously mark cases morphologically, then the order of constituents under which abstract Case is assigned must, in principle, be preserved at S-structure, because otherwise case relations could not be properly identified. If, in contrast, cases are morphologically marked, then some kind of scrambling rule can freely rearrange constituents without destroying proper case-identification. It may be reasonable to assume that the principle specifying the correlation between word order and overt case-marking is part of Case Theory and thus of Universal Grammar. If this view is correct in essence, then we may explain the children's development in the following way. Free word order is only possible if cases are morphologically marked; otherwise fixed word order is obligatory. At Stage I children use free word order even though NPs are not morphologically marked for case. By the relevant principles of Case Theory such a grammar does not qualify as a possible human language. If we assume that Case Theory maturationally emerges at some specific point in time, we will predict that at exactly this point the child will realize that his current grammar, which permits free word order without morphological case-marking, violates principles of Universal Grammar. As a consequence, the child will, in the absence of morphological case, constrain the number of possible underlying word orders in such a way that objects are always adjacent to the verb from which they receive (abstract) Case. Under this view, SVO, SOV, and OVS will again qualify as the only possible underlying word orders.

To summarize, after the maturational emergence of the X-bar schema and possibly Case Theory the child will have to restructure his grammar in such a way that only SVO and SOV will be base-generated, since all other orders would violate principles of Universal Grammar. We will now have to explain the transition from Stage II to Stage III, i.e. why the child eventually opts for SOV rather than SVO as the underlying word order, deriving the main clause order by a movement rule. Note that there are actually two aspects involved in the transition from Stage II to Stage III. First, the child has to realize that it is not possible to
have two different base-generated word orders, but rather that only one order can be base-generated, with the other order derived through movement. Secondly, the child has to find out which is which. It seems to be a tacit assumption in generative grammar that the base component of a grammar generates only one underlying order for the major components of a clause (subject, verb, and object), so that all other orders which may appear on the surface are the result of movement rules(26). If this is correct, then it seems clear that a grammar base-generating two different word orders does not qualify as a humanly accessible grammar. The question is whether or not this assumption can be related to some principle of Universal Grammar. I believe that the relevant principle is the notion of Government. It can be shown that the government relation always works in one and only one direction. Consider the following German structures which G. Fanselow called to my attention: (61)
ein hübsches Mädchen
'a pretty girl'
(62) ein Mädchen hübsch
'a girl pretty'
(61) represents the standard Det-A-N sequence in German. Crucially the adjective hübsches carries the inflectional ending -es, through which the adjective agrees in gender and number with the noun Mädchen. Following standard assumptions we may say that the noun governs the adjective, thereby assigning an agreement marker to it. Now observe the contrast in (62), a structure which has a definitely archaic and/or poetic flavor. Here the adjective occurs to the right of the noun and crucially carries no agreement marker. If we assume that government works only in one direction, this contrast can be readily explained. Since government goes to the left in German, the adjective in (61) receives its agreement marker, while in (62) the adjective appears in neutral form. Returning to the word order case, it is clear that the notion of government as specified above predicts that a grammar cannot base-generate two different word orders for the sequence of major constituents in a clause. If the object NP is governed by the verb and if government goes only in one direction, then only one of the word orders, SVO or SOV, is base-generated, while the other is derived via movement. Let us assume, again, that the notion of government emerges some time towards the end of Stage II. The child will then know that only one of the two word orders can be base-generated, and he will then have to find out whether SOV or SVO is in fact the underlying order.
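The directionality point can be restated schematically. The following sketch is again purely expository (the function and constant names are assumptions of mine): it encodes only the two observations just made, namely that a leftward-governing noun assigns agreement to a prenominal but not to a postnominal adjective, and that a fixed direction of government leaves room for exactly one base position of the object relative to the verb.

# Toy sketch of unidirectional government (leftward assumed for German above).

GOVERNMENT_DIRECTION = "left"

def governed(dependent_side):
    """A dependent is governed only if it sits on the governing side."""
    return dependent_side == GOVERNMENT_DIRECTION

print(governed("left"))    # True  -> (61) ein hübsches Mädchen, inflected
print(governed("right"))   # False -> (62) ein Mädchen hübsch, bare form

def base_position_of_object(direction):
    """With a single direction of government, one base order is determined."""
    return "OV" if direction == "left" else "VO"

print(base_position_of_object(GOVERNMENT_DIRECTION))
# exactly one base order is compatible with a fixed direction of government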
Let us now turn to the question of what may determine the child's choice in this respect. Recall that logically there are two options available to the child: first, he may choose SVO as the underlying word order, in which case the correct position of the verb in embedded clauses and in split verbals will result from a movement rule which we may call 'Verb-final'. Alternatively, he could opt for SOV as the base-generated word order so that a movement rule of Verb-Second will account for the correct position of finite verb forms in main clauses.

I would like to suggest that children's consistent choice of the latter alternative, i.e. SOV, again reflects the emergence of a principle of Universal Grammar, namely Emonds' (1976) Structure Preserving Constraint (see also Roeper 1973). To put it in somewhat simplified terms, Emonds' constraint requires that all non-local transformations either apply to root sentences or be structure-preserving in the sense that a constituent can only move to a position which bears the same categorial label as the constituent itself, i.e. NPs can only be moved to NP positions, verbs only to verb positions, etc. It is easy to see that a Verb-final rule operating on base-generated SVO and moving the finite verb form to the final position in embedded clauses would clearly violate the Structure Preserving Constraint, since there is no base-generated final V-position to which the verb could move. In contrast, a Verb-Second rule operating on base-generated SOV and moving the finite verb form to a 'second' position in main clauses does not violate the Structure Preserving Constraint, since root transformations, i.e. those applying to the highest S, need not be structure preserving(27).

If we assume again that, as a reflection of a biologically determined maturational schedule, the Structure Preserving Constraint emerges at a specific point in time, then the explanation for the transition from Stage II to Stage III is obvious. As soon as the Structure Preserving Constraint becomes available, the child will know that it is the word order of the embedded clause that must be the base-generated word order, because otherwise a rule violating the constraint would need to be invoked.

If this view is correct, then an important prediction follows. Assuming that the child takes SOV as the base-generated word order and then acquires a rule of Verb-Second applying in main clauses, one would expect to find erroneous SOV in main clauses, but never erroneous SVO in embedded clauses. That is, the child might have problems with the rule of Verb-Second, sometimes forgetting to apply it; however, there is no possibility for word order errors in embedded clauses, because these simply surface in the base-generated SOV. Notice that this is exactly what Clahsen (1982b) found. Children never make mistakes with respect to SOV in embedded clauses, but once in a while they do make mistakes in main clauses, where they sometimes use erroneous SOV.
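The predicted asymmetry can be stated very compactly. The following Python fragment is a minimal expository sketch, not part of Clahsen's or the present analysis; the list encoding of clauses and the function names are assumptions introduced only to make the prediction explicit.

```python
# Sketch of the predicted error asymmetry: a single SOV base plus a root
# Verb-Second rule that the child sometimes fails to apply.

BASE = ["S", "O", "V"]          # base-generated order, identical for all clauses

def main_clause(child_applies_v2):
    """Verb-Second is a root transformation; forgetting it surfaces the base order."""
    if child_applies_v2:
        return ["S", "V", "O"]  # finite verb moved to second position (target-like)
    return list(BASE)           # erroneous S O V in a main clause

def embedded_clause():
    """No movement rule is involved, so there is no way to go wrong."""
    return list(BASE)           # always S O V

if __name__ == "__main__":
    print(main_clause(child_applies_v2=True))    # ['S', 'V', 'O']  correct main clause
    print(main_clause(child_applies_v2=False))   # ['S', 'O', 'V']  attested child error
    print(embedded_clause())                     # ['S', 'O', 'V']  never erroneous SVO
```

Erroneous SVO in embedded clauses simply cannot be generated under these assumptions, which is the pattern reported by Clahsen.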
To summarize, I have suggested that the development of word order in children learning German can be explained with reference to the interaction of a number of principles of Universal Grammar which emerge maturationally in a specific order. The X-bar schema in conjunction with Case Theory leads the child to constrain the number of underlying word orders that he will consider. The choice among the remaining word orders is subsequently determined by the emergence of the notion of government and the Structure Preserving Constraint proposed on independent grounds by Emonds (1976).

I would now like to discuss the developmental data on negation as shown in Table I. In particular, I would like to propose an explanation for the transition from Stage I to Stage II, i.e. how children come to realize at a certain point in time that Neg + S is not the correct structure for sentential negation, so that they will move to the next developmental stage in which the negator is placed between the subject NP and the verb phrase.

Notice first of all that - perhaps somewhat unexpectedly - natural languages do not seem to express sentential negation by means of placing the negator in pre-sentential position, i.e. the structure Neg + S does not seem to occur in natural (adult) languages(28). In his survey of sentence negation in 240 languages representing some 40 language families and genetically isolated languages Dahl (1979:93) observes: 'the only examples of sentence-initial Neg placement that I have found are verb-initial languages, where this position is identical to immediate pre-FE [= finite element] position'. Dahl furthermore observes that inflected negators tend to behave exactly like other auxiliaries, while uninflected negators tend to occur before the finite verb form of the sentence.

If Dahl's observations are correct and the phenomenon itself is non-accidental, then it seems reasonable to assume that there is a principle (or possibly set of principles) of Universal Grammar that rules out the possibility of Neg + S structures in natural languages. If this is the case, then, of course, the structures produced by children at Stage I do not qualify as a possible grammar. Let us assume, again in the spirit of the previous proposals, that the relevant principle(s) emerge(s) at some specific point in time during the child's development; then it is clear that he will have to restructure his grammar once the principles become operative, and thus move to Stage II where the negator henceforth appears in pre-VP position.

I am not aware of any particular principle proposed in the literature that would specifically rule out pre-sentential negation. However, it seems that Hornstein's (1977) analysis of English negation may shed some light on what the relevant regularity may be. The primary goal of Hornstein's paper is to show that 'it is a mistake to incorporate S in the
X' convention by identifying it with any level of V, e.g. V″ or V″″' (p. 137). Rather, so Hornstein argues, S is a node sui generis which does not enter the projection of any lexical category. In the section on negation Hornstein proposes a rule of neg-placement which can move the negator from its base-generated sentence-initial position to the specifier position of any major constituent (NP, PP, AP, VP). This rule provides a uniform account of the distribution of English not; i.e. in the so-called constituent-initial position (e.g. not even John came vs. *not John came), whereas sentence-negation is really VP-negation with not in the specifier position of the VP. While it is not entirely clear to me why Hornstein proposes a movement rule instead of base-generating the negator directly in the respective specifier slots, his analysis nicely explains the impossibility of 'sentence-external' negation. If the negator can only appear in specifier positions and if S does not enter into the X-bar system, as Hornstein can show on independent grounds, then S does not have a specifier position for the negator to go to. Consequently, the negator can never be dominated by S without an intervening maximal projection. Such, however, would be the case in sentence-external negation.

While Hornstein does not spell out any principle from which the restrictions on neg-placement would follow, it is easily conceivable that such a principle may exist. If this is correct, then the child's move from sentence-external to sentence-internal negation can be readily explained. As soon as the relevant principle emerges, the child will know that the negator can only appear in specific positions under a maximal projection. Consequently, he will realize that sentence-external negation must be ungrammatical, since S does not have a specifier slot. He will therefore restructure his grammar to sentence-internal negation, i.e. with the negator placed in the specifier position of the VP.

Let us finally turn to a set of data which appear to be characterized by a specific type of deletion. It is a standard observation in the psycholinguistic literature that during the early two-word and three-word stage children frequently produce utterances in which two nouns occur in juxtaposition. These utterances are commonly referred to as N + N constructions. Consider the following sentences taken from Bloom (1970):
(63) mommy sock
(64) mommy cottage cheese
(65) baby milk
(66) mommy pigtail
(67) girl dress
(68) sweater chair
(69) Wendy elevator
(70) bean bag horse
(71) sheep ear
(72) baby raisin
What makes these constructions interesting is the fact that they exhibit a certain range of different relations between the two nouns. On the basis of contextual clues Bloom determines the following relationships:

a) attributive constructions: the first noun modifies the second noun;
b) conjunction constructions: the two nouns are conjoined without any overt conjunctor and or or;
c) identity constructions, corresponding to adult NP - is - NP patterns;
d) subject-object constructions;
e) subject-locative constructions.
In the following discussion I will be exclusively concerned with the construction types (d) and (e) in which the first noun represents the subject and the second noun represents the object of an unexpressed action or event or, in the case of locative constructions, the place at which a certain action or event occurs. Bloom (1970:51ff.) observes that subject-object constructions are by far the most frequent pattern.

Superficially, the crucial difference between the child's utterances and the corresponding adult constructions is the lack of a verb form in the case of the child's utterances. Consequently the question arises: why do children leave out the verb in these constructions? It has frequently been suggested that the lack of the verb in subject-object sentences is a performance phenomenon and reflects certain memory constraints. That is, the child's processing capacity is limited to producing no more than two words per utterance and this is why the verb is deleted. If this view is correct, then we may assume that the child's grammatical knowledge does contain rules expanding S into NP + V + NP; however, processing limitations require an additional rule of verb deletion. Alternatively, one could assume that the child's grammar
contains a rule that does not exist in the adult language, namely S → NP₁ + NP₂, where NP₁ is the subject and NP₂ is the object. I will consider both possibilities in turn.

Bloom argues for the first alternative; i.e. even at the early two- and three-word stage the child knows the SVO pattern, but is unable to produce all three constituents in a single utterance. The crucial evidence that Bloom presents comes from a number of sentences in which the verb is, in fact, present:
(73) me show mommy
(74) machine make noise
(75) man ride a bus
On the basis of utterances such as (73) - (75) Bloom argues that the SVO pattern must be available to the child, in principle; however, the child's grammar contains a rule which optionally deletes the verb and thus facilitates sentence processing. We are therefore facing the following situation: at Stage I children may optionally delete verbs between subjects and objects, whereas at some following stage they have to realize that such subject-object constructions are ungrammatical, because the verb cannot be deleted. The crucial question is again: how does the child find out that the deletion of verbs is ungrammatical in the context of simple subject-object patterns?

Notice again that this question cannot be satisfactorily answered by merely pointing to the input that the child receives. Many utterances that children hear do, in fact, have deleted verbs, as e.g. in Gapping or Right Node Raising constructions (see Maling 1972). So what the child has to find out is not that the deletion of verbs is generally incorrect, but rather that it is ungrammatical in certain configurations such as simple subject-object constructions. If the child were to rely merely on the input he receives he might just as well be led to the conclusion that the deletion of verbs in subject-object constructions is a perfectly grammatical option which, only by coincidence, does not occur in the utterances he happens to hear.

Another point may be added. Intuitively, we would not expect to find a natural language in which the type of subject-object construction typical of early child language constitutes a regular and permissible syntactic pattern. That is, natural languages do not seem to allow free deletion of verbs in simple sentence structures. If this observation is correct, then early child grammars do not qualify as possible human languages in White's sense in the domain exemplified by (63) - (72). If the
lack of free verb deletion is a non-accidental property of human languages, we would expect some principle(s) of Universal Grammar to account for this fact, and it is clear that the child's grammar at Stage I obviously violates these principles. I would like to suggest that the principle that explains the problems under discussion is the well-known constraint on the recoverability of deletion, i.e. a lexical item may be deleted if and only if it can be syntactically recovered. In other words, there must be syntactic clues which unambiguously indicate which item exactly has been deleted (for details see Katz & Postal 1964; Bach 1974:100-101, 241; Chomsky & Lasnik 1977; Radford 1981:257ff.). While it is true that in many cases the verb missing in the child's utterance can be reconstructed from the general contextual background, there is no syntactic way of recovering the deleted verb. Consequently, constructions such as Gapping or Right Node Raising are permissible, whereas verb deletions in simple sentences are ungrammatical.

Let us assume again that the constraint on the recoverability of deletion is a principle of Universal Grammar whose emergence at a specific point in time is determined by the maturational schedule which is part of our genetic program. I will assume that this principle emerges and becomes operative some time at the end of Stage I during which the child has used N + N constructions of the type outlined above. As soon as the child realizes that only those lexical items can be deleted which are syntactically recoverable, he will automatically know that his own N + N constructions are ungrammatical because they violate a principle of Universal Grammar. He will also know that Gapping constructions are possible since they do not violate the Recoverability Constraint. In other words, the child will discontinue his original N + N constructions and will thus move to the next developmental stage.

Let us now consider the alternative idea explicitly dismissed by Bloom, but defended by other psycholinguistic researchers, namely that the child does not delete the verb, but rather that at Stage I the child has a rule S → N(P) + N(P) which encodes subject-object relations. Again it seems intuitively clear that such a rule does not occur in natural languages, i.e. one is inclined to think that subject-object relations are always tied to the presence of a verb or some kind of predicate. I suggest that this fact follows from the θ-Criterion which, according to Chomsky (1981:36), states that 'each argument bears one and only one θ-role, and each θ-role is assigned to one and only one argument.'
In constructions containing two arguments where one is the subject and
the other the object, θ-roles are assigned by the verb or some other predicate. If, however, a verb or predicate is not present, the two arguments will not receive their respective θ-roles, thereby violating the θ-Criterion. Obviously, neither one of the two NPs could have the status of a predicate since this would exclude the grammatical function of subject or object. As a consequence, the θ-Criterion as a principle of Universal Grammar rules out a base-generated NP + NP structure encoding a subject-object relation. Following the spirit of the previous proposals we may assume that the θ-Criterion emerges at some time at the end of Stage I, thereby forcing the child to abandon the original N(P) + N(P) constructions and to restructure his grammar in accordance with the newly available principle of Universal Grammar.
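The way the θ-Criterion excludes a base-generated NP + NP pattern can be pictured as a simple well-formedness filter over clause skeletons. The following Python fragment is a minimal expository sketch and not part of the theory itself: clause skeletons are flattened to category labels, the notion of a θ-assigner is reduced to the presence of a verb or other predicate, and the criterion is checked only in the crude form relevant here.

```python
# Sketch: the θ-Criterion as a filter on candidate clause skeletons.
# Category labels and the reduction of θ-assignment to "a predicate is present"
# are expository simplifications, not claims about the theory's formal details.

THETA_ASSIGNERS = {"V"}   # verbs (or, more generally, predicates) assign θ-roles
ARGUMENTS = {"NP"}

def respects_theta_criterion(skeleton):
    """Arguments must receive their θ-roles; with no assigner in the skeleton,
    any argument present is left without a θ-role and the structure is out."""
    has_arguments = any(cat in ARGUMENTS for cat in skeleton)
    has_assigner = any(cat in THETA_ASSIGNERS for cat in skeleton)
    return (not has_arguments) or has_assigner

if __name__ == "__main__":
    print(respects_theta_criterion(["NP", "V", "NP"]))  # True:  adult-like S -> NP V NP
    print(respects_theta_criterion(["NP", "NP"]))       # False: Stage-I N + N skeleton excluded
```

Once such a filter is in place, the Stage I rule S → N(P) + N(P) is no longer a possible analysis, which is the restructuring step described above.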
5. CONCLUSION
In this chapter I have tried to show that linguistic theory, conceived of as a theory about the biologically determined properties of natural languages, can not only successfully deal with the logical problem of language acquisition, but may be extended to explain, at the same time, certain developmental aspects of real-time language growth. This extension involves one additional assumption, namely that man's genetic program not only specifies the principles of Universal Grammar, but also contains a concomitant maturational schedule which determines a specific temporal order in which these universal principles emerge during the developmental process. This assumption was defended on both conceptual and empirical grounds.

An obvious question is, of course, in what way the maturational model developed in the preceding sections relates to a perceptual model of the type proposed by White as well as to the idea that language development proceeds by parameter-fixing. It seems to me that, on logical grounds, the maturational and the perceptual model are mutually exclusive - at least for the same set of data. That is, children either restructure their grammar because of different data-perception or they do so because an emerging principle of UG forces such a restructuring process. While it is obviously an empirical question which of the two views is the correct one, it can't be both. At best one might argue that in some cases restructuring is perceptually driven, while in other cases it is maturationally driven. Again, it is an entirely empirical question whether or not such a possibility exists.

In contrast, the maturational view seems to be fully compatible with the parameter model - essentially, because the two partially address
different issues. The parameter model tries to answer the question of what makes the child choose between different language-specific options given the limited evidence available to him, while the maturational model seeks to explain what forces the child to restructure his grammar at various stages of his development. In other words, the parameter model is primarily an answer to the poverty-of-stimulus problem, while the maturational model is an answer to the restructuring problem. Since these two aspects of language development obviously interrelate, it is easily conceivable that principles of UG emerge along the lines suggested in this chapter with parameters being fixed as soon as the principle/constraint begins to operate.

One might even go on to speculate whether or not the parameter model and the maturational model are, in a sense, interdependent. Note that the parameter model provides an answer to the question of how the child's linguistic experience interacts with principles of UG in a specific manner. However, for a parameter to be fixed, it has to be there in the first place. Consequently, linguistic evidence can fix a parameter, but it cannot create a parameter. Hence the question arises: where does the parameter come from? Of course, one might want to argue that universal principles together with their parametric values are all there from the very beginning. But, then, why do some parameters get fixed earlier than others? If we adopt the maturational view, the answer to the last question becomes self-evident. If the operation of universal principles and their parameters is maturationally triggered, then the emergence of a specific principle/parameter will make the child sensitive to the relevant data. That is, once the principle/parameter is operative, the child will 'look' for crucial data to fix the relevant parameter, while he simply ignores that same type of data as long as the principle/parameter is merely 'latent'.

Finally, a little ideology. Although - as already stated - it is an entirely empirical question whether the child's restructuring process is perceptually or maturationally triggered, it seems that the maturational view is the more exciting alternative, because it makes child language an independent and irreplaceable source of information about properties of Universal Grammar. If, as the perceptual model claims, child language is fully constrained by UG in the same way as adult language, then child language can't tell us anything about UG that adult language could not tell us as well. Since - for purely technical and methodological reasons - adult data are generally easier to handle than child data, one might wonder why a cognitive psychologist interested in the properties of UG should bother to look at child data at all, as he can get all the relevant information from adult data as well. In contrast, if the maturational model is correct, then child language will give us information about the relationship and mutual dependence between UG-principles in a way that adult data can't.
NOTES

1. This is a translation. The German original states: 'Soweit man überhaupt davon sprechen kann, daß Wissenschaft oder die Erkenntnis irgendwo beginnt, so gilt folgendes: die Erkenntnis beginnt nicht mit Wahrnehmung oder Beobachtung oder Sammlung von Daten oder von Tatsachen, sondern sie beginnt mit Problemen. Kein Wissen ohne Probleme - aber auch kein Problem ohne Wissen.'
2. In the early and mid-60s much influential work was carried out by researchers who were psychologists rather than linguists by education. Psychology, at that time, was seen as the science dealing with cognitive processes rather than with systems of mental representation.
3. I am, of course, not claiming that the specific proposals made by generative grammarians are all correct; in fact, 'surely no one expects that any of these current proposals are correct as they stand or perhaps even in general conception' (Chomsky 1981:4). However, if generative grammar is viewed as a research program, then the general assumptions on which this program is based provide a principled answer to the logical problem of language acquisition. For a brief outline of these assumptions see Chomsky (1981:1-15).
4. 'How do we delimit the domain of core grammar as distinct from marked periphery? In principle, one would hope that evidence from language acquisition would be useful with regard to determining the nature of the boundary or the propriety of the distinction in the first place, since it is predicted that the systems develop in quite different ways. Similarly, such evidence, along with evidence derived from psycholinguistic experimentation, the study of language use (e.g. processing), language deficit, and other sources should be relevant, in principle, to determining the properties of UG and of particular grammars. But such evidence is, for the time being, insufficient to provide much insight concerning these problems. We are therefore compelled to rely heavily on grammar-internal considerations and comparative evidence . . .' (Chomsky 1981:9).
5. While it is more or less a standard assumption in the literature that children do not have access to negative evidence, some authors have expressed a somewhat different view (see e.g. Mazurkewich 1981).
6. θ-Criterion: 'Each argument bears one and only one θ-role, and each θ-role is assigned to one and only one argument' (Chomsky 1981:36). Case Filter: '*[NP α] if α has no Case and α contains a phonetic matrix or is a variable' (Chomsky 1981:175).
7. Stowell (1981:146) proposed the following Case Resistance Principle: Case may not be assigned to a category bearing a Case-assigning feature. Informally speaking, this principle states that a category may either assign or receive Case, but not both.
8. Even if Stowell's specific analysis is not correct, his proposals can at least be taken as a descriptive generalization. The fundamental problem remains: how do children acquire the grammatical facts that Stowell's analysis focuses on?
9. There are, of course, further logical possibilities for the structural description of (11) and (12) which I will not discuss here.
10. Here it is implicitly assumed that grammatical processes apply to constituents only, a standard assumption in the literature.
11. Of course, it is well-known that Piaget and his collaborators have proposed the Constructivist Theory of language development. The problem with Piaget's theory is that it cannot explain, in principle, how new concepts are acquired (see Fodor 1978) nor how children acquire the linguistic knowledge which, in fact, they do acquire (see Piattelli-Palmarini 1980).
12. It is assumed that what children acquire and what is, in fact, represented in the adult's mind is a system of rules with specific properties (see Chomsky 1965:3-37).
13. It would be more correct to talk about principles rather than properties in this context. The basic idea is that properties of rule systems ultimately derive from universal principles, so that the child's a priori knowledge consists of principles from which knowledge of specific properties follows.
14. For a justification of the methodological assumption that language learning may be viewed as an instantaneous process see Chomsky (1975:3-35, 118ff.): 'Despite considerable variety in learning experience, people can communicate readily (at the level of communication relevant to this discussion), with no indication that they are speaking fundamentally different languages. It seems that the cognitive structure attained - the grammar - does not vary much, if at all significantly, on the basis of factors that should lead to enormous differences, were the possibilities just sketched in fact realized . . . The facts suggest that the initial idealization, with its falsifying assumption about instantaneous extensional learning, was nevertheless a legitimate one and provides a proper basis for pursuing a serious inquiry into human cognitive capacity.' (Chomsky 1975:122)
15. Strictly speaking, 'developmental sequence' and 'order of acquisition' refer to different concepts. While 'order of acquisition' concerns the order in which different structures are fully mastered, 'developmental sequence' refers to the sequence of developmental stages through which children pass before they master a given structure completely (see Felix 1982).
16. 'Note that the central concept throughout is 'grammar', not 'language'. The latter is derivative, at a higher level of abstraction from actual neural mechanisms; correspondingly it raises new problems. It is not clear how important these are, or whether it is worthwhile to try to settle them in some principled way' (Chomsky 1981:4).
17. This example is construed for expository purposes only. It is only meant to demonstrate the general approach to the problems under discussion.
18. It is not claimed that the solution is factually correct, but only that if it is correct, the problem is solved in a principled way.
19. This idea has previously been discussed e.g. by Gleitman (1981) and White (1982).
20. This view is idealized in the sense that possibly clusters of principles may emerge at a given point in time, i.e. it is conceivable that the maturational schedule specifies fewer points of emergence than there are principles of Universal Grammar.
21. See note 20; it is also conceivable that certain very fundamental principles are operative from the very start, while only 'higher-order' principles emerge maturationally. These questions are, of course, empirical in nature.
22. An anonymous reviewer pointed out that the notion of 'semantically based grammar' was unclear to him/her. I believe that the problem is partly terminological. If one gives the term 'grammar' a rather wide interpretation in the sense of any rule 'system', then a semantically based grammar is obviously a system of semantic rules. If, however, one adopts a narrower interpretation in the sense of grammar = syntactic rule system, then, of course, the term 'semantically based grammar' is a contradictio in adjecto. However, the basic problem under discussion remains the same. Under the narrow interpretation this would be: how does the child move from a non-grammar rule system to a grammar rule system?
23. Of course, I am not claiming that all permissible phrasal categories are acquired at the same time.
24. Stowell (1981:113) proposes the following Adjacency Principle: In the configuration [α β . . .] or [. . . β α], α Case-marks β, where (i) α governs β and (ii) α is adjacent to β and (iii) α is [-N].
25. In his survey of languages Greenberg (1963) did not find any OVS language, arguing that OVS is not a possible word order. Mallinson & Blake (1982) found a single language with OVS which may be, however, an 'ergative' language.
26. It appears that the assumption that different surface orders are all derived from one common underlying order is restricted to the sequence of subject, verb and object. Thus many have argued against a rule of Dative Movement, so that the different orders for direct and indirect object would be base-generated.
27. Clahsen & Muysken (1983) rely on Emonds' Principle to explain certain differences in first and second language development.
28. Apparent counter-examples are constructions such as: John doesn't sing nor does Mary dance. Bill wants neither that John sings nor that Mary dances. I have no explanation for this phenomenon except that one could argue that in these cases the coordinator or rather than the entire sentence is being negated. If this is correct, the above sentences would be on a par with constructions such as 'not John came to the party, but Mary'.
Chapter 3
Competing Cognitive Systems
1. THE PROBLEM
At the present state of our knowledge, one of the continuing mysteries in language acquisition has to do with an issue that is frequently referred to as 'ultimate attainment'. This notion roughly refers to the 'final state' (see Chomsky 1981) of language development, i.e. the stage at which the learner achieves full native speaker competence. Although quite a few details are known about developmental processes in various types of both first and second language acquisition, it is still an open question why, as a rule, child learners are able to achieve native speaker competence, whereas adult learners, even after many years of exposure, usually retain markedly non-native elements in their L2-speech.

It seems to be fairly obvious that the question of why people successfully acquire language in some cases and fail to do so in others is closely related to the question of what enables humans to acquire language(s) at all; i.e. what are the prerequisites for man's ability to master communication systems as complex as natural languages. This problem is frequently referred to as the logical problem of language acquisition (see Chomsky 1975, 1980; Hornstein & Lightfoot 1981; White 1982; Chapter II.2 this volume). Quite clearly, adequate exposure to language data is a necessary but by no means sufficient condition for language acquisition, since, apart from man, no other organism will learn a natural language through mere exposure to primary data (see Fanselow (1984) for a review of the relevant evidence). Consequently, the question of why human beings (can) learn language is tantamount to asking what those properties of the human mind - as opposed to the system of cognitive structures in other organisms - may be that make language learning possible.

Assuming that this view of the problem is correct, in principle, I will attempt to explore the nature of those cognitive structures that enable man to construct a grammar on the basis of the empirical data commonly available to him and to examine the question of why these cognitive structures become somehow less efficient in adult language acquisition. I believe that it is only in the light of some more recent findings in
psycholinguistic research that the theoretical relevance of this question can be properly understood. These findings concern the fact that learning a language is a highly systematic process, governed by universal principles and regularities which, in many non-trivial cases, do not appear to be a function of environmental factors or of the learner's individual personality(1) (see also Chapter II.3 this volume). The crucial observation is that this seems to hold true not only for first, but also for second language acquisition. While it is true that first and second language acquisition processes may be different in certain domains (Felix 1978a-b, 1982), there seems to be little doubt that both are governed by a common set of basic principles and mechanisms which may be viewed as a manifestation of what can be called man's natural ability to acquire language, and which do not, in any non-trivial way, reflect 'conditions of learning' in the sense of behaviorist learning theories (for L1 see e.g. McNeill 1970, Bloom 1970, Brown 1973, Bowerman 1973, Slobin 1973, Wode 1977, Felix 1980a, Clahsen 1982b; for L2 see e.g. Huang 1971, Ravem 1974, Schumann 1975a-b, Fillmore 1976, Felix 1978b, Burt & Dulay 1980, Meisel 1980a-b, Wode 1981, Felix 1982).

The systematic nature of language learning processes becomes manifest through (a) the largely invariant order in which certain sets of linguistic structures are mastered, (b) the occurrence of an ordered sequence of developmental stages in which learners regularly and consistently produce grammatically deviant structures before they attain native-like competence in a given domain of the target language, and (c) recurrent structural and developmental patterns in the speech of learners from different acquisitional types, such as L1 learning, L2 learning, bilingualism, foreign language teaching, etc.

These observations come from a large body of studies of which, for reasons of space, I will only name a few. In a number of studies Brown (1973) and de Villiers & de Villiers (1973) found that children acquire English grammatical morphemes in a largely invariant order. The authors conclude that first language acquisition follows developmental principles which reflect specific properties of the human mind rather than features of the learning environment. Applying this research design to second language acquisition Dulay & Burt (1974a-c) examined several hundred children with different language backgrounds and found that they, too, mastered the English morphemes in much the same order. Dulay & Burt concluded from these findings that the process of learning a second language cannot be adequately accounted for by behavioristic
principles of habit formation either; rather, both L1 and L2 acquisition constitute what they called a creative construction process during which the learner continuously forms hypotheses about the structural properties of the language to be acquired. At each stage of the acquisitional process these hypotheses are tested against new input material and are subsequently revised until the learner's linguistic system is identical with that of the native speaker(2). In other words, the learner systematically re-creates the grammatical system of the target language by processing the input data on the basis of an apparently universal set of principles.

The claim that both L1 and L2 acquisition cannot be adequately characterized as a process of imitation and reinforcement receives further support from the observation that, on their way towards the target language, learners pass through an ordered sequence of developmental stages during which they regularly produce structures deviating, in a very specific and systematic manner, from the target language. Consequently, such structures do not reflect the input in any straightforward way(3). Such ordered sequences of developmental stages were first discovered by Klima & Bellugi (1966) who observed the emergence of negative and interrogative patterns in children learning English as a first language (for further evidence see Bar-Adon & Leopold (1971) and Ferguson & Slobin (1973)). Ravem (1968, 1969) found similar sequences in the acquisition of English as a second language by his two Norwegian-speaking children. Later studies such as Milon (1974) and Gillis & Weber (1976) found much the same structures in the L2 acquisition of English negation by Japanese learners. Wode (1981) studied the acquisition of English by four German-speaking children and obtained similar results. Butterworth (1972), Schumann (1975a) and Fillmore (1976) provided additional data from a Spanish L1/English L2 combination leading to essentially the same general picture.

Evidence suggesting that both first and second language acquisition are systematic processes in the sense outlined above comes also from a large number of studies dealing with languages other than English; e.g. Sanchez (1968) on Japanese, Bowerman (1973) on Finnish, Omar (1973) on Arabic, Felix (1978b) on German/English, Hansen (1981) on Hindi/English, Meisel (1980b) and Pienemann (1981) on German/Italian.

Thus the available evidence suggests quite forcefully that both first and second language acquisition are governed by a common set of fundamental principles which appear to reflect properties of the human mind. Where L1 and L2 developmental sequences differ, this does not seem to be due to the learner's application of essentially different learning principles; rather, as a result of their linguistic foreknowledge, L2 learners appear to be able to formulate much more specific and efficient
hypotheses about the target language than L1 learners (see Felix 1978b).

If it is true, however, that man is biologically equipped with a set of (cognitive) principles that guide and determine the process of both first and second language learning, one may reasonably wonder why it is that roughly around the age of puberty second language learners somehow lose the ability to achieve native speaker competence. If it is a general and biologically determined property of the human mind to be able to acquire natural language on the basis of limited input, then there is no a priori reason to expect that this ability should regularly decline at a specific age. It seems to be clear that any adequate theory of language acquisition will have to explain, at least in principle, why children generally achieve full competence (in any language that they are exposed to), whereas adults usually fail to become native speakers. Furthermore, such an explanation has to be compatible with the observation that, in many cases, both adults and children appear to follow the same developmental paths and to apply the same set of fundamental principles. It appears, then, that within the domain under discussion a theory of language acquisition has to minimally answer the following questions:

(a) what is common to child first and second language acquisition as opposed to adult language acquisition?
(b) what are the crucial properties of those cognitive structures that make language acquisition possible?
(c) in what way does the adult mind differ from the child's mental system so that differences with respect to ultimate attainment result?

After discussing a number of proposals that have been advanced in the past, I will develop some notions intended to provide a theoretical framework for dealing with the issues under examination. In particular, I will argue that with the onset of puberty two essentially different cognitive systems begin to compete in the processing of language data. Neither one of these two systems can be totally excluded from language processing by conscious control; and while it is true that, in general, competition helps business, it turns out to have rather unfortunate consequences in the case of language learning.
2. ACQUISITIONAL DEFICITS
To the extent that the question has received any systematic treatment at all(4), there have been a number of proposals to explain why, in general,
adults fail to achieve full native-speaker competence. Some of these proposals have focused on the physiology of cognitive growth, others have primarily been concerned with environmental aspects of language learning. In this section I will attempt to show that many of the relevant arguments are either inconsistent with the available data or logically inconclusive.

2.1. Biological aspects of adult language learning

Lenneberg (1967) was among the first to appeal to biological considerations in an effort to account for acquisitional deficits in adults. According to Lenneberg the ability for 'primary acquisition'(5) is closely tied to a phenomenon known as hemispheric lateralization. In general, adults process language data in the left hemisphere of the brain, while the right hemisphere processes information relating to space, music, etc. The data Lenneberg reviews suggest to him that this functional asymmetry of the brain is itself subject to development, i.e. in a gradual process, speech functions become lateralized to the left hemisphere. The evidence for the lateralization hypothesis is taken primarily from aphasic patients and dichotic hearing tasks (see Kimura 1963, Bryden 1982). The crucial observation is that brain damage in the left hemisphere does not lead to permanent aphasic symptoms in children under the age of puberty; in fact, in younger children speech functions are transferred to the right hemisphere, if the left has been damaged(6). In contrast, adults do not fully recover from aphasia, as a consequence of their apparent inability to transfer speech functions to the non-damaged half of the brain. The critical period for a total recovery from aphasia thus coincides with the critical period for primary language acquisition. In both cases, it is approximately the onset of puberty. Lenneberg argues that the process of increasing lateralization of neo-cortical functions leads simultaneously to a loss of the brain's functional plasticity which appears to be a prerequisite for successful primary acquisition.

What is important to note in this context is that Lenneberg argues for a physiologically determined critical period. The fact that language learning after puberty is less successful in terms of 'ultimate attainment' does not result from the adverse influence of environmental or personality factors, but rather reflects brain-internal physiological mechanisms; i.e. functional changes in the brain structure preclude successful language learning after puberty: failure to learn by biological necessity.

Since 1967 various types of evidence have emerged revealing a number of problems with Lenneberg's hypothesis - at least as it was originally stated. First, it seems that the data Lenneberg used in support of his biological explanations are far from conclusive. In a detailed review of
the literature Krashen (1973, 1975) shows that the process of lateralization is presumably complete by age four, possibly even at birth. Re-examination of the results of dichotic hearing tasks furthermore lends strong support to the lateralization-at-birth hypothesis. At present, there seems to be some disagreement among neuropsychologists as to how early the various brain functions become fully lateralized; however, there appears to be hardly any doubt that the process of lateralization is complete long before the onset of puberty. If this is so, then Lenneberg's assumption that the loss of the brain's functional plasticity makes naturalistic language acquisition biologically impossible for the adult loses much of its original persuasiveness. Furthermore, even if Lenneberg's observations concerning the process of lateralization were correct, one cannot simply assume that the brain's plasticity and the ability for primary acquisition are causally related. All that Lenneberg has shown is that two different phenomena are temporally linked, leaving the question of what causes the apparent loss of the acquisitional ability completely open.

The crucial evidence, however, that renders a Lenneberg-type of biological explanation less plausible comes from recent studies on L2 acquisition by adults. Not only is it true - as outlined in section 1 - that first and second language acquisition are both governed by a common set of basic principles, but this also seems to hold true for adult acquisition: in the area of morpheme order studies Bailey, Madden & Krashen (1974) tested 72 adult learners with a large variety of different language backgrounds using the same methodology and elicitation techniques as Dulay & Burt (1974a-c). Bailey et al. found orders of acquisition very similar to those observed in child second language learning, a result later also corroborated by Perkins & Larsen-Freeman (1975). Schumann (1975a) studied the acquisition of English by the 33-year-old Spanish-speaking Alberto. Although, during the period of observation, Alberto did not pass much beyond certain initial stages of L2 learning, his utterances showed many of the same structural properties found in children during the early phases of the learning process. Felix (1982) observed the 48-year-old German-speaking Ella while she learned English in a naturalistic environment. In the structural domains analyzed - negation, interrogation, sentence types, and pronouns - Ella approached the learning task in much the same way as other studies (e.g. Huang 1971, Fillmore 1976, Wode 1981) indicate for child L2 learners. Meisel (1980b) and his co-workers in the ZISA Project (see Clahsen et al. 1983) compiled both longitudinal and cross-sectional data on the acquisition of German as a second language by Spanish and Italian migrant workers and their children. Meisel did not find any fundamental differences between adult and child L2 acquisition. Furthermore, the
ZISA findings largely support the results of Felix' (1978b) study on the acquisition of German by four English-speaking children. D'Anglejan & Tucker (1975) studied the acquisition of English complex structures by 20 French speakers in their early 20s and found developmental patterns similar to those described by C. Chomsky (1969) for child L1 learners.

All these findings strongly suggest that the same - or at least a very similar - set of basic principles not only operates in first and second language acquisition, but likewise cuts across child and adult learning. There is no indication in the data that the natural language learning principles are totally inaccessible for the adult or that adults exclusively use learning mechanisms which are essentially different from those that operate in the child learner. It thus seems that the idea of physiological changes in the brain as somehow 'destroying' the natural principles of language acquisition is at odds with a large body of empirical evidence on adult learning.

2.2. Environmental aspects of tutored language acquisition

That learning a second language under traditional classroom conditions is probably the most cumbersome and least effective way of approaching the task of language acquisition is nowadays not even disputed by foreign language teachers and researchers (see e.g. Mohle 1975). People in the teaching profession are usually somewhat apologetic about the striking disproportion of teaching effort and learning success in the classroom. There are above all two types of arguments that are advanced to explain this acquisitional deficit.

First of all, language teachers argue, there is a highly complex and intricate network of environmental factors which all combine to prevent more successful and effective learning. To name just a few of these factors: exposure to the foreign language usually amounts to little more than one hour per day and is frequently even less; artificial communication in the classroom makes language use seem unrealistic and boring; foreign language students are notoriously less well-motivated than naturalistic learners; learners differ in language aptitude and bad learners frequently prevent good learners from exploiting their full learning potentials (Butzkamm 1973, Solmecke 1976, Brown 1976, Lambert et al. 1976, Tucker et al. 1976). The detrimental influence of these and many other factors is seen to be responsible for the fact that classroom students are not able to rely on the same set of basic principles as naturalistic learners.

It appears to me that environmental explanations of the type frequently advanced by language teachers are based on assumptions which are at least highly debatable. If it is true that certain environmental and motivational conditions cause the natural principles of language learning to
become inaccessible for the classroom student, then it follows that these principles must be highly sensitive to the influence of external conditions in terms of how effectively they can function. However, past acquisition research has shown that one of the most prominent properties of natural principles of language learning is that they are largely unresponsive to the influence of environmental and personality factors(7). Both L1 and child L2 learners differ quite naturally in motivation, learning aptitude, quantity and quality of exposure, etc.; and yet they both follow the same basic acquisitional pattern and do not generally fail to achieve native-speaker competence. Consequently, there is no justification for assuming that the specific conditions of the classroom situation as such prevent the operation of natural language-learning principles; at best it could be argued that these principles are largely immune to the influence of external factors under naturalistic conditions, but not so under classroom conditions. But this pushes the problem merely to a different level. Why is it that the susceptibility of these natural principles to environmental factors depends itself on environmental factors?

Furthermore, there is some strong evidence against the assumption that natural principles of language learning are totally absent from learning in the classroom. In a series of studies Felix (1979, 1982), Felix & Simmet (1981) and Hahn (1982) have shown that in many structural domains foreign language students follow the same basic principles that have been determined for naturalistic L2 acquisition, even though it is true that in certain other domains these principles do not operate. Since the natural patterns of language learning seem to be available to the foreign language student - at least, in principle - we are confronted with much the same problem as in the case of naturalistic adult L2 acquisition: why is it that learners fail to make more extensive and effective use of the natural principles of language learning?
3. GENERAL COGNITION AND LANGUAGE DEVELOPMENT
There can be hardly any doubt that children's intellectual and linguistic development involve cognitive processes which must in some way be interrelated, since language is generally used to verbalize and convey concepts and ideas. Consequently, there have been numerous attempts to explain regularities of language acquisition in terms of what is believed to be more general aspects of h u m a n cognitive growth (see Bever 1970, Slobin 1973, Sinclair-de Zwart 1973, Schlesinger 1977, Gass 1980). Discussions centering around the relationship between language and cognitive development have predominantly focused on meaning and vocabulary (see e.g. Wanner & Gleitman 1983). It is easily conceivable
why this is so. Lexical items and meaning relations can be related, in a particularly obvious way, to conceptual and cognitive categories (Clark & Clark 1977, Nelson 1978a, Bowerman 1978, Carey 1980). Formal properties of the type that appear in the syntactic organization of natural languages are much less amenable to an interpretation in terms of general conceptual phenomena. It is even more difficult to determine the general cognitive basis of phonological patterns; so sound structures have been almost totally ignored in cognition-oriented discussions on language development. It is difficult to imagine what cognitive or conceptual features could be the basis for, say, the phoneme /p/ or a phonological process such as voice assimilation.

However, it seems to be clear that any adequate theory of language acquisition has to deal not only with concepts and meaning, but also with the formal aspects of language; in particular, since many of the principles and regularities that have been found to govern acquisitional processes relate to the systematicity with which children master the formal properties of natural language. Consequently, any answer to the question of what kind of mental structures must be assumed to be responsible for language processing and acquisition has also to be tested on morphosyntactic and phonological phenomena. Let us therefore examine what type of properties must be attributed to a cognitive system which permits the acquisition of those features that linguistic theory tells us are typical of natural languages.

In what follows I will claim that knowing and using a language crucially involves the ability to perform formal operations of a highly abstract sort. That is, acquiring a language requires the ability to relate to each other abstract entities which do not have any directly observable manifestations and which cannot be induced from experience in any straightforward way. This fact, I will argue, has a number of serious consequences for any theoretical considerations about the relationship between language and cognition. For illustration I will discuss a few examples from both phonology and syntax. The facts themselves are not new and have been extensively discussed in the literature. What is crucial in the present context is simply how the relevant findings relate to what is presently known about the child's general cognitive development.

Let us first look at phonology. Note that before the language learner/user can analyze the morphosyntactic or semantic structure of an utterance, he has to convert the acoustic input into a phonological representation. What is actually being transmitted from the speaker to the hearer, namely the acoustic signal, does not as such have linguistic properties; consequently, any information-processing on the grammatical level presupposes the availability of a phonological representation derived, through mental operations, from the acoustic properties of sound waves.
Note furthermore that the young infant has to deal with highly complex phonological information as early as the very beginning of the acquisitional process, viz. at a time when the morphosyntactic and semantic structures of his speech are still relatively simple. The evidence I will present comes primarily from speech perception, in particular from studies conducted by Liberman and his co-workers in the late 50s and early 60s. Liberman's findings have been extensively discussed in Fodor, Bever & Garrett (1974) as to their implications for linguistic theory and I will partially draw on Fodor et al.'s interpretations.

The crucial problem of how speech is perceived and processed is one of identification. How does a hearer determine whether two utterances are the same or different? It is fairly obvious that a reasonable solution to this problem cannot focus on the condition that perceptually identical utterances must also have identical sound waves. It is a general fact that no two utterances of, say, the phoneme /f/ are totally identical with respect to all the acoustic properties of their underlying sound waves. However, it may be reasonable to assume, as a first approximation, that two sound waves must have a critical set of acoustic properties in common - which may even be allowed to vary within certain limits - in order to be perceptually identified as one and the same utterance. A model of speech perception based on this assumption would posit some kind of filter mechanism in man's perceptual apparatus which separates the critical features necessary for speech sound identification from those acoustic properties which are linguistically/perceptually irrelevant.

The problem with such a model is that it is in conflict with the available empirical data. Whether or not two speech sounds will be perceived as the same cannot always be uniquely specified on the basis of the acoustic properties of their underlying sound waves. Speech sounds are essentially perceptual, not acoustic units. There are no invariant acoustic features which can be unambiguously associated with a given speech sound. Liberman's work has shown that, depending on the environment, one and the same acoustic event may be perceived as different speech sounds, and conversely, different acoustic events may be perceptually identified as the same speech sound (see Liberman et al. 1957, Liberman et al. 1961, Liberman et al. 1967, Schatz 1954). The relevant experimental data have been summarized at length by Fodor et al., so within the context of this chapter it will be sufficient to cite two particularly suggestive cases.

First, it can be shown that acoustically different signals which can be sufficiently distinguished in isolation will be perceived as the same speech sound as soon as they occur in a normal phonetic environment. This is most evident in the perception of stops. Utterances of syllables which consist of a stop + vowel sequence are acoustically characterized
by a formant structure which begins with different types of upward or downward slopes - a result of the rapid change in the position of the articulators. The remaining parts of the formants show the linear concentration of energy typical of vowels. It is reasonable to assume that isolated utterances of syllables such as /ba/, /ta/, /ka/, /da/, etc. will be perceptually distinguished on the basis of these formant-initial slopes. Liberman's experiments do, in fact, show that an artificial manipulation of the initial slopes followed by an otherwise unchanged formant structure leads to the perception of syllables with different consonants and the same vowel. The result of these experiments might suggest that it is essentially the specific shape of the initial slope that determines which consonant will be perceived. However, the situation is much more complicated. Take e.g. syllables with perceptually identical consonants but different vowels, such as /da/, /di/, /du/, /do/, etc. If it is correct that the perception of a given consonant - in this case /d/ - derives from the specific initial formant slope, then utterances of the above mentioned syllables should have spectrographic structures in which the initial slope is always the same, while the following vowel segments will show concentrations of energy typical of the specific vowel. This, however, is not the case. Perceptually identical consonants will have different acoustic slopes depending on which vowel follows. That is, there are no invariant acoustic properties in the spectrographic structure that uniquely specify the perception of /d/ throughout all of the syllables given above. Furthermore, the second formant, which is particularly relevant for consonantal information, varies in position between frequencies of about 2500 and 1200 Hz. It needs to be emphasized that the perceptual constancy in consonants does not derive from the fact that acoustic variations are below the threshold of discrimination. If the signals in question are isolated from their vocalic environment, they will be sufficiently identified as different glides. The reverse case can also be demonstrated. One and the same acoustic signal will be perceived as two different sounds depending on the phonetic environment in which it occurs. Take e.g. the tape recording of an utterance of the syllable /pi/. With adequate technical equipment it is possible to splice the tape at exactly the point where the consonantal spectrum transits into the vocalic formant structure. The result will be two pieces of tape, one containing the consonantal, the other containing the vocalic spectrum of the syllable /pi/. The tape with the consonantal spectrum can now be combined with separately recorded utterances of isolated vowels such as /a/ or /u/. Under the assumption that /p/ has invariant acoustic properties the result of such a manipulation should be perceived as /pa/ and /pu/ respectively. This, however, is not the case. Liberman shows that a syllable which is the result of combining a
consonant /p/ extracted from an original recording of /pi/ with a pre-recorded vowel /a/ will be consistently perceived as /ka/. Here, again, acoustic properties do not uniquely specify the perception of a sound. There is a large body of experiments which point in the same direction: invariant acoustic properties which unambiguously define the perceptual status of a speech sound apparently do not exist. Consequently, a model of speech perception which works with a filter system to extract critical features from the acoustic event will be inadequate. If, however, the concrete information present in acoustic signals is insufficient for correct speech perception, then obviously other types of information have to be added in order to arrive at the correct phonetic representation of a given utterance: 'The experimental demonstration that such invariants do not exist suggests that the perceptual problem is not that of rejecting irrelevant information present in the signal but rather that of adding information in terms of which the signal can be properly decoded.' (Fodor et al. 1974:291)
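The inadequacy of a pure filter model can be made concrete with a deliberately simplified sketch. The cue labels and pairings below are hypothetical placeholders of my own, not Liberman's measurements; the only point the sketch carries is that a mapping from acoustic cue to perceived consonant cannot be stated without reference to the following vowel, which is exactly what a context-free feature filter would have to do.

```python
# Toy illustration of the context-dependence problem (hypothetical values).
# A "filter" model assumes a context-free mapping: acoustic cue -> consonant.
# The perceptual findings reviewed above require the mapping to consult the
# vowel context as well: the same consonant corresponds to different cues
# before different vowels, and one cue can map to different consonants.

# Hypothetical cue labels, keyed by (cue, following vowel)
PERCEPT_TABLE = {
    ("rising_transition", "i"): "d",
    ("falling_transition", "u"): "d",   # different cue, same percept /d/
    ("burst_X", "i"): "p",
    ("burst_X", "a"): "k",              # same cue, different percept (cf. the /pi/ -> /ka/ splice)
}

def perceive(cue: str, vowel: str) -> str:
    """Identify the consonant; impossible without the vowel context."""
    return PERCEPT_TABLE[(cue, vowel)]

# A context-free function perceive(cue) could not reproduce this table:
# "burst_X" alone would have to be both /p/ and /k/.
assert perceive("burst_X", "i") == "p"
assert perceive("burst_X", "a") == "k"
```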
These findings suggest that speech perception involves processes by which different types of abstract information must be related to each other in order to convert the acoustic signal into the proper phonetic string. If it is true that the acoustic properties of the signal do not provide a sufficient basis for arriving at a unique phonological representation, then we may assume that the information to be added must have to do with the hearer's tacit knowledge about the abstract and universal properties of speech sounds in natural languages. In other words, an adequate model of speech perception must incorporate a mechanism which will abstract away from certain concrete physical properties of a given acoustic event and will interpret these on the basis of information about the abstract properties of possible speech sounds. Thus it is clear that mental processes relating to speech perception must - at least partially - operate at an abstract level of representation on which the various types of information converge. Given that speech perception involves such an abstract level of representation, we may assume that the relevant perceptual processes must incorporate mental operations which are of an essentially formal nature, i.e. operations which permit the speaker not only to relate concrete objects or items to each other, but also to deal with abstract information on the sound structure of natural languages. The fact that processing linguistic data requires mental operations which are formal in the sense that they rely on abstract information which does not necessarily correspond to any perceived reality can likewise be demonstrated in a large variety of linguistic domains (cf.
Clark 1973, Bowerman 1978, Karmiloff-Smith 1978). Past research in generative grammar has clearly shown that natural languages are organized at levels that go far deeper than what can be perceived on the surface. In fact, syntactic categories as basic as, say, NP, VP, Det, etc. are essentially abstract entities that do not relate to anything that can be physically experienced. Moreover, most of the deeper regularities and principles that govern natural languages are of an extremely abstract nature (Chomsky 1975, 1981; Fodor et al. 1974, Lightfoot 1982); and it is these abstract principles that the language learner has to master in order to use language creatively. Processing syntactic information basically means mapping abstract structures onto abstract structures, and it is fairly clear that this requires mental operations of a largely formal sort. For illustration, let us consider a few examples. First, recall the difference between structure-dependent and structure-independent rules which we discussed in Chapter 1.5. Chomsky (1968, 1975) has pointed out that natural languages apparently use only structure-dependent, but never structure-independent rules. Children seem to 'know' this, since they, too, never use structure-independent rules, even if the input data they are exposed to are compatible with such rules. Obviously structure-dependent rules crucially involve information on highly abstract entities and relations, such as c-command, domination, lexical vs. phrasal categories, etc. It seems to be clear that children using structure-dependent rules have to be able to carry out mental operations which can handle this type of abstract formal information. As a further example consider the following sentences:
(1) a. Each of the students admires the others.
    b. The students admire each other.

(2) a. They each want the others to win.
    b. They want each other to win.

(3) a. Each of the students expected John to admire the others.
    b. *The students expected John to admire each other.

(4) a. Each of them expected that the others would win.
    b. *They expected that each other would win.
These sentences show that the occurrence of the reciprocal each other is subject to some very specific restrictions; in particular, its distributional behavior is different from that of each ... the others. There have been various proposals in the literature to explain the ungrammaticalities of (3b) and (4b) in terms of some very abstract principles constraining the
application of transformations (see Chomsky 1973). Crucially, these principles incorporate very complex structural information. Thus the ungrammaticality of (3b) is accounted for by a principle known as the Specified Subject Condition (SSC)(8) which roughly asserts that no transformation may involve two elements X and Y, where X and Y appear in different clauses, if there is an intervening element Z which is the subject of the clause which also contains Y. Similarly, the ungrammaticality of (4b) is explained by the Propositional Island Constraint (PIC) which asserts that tensed sentences are opaque domains for certain transformations. More recently, a more unified account was proposed in Chomsky (1981), where the distribution of data such as (1)-(4) is accounted for by the Binding Theory. It thus appears that acquiring a language involves not only gaining knowledge about word order, morphemes, permutations, etc., but also knowledge of such abstract principles. It seems fairly clear that children around the age of 5-7 must be familiar with such principles since errors of the type illustrated in the ungrammatical (b)-sentences or errors involving structure-independent rules have not been recorded in the utterances of children. We may thus conclude that young children are able to handle formal operations which relate abstract properties to abstract properties, i.e. mental operations whose input and output must be characterized in terms of formal abstract entities. Having discussed what kind of cognitive capacities must be assumed to be necessary for language acquisition, I would now like to turn to the question of what is known about the child's intellectual development in non-linguistic domains. The major theoretical framework within which the relationship between cognition and language development has been examined is Piaget's theory of cognitive development (Piaget 1924, 1926, 1952). Piaget distinguishes four major consecutive stages in man's intellectual growth: the sensorimotor period, the preoperational period, the period of concrete operations, and the period of formal operations. For lack of space I will not be able to illustrate or examine Piaget's theory in detail. There are various excellent accounts of Piaget's work, notably those by Flavell (1963), Phillips (1969) and Oerter (1977). A brief outline of the theory by Piaget himself can be found in Piaget (1962). It seems necessary to clarify a few points with respect to the object of Piaget's experimental and theoretical efforts. As far as I am aware, Piaget has never explicitly dealt with problems of language acquisition. Rather, he studied the way in which children of different age groups perform on a wide variety of problem-solving tasks. It should be noted that the tasks upon which Piaget based his observations are not of any specific type in the sense that the potential range of tasks is restricted in
any non-trivial way. In fact, any task which involves the solving of a problem through mental operations is a potential object of Piaget's theory. In other words, Piaget deals with human cognition at a very general level thus including any phenomena in which mental capacities are directed towards solving or dealing with a problem. If language acquisition is just one specific aspect of the child's general cognitive growth, it should be possible to show that the regularities and principles that are manifest in language development are the same as those that operate when a child performs on a problem-solving task which involves information-processing of any non-linguistic data. Within the context that will concern us in the following sections, the crucial stage is the period of formal operations which develops approximately at the beginning of the second decade in the child's life, i.e. roughly before the onset of puberty. During this period the child learns to perform mental operations at a purely abstract level of representation, ignoring possible references to concrete objects or events. This period marks the beginning of mathematical thought in which abstract units are related to each other; it allows the formation of hypotheses about abstract relations. The child is no longer focusing on the real world, on what he perceives or experiences; rather, potential entities may become the object of thought. Formal operations relate to hypothetical objects and events which do not necessarily correspond to any perceived reality. The ability to form hypotheses about abstract phenomena now allows the child to develop theories about potentialities. Formal thought does not lead to a certain conclusion as a consequence of experience in the real world, but rather as a result of the internal logical relations between certain hypotheses and assumptions: 'Formal thought is for Piaget ... a generalized orientation, sometimes explicit and sometimes implicit, towards problem-solving: an orientation towards organizing data (combinatorial analysis), towards isolation and control of variables, towards the hypothetical, and towards logical justification and proof.' (Flavell 1963:211)
It thus appears that what makes mental operations formal in the Piagetian sense is that they relate exclusively to abstract and hypothetical events/entities and do not encompass reference to real-world objects or experience. As Phillips (1969:128) observes, one of the fundamental characteristics of the period of formal operations is that 'the adolescent is operating on operations'. There are various reports in the literature (e.g. Mehler & Bever 1967; Donaldson 1978; Kuczaj & Daly 1979) suggesting that children are able to make deductions and to form hypotheses at a much earlier age than the onset of puberty. Notice, however, that Piaget's theory does not deny
the young child's ability to form hypotheses, to work with different levels of representations, or to make abstractions. The crucial point is that prior to Stage IV any such operation will crucially involve at least one constituent element that makes reference to real-world objects or events, while it is not until the period of formal operations that the adolescent engages in mental operations which involve nothing but abstract entities (see also Phillips 1969:120-135; Johnson-Laird 1983:121/122). I would now like to turn to the question of what these considerations suggest for the relationship between language acquisition and the child's general cognitive development. Recall that the phonological and syntactic data reviewed above indicate that learning a natural language requires the ability to perform mental operations involving purely abstract entities and relations, and that this ability must evidently be available to children at the beginning of the acquisitional process, viz. as early as approximately age 1.4-1.8. Even though during the earliest stages of language development children may produce fairly simple sentence structures, it is clear that the basic mechanisms of speech recognition and processing have to be available. Note, however, that Piaget's theory suggests that formal operations, i.e. those that relate exclusively to abstract and hypothetical entities/relations, are not available to the child before approximately age 9-12. It appears that the obvious discrepancy between what Piaget's theory has to say about the developmental status of abstract operations and what kind of cognitive capacities are required for language acquisition is hardly compatible with the view that the human mind is made up of a uniform system of cognitive structures which operate in any task - including language acquisition - that the organism may possibly encounter. We may, however, solve this discrepancy by assuming that the mental operations necessary for language processing/acquisition build on cognitive structures crucially different from those that Piaget's theory attempts to specify. If the cognitive structures that provide the necessary basis for language acquisition were the same as those that Piaget deals with in his experiments, then language acquisition could not occur before age 9-12. These considerations may be taken to suggest that the human mind is equipped with (at least) two distinct cognitive systems which - among other things - also appear to develop in different ways. One of these cognitive systems, which provides the basis for the ability to perform mental operations on purely abstract entities and relations as early as age 2, is language-specific in the sense that it is activated only for the purpose of language acquisition (and possibly use)(9). The other cognitive system, which makes abstract-formal operations available very much later, is invoked in the general area of problem-solving. This is a system which enables humans to gain knowledge in a wide variety of very different
fields - from solving crossword puzzles to making scientific discoveries. It is beyond the scope of this chapter to examine the question of whether or not the general cognitive structures of problem-solving can be further subdivided according to the domains which they cover; however, it does not seem unreasonable to assume that there are at least some common principles through which man approaches a large variety of problem-solving tasks. I would suspect, however, that the language-specific and the problem-solving cognitive systems are not the only ones that control man's mental activities. There are other domains for which man seems to be specifically equipped (visual perception would be a likely candidate), so that a distinct system of cognitive structures may be assumed for these domains as well. I will not further pursue this issue here; for details see Bower (1974) and Felix (1982). The idea that the child's language learning process and his cognitive development may proceed in independent ways and can thus be viewed as deriving from different, though interacting, cognitive systems (modules), is, of course, not new, but has been variously proposed both in the linguistic and psycholinguistic literature. Thus Keil (1980) presented 48 ambiguous sentences and 24 ambiguous pictures to 18 children ranging between 6 and 14 years of age. He found that the children's ability to comprehend linguistic ambiguities did not correlate with their skill in perceiving pictorial ambiguities. Moreover, the general developmental course in both modes showed some significant differences. In the domain of sentence ambiguities the children showed a rapid increase in ability in a relatively short period of time; in contrast, with pictures the increase was much more gradual and less dramatic. Keil (p. 219) concludes that 'at least some nontrivial component of the linguistic ambiguity detector was task-specific to language'. Roth (1984) ran a series of experiments in which children between 3.6 and 4.6 years old were systematically taught linguistic structures still beyond their developmental grasp. The results of these experiments suggest to Roth that 'the psychological operations required for language learning (and language processing) are, at least in part, qualitatively different from those required for other kinds of conceptual learning' (p. 101). Similar views are expressed in Beilin & Kagan (1969) and Nelson (1977). A somewhat different perspective is pursued by researchers who propose distinctions with respect to how mental operations function. Thus, in the domain of second language acquisition, Krashen (1977, 1978) distinguishes between 'acquisition' and 'learning' where the former is an essentially unconscious process guided by natural principles while the latter refers to conscious mental operations. In a series of studies Reber (1976), Reber & Lewis (1977) and Reber & Allen (1978) propose a distinction between 'implicit learning' and 'analogic learning'.
Implicit learning 'has, at its core, a hypothesized abstraction process, a nonconscious, nonrational, automatic process whereby the structural nature of the stimulus environment is mapped into the mind of the attentive subject' (Reber & Allen 1978:191). Implicit learning thus results in tacit knowledge, i.e. mental representations of the rule system underlying the stimulus array. In contrast, 'analogic processes are basically decision-making processes which are encouraged by a particular kind of memorial space' (op.cit.:193). That is, in analogic learning individual items are stored in memory and new stimuli are interpreted in analogy to those memorially represented. Note that the distinctions proposed by Krashen and by Reber and his co-workers relate to the operational mode of cognitive processes (conscious vs. non-conscious; abstractional vs. analogic), while the ideas to be developed in this chapter (as well as those expressed by Keil and Roth) concern the issue of task-specificity of cognitive structures/systems. In the preceding paragraphs I have argued that there are two separate (though obviously interacting) cognitive systems each being responsible for a specific domain, viz. language vs. general problem-solving. I will have very little to say about how these systems will operate in particular instances, though, evidently, this is an interesting question. Note that these two issues are logically independent and thus should not be confused. It is conceivable both that one and the same system can operate differently under different conditions, and that two distinct systems may operate in an identical fashion. It should be clear that what I have argued for comes closest to the ideas developed by Chomsky (1975, 1980, 1981) and Fodor (1975, 1981, 1983), viz. to what is sometimes referred to as the 'modular approach'. The basic idea is that the human mind is organized in the form of a finite number of independent cognitive modules each having its own specific properties. It is generally held that a person's (tacit) grammatical knowledge represents one such module, while a different module contains what may be called 'conceptual knowledge'. Within the present context it does not seem necessary to review the evidence that led Chomsky and Fodor to view the human mind as an essentially modular structure. Suffice it to say that this view receives additional support from the fact that the ability to perform formal-abstract operations seems to develop differently in different domains, i.e. early in language acquisition and late in general problem-solving. For ease of reference I will henceforth refer to the system of language-specific cognitive structures as the LS-system (roughly comparable to the Chomskian notion of 'Universal Grammar') and to the Piagetian type of problem-solving cognitive structures as the PS-system.
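For readers who find a schematic summary helpful, the following sketch restates the distinction in the form of a small data structure. The attribute names and the exact age figures are illustrative glosses of the discussion above, not part of the original argument.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CognitiveSystem:
    name: str
    domain: str                  # the type of information the system handles
    formal_operations_from: str  # approximate point at which formal-abstract operations become available

# Language-specific system, roughly comparable to Universal Grammar
LS = CognitiveSystem(
    name="LS-system",
    domain="formal properties of natural language",
    formal_operations_from="onset of language acquisition (roughly age 2)",
)

# Piagetian problem-solving system
PS = CognitiveSystem(
    name="PS-system",
    domain="general problem-solving, across virtually any task",
    formal_operations_from="Piaget's Stage IV (roughly age 9-12)",
)
```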
4. COGNITIVE COMPETITORS
If the preceding considerations are correct, in essence, we may be led to assume that, prior to age 9 to 11, the child can perform formal operations, i.e. those that deal exclusively with abstract entities and relations, only in the domain of language (and possibly some other specific domains, such as visual perception; see e.g. Bower 1974, Chomsky 1975). That is, in the domain of language processing the young child's mental capacities are far more powerful than those that operate in general problem-solving tasks. At e.g. age 2 to 5 the child can handle problem-solving tasks only through cognitive capacities that are limited to sensorimotor and later pre-operational procedures, while at the same time language acquisition (and use) are governed by highly abstract operations which, in the domain of general problem-solving, do not emerge until many years later. I would now like to explore some of the consequences of this view, in particular the question of what happens when the child reaches the Piagetian Stage IV in his general intellectual development. Whereas, prior to Stage IV, formal operations could only occur in mental processes involving language data(10), the child is now able to perform formal operations in general problem-solving domains as well. In other words, the child may now rely on two different cognitive systems to deal with purely abstract information: a) the system of language-specific cognitive structures in which formal operations have been available for more than a decade, and b) the system of the problem-solving cognitive structures in which formal operations have just started to emerge. If it is true that, after reaching Stage IV of his cognitive development, the child/adolescent may make use of two distinct cognitive systems to perform operations on abstract information, a number of interesting questions arise:

a) is it possible to rely on either the LS-system or the PS-system in order to process language data?
b) is language processing through the PS-system - provided that this is possible - as adequate and effective as language processing through the LS-system?
c) can the child/adult freely and consciously choose between the two cognitive systems to process language data for the purpose of acquisition?
A linguist analyzing an unknown language on the basis of oral or written material, a student attempting to write a grammar for a given set of data, or a language teacher pondering over the most effective way to present language data and rules to his students, are typical cases: they appear to process language data with essentially the same system of cognitive potentials that they utilize when trying to gain knowledge in any other intellectual domain. What they do is solve a problem which 'happens' to involve language material and they appear to approach this task in basically the same manner in which a mathematician approaches a mathematical or an engineer a technical problem. This type of 'intellectual' processing has very little, if anything, to do with either language acquisition or language use, i.e. the linguist who analyzes a given language does not (expect to) simultaneously and automatically achieve creative competence in that language. Similarly, various types of IQ tests involving language material also appear to tap the PS- rather than the LS-system. After all, these tests are designed to determine the intellectual capacities of a subject, not his knowledge of the language. I would tend to think that the PS-system is also partly involved in language processing for the purpose of language use. Certain aspects of discourse planning may be taken as an example. Problems of how to address certain people, how to be verbally polite, or how to increase the force of an argument will presumably be solved by tapping the same kinds of mental potentials that operate in other problem-solving domains. That is, any type of metalinguistic activity in which thinking about language rather than using it is the primary focus draws upon the PS-system, just as thinking about any other subject matter does. It is, of course, well-known that even very young children are able to plan discourse (e.g. Karmiloff-Smith 1979), to observe different registers of politeness (e.g. Bates 1979), and to develop a certain 'awareness' of their own linguistic abilities (see Clark 1978). This seems to indicate that even young children do use their PS-system for metalinguistic activities. However, the crucial point is that as long as the PS-system does not include the ability to perform formal-abstract operations, it cannot be utilized in those linguistic problem-solving tasks that focus on the strictly formal properties of natural languages. This seems to explain why it is obviously possible to teach 6- or 7-year-olds to avoid 'bad' language, to observe different speech styles, etc., whereas attempts to teach this age group a second language in a traditional foreign-language-classroom setting have proved to be uniformly unsuccessful. Traditional foreign language teaching relies heavily on the students' general problem-solving capacities, while at the same time focusing on the formal properties of the language to be learned. If it is true that the acquisition of the formal properties of language requires purely abstract
mental operations, then one would expect PS-system-oriented foreign-language teaching to be largely unsuccessful as long as the PS-system does not make available this type of abstract mental processing. This is, in fact, what seems to happen. Furthermore, there is some evidence to suggest that language learning after puberty frequently proceeds in a way that seems to involve two distinct and independent types of mental activity. In a number of studies Felix (1982), Felix & Simmet (1982) and Hahn (1982) found that in certain structural domains German high school students learning English as a second language under classroom conditions approach the learning task in much the same way as naturalistic acquirers. A substantial number of systematic errors found in the students' English utterances showed the same structural properties that occur at various developmental stages in naturalistic second language acquisition. That is, the students seemed to form and test hypotheses about the structure of the target language in much the same way as naturalistic learners (for similar results, see also Lightbown 1980, Dietrich 1982). Under the assumption that language learning proceeds on the basis of a uniform system of general learning strategies this situation must appear somewhat paradoxical. Note that in naturalistic (L2) acquisition the hypothesis-testing procedure reflects the learner's attempt to gain knowledge about the structural properties/rules of the target language in a given domain. To be more precise, the learner forms (initially incorrect) hypotheses about the structure of the language to be acquired, simply because he does not know what these structures are. Under classroom conditions, however, things are different. From the very beginning, the foreign language student knows perfectly well what the structural properties are in a particular domain, because the teacher has explained the relevant rules with enormous care and in great detail. Why, then, should the student form and test hypotheses to gain knowledge about something which he already knows? It appears to me that the idea of two distinct cognitive systems provides a fairly obvious answer to this question. Quite clearly, what the teacher's explicit explanations appeal to is the student's PS-system, viz. the teacher explains the language material in much the same way he would presumably explain any other problem. The student thus knows the relevant structural properties in the sense that this information is internalized and represented in the PS-system. However, it appears to be the case that PS-system-internal knowledge is not automatically transferable or available to the LS-system (for a discussion of the relevant evidence from a language-teaching perspective see Hahn (1982)). The student's hypothesis-testing procedure thus seems to serve the purpose of internalizing essentially the same kind of knowledge that is
already stored in the PS-system into the LS-system. Thus the assumption of two distinct cognitive systems, each with its own operational properties and domain of application, may be taken to explain why students under classroom conditions acquire the same knowledge twice, once through the teacher's explanations and once through their own mode of discovery. While, in principle, the PS-system seems to be capable of processing language data in various tasks, the question of whether or not PS-processing is as effective and adequate as LS-processing is a much more serious issue. In the absence of clear experimental evidence we have to rely on more general observations which are admittedly quite speculative. There are, I believe, both logical and empirical reasons for assuming that the PS-system is much less effective than the LS-system - if not almost totally inadequate - in handling the significant formal properties of language. Let us first look at the logical reasons. Linguistic research during the past 20 years has determined a wide range of highly abstract properties in natural language which make it extremely implausible to assume that the child could ever discover these properties through multi-purpose inductive mechanisms (see Chomsky 1975, Wexler & Culicover 1980, Lightfoot 1982, Wanner & Gleitman 1983). In the face of increasing evidence on the highly abstract nature of natural grammars it seems quite reasonable to assume that the human mind is equipped with a special 'language faculty', i.e. a biologically determined independent cognitive system whose properties constrain the range of hypotheses that the child will entertain in his search for the adult grammar(11). This independent cognitive system, technically known as Universal Grammar, is viewed as containing a finite set of abstract principles which guide, and are a prerequisite for, the acquisition of language (for details see Chomsky 1981, White 1982). The notion of an LS-system developed in this chapter is roughly equivalent to the idea of Universal Grammar. If it is true, however, that there is an independent cognitive system which contains, as its crucial characteristics, a set of language-specific principles, it must be true - virtually by definition - that any other cognitive system, in particular the PS-system, does not contain these principles, because otherwise they would not be two different systems. If furthermore these principles are an indispensable condition for acquiring the kind of grammatical knowledge that every adult possesses, then it follows that any cognitive system other than the LS-system will be less adequate for language acquisition by virtue of not containing these principles. In other words, it is simply the logical consequence of the assumption of an LS-system that the PS-system is less efficient and less appropriate for handling language data.
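The deductive step in the preceding paragraph can be displayed schematically. The notation is mine, not the author's; let P stand for the set of language-specific principles.

```latex
\begin{tabular}{ll}
(i)   & $\mathcal{P} \subseteq \mathrm{LS}$ \quad (the principles are constitutive of the LS-system) \\
(ii)  & full acquisition of adult grammatical knowledge requires access to $\mathcal{P}$ \\
(iii) & $\mathcal{P} \cap \mathrm{PS} = \emptyset$ \quad (otherwise LS and PS would not be distinct systems) \\
      & hence, the PS-system on its own cannot support full acquisition. \\
\end{tabular}
```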
Let us now turn to some of the empirical reasons. Here again, linguistic analysis may be a case in point. The linguist who analyzes a given set of data and, in doing so, relies primarily on his PS-system, may gain knowledge about the language, but not knowledge of the language (in the sense defined by Chomsky 1965, 1980); i.e. he will be able to explicitly state rules and regularities, to consciously manipulate structures, and to explicate structural relations; however, this ability as well as its underlying knowledge obviously do not lead to the acquisition of the respective language in the sense commonly associated with this term. In much the same way, nobody would seriously expect to gain command of a language by simply studying all the rules and structures as given in a traditional grammar or textbook. The reason for this seems to be fairly uncontroversial. The linguistic knowledge generated by the PS-system seems to be by and large independent of the knowledge generated by the LS-system under standard acquisitional conditions. Explicit linguistic knowledge does not automatically lead to, nor is it equivalent to, the type of tacit knowledge that provides the basis for normal language use. Conversely, a person's tacit knowledge of his language does not entail the ability to state this knowledge explicitly; furthermore, it appears that in children explicit knowledge develops much later than tacit knowledge (see Pratt et al. (1984) and references cited therein). These considerations suggest that while the PS-system may generate some linguistic knowledge, it does not lead to the type of complex knowledge required for the creative use of language. It thus seems that the PS-system is simply not powerful and sophisticated enough to deal with the enormous complexity and abstractness of natural languages. The same seems to hold true in the foreign-language classroom. Consider e.g. the typical foreign-language student who has gone through an intensive language course, has memorized all the relevant rules and structures, has scored high on all written tests, but proves to be totally unable to produce more than a few elementary routines in spontaneous speech (see Felix (1982) for examples). Traditional foreign-language teaching appeals primarily to the student's problem-solving abilities, to conscious learning efforts, to explicit structure manipulations and detailed rule-explications. In accordance with such teaching principles foreign-language students approach language learning in much the same way they would approach any other problem-solving task. They are able to manipulate a given set of structures, but their tacit knowledge lags behind what they have been taught. Note, however, that similar discrepancies do not, in general, occur with naturalistic child learners who rely on their LS-system for acquiring language. I am not aware of any evidence suggesting that child learners' explicit knowledge about their language is significantly greater than their tacit knowledge of the
language (in Chomsky's sense). Rather, the opposite seems to be the case (see Pratt et al. 1984, Karmiloff-Smith 1979). Thus it seems that only the LS-system leads to the type of linguistic knowledge that is required for successful and effective language use. If it is true that the PS-system is a by and large ineffective and inadequate tool for gaining full command of the formal properties of a (second) language, one may reasonably ask whether or not a learner is free to deliberately choose between utilizing the PS-system or the LS-system. Within the context of the problem under discussion this question is, of course, irrelevant for learners below the age of puberty or rather Stage IV of Piaget's classification(12). First, there seems to be strong evidence to suggest that the LS-system is also operating in adult (= post-puberty) learners. Various studies (e.g. Bailey & Madden & Krashen 1974, Schumann 1975b, Felix 1982) have shown that, in many respects, the developmental course in adult language learning resembles the one observed in child learners. Typical developmental sequences, interlanguage structures and acquisition orders occur both with adults and children. To the extent that these developmental features reflect the operation of the LS-system, one may conclude that it is apparently impossible to 'switch off' this system, i.e. whenever a learner is exposed to L2 data, the LS-system begins to operate. There are some cases, though relatively rare, in which the LS-system appears to participate only to a minimal extent. Consider the case of traditional language instruction in the so-called 'dead' languages such as Latin, Old Greek, Sanskrit, etc. These languages are typically taught by acquainting the student first with all the grammatical rules and formal linguistic devices; language material is consistently presented in written rather than in oral form; comprehension operates via translation preceded by a conscious effort to decompose and analyze sentence structures, and resembles very much a formal linguistic analysis. Instruction in these languages seems to be a particularly clear case in which only the PS-system is tapped, resulting in explicit knowledge about rules and structures without any ensuing communicative ability or tacit linguistic knowledge. Note, however, that even in this case the operation of the LS-system seems to be marginally involved. People who have studied, say, Latin for many, many years start to develop a 'feel' for the language which is obviously not the result of deliberate efforts, but rather emerges quite naturally and without any specific intention. It thus seems that the LS-system will always - even if only marginally - participate in language processing and lead to at least some degree of acquisition. If it is true that the LS-system can never be fully de-activated in adult language acquisition, we may alternatively ask whether or not it is
possible to 'switch off' the PS-system. There are a number of reasons which suggest that such a possibility does not exist. Recall that even young children use the PS-system in the course of language development as is evident from the numerous studies on metalinguistic awareness, discourse planning, and the like. The crucial difference between post-puberty learners and pre-puberty learners is not that the former use the PS-system for language acquisition, while the latter do not; rather, the PS-system of pre- vs. post-puberty learners differs with respect to the availability of formal-abstract operations and thus with respect to what kind of properties of natural languages can and will be handled by this system. It is not until Stage IV that the abstract-formal properties of language can be accessed by the PS-system. If, however, the PS-system has its natural share throughout the language acquisition process, there is no reason to expect that this system should suddenly cease to operate once it has become more powerful. Consequently, we may conclude that the PS-system - just like the LS-system - will always operate; however, it will operate on the formal properties of language only after Stage IV. It appears furthermore that adult learners show a number of very specific learning strategies which seem to follow from the operation of the PS-system. Adults typically tend to approach language learning in a problem-solving manner (see also Krashen 1975). They are inclined to view language learning as an intellectual challenge, in much the same way they view any other problem they are required to solve. Adults typically ask for explanations and attempt to rationalize on the rules and principles underlying language use; and they tend to consciously apply the rules they have learned. It does not seem possible for an adult to merely get absorbed in a new language environment without rationalizing on the structural properties of the language to be learned. The above considerations suggest that child (= pre-puberty) learners and adult (= post-puberty) learners differ, among other things, in one crucial aspect. Whereas the child relies exclusively on the LS-system to acquire the formal-abstract properties of language, because this system is the only one that provides the necessary formal operations, the adult uses both the LS-system and the PS-system for this purpose, as both systems allow him, in principle, to handle abstract information through formal operations. It is reasonable to assume that the relative extent to which the LS-system and the PS-system participate in the learning process may vary across individuals and across learning situations. However, it seems that neither one of the two cognitive systems can be totally excluded from the learning process. The adult, then, has available two types of cognitive systems which, in a sense, compete in processing language data for the purpose of acquisition. Both systems operate actively and both try to do the job. However, for reasons outlined above
the PS-system is much less effective than the LS-system in dealing adequately with highly abstract language information. In fact, it seems that beyond a certain elementary level of acquisition the PS-system ceases to be useful at all. The formal complexity of natural languages and the abstract nature of their governing principles by far exceed the structural capacities of the PS-system. Consequently, we may assume that adults typically fail to achieve full native-speaker competence because they partially use the 'wrong' cognitive system. To be more precise, adults cannot suppress the operation of the PS-system in areas for which it is inadequate. The competition between the LS- and the PS-system in the acquisition of formal language properties thus seems to be the reason why adults are inferior to children with respect to how successfully they can achieve a native-like command of a second language. The notion of competing cognitive systems thus suggests that changes in the cognitive, rather than the physiological, system of the adolescent are responsible for differences between adult and child language learning. A native-like command of a second language will be more difficult to achieve as soon as the child reaches the stage of formal operations in his general intellectual development. The PS-system will automatically and irrevocably operate on those aspects of the input data for which it is not sufficiently equipped and this will eventually prevent total success of the type achieved by children. The notion of competing cognitive systems implies that the more the PS-system controls the learning process at the expense of the LS-system, the smaller the chances will be for the learner to achieve a native-like command of the second language. Conversely, the more the LS-system determines the course of development, the more likely it is that the learner will reach his goal. There are further indications that this is, in fact, true. Formal language instruction typically taps potentials of the PS-system and it has notoriously failed to produce the kind of learning success that it aims at. The notion of competing cognitive systems explains why the traditional kind of language instruction is bound to fail. Teachers are trained to make every effort to suppress operations of the LS-system because this would regularly and quite naturally produce (developmental) errors which, from the perspective of behavioristic learning theories, are seen as the major obstacle to learning success. Teachers will thus strongly encourage the student to activate as much of his PS-potential as possible in dealing with the learning task. In contrast, naturalistic language environments seem to be more conducive to strengthening operations of the LS-system than are formal classroom settings. In fact, it is commonly recognized that when a student exposed to formal teaching for a number of years is placed into a naturalistic language situation for a certain period of time (e.g. spending his vacation in the L2 country) he will, in
most cases, significantly improve his command of the language. One might surmise that this is the case because, in such a situation, the LS-system will be given a fair chance to operate, while this is generally not the case under classroom conditions.
5. SOME IMPLICATIONS
Let us now discuss some implications of the notion of competing cognitive systems. Much recent discussion in second language acquisition research has been concerned with the problem of how certain environmental and personality factors, such as motivation, language aptitude, affective and attitudinal factors, social status, educational background, cultural values, etc. may influence the language learning process. It is generally recognized that L2 learners, in particular adults, exhibit a great deal of individual variation (cf. Meisel & Clahsen & Pienemann 1981) with respect to their overall second language achievements and it is sometimes claimed that some learners' L2 competence may become 'fossilized' (Selinker 1972) at various stages of the developmental process. Although it is far from clear how exactly these environmental and personality factors control the learning process, in terms of how they relate to the underlying principles of language development, several models have been proposed (Schumann 1975a-b, Dulay & Burt 1977, Krashen 1978, Dittmar 1980, Seliger 1980) which attempt to place the role of environmental factors into a proper perspective. Most of the models conceive of such factors as a kind of filter mechanism, which determines how much of the input material will be processed and how effectively this will be done. It seems to me that the question of how environmental and personality factors influence language learning cannot be adequately dealt with if only second language learning is under focus. Rather, the issue has to be approached from a broader perspective. If language development follows certain principles which derive from specific properties of the human mind, it is clear that the precise nature of these principles can best be determined if different learning conditions are examined. The crucial question is thus not just how such factors influence L2 learning, but rather why they have a potentially negative effect on second language achievement, but appear to be largely irrelevant when it comes to success in first language acquisition. Note that this difference between L1 and L2 learning success cannot be adequately explained in terms of presence vs. absence of the factors in question. L1 learners, too, may have different personalities and may be raised in different social, cultural, economic
and familial environments; yet - ignoring pathological cases - they always succeed in gaining full native command of their mother tongue. It seems then that these environmental factors are simply immaterial to success in L1 learning. It has frequently been argued by language teachers (Burgschmidt & Götz 1974, Solmecke 1976) that L1 learners are successful learners because they are well-motivated and grow up in an environment which is particularly conducive to language learning. It seems to me that this view is highly questionable. First of all, it should be emphasized that, as far as we know, the child learns his first language out of biological necessity, and not because he is properly motivated. The child does not have any choice as to whether or not he wants to learn his mother tongue; he simply has to. If - as is frequently stated - the need to communicate constituted the primary motivating force, the child could easily get by with much less than he actually achieves. If it were only for communication, there would be no need to master the complex verb inflections of e.g. German and French; the infinitive could be used just as effectively. Lenneberg (1967) observed that congenitally deaf children develop a highly effective system of gestures to communicate among themselves. Thus, communicative needs can be properly met through systems other than natural language. And yet, in addition to their gestural language, these deaf children learned their mother tongue just as any other normal infant would. Furthermore, it is difficult to see any justification for the assumption that the young child's environment is particularly conducive to the language learning task. It seems to me that, in view of what is presently known about first language development, such a claim is not very plausible. Not only is it true that the child's environment does, in fact, very little to systematically help him with his language learning task (particularly if compared to the enormous teaching efforts in a foreign language classroom), but it should also be recognized that language makes up only a small part of all the various skills and abilities that the child has to master during his early years (see Wanner & Gleitman 1983). What is necessary, then, is to explain why certain factors crucially affect L2, but do not, in the same way, affect L1 development. If this question can be satisfactorily answered, then the problem of how these factors interfere with L2 learning can be examined with much more precision. I believe that the notion of competing cognitive systems offers some hope of specifying the relationship between environmental factors and language learning in a much more adequate manner than has been possible in the past. If first language acquisition and adult second language learning differ - among other things - with respect to the effect of personality and
environmental factors on the learning process, then this difference is matched by a similar difference at the cognitive level, namely a difference in the types of cognitive systems involved in child and adult language learning. As pointed out in the preceding sections, the child learner, in particular the L1 learner, relies exclusively on his LS-system to master the abstract properties of the target language, because this is the only system that permits the types of formal operations necessary for language acquisition. Note that the LS-system is largely immune to the influence of environmental and personality factors. This immunity explains why any child in any culture or society under any (non-pathological) condition succeeds in mastering his first language and why child L2 learners - provided they are given proper exposure - are, as a rule, successful learners. In contrast, adult L2 learners process language data on the basis of both the LS- and PS-systems. It appears then that it is above all the PS-system which is highly sensitive to the influence of environmental factors. The assumption that environmental factors strongly affect the operation of the PS-system can be justified on independent grounds. Such factors do not only affect (PS-system-based) language processing; rather, any problem-solving activity is controlled by, and subject to, environmental and personality effects. Driving a car, solving a math problem, finding one's way in an unknown city, passing a job interview may be successful or unsuccessful depending on a near-infinite set of personality and/or environmental factors. The crucial question is then: why is it that the PS-system is sensitive to the influence of environmental and personality factors in a way that the LS-system is not? While the following considerations are highly speculative and tentative, I believe that it is a fair guess that this difference may have something to do with the range of applicability that the PS- and the LS-system can be assumed to have. First, it is fairly clear that any (type of) cognitive system must have a limited capacity, i.e. the amount of information it can process either at a time or within a certain period of time cannot be infinite. If this is so, then a cognitive system or subsystem must incorporate some kind of filter mechanism which prevents an overloading and thus a breakdown of the system in a particular domain. Such a filter mechanism must be responsible for either admitting or rejecting input information to be processed by the relevant cognitive system. Note that the LS-system has a fairly narrow range of applicability in terms of the type of information that will be processed at all. Only language-related information (or more precisely information relating to the formal properties of language) will be admitted to this system, while any type of information that does not relate to language data is rejected. Note, however, that, evidently, the capacity of the LS-system is sufficiently
large to process formal language data of any type and any degree of complexity. In contrast, the PS-system has a comparatively wide range of applicability in terms of the types of information that can potentially be processed. In fact, the range of applicability appears to be virtually unlimited. Any type of information that relates to problem-solving activities will, in principle, be processed by the PS-system; and it is clear that there is an almost infinite variety of problem-solving tasks that an individual may be confronted with. Consequently, if the PS-system were disposed to automatically process any type of information that relates to, or involves, problem-solving activities, then the processing capacity of this system would be more or less immediately exhausted. In view of these facts one might assume that there has to be a filter mechanism which can either reject or admit problem-related information. Note, however, that the criteria for rejecting input material cannot have to do with the categorial type of information to be processed; i.e. there is no well-defined subset of problem-solving-related information that is consistently admitted at the expense of some other well-defined subset which is consistently rejected. By definition, the PS-system will admit any type of information relating to problem-solving activities. Consequently, the filter mechanism has to reduce the overall quantity of input material without paying attention to categorial types of information. This is where one may assume that environmental and personality factors enter the picture. These factors can be regarded as constituting an appropriate filter mechanism to lower the processing load of the PS-system. Under such a view, environmental factors will reduce the total amount of problem-solving-related information to be admitted to the PS-system. Let us see how this may work in detail. First, it appears that such a filter mechanism may operate both consciously and unconsciously. An individual confronted with a large number of different problem-solving tasks can consciously choose between those problems which he will attempt to solve and those which he will not deal with. That is, he can make a conscious selection depending on his beliefs, attitudes, mental states, etc., thereby reducing the processing load of the PS-system. In other cases, selection occurs more or less unconsciously. Poor motivation for a certain set of problems may lead an individual to ignore these and to deal with other problems for which he is better motivated. Educational background or social status may cause someone to consider certain problem-solving tasks as 'too difficult' and thus not to deal with them. Lack of concentration seems to be another filter mechanism which reduces the ability to successfully master problem-solving tasks. There are numerous other factors including such apparently trivial phenomena as unhappy family life, insufficient nutrition,
tion, stress, nervousness, etc., which lower the information input to the PS-system. In any case, environmental and/or personality factors help to prevent an overloading of the processing capacity of the PS-system. They prevent the total breakdown of this system and they guarantee that problem-solving tasks can be successfully mastered at all. If these ideas are correct, in principle, then it is fairly obvious why the PS-system is sensitive to external factors and why the LS-system is not. Since task-specific cognitive systems are constrained by the type of information they will admit, they do not need the same type of externally controlled filter mechanism that is apparently necessary for the appropriate functioning of cognitive systems which cover a wide variety of different tasks. Consider another likely candidate for a task-specific cognitive subsystem: visual perception. Personality and environmental factors do not, in general, influence the way in which we perceive space, distance or objects. There is no indication that some people may be e.g. better motivated for visual perception and will thus fare better on a perception task than those who are poorly motivated. In general, our mental representation of visual space is organized in the same way for everyone. In contrast, the ability to solve crossword puzzles, for which we presumably do not have an independent 'faculty', is heavily influenced by a large variety of external factors. One of the crucial questions is, of course, how the various notions and distinctions introduced in the preceding sections are related to each other: what is the exact relationship between the LS- and the PS-system; how do the different filter mechanisms influence acquisition processes; how is the output affected by the competing cognitive systems? While it is clearly premature to provide a fully satisfactory answer to these questions, the figure on page 168 is an attempt to outline some possible relations between components involved in language acquisition. The crucial claim expressed in this figure is that there are two cognitive organizers (cf. Dulay & Burt (1977) who claim that there is only one): a language-specific cognitive system and a problem-solving cognitive system. In both cases, the input data pass a filter mechanism before they reach the actual processing unit. That is, I am assuming that there are factors which will reduce the total amount of information that is actually admitted to the two cognitive organizers (PS-system and LS-system). Crucially, these factors are not part of the cognitive organizers themselves, but rather constitute an independent unit; i.e. the filter does not change the way in which language data are processed, but rather determines how much of the actual data will reach the cognitive organizers. As indicated before, the PS-filter is taken to be the sum of all those factors and variables which are commonly believed to affect success in
second language learning, i.e. motivation, attitude, affective factors, learning conditions, etc.

[Figure, p. 168: Cognitive Processing in Language Acquisition]

It is not quite so clear whether or not we also have to assume the existence of an independent filter mechanism attached to the LS-system (the respective box is therefore drawn in dotted lines). There is, however, some indication that in child language acquisition the amount of input data admitted to the LS-system may be reduced by factors of a different nature from those that operate in the PS-filter. Dulay & Burt (personal communication) report that child learners process primarily the speech of their peers rather than adult utterances even though they are exposed to both. In a similar way, learners tend to focus on speech that is directly addressed to them, largely ignoring what is being spoken among others. The figure indicates that one of the PS-filter outputs is directed towards the original input. There are two independent reasons for assuming such a connection. First, if the PS-filter did not affect the original input, then this would imply that the less input passes through
the PS-filter, the more will pass through the LS-filter; i.e. a learner with little motivation and poor education who is affectively ill-disposed will be able to suppress the operation of the PS-system and will, through comparatively strong reliance on the LS-system, (successfully) learn the target language in the same way an L1 learner would: such a view is obviously incorrect. Second, the back-arrow from the PS-filter to the input suggests that environmental and personality factors will not so much influence the way in which language data are processed, but will rather reduce the total amount of input eventually admitted to the system. I am not aware of any study which convincingly shows that environmental and personality factors do, in fact, change the basic manner in which language data are processed. Under adverse conditions, learners may make very slow progress, but they show essentially the same kinds of transitional errors and acquisition orders as quick learners. Studies such as Schumann (1975a), Fathman (1975) and the Heidelberg Project indicate that environmental factors may reduce the learner's opportunity or willingness to be exposed to the second language. Such learners will have little contact with L2 native speakers, and when they do have to communicate with L2 speakers they can do so with a limited repertoire of structures. However, those structures which they do master develop in much the same way as in more successful learners. Another crucial claim is that the L2 learner's L1 knowledge is not part of the PS-filter, but rather constitutes an independent unit. The justification for this assumption is that, in contrast to environmental and personality factors, L1 knowledge may directly influence the learning process itself in three distinctive ways (see also Wode 1981). First, it may directly affect the processing in the LS-system. Note that L1 and L2 learning may differ both in the kinds of transitional errors and in the acquisition orders that occur. Even though both L1 and L2 learning are subject to developmental sequences and thus qualify as a creative construction process, L1 and L2 developmental sequences need not be exactly the same. Dulay & Burt (1974a-c) found that the morpheme orders of L2 learners are not identical to those of L1 learners. Although all L2 learners, regardless of mother tongue and age, had the same orders, these differed significantly from the corresponding L1 orders. Felix (1976) showed that L2 learners make different types of transitional errors in the acquisition of question words from those of L1 learners. Felix (1978b) argued that L1 learners form their earliest utterances on the basis of conceptual rather than syntactic rules, while L2 learners typically omit such a 'presyntactic' stage. Gass (1980) and Ioup & Kruse (1977) found that L2 learners acquire certain relative clause constructions in a way different from L1 learners. In all cases it seems that the L2 learner can form more specific and more effective hypotheses about the target
language than the L1 learner. That is, the L2 learner uses his L1 knowledge as an additional source of information on which to base his hypothesis-testing. Consequently, L1 knowledge seems to be able to affect the way in which the LS-system operates. Second, L1 knowledge may directly affect operations of the PS-system. This occurs when language data are processed in a problem-solving manner with L1 structures as a source of comparison. The result will be that specific L1 structures are transferred onto L2 productions in a way that is usually called interference. Third, L1 knowledge may act in much the same way as the environmental factors of the PS-filter. Many adults who have lived in a foreign-language community for many years simply do not bother to learn the language because their L1 is for them a sufficient means of verbal communication. These adults are not unsuccessful learners in the sense that they have failed in their attempts to learn; rather, they do not even try. The LS-system and the PS-system constitute the basic processing unit. They are what Seliger (1980) calls 'innate strategies'. They constitute man's mental capacity as it relates to language and to general problem-solving. The output of the LS-system and the PS-system enters the cognitive output mixer, which determines the ratio between LS-processed and PS-processed linguistic knowledge. As mentioned earlier, individuals apparently differ with respect to how much input material will be processed by each of the cognitive systems. Some learners, especially if taught in a classroom setting, tend to approach language learning basically in a problem-solving manner, while others appear to proceed more by intuition and 'feeling'. The cognitive output mixer accounts for these individual differences. The problem with this unit is, of course, that very little can be said about its internal structure and functioning. Therefore, assuming such a unit should be regarded as highly tentative. The output-competence is the total result of all acquisition processes. Note that it does not refer to the actual utterances that the learner produces. Rather, it constitutes the learner's underlying knowledge of the language. Some of this knowledge will be tacit in the Chomskyan sense and can thus be traced back to processes in the LS-system. Some other types of knowledge, however, will be the outcome of problem-solving processes. This output serves as a basis for the production of actual utterances, i.e. output-performance. The relationship between competence and performance is admittedly very complex and in many respects far from clear. Consequently, the figure is fairly unspecific about this relationship. The primary motivation for emphasizing this distinction in the present context is that apparently at least two important units intervene between competence and performance (see also Meisel & Clahsen & Pienemann 1981): communication strategies and the monitor
(Krashen 1978). Note that in this figure the monitor relates to performance, not to acquisition; i.e. it is placed after the output-competence which specifies what has been acquired. It seems to me that Krashen's monitor model is, in fact, a model of performance, not one of acquisition. It explains and predicts the verbal behavior of a speaker who has only a limited command of a second language. The data which Krashen refers to in his papers in support of the monitor model relate to how the learner proceeds to form actual utterances. The monitor model assumes the existence of an underlying competence, but is not concerned with how this competence develops. The major difference between communication strategies and the monitor is one of form vs. content. The monitor is basically concerned with form; it provides the learner with techniques to make his utterances more correct in terms of the formal structures of the adult language. It checks LS-generated structures for possible errors and converts them into more correct utterances. In contrast, communication strategies are concerned with the message. They help to solve the problem of how certain communicative contents can be conveyed when the appropriate formal linguistic devices to encode the message are still missing. There seems to be a large number of possible communication strategies, and learners sometimes appear to be quite inventive in getting their messages through. Burmeister & Ufert (1980) report three different types of communication strategies in artificial settings (relapse to L1; relapse to archaic structures; structure avoidance), but obviously there may be many more.
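To make the architecture just sketched easier to keep track of, the following toy program traces the flow from input data through the two filters and cognitive organizers to the output mixer. It is a minimal sketch of my own, assuming invented names and parameters (Utterance, admission_rate, ls_weight, and so on); none of it is part of the model as such, and the numerical weights merely stand in for the environmental and individual factors discussed above.

from dataclasses import dataclass
from typing import List


@dataclass
class Utterance:
    words: List[str]          # a raw input datum
    language_related: bool    # does it bear on the formal properties of language?


def ps_filter(data: List[Utterance], admission_rate: float) -> List[Utterance]:
    # Environmental and personality factors modelled as a single admission rate:
    # they reduce how much input is admitted, not how it is processed.
    cutoff = int(len(data) * admission_rate)
    return data[:cutoff]


def ls_filter(data: List[Utterance]) -> List[Utterance]:
    # The LS-system is informationally encapsulated: it admits only
    # language-related input and places no further quantitative restriction.
    return [d for d in data if d.language_related]


def ls_system(data: List[Utterance]) -> List[str]:
    # Stand-in for grammar construction over the formal properties of the input.
    return ["LS-rule(" + " ".join(d.words) + ")" for d in data]


def ps_system(data: List[Utterance]) -> List[str]:
    # Stand-in for general problem-solving routines applied to language data.
    return ["PS-heuristic(" + " ".join(d.words) + ")" for d in data]


def output_mixer(ls_out: List[str], ps_out: List[str], ls_weight: float) -> List[str]:
    # Determines the ratio of LS- to PS-processed knowledge in the
    # output-competence; ls_weight stands in for individual differences.
    n_ls = round(len(ls_out) * ls_weight)
    n_ps = round(len(ps_out) * (1 - ls_weight))
    return ls_out[:n_ls] + ps_out[:n_ps]


if __name__ == "__main__":
    raw = [Utterance(["the", "man", "sleeps"], True),
           Utterance(["no", "good", "work"], True),
           Utterance(["where", "you", "go"], True),
           Utterance(["two", "plus", "two"], False)]

    def acquire(admission_rate: float, ls_weight: float) -> List[str]:
        admitted = ps_filter(raw, admission_rate)
        return output_mixer(ls_system(ls_filter(admitted)),
                            ps_system(admitted), ls_weight)

    print("child L1:", acquire(admission_rate=1.0, ls_weight=1.0))
    print("adult L2:", acquire(admission_rate=0.5, ls_weight=0.5))

Run as given, the sketch reproduces the qualitative contrast drawn in this chapter: with the PS-filter wide open and the mixer set entirely to the LS-system, the 'child' ends up with purely LS-processed knowledge, while the 'adult' ends up with a mixture of LS- and PS-processed knowledge built from a reduced portion of the input.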
6. SOME EMPIRICAL EVIDENCE
The Competition Model, as outlined so far, makes a number of claims and predictions about the way humans acquire a second language under different conditions. In this section I will explore some of these claims in more detail and will present empirical evidence from second language acquisition research which appears to support these claims. If it is true, as the Competition Model suggests, that adult L2 learners partially use higher-level general problem-solving routines in approaching the task of learning the formal properties of language, while children will exclusively rely on the LS-system, then we should expect to find specific structural properties in adult utterances which reflect the operation of the PS-system and which are absent from corresponding child utterances. Note first of all that the LS-system may be viewed as an input system in the sense of Fodor (1983), whereas the PS-system bears crucial resemblance to what Fodor calls the 'central processing unit'. If this analogy is correct, then utterance types reflecting the PS-system
should exhibit a number of properties typical of general problem-solving processes. In particular, such utterances should exhibit structural patterns which are not strictly syntactic but rather derive from the interaction of a wide variety of different information. Recall that Fodor argues that one of the principal features of input systems is that they are informationally encapsulated (i.e. constrained as to the type of information they accept), whereas the CPU will, in principle, accept any available type of information. In view of these considerations we should expect child utterances to be much more syntactically constrained, whereas adult utterances should be less syntactically constrained to the benefit of influence exerted by non-syntactic information. I believe that this is, in fact, the case. A detailed examination of the relevant empirical data shows quite clearly that there are, in fact, many crucial differences between the types of utterances produced by adults and children. Within the present framework I will argue that these differences reflect the operation of the PS-system. In discussing the relationship between first and second language acquisition in Felix (1978b), I emphasized that, from the very beginning, child second language learners appear to strictly model their utterances on a small number of clearly identifiable syntactic patterns. The early utterances of L2 children are usually comparatively short and reflect very specific syntactic rules which, however, may be different from those of the adult language. Child L2 learners commonly start out with equational sentences and then proceed to acquire rules relating to patterns of the verb phrase or noun phrase. L2 children do not appear to simply put together words whose relationship to each other is of a purely conceptual nature. It rather seems that the search for specific syntactic rules characterizes much of the L2 children's early learning efforts. In contrast, if we look at some of the structures produced by adult L2 learners, we find numerous instances of utterances which do not exhibit any detectable syntactic patterns. Rather, such utterances seem to be - at least in part - syntactically unorganized sequences of words or chunks which are tied together on the basis of certain conceptual or communicative considerations. Where the meaning of such utterances is comprehensible at all, this is only because the general situational context allows the hearer to make successful guesses at what the speaker intends to say. For illustration, consider the following data taken from Schumann (1981):

(5)
Is one man drink too much? (There was one man who was drunk.)
(6)
Down this, down below this man you need put other man/men es, uh, smoke one pipe, no? (Down below this man you need to put the other man who is smoking a pipe.)
(7)
And for work, in Mexico, for hour for the, 70 or 80 dollar, 80,80 peso for 8 hour. (As for the work in Mexico you get 80 pesos for 8 hours.)
(8)
Family, oh, sister, yeah, brother in Paso, Texas, in Los Angeles sister, yeah. (As for my family, I have a sister and brother in El Paso, Texas and another sister in Los Angeles.)
(9)
This country, three year. (As for this country, I have been a painter here for three years.)
(10)
Fresno working for the crop. (As for Fresno, I worked on the crop there.)
(11)
2 day por week, 3 day por week, no long time, no good. (2 or 3 days a week is o.k., but more than this is no good.)
(12)
In my appartment, no good. Good, good work every day. (Staying in my appartment is no good. It's good to work every day.)
(13)
Pero, mi study everybody more time, no? The schedule, the school, everybody mas long, no? The engineer, the doctor, muy good, no? Very good for me. No money, no school. (School takes a long time. It would be very good to be a doctor or an engineer. But without money you can't go to school.)
Schumann classifies utterances such as (5)-(13) as 'paratactic' and 'juxtapositional'. It appears that these two terms capture some very precise intuitions about the structural make-up of the corresponding productions. Note first of all that the speakers of (5)-(13) are still at a fairly early stage of acquisition, as evidenced e.g. by sentence-initial no. In view of the fact that the speaker's acquisition process has barely begun, the utterances recorded by Schumann are fairly long in terms of the number of words that occur, and at the same time rather complex and sophisticated in terms of the message to be conveyed. However, it is
fairly obvious that the structural patterns in these utterances are very far from reflecting the syntactic rules that a native speaker would use in order to express a comparable message. In fact, it seems that, at least in some cases, the utterances lack any clear syntactic organization at all. Schumann's classification of these sentences as 'juxtapositional' or 'paratactic' seems to reflect the observation that at least some of the utterances consist of little more than a sequence of propositions which are merely placed next to each other in a syntactically largely unrelated manner. (9), e.g., represents a particularly suggestive case. The relationship between the two propositions this country and three year is totally unspecified in terms of any overt syntactic relation. Without reference to the particular situational and conversational context (9) can hardly be interpreted in any unique way. What the speaker really has in mind can only be inferred from knowledge about the general situational background. Much the same seems to be true for e.g. (12). The relation between good and work is syntactically unspecified and would, in fact, be interpreted incorrectly if adult syntactic rules were applied (namely as a modifier + head construction). In (13) the function of everybody does not seem to resemble anything that this word normally expresses in adult English. It seems that in (13) everybody functions as a kind of modality marker which indicates that the propositions me study more time, the schedule, the school mas long are not only true for the speaker, but more generally for many people. Schumann's data are by no means exceptional. In Pienemann's (1980) study on Spanish and Italian adult learners of German as a second language we find numerous instances of what appear to be sequences of juxtaposed propositions. Here are some of the more typical examples:

(14)
Für mich ganz gut Deutsche. 'For me pretty good Germans.' (The Germans are very nice to me.)
(15)
Is meine Vater arbeitet bis gearbeitet in Marke. 'Is my father works a little work in market.' (My father is working in a store.)
(16)
Ne, wenn da einer besoffen is, verheiratet ne. 'No, if there one drunk is, married no.' (If somebody drinks a lot and is married, that's no good.)
(17)
Schule da 18 Jahre du du du Arbeit, de Vater Mutter. 'School then 18 years you you you work the father mother.'
Note that many of the utterances are extremely difficult to interpret, even if contextual information is considered. Thus, (17) appears to be an exceptionally clear case of an utterance constructed simply by juxtaposing a number of words or propositions. Crucially, Pienemann found this type of utterance only in adults, but not in children. In Dittmar's (1982) study on the acquisition of German as a second language by Spanish, Italian, and Turkish adult learners we find utterances such as the following: (18)
as an answer to the question: 'how did you manage to come to Germany?' ich bundestag España schreiben: viel Kollege deutschland, fort emigrante, ich schreiben, alles kontrollieren kommission alemania . . . , spreche, heute arbeitslos . . . , ich Avila komme, doktor, alles blut, spreche doktor: gut, fort Alemania 'I senate España write: many colleague Germany, away emigrante, I write, everything control committee aleman . . . , speak, today unemployed . . . , I Avila come, doctor, everything blood, speak doctor: good, away Alemania.'
(19)
viel jähre du nicht eine Maschine, heute ja 'many year you not a machine, today yes'
(20)
sechsunddreißig krieg España, fertig, ich nicht mehr arbeit ... ich zehn jähre, ich completo, august, november zehn jähre, montag fort, vesper schule fort 'thirty-six war España, finished, I no more work, ... I ten years, I completo, August, November, ten years, Monday away, vesper school away'
(21)
sechsundzwanzig Kinder komme, sechsunddreißig zehn jähre 'twenty-six children come, thirty-six ten years'
There is very little doubt that the utterances recorded by Dittmar show many of the same properties that appear in Pienemann's data and Schumann's data. Individual propositions are merely juxtaposed without any overt syntactic organization and in many cases the speaker's intention remains rather obscure. What the speaker appears to be doing is building up a syntactically unstructured sequence of words and chunks. Of course, I am not suggesting that all - or even the majority of - adult utterances are of this type. There are numerous other utterances in which adults proceed in much the same way as children and produce utterances
with clearly detectable syntactic patterns. The crucial point, however, is that, apart from such syntactically patterned sentences, adults frequently produce utterances which consist of fairly long sequences of juxtaposed propositions, while child L2 learners apparently do not regularly come up with such utterances. If these observations are correct, in essence, then the Competition Model would suggest that the adult's semantically complex, but syntactically largely unstructured utterances reflect the operation of the PS-system, viz. they are generated by general problem-solving strategies rather than by a language-specific input system. If this is so, we may still want to be more specific as to how the properties of adult utterances as outlined above relate to general problem-solving processes. Let us first consider the problem of why adults produce utterances which do not seem to reflect any detectable syntactic patterns of the target language. Why do adults choose mere juxtaposition of individual propositions instead of using the appropriate syntactic rules? The answer to this question is, of course, straightforward: adults do not use the appropriate syntactic patterns, because they don't know them. The adult learner is simply unable to express his intentions through an adequate syntactic pattern. It seems to me that the above utterances provide very illustrative examples of the discrepancy between what the learner wants to say and what he is able to say. Nevertheless, through the strategy of juxtaposing words or propositions the adult may still be successful in communicating his message, but this is only because the general contextual background aided by the interlocutor's imaginative power provides additional clues. It thus seems that communication can only be successfully maintained because all available types of information combine to make the intended message transparent. One might, of course, object that the phenomenon of insufficient competence is clearly not restricted to adults. In Felix (1978b) I reported that at least some of the children we observed were fully aware of their limited linguistic competence and showed very strong emotional reactions when they realized that their competence was insufficient to adequately encode a certain message. The crucial observation is, however, that child reactions are fundamentally different from those of adults. In situations in which their linguistic competence did not suffice to express something they wanted to say, the children either reacted by not saying anything at all or they reacted emotionally rather than verbally; i.e. they frequently started screaming or cursing; they physically attacked the interlocutor or they switched to their mother tongue to express their despair, annoyance and frustration. However, one thing the children typically did not do was to simply accept their insufficient competence as it was and try to get the message through by using other communicative and/or verbal strategies.
What, then, is the origin of this behavioral difference between children and adults? Again, the answer appears to be fairly straightforward. Adults cannot react in the same way as children because screaming or being silent is simply not accepted as appropriate behavior on the part of adults. Adults are expected to overcome their linguistic limitations by utilizing problem-solving strategies of a highly sophisticated sort. That is, adults are expected to simply face their insufficient competence and solve the problem of getting their message through anyway. One may plausibly assume that the adult will approach this problem, which 'happens' to involve linguistic material, in much the same way he would approach any other problem he has to solve. Conversely, it appears that children cannot display the same type of behavior as adults because these sophisticated problem-solving strategies, which integrate linguistic and non-linguistic information, are simply not available to them. It thus seems that the specific verbal behavior of adult L2 learners relates in a very natural way to the suggestion of the Competition Model that adults will partially utilize the PS-system to 'solve problems' of language acquisition in a way that, for principled reasons, is not available to children. The claim that adult learners partially use cognitive structures different from those used by children receives further empirical support from some of Muysken's (1982) observations concerning the acquisition of Dutch and German word order by child L1 and adult L2 learners. Muysken reports that child learners do not seem to have much trouble in discovering the basic SOV order of Dutch and German at a relatively early stage of acquisition, whereas adult learners consistently assume an SVO base order for these two languages. Problems arise as soon as the learner has to deal with word order distinctions in main clauses and embedded clauses (Dutch and German have SOV in embedded clauses, but SVO in main clauses). Starting with an underlying SOV order children have to acquire a rule 'Verb-Second' which applies only in main clauses and moves the finite verb into a position (presumably COMP) following either the subject or some preposed elements, such as adverbials, objects, prepositional phrases, and the like. This rule seems to be an instance of the general rule 'move α' proposed as a rule of Universal Grammar (see Chomsky 1981); in particular, the 'Verb-Second' rule is in accordance with Emonds' (1976) Structure-Preserving-Constraint, which states that an element can only move to a position which carries the same categorial specification as the element itself. Crucially, Emonds' constraint holds only for embedded sentences, but not for root sentences. Adults, however, who assume an underlying SVO order will have to learn a rule which moves the finite verb to final position in embedded sentences. As noted by Clahsen (1982a) this rule is not
structure-preserving in Emonds' sense, thus violating a principle in Universal Grammar. Quoting data from Clahsen (1982b), Muysken furthermore observes that a learner who assumes underlying SVO order in Dutch or German will have to learn various rules involving elements which do not form a natural class. That is, these rules are not only more complex in some intuitive sense, but they cannot even be formulated as rules of grammar within the current framework of linguistic theory. In other words, these rules violate various principles and constraints of Universal Grammar. While the rules that the adult has to learn, given the SVO assumption, are not statable within the current framework of generative grammar, they are, of course, statable 'in a theory of pattern alternations, part of our general cognitive capacities' (Muysken, op.cit.:4). The fact that adults acquire rules that do not obey the constraints of Universal Grammar leads Muysken to the conclusion that adults utilize general cognitive strategies in the acquisition of language, whereas children use language-specific cognitive structures which are constrained by certain formal linguistic properties. These differences between child and adult acquisition of German word order provide further evidence in support of the Competition Model. Muysken's data show that adults approach the learning task in a problem-solving manner, while children rely on a highly constrained language-specific cognitive system.
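Since the word-order argument is easy to lose in the prose, the following toy script (my own construction, not Muysken's or Clahsen's formalism, with invented sentence material) contrasts the two analyses. Both derive the same German surface strings; the difference lies in the underlying order each learner assumes and in the rule each must add, and only the child's Verb-Second rule has the structure-preserving character discussed above.

SUBJ, OBJ, VERB, COMP = "das Kind", "den Apfel", "isst", "dass"


def child_analysis(embedded: bool) -> str:
    # SOV base; a Verb-Second rule fronts the finite verb in main clauses only.
    clause = [SUBJ, OBJ, VERB]                  # underlying SOV order
    if embedded:
        return COMP + " " + " ".join(clause)    # embedded clause stays SOV
    clause.remove(VERB)
    clause.insert(1, VERB)                      # finite verb to second position
    return " ".join(clause)


def adult_analysis(embedded: bool) -> str:
    # Assumed SVO base; an extra verb-final rule is needed in embedded clauses.
    clause = [SUBJ, VERB, OBJ]                  # assumed underlying SVO order
    if embedded:
        clause.remove(VERB)
        clause.append(VERB)                     # finite verb moved to final position
        return COMP + " " + " ".join(clause)
    return " ".join(clause)


for emb in (False, True):
    # Identical surface strings under both analyses:
    assert child_analysis(emb) == adult_analysis(emb)
    print(child_analysis(emb))
# das Kind isst den Apfel
# dass das Kind den Apfel isst

The point of the assertion in the loop is precisely that the surface evidence does not distinguish the two analyses; what distinguishes them is the status of the rules each requires within Universal Grammar.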
7. TWO MODELS OF LANGUAGE LEARNING
In a recent paper White (1981) criticizes certain prevalent views in the existing psycholinguistic literature and develops some ideas which appear to correlate to a significant extent with the claims of the Competition Model. In particular, White argues against the idea that the child constructs his grammar in much the same way as the linguist does, i.e. the view of children as being 'little linguists'. White suggests instead that the child's changing grammatical competence reflects the fact that he/she perceives different sets of data at different developmental stages. In contrast, the linguist will optimize his grammatical analyses by successively sharpening and refining his hypotheses about the language he is studying. According to White there appear to be at least two conflicting views concerning the way in which the child arrives at the grammar of the adult language. Under one of these views advocated e.g. by McNeill (1970), Kay & Sankoff (1974), McCawley (1977), Fodor & Smith (1978), language acquisition is seen as a gradual and largely cumulative process; viz. during the earlier stages the child will first acquire a set of basic
grammatical rules such as phrase structure rules, as well as presumably a few general transformations. As the child proceeds in his language development, he will acquire new rules which are then added to the set of already existing rules. Under this view, language acquisition is thus seen as a process in which the child builds up his competence in a kind of piecemeal fashion. The crucial aspect of this view is that it implies that at any stage of the acquisition process the earlier grammar(s) will be a proper subset of the grammar presently held. In other words, there is a twofold input to the process of constructing a new grammar at a given stage of acquisition: a) the grammar of the previous stage, and b) those new data that the old grammar cannot account for and which thus necessitate the construction of a new grammar. Under the alternative view, which is the one that White argues for, language acquisition is not seen as a cumulative, but rather as a disjunctive process. According to White the essential difference between two stages of language acquisition is that the child perceives different sets of data. At any stage the child will construct an optimal grammar for the data that he perceives. The crucial assumption is that the child will totally abandon the grammar of the previous stage and will start from scratch to construct a new grammar to account for the newly perceived set of data. The child does not add new rules to an already existing grammar, he does not extend or enlarge the old grammar on the basis of new data; rather, he will, metaphorically speaking, forget his old grammar and start to construct a new grammar. In other words, the old grammar does not serve as input to the process by which the new grammar is constructed; rather, it is only the child's specific perception of the data that forces him to restructure his grammatical knowledge. These two conflicting views may be represented by the following two models.

MODEL I

Data → LAD → Grammar 1
Grammar 1 + New Data → LAD → Grammar 2

MODEL II

Stage 1: Data → LAD → Grammar 1 → Output 1
Stage 2: New Data → LAD → Grammar 2 → Output 2
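Purely as an illustration of the contrast the two diagrams encode, here is a toy rendering of the two acquisition regimes before turning to White's own argument; the function and variable names are my own, and a 'grammar' is crudely reduced to a set of rule labels.

from typing import List, Set


def infer_rules(data: List[str]) -> Set[str]:
    # Stand-in for the LAD: one 'rule' per construction type found in the data.
    return {"rule<" + d + ">" for d in data}


def model_one(stages: List[List[str]]) -> Set[str]:
    # Model I: the previous grammar plus new data feed each new grammar,
    # so every earlier grammar survives as a proper subset of the later ones.
    grammar: Set[str] = set()
    for new_data in stages:
        grammar = grammar | infer_rules(new_data)   # old grammar carried along
    return grammar


def model_two(stages: List[List[str]]) -> Set[str]:
    # Model II: at each stage the old grammar is discarded and a fresh grammar is
    # constructed for the currently perceived data (which may include the
    # learner's own earlier output, not modelled here).
    grammar: Set[str] = set()
    for new_data in stages:
        grammar = infer_rules(new_data)             # built from scratch each time
    return grammar


stages = [["equational"],
          ["equational", "verb-second"],
          ["verb-second", "embedding"]]
print(model_one(stages))   # cumulative record of every stage
print(model_two(stages))   # optimal only for the data of the final stage

Under model_one every earlier grammar survives as a subset of the final one; under model_two only the grammar that is optimal for the most recently perceived data remains.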
Model I represents the view expressed by McCawley (1977), Derwing (1973) and others. Under this view "the input to any grammar includes the previous grammar as well as new data, and each stage in the acquisition process involves just a minor or 'local' change to each successive
grammar; so the mature grammar is gradually attained in piecemeal fashion, rule-by-rule, with the child successively adding one rule of the mature system" (White 1981:252). Under Model II "the input to grammar II is the output of grammar I, i.e. data in the form of the child's own utterances, as well as new data, namely the language of adults and peers. Grammar I does not itself serve as input to the subsequent grammar" (White 1981:252). White argues that Model II is a more adequate representation of the process of language acquisition, because the view expressed in Model I raises a number of serious conceptual and empirical problems. One of these problems is that researchers advocating Model I seem to believe that during the early stages of acquisition the child's grammar is somehow closer to Universal Grammar. That is, the child's grammar is seen as containing only those rules and principles which are assumed to characterize man's biological endowment for the acquisition of language. The child's later development is then characterized by the acquisition of those rules that are language-specific and thus account for variations within the class of possible grammars. It is easy to see that, under current conceptions of Universal Grammar, this view cannot be correct. The reason is simply that the principles and constraints assumed to be part of UG do not exist as such, but only with respect to certain structural representations. It simply does not make sense to say that a child observes e.g. Subjacency, the Specified Subject Constraint or the Empty Category Principle (13) unless he produces the types of structures on which these principles operate. It is thus clear that the child's grammar can only be evaluated against the data that it is a grammar of. Consequently, as White argues, it does not make sense to compare child grammars at different stages of acquisition or even with the adult grammar. In particular, it doesn't make sense to argue that the child's grammar is more elaborate at a later stage compared to an earlier stage. A child's grammar can only be as elaborate as it is optimal for the set of data which the child perceives and which the grammar has to account for. In other words, the child's grammar is optimal at any stage of the acquisition process, namely optimal for the set of data that the child perceives. It thus appears that the re-structuring of grammars is a crucial phenomenon in the acquisition of language. The term 're-structuring' is meant here to refer to the process by which a grammar that is no longer consistent with the incoming data as they are perceived is given up and replaced by a new grammar that can optimally account for the newly perceived set of data. The view of language acquisition as exemplified in Model I raises further empirical and conceptual problems. First of all, if the child's
grammar at any given stage of development contained all previous grammars as a proper subset, then the child would have to keep a record of his entire developmental history and would also have to have access to any type of previous grammar. Such a view, however, seems to be highly implausible, if only for the reason that this would be an undue burden on the child's memory. But such a view would also have serious consequences for the linguist studying language acquisition. If it were true that the grammar at each stage could be a non-optimal grammar for the data that the child perceives at that time, then it would be impossible for the linguist to determine the child's grammar by merely looking at the data that the child produces at that time. That is, one could only determine the grammar at any particular stage if one knew the complete number and properties of all previous grammars. Nevertheless, most researchers, including those propagating Model I, use evidence from a particular stage in the child's development to make statements about the possible form of the child's grammar. Secondly, if the form of a child's grammar at a given stage directly reflected the child's entire developmental history, then, as White quoting Chomsky (1975) points out, factors such as order of data presentation as well as the types of hypotheses entertained should crucially affect the final state that the child attains. Since, obviously, different children are presented with different data at different times and not all children entertain exactly the same hypotheses, we would expect to find children with different final states depending on their individual developmental history. However, this is not the case. Although children may be exposed to different data in different orders, all children attain final states that are essentially the same. It seems to me that White has convincingly shown that Model I does not provide an adequate account of language acquisition. One might nevertheless suspect that gaining knowledge in other domains could proceed in a way suggested by Model I. It is at least conceivable that there are cognitive domains in which humans gain knowledge by successively accumulating pieces of evidence and experience. A crucial test would be to show that in these domains people proceed by incorporating old 'grammars' into new 'grammars', thus keeping a record of the history in which knowledge is gained. I wish to suggest that Model I represents the way in which people attain knowledge in the problem-solving domain. Consider e.g. the case of a linguist constructing a grammar for a given set of data. What happens if the linguist is confronted with additional data that the grammar he has constructed so far does not account for? Unlike the child he will not simply throw away the old grammar and start again from scratch; rather he will test old hypotheses against new hypotheses and elaborate on what is already known about the respective problem. In
other words, the linguist will consistently compare different grammars, different hypotheses, and their empirical consequences. Obviously, he will also keep a (complete) record of his (and other linguists') past work, evaluating different approaches and different proposals with respect to their empirical adequacy. Unlike the child, the linguist may make mistakes, constructing non-optimal grammars whose inadequacy is only revealed if more sophisticated data are considered. All in all, the linguist will typically proceed by gradually building up knowledge that has accumulated during the years and will always test his own ideas against competing proposals. It seems to me that the way in which the linguist proceeds in constructing grammars of natural languages, while being significantly different from the way in which children proceed, is quite typical of the way in which humans in general go about solving problems. It is intuitively clear that solving a problem of sufficient complexity requires an approach in which pieces of evidence are accumulated in a piecemeal fashion. The most natural way of dealing with a problem is to study individual aspects and to add new pieces of knowledge to what one already knows. That is, one makes systematic use of one's previous experience and knowledge. Under such a perspective the distinction between Model I and Model II correlates quite naturally with the distinction between the LS-system and the PS-system as proposed in this chapter. It furthermore correlates with some of the properties that led Fodor (1983) to propose a distinction between input systems and the central processing unit. In the problem-solving system knowledge is gained cumulatively, in a piecemeal fashion, where earlier knowledge is always part of later knowledge, viz. where pieces of knowledge are added up. Within the language-specific system, however, (linguistic) knowledge develops through the child's specific ability to construct optimal grammars for sets of perceived data. In other words, the language-specific system enables the child to disregard earlier grammars and earlier hypotheses, thus allowing him to construct an optimal grammar in a quick and straightforward way. Under this assumption certain problems concerning the growth of knowledge seem to fall into a new perspective. First of all, it becomes clear that children are not little linguists. They are not little linguists, because they use a cognitive system different from the one that is commonly used by the linguist. That is, even though the task of a linguist and the task of the child are essentially the same, the two proceed by tapping different cognitive abilities. This assumption also seems to explain why the child is consistently successful, while the linguist is not, and at the same time why the child needs only a few years to master something which it takes generations of linguists to come to grips with.
Here again, the answer is that different cognitive systems are involved. The principles of the language-specific system embody very severe constraints on the class of possible grammars, so that relatively little positive evidence is required for the child to choose the correct grammar; in contrast, the PS-system, which does not contain these principles, allows for a considerably larger class of grammars, so that either more evidence is required or limited evidence will potentially lead to the wrong grammar.
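The learnability point in this final paragraph can be made concrete with a small counting exercise. The sketch below is entirely my own back-of-the-envelope illustration: the 'features', the candidate grammars, and the implicational constraint are invented for the purpose and are not claims about actual UG principles; the only point is that a prior constraint on the hypothesis space lets a single piece of positive evidence settle more.

from itertools import combinations

FEATURES = ["verb-second", "pro-drop", "wh-movement", "scrambling"]

# Every subset of FEATURES is treated as a candidate 'grammar'.
all_grammars = [set(c) for r in range(len(FEATURES) + 1)
                for c in combinations(FEATURES, r)]

# A hypothetical implicational constraint, invented purely for illustration:
# any grammar with wh-movement must also have verb-second.
constrained = [g for g in all_grammars
               if "wh-movement" not in g or "verb-second" in g]


def consistent(candidates, observed_feature):
    # Keep only the candidate grammars compatible with one observed construction.
    return [g for g in candidates if observed_feature in g]


# A single positive datum: the learner hears a wh-question.
open_class = consistent(all_grammars, "wh-movement")
ug_class = consistent(constrained, "wh-movement")

print(len(all_grammars), len(constrained))              # 16 vs 12 candidates a priori
print(len(open_class), len(ug_class))                   # 8 vs 4 candidates left
print(all("verb-second" in g for g in open_class))      # False: V2 still undecided
print(all("verb-second" in g for g in ug_class))        # True: V2 settled for free

For the unconstrained learner, hearing a wh-question still leaves grammars without Verb-Second in play; for the constrained learner, the same datum settles Verb-Second as well, which is the sense in which less positive evidence suffices.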
NOTES

1. This is, of course, not to claim that environmental or personality factors are totally irrelevant, but rather that there are certain aspects of the acquisition process which are not influenced by such factors.
2. Some recent studies (Chomsky 1981, Lightfoot 1982, Fodor 1983) indicate that the view of language acquisition as an essentially hypothesis-testing procedure is not without problems. There is good reason to believe that in many cases data have a triggering function rather than being the basis for hypothesis formation.
3. See also Newport & Gleitman & Gleitman (1977).
4. Of course, almost all studies have noticed the phenomenon but, in general, have offered only ad hoc solutions.
5. 'Primary acquisition' should not be equated with 'first language acquisition'; rather, what Lenneberg seems to have in mind is acquisition on the basis of primary data without any specific form of instruction.
6. This statement is a gross oversimplification and should be modified in accordance with the results reported by Dennis & Whitaker (1976) and Dennis & Kohn (1975); i.e. cognitive structures dealing with certain formal properties of language do not seem to be completely transferable.
7. This statement should not be misunderstood as indicating that external factors are totally irrelevant. The claim is rather that there are certain aspects of acquisition for which these factors are irrelevant.
8. The 'Sentential Subject Constraint' and the 'Specified Subject Condition' are currently no longer considered to be independent principles, but rather fall out naturally from the theory of government and binding as developed in Chomsky (1981, 1982).
9. The claims inherent in the notion of competing cognitive systems are essentially neutral with respect to the relationship between competence and processing.
10. And also in perception data under the qualifications mentioned several times earlier.
11. Such an assumption is, of course, not committed to any specific proposals concerning the exact structure of this cognitive system; i.e. even if one rejects Chomsky's specific version of generative grammar, one might want to assume innate mechanisms working on grammatical data in order to explain the logical problem of language acquisition (see Lightfoot (1982) and Chapter II.2 this volume).
12. The PS-system is, of course, available to younger learners and is presumably utilized quite extensively; however, at earlier age periods the PS-system does not yet incorporate formal operations. Consequently, the choice between the LS-system and the fully developed PS-system is irrelevant below Stage IV.
13. For details see Chomsky (1981, 1982).
References
Abraham, W. ed. 1984, Erklärende Syntax des Deutschen. Tübingen. Narr. Adjemian, Ch. & Liceras, J. 1982, Accounting for adult acquisition of relative clauses: universal grammar, LI, and structuring the intake. Ms. University of Ottawa. Adorno, T. ed. 1969, Der Positivismusstreit in der deutschen Soziologie. Darmstadt. Luchterhand. Anderson, S. & Kiparsky, P. eds. 1973, A Festschrift for Morris Halle. New York. Holt, Rinehart & Winston. Bach, E. 1974, Syntactic Theory. New York. Holt, Rinehart & Winston. Bailey, N. & Madden, C. & Krashen, S. 1974, Is there a 'natural sequence' in adult second language learning? Language Learning 24, 235-243. Baker, C. 1979a, Syntactic theory and the projection problem. Linguistic Inquiry 10, 533-581. Baker, C. 1979b, Remarks on complementizers, filters and learnability. Ms. University of Stanford. Bar-Adon, A. & Leopold, W. eds. 1971, Child language: a book of readings. Englewood Cliffs, N.J. Prentice Hall. Bartsch, R. & Lenerz, J. & Ullmer-Ehrich, V. 1977, Einführung in die Syntax. Kronberg. Scriptor. Bates, E. 1979, The emergence of symbols: cognition and communication in infancy. New York. Academic Press. Bates, E. & McWhinney, B. & Smith, S. 1983, Pragmatics and syntax in psycholinguistic research. In: Felix, S. & Wode, H. eds. 1983, 11-30. Beilin, H. & Kagan, J. 1969, Pluralization rules and the conceptualization of number. Developmental Psychology 1, 697-706. Bellugi, U. 1967, The acquisition of the system of negation in children's speech. Ph.D.Diss. Harvard. Bellugi, U. & Brown, R. eds. 1964, The acquisition of language. Monographs of the Society of Research on Child Development 29, Part I. Chicago. University of Chicago Press. Berwick, R. & Weinberg, A. 1983, The role of grammar in models of language use. Cognition 13, 1-61. Berwick, R. & Weinberg, A. 1984, The grammatical basis of linguistic performance. Cambridge, Mass. MIT Press, den Besten, H. 1976, Surface lexicalization and trace theory. In: van Riemsdijk, H. ed. 1976, 12-36. Bever, T. 1970, The cognitive basis for linguistic structures. In: Hayes, J. ed. 1970,279-362. Bever, T. & Langendoen, D. 1971, A dynamic model of the evolution of language. Linguistic Inquiry 2, 433-463. Bierwisch, M. & Heidolph, K. eds. 1970, Progress in linguistics. The Hague. Mouton. Bloom, L. 1970, Language development: form and function in emerging grammars. Cambridge, Mass. MIT Press. Bloom, L. 1973, One word at a time. The Hague. Mouton.
Bower, T. 1974, Development in infancy. San Francisco. Freeman. Bowerman, M. 1973, Early syntactic development: a cross-linguistic study with special reference to Finnish. Cambridge. Cambridge University Press. Bowerman, M. 1974, Learning the structure of causative verbs: a study in the relationship of cognitive, semantic and syntactic development. In: Clark, E. ed. 1974, 142-178. Bowerman, M. 1978, The acquisition of word meaning: an investigation into some current conflicts. In: Waterson, N. & Snow, C. eds. 1978, 263-287. Braine, M. 1963, The ontogeny of English phrase structure: the first phase. Language 39, 1-13. Bresnan, J. 1978, A realistic transformational grammar. In: Halle, M. & Bresnan, J. & Miller, G. eds. 1978, 1-59. Bresnan, J. 1979, Theory of complementation in English syntax. New York. Garland. Bresnan, J. ed. 1982, The mental representation of grammatical relations. Cambridge, Mass. MIT Press. Brown, D. ed. 1976, Papers in second language acquisition. Language Learning, Special Issue 4. Brown, D. & Yacio, K. & Crymes, A. eds. 1977, Teaching and learning English as a second language. TESOL Quarterly, 13-24. Brown, R. 1973, A first language: the early stages. London. Allen & Unwin. Brown, R. & Fräser, C. 1963, The acquisition of syntax. In: Cofer, C. & Musgrave, A. eds. 1963, 158-197. Brown, R. & Hanlon, C. 1970, Derivational complexity and order of acquisition in child speech. In: Hayes, J. ed. 1970, 11-53. Bryden, M. 1982, Laterality. New York. Academic Press. Burgschmidt, E. & Götz, D. 1974, Kontrastive Linguistik deutsch/englisch. Munich. Hueber. Burmeister, H. & Ufert, D. 1980, Strategy switching? In: Felix, S. ed. 1980b, 109-122. Burt, M. & Dulay, H. 1980, On acquisition orders. In: Felix, S. ed. 1980b, 265-327. Burt, M. & Dulay, H. & Finocchiaro, M. eds. 1977, Viewpoints on English as a second language. New York. Regents. Butterworth, G. 1972, A Spanish-speaking adolescent's acquisition of English syntax. M.A. Thesis UCLA. Butzkamm, W. 1973, Aufgeklärte Einsprachigkeit. Heidelberg. Quelle & Meyer. Campbell, R. & Smith, P. eds. 1978, Recent advances in the psychology of language. New York. Plenum Press. Caplan, D. ed. 1980, Biological studies of mental processes. Cambridge, Mass. MIT Press. Carey, S. 1980, Maturational factors in human development. In: Caplan, D. ed. 1980,1-7. Castellan, N. & Pisoni, D. & Potts, G. eds. 1977, Cognitive theory, vol. 2. Hillsdale N.J. Erlbaum. Chomsky, C. 1969, The acquisition of syntax in children from 5 to 10. Cambridge, Mass. MIT Press. Chomsky, N. 1965, Aspects of the theory of syntax. Cambridge, Mass. MIT Press. Chomsky, N. 1968, Language and mind. New York. Harcourt, Brace & Jovanovich. Chomsky, N. 1970, Remarks on nominalization. In: Jacobs, R. & Rosenbaum, P. eds. 1970, 184-222. Chomsky, N. 1973, Conditions on transformations. In: Anderson, S. & Kiparsky, P. eds. 1973, 232-286. Chomsky, N. 1975, Reflections on language. New York. Pantheon Books. Chomsky, N. ed. 1977a, Essays on form and interpretation. New York. Elsevier North Holland, Inc. Chomsky, N. 1977b, On wh-movement. In: Culicover, P. & WasowT. & Akmajian, A. eds. 1977, 71-132.
Chomsky, N. 1980, Rules and representations. Oxford. Blackwell. Chomsky, N. 1981, Lectures on government and binding. Dordrecht. Foris. Chomsky, N. 1982, Some concepts and consequences of the theory of government and binding. Cambridge, Mass. MIT Press. Chomsky, N. & Lasnik, H. 1977, Filters and control. Linguistic Inquiry 8.2, 425-504. Christen, H. 1981, Einführung in die Chemie. Frankfurt. Diesterweg. Clahsen, H. 1980, Psycholinguistic aspects of L2 acquisition. In: Felix, S. ed. 1980b, 57-79. Clahsen, H. 1982a, The acquisition of German word order: a test case for cognitive approaches to L2 development. Paper presented at the Second European-North American Workshop on Second Language Acquisition Research, Göhrde. Clahsen, H. 1982b, Spracherwerb in der Kindheit. Eine Untersuchung zur Entwicklung der Syntax bei Kleinkindern. Tübingen. Narr. Clahsen, H. & Meisel, J. & Pienemann, M. 1983, Deutsch als Zweitsprache. Der Spracherwerb ausländischer Arbeiter. Tübingen. Narr. Clahsen, H. & Muysken, P. 1983, The accessibility of move a and the acquisition of German word order by children and adults. Ms. Düsseldorf, Amsterdam. Clark, E. 1973, What's in a word? In: Moore, T. ed. 1973, 65-110. Clark, E. ed. 1974, Some aspects of the conceptual basis for first language acquisition. Papers and Reports on Child Language Development, 23-51, Stanford. Clark, E. 1978, Discovering what words can do. Papers from the Parasession on the Lexicon. Chicago. University Press. Clark, H. & Clark, E. 1977, Psychology and language. New York. Harcourt, Brace & Jovanovich. Clark, H. & Haviland, S. 1974, Psychological processes as linguistic explanation. In: Cohen, D. ed. 1974, 91-124. Clement, D. ed. 1978, Empirische Rechtfertigung von Syntaxen. Bonn. Bouvier. Clifton, C. & Kurtz, I. & Jenkins, J. 1965, Grammatical relations as determinants of sentence similarity. Journal of Verbal Learning and Verbal Behavior 4, 112-117. Clifton, C. & Odom, P. 1966, Similarity relations among certain English sentence constructions. Psychological Monographs 80. Cofer, C. & Musgrave, A. ed. 1963, Verbal behavior and learning: problems and processes. New York. Holt, Rinehart & Winston. Cohen, D. ed. 1974, Explaining linguistic phenomena. Washington, D.C. Hemisphere. Collins, W. ed. 1979, Children's language and communication. Minnesota Symposia on Child Psychology 12. Hillsdale, N.J. Erlbaum. Cross, T. 1977, Mother's speech adjustments: the contributions of selected child listener variables. In: Snow, C. & Ferguson, C. eds. 1977, 151-188. Culicover, P. & Wasow, T. & Akmajian, A. eds. 1977, Formal syntax. New York. Academic Press. Dahl, O. 1979, Typology of sentence negation. Linguistics 17, 79-106. d'Anglejan, A. & Tucker, G. 1975, The acquisition of complex English structures by adult learners. Language Learning 25, 281-296. Dato, D. ed. 1975, Developmental psycholinguistics: theory and applications. Washington D.C. Georgetown University Press. Davidson, D. & Harman, G. ed. 1972, Semantics of natural language. Dordrecht. Reidel. DeCamp, D. & Hancock, I. eds. 1974, Pidgins and Creoles: current trends and prospects. Washington D.C. Georgetown University Press. Dennis, M. & Kohn, B. 1975, Comprehension of syntax in infantile hemiplegics after cerebral hemidecortication: left-hemisphere superiority. Brain and Language 2, 472-482. Dennis, M. & Whitaker, H. 1976, Language acquisition following hemidecortication:
linguistic superiority of the left over the right hemisphere. Brain and Language 3, 404-433. Derwing, B. 1973, Transformational grammar as a theory of language acquisition: a study of the empirical, conceptual, and methodological foundations of contemporary linguistic theory. Cambridge. Cambridge University Press. Dietrich, R. 1982, Selbstkorrekturen. Fallstudien zum mündlichen Gebrauch des Deutschen als Fremdsprache durch Erwachsene. Zeitschrift für Literaturwissenschaft und Linguistik 12, 120-149. Dittmar, N. 1980, Ordering adult learners according to language abilities. In: Felix, S. ed. 1980b, 205-231. Dittmar, N. 1982, On temporality. Paper presented at the Second European-North American Workshop on Second Language Acquisition Research, Göhrde. Donaldson, M. 1978, Children's minds. Glasgow. Dulay, H. & Burt, M. 1974a, Errors and strategies in child second language acquisition. TESOL Quarterly 8, 129-136. Dulay, H. & Burt, M. 1974b, Natural sequences in child second language acquisition. Language Learning 24, 37-54. Dulay, H. & Burt, M. 1974c, A new perspective on the creative construction process in child second language acquisition. Language Learning 24, 235-278. Dulay, H. & Burt, M. 1977, Remarks on creativity in language acquisition. In: Burt, M. & Dulay, H. & Finocchiaro, M. eds. 1977, 95-126. Emonds, J. 1976, A transformational approach to English syntax: Root, structure-preserving, and local transformations. New York. Academic Press. Emons, R. 1982, Englische Nominale. Konstituenz und syntagmatische Semantik. Tübingen. Niemeyer. Engdahl, E. 1983, Parasitic gaps. Linguistics and Philosophy 6, 5-34. Erreich, A. & Valian, V. & Weizemer, J. 1980, Aspects of a theory of language acquisition. Journal of Child Language 7, 157-179. Fanselow, G. 1984, Deutsche Verbalprojektionen und die Frage der Universalität konfigurationaler Syntaxen. Ph.D.Diss. Passau. Fasold, R. & Shuy, R. eds. 1977, Studies in language variation: semantics, syntax, phonology, pragmatics, social situations, ethnographic approaches. Washington D.C. Georgetown University Press. Fathman, A. 1975, The relationship between age and second language productive ability. Language Learning 25, 245-253. Felix, S. 1976, Wh-pronouns in first and second language acquisition. Linguistische Berichte 44, 52-64. Felix, S. 1977, Natürlicher Zweitsprachenerwerb und Fremdsprachenunterricht. Linguistik und Didaktik 31(8), 231-247. Felix, S. 1978a, Some differences between first and second language acquisition. In: Waterson, N. & Snow, C. eds. 1978, 469-479. Felix, S. 1978b, Linguistische Untersuchungen zum natürlichen Zweitsprachenerwerb. Munich. Fink. Felix, S. 1979, Zur Relation zwischen natürlichem und gesteuertem Zweitsprachenerwerb. In: Kloepfer, R. et al. eds. 1979, 355-370. Felix, S. 1980a, Cognition and language development: a German child's acquisition of question words. In: Nehls, D. ed. 1980, 91-109. Felix, S. ed. 1980b, Second language development: trends and issues. Tübingen. Narr. Felix, S. 1981, Competing cognitive structures in second language acquisition. Paper presented at the European-North American Workshop on Second Language Acquisition Research, Lake Arrowhead. Felix, S. 1982, Psycholinguistische Aspekte des Zweitsprachenerwerbs. Tübingen. Narr.
Competing
Cognitive
Systems
189
Felix, S. 1984, Parasitic gaps in G e r m a n . In: A b r a h a m , W. ed. 1984, 173-200. Felix, S. & H a h n , A. 1985, Fremdsprachenunterricht und Spracherwerbsforschung. Eine A n t w o r t a n K.-R. Bausch u n d F. Königs. Die Neueren Sprachen 84, 5-21. Felix, S. & Simmet, A. 1982, Der Erwerb der Personalpronomina im F r e m d s p r a c h e n u n terricht. Neusprachliche Mitteilungen 3/81, 132-144. Felix, S. & Wode, H. eds. 1983, Language development at the crossroads. Tübingen. Narr. Ferguson, Ch. & Slobin, D. eds. 1973, Studies of child language development. New York. Holt, Rinehart & Winston. Fillmore, L. 1976, The second time a r o u n d : cognitive and social strategies in second language acquisition. Ph.D.Diss. Stanford. F i s h m a n , J. ed. 1971, Advances in the sociology of language. The Hague. M o u t o n . Flavell, J. 1963, The developmental psychology of Jean Piaget. Princeton, N.J. van Nostrand. Fletcher, P. 1981, Descriptions and explanation in the acquisition of verb-forms. J o u r n a l of Child Language 8, 93-108. F o d o r , J . A . 1975, The language of t h o u g h t . New York. Harvester Press. F o d o r , J. A. 1980, Fixation of belief and concept acquisition. In: Piattelli-Palmarini, M. ed. 1980, 143-149. F o d o r , J . A . 1981, Representations. Philosophical essays on the foundations of cognitive science. Cambridge, Mass. M I T Press. F o d o r , J . A . 1982, Why paramecia d o n ' t have mental representations. Ms. M I T . F o d o r , J . A . 1983, The modularity of mind. Cambridge, Mass. M I T Press. F o d o r , J. A. & Bever, T. & Garrett, M. 1974, The psychology of language: an introduction to psycholinguistics a n d generative g r a m m a r . New York. McGraw-Hill. F o d o r , J . A. & Smith, M. 1978, What kind of exception is 'have got'? Linguistic Inquiry 9, 45-66. F o d o r , J . D . 1978, Parsing strategies and constraints on transformations. Linguistic Inquiry 9, 427-473. Gass, S. 1980, L2 data: its relevance for language universals. Paper presented at the T E S O L Convention, San Francisco. G a z d a r , G. 1981, U n b o u n d e d dependencies and coordinate structure. Linguistic Inquiry 12, 155-184. Gillis, M. & Weber, R. 1976, The emergence of sentence modalities in the English of Japanese-speaking children. Language Learning 26, 77-94. Gleitman, L. 1981, Maturational determinants o f l a n g u a g e growth. Cognition 10,103-114. G o o d l u c k , H. & Solan, L. eds. 1978, Papers on the structure and development of child language. Occasional Papers in Linguistics 4. University of Massachusetts, Amherst. Greenberg, J. ed. 1963, Universals of language. Cambridge, Mass. M I T Press. H a h n , A. 1982, Fremdsprachenunterricht und Spracherwerb. Linguistische Untersuchungen zum gesteuerten Zweitsprachenerwerb. Ph.D.Diss. Passau. Halle, M. & Bresnan, J. & Miller, G. eds. 1978, Linguistic theory and psychological reality. Cambridge, Mass. M I T Press. Hansen, L. 1980, Learning and forgetting a second language: the acquisition, loss, a n d re-acquisition of Hindi-Urdu negative structures by English speaking children. Ph.D.Diss. University of California, Berkeley. Hatch, E. 1978, Second language acquisition. A b o o k of readings. Rowley, Mass. Newburg House. Hatch, E. 1982, Cognition and language. A short reaction to the S. Felix paper. Paper presented at the Second E u r o p e a n - N o r t h American W o r k s h o p on Second Language Acquisition Research, G ö h r d e . Hayes, J. ed. 1970, Cognition and the development of language. New York. Wiley. 
Heidelberger Projekt 'Pidgin-Deutsch' 1976, Untersuchungen zur Erlernung des
190
Cognition and Language
Growth
Deutschen durch ausländische Arbeiter. Arbeitsbericht III. Germanistisches Seminar der Universität Heidelberg. Hornstein, N. 1977, S and X-bar convention. Linguistic Analysis 3, 137-176. Hornstein, N. & Lightfoot, D. eds. 1981, Explanation in linguistics: the logical problem of language acquisition. New York, L o n d o n . L o n g m a n . H u a n g , J. 1971, A Chinese child's acquisition of English syntax. M . A . Thesis U C L A . Hyams, N. 1983, The acquisition of parametrized g r a m m a r . P h . D . Diss. City University of New York. Inhelder, B. 1979, Language and knowledge in a constructivist f r a m e w o r k . In: PiattelliPalmarini, M. ed. 1980, 132-137. Ioup, G. & Kruse, A. 1977, Interference vs. structural complexity in second language acquisition: language universals as a basis f o r natural sequencing. In: Brown, D. et al. 1977, 159-171. J a c k e n d o f f , R. 1972, Semantic interpretation in generative g r a m m a r . Cambridge, Mass. M I T Press. J a c k e n d o f f , R. 1977, X-bar syntax: a study of phrase structure. Cambridge, Mass. M I T Press. J a c o b , F. 1977, Evolution of tinkering. Science 10b/77, Vol. 196, No. 4295, 1161-1166. Jacobs, R. & Rosenbaum, P. eds. 1970, Readings in English transformational g r a m m a r . Washington D.C. Georgetown University Press. J o h n s o n - L a i r d , P. 1983, Mental models: towards a cognitive science of language, interference and consciousness. Cambridge, Mass. M I T Press. J o h n s t o n , S. & Slobin, D. 1979, see H a t c h , E. 1982. Karmiloff-Smith, A. 1978, The interplay between syntax, semantics, and phonology in language acquisition processes. In: Campbell, R. & Smith, P. eds. 1978, 1-23. Karmiloff-Smith, A. 1979, A functional a p p r o a c h to child language: a study of determiners and reference. Cambridge. Cambridge University Press. Katz, J. & Postal, P. 1964, An integrated theory of linguistic descriptions. Cambridge, Mass. M I T Press. Kay, P. & Sankoff, G. 1974, A language-universal a p p r o a c h to pidgins a n d Creoles. In: D e C a m p , D. & Hancock, I. eds. 1974, 61-72. Kayne, R. 1975, French syntax: the transformational cycle. Cambridge, Mass. M I T Press. Kayne, R. 1983, Connectedness. Linguistic Inquiry 14, 223-249. Keil, F. 1980, Development of the ability t o perceive ambiguities: evidence for the task specificity of a linguistic skill. J o u r n a l of Psycholinguistic Research 9, 219-230. Kimura, D. 1963, Speech lateralization in young children as determined by an auditory test. J o u r n a l of Comparative and Physiological Psychology 56, 899-902. Kintsch, W. 1974, The representation of meaning in memory. Hillsdale, N . J . E r l b a u m . Klima, E. & Bellugi, U. 1966, Syntactic regularities in the speech of children. In: Lyons, J . & Wales, R. eds. 1966, 183-208. Kloepfer, R. & Rothe, A. & K r a u ß , H. Ä K o t s c h i , T. eds. 1979, B i l d u n g u n d A u s b i l d u n g i n der Romania. Munich. F i n k . Kohrt, M. 1976, Koordinationsreduktion und Verbstellung in einer generativen G r a m m a t i k des Deutschen. Tübingen. Niemeyer. Koster, J. 1975, D u t c h as a SOV language. Linguistic Analysis 1, 111-132. Koster, J. 1978, Locality principles in syntax. D o r d r e c h t . Foris. Köster, J. 1982, D o syntactic representations contain variables? Ms. University of Tilbury. Krashen, S. 1973, Lateralization, language learning, and the critical period: some new evidence. Language Learning 23, 63-74. Krashen, S. 
1975, The development of cerebral dominance and language learning: more new evidence. In: D a t o , D. ed. 1975, 179-192.
Competing
Cognitive
Systems
191
Krashen, S. 1977, The monitor model for adult second language performance. In: Burt, M. & Dulay, H. & Finocchiaro, M. eds. 1977, 152-161. Krashen, S. 1978, Individual variation in the use of the monitor. In: Ritchie, W. ed. 1978, 175-183. Krashen, S. 1981, Second language acquisition and second language learning. Oxford. Pergamon Press. Kuczay, S. & Daly, M. 1979, The development of hypothetical reference in the speech of young children. Journal of Child Language 6, 563-579. Kuhn, T. 1962, The structure of scientific revolutions. University of Chicago Press. Kuhn, T. 1970, Logic of discovery or psychology of research? In: Lakatos, I. & Musgrave, A. eds. 1970, 1-24. Labov, W. 1971, The study of language in its social context. In: Fishman, J. ed. 1971, 152-216. Lakatos, I. 1970, Falsification and the methodology of scientific research programs. In: Lakatos, I. & Musgrave, A. eds. 1970, 91-196. Lakatos, I. & Musgrave, A. eds. 1970, Criticism and the growth of knowledge. Cambridge. Cambridge University Press. Lambert, W. & Gardener, R. & Olton, R. & Turnstall, K. 1976, Eine Untersuchung der Rolle von Einstellungen und Motivation beim Fremdsprachenlernen. In: Solmecke, G. ed. 1976, 85-103. Lasnik, H. 1976, Remarks on coreference. Linguistic Analysis 2, 1-23. Lees, R. 1961, The constituent structure of noun phrases. American Speech 16, 159-168. Lenerz, J. 1977, Zur Abfolge nominaler Satzglieder im Deutschen. Tübingen. Narr. Lenneberg, E. 1967, Biological foundations of language. New York. Wiley. Lenneberg, E. 1969, On explaining language. Science 164, 634-643. Lewis, M. & Cherry, L. 1977, Social behavior and language acquisition. In: Lewis, M. & Rosenblum, L. eds. 1977, 227-247. Lewis, M. & Rosenblum, L. eds. 1977, Interaction, conversation, and the development of language. New York. Wiley. Liberman, A. & Cooper, F. & Shankweiler, D. & Studdert-Kennedy, M. 1967, Perception of the speech code. Psychological Review 74, 431-461. Liberman, A. & Harris, K. & Eimas, P. & Lisker, L. & Bastian, J. 1961, An effect of learning in speech perception: the discrimination of durations of silence with and without phonemic significance. Language and Speech 4, 175-195. Liberman, A. & Harris, K. & Hoffman, H. & Griffith, B. 1957, The discrimination of speech sounds within and across phoneme boundaries. Journal of Experimental Psychology 3, 358-368. Lightbown, P. 1980, The acquisition and use of questions by French L2-learners. In: Felix, S. ed. 1980b, 151-176. Lightfoot, D. 1982, The language lottery: toward a biology of grammars. Cambridge, Mass. MIT Press. Lyons, J. 1977, Semantics. Cambridge. Cambridge University Press. Lyons, J. & Wales, R. eds. 1966, Psycholinguistic papers. Edinburgh. University Press. Maling, J. 1972, 'On gapping and the order of constituents'. Linguistic Inquiry 3,101-108. Mallinson, G. & Blake, B. 1981, Language typology. Amsterdam. North-Holland. Marantz, A. 1982, On the acquisition of grammatical relations. Linguistische Berichte 80, 32-69. Matthei, E. 1978, Children's interpretation of sentences containing reciprocals. In: Goodluck, H. & Solan, L. eds. 1978, 153- 169. Mayerthaler, W. 1980, Ikonismus in der Morphologie. Semiotik 2, 19-37. Mazurkewich, I. 1981, Second language acquisition of the dative alternation and markedness: the best theory. Ph.D.Diss. Université de Montréal.
192
Cognition
and Language
Growth
McCawley, J. 1977, Acquisition models as models of acquisition. In: Fasold, R. & Shuy, R. eds. 1977, 51-64. McNeill, D. 1970, The acquisition of language: the study of developmental psycholinguistics. New York. H a r p e r & Row. McWhinney, B. 1975, Pragmatic patterns in child syntax. Papers and Reports on Child Language Development, S t a n f o r d University 10, 153-165. Mehler, J . & Bever, T. 1967, The cognitive capacity of young children. Science 6, 8. Meisel, J. 1980a, Strategies of second language acquisition: more t h a n one kind of simplification. Ms. Wuppertal H a m b u r g . Meisel, J. 1980b, Linguistic simplification: a study of immigrant worker's speech and foreigner talk. In: Felix, S. ed. 1980b, 13-40. Meisel, J. 1982, The role of transfer as a strategy of natural second language acquisition. Paper presented at the Second E u r o p e a n - N o r t h American W o r k s h o p on Second Language Acquisition Research, G ö h r d e . Meisel, J . & Clahsen, H. & Pienemann, M. 1981, O n determining developmental stages in natural second language acquisition. Studies in Second Language Acquisition 3, 109-135. Miller, G . & McKean, K. 1964, A chronometric study of some relations between sentences. Quarterly J o u r n a l of Experimental Psychology 16, 297-308. Miller, W. & Ervin, S. 1964, The development of g r a m m a r in child language. In: Bellugi, U. & Brown, R. eds. 1964, 9-35. Milon, J . 1974, The development of negation in English by a second language learner. T E S O L Quarterly 8, 137-143. Möhle, D. 1975, Zur Beschreibung von Stufen der Kommunikationsfähigkeit im neusprachlichen Unterricht. Praxis 22, 4-13. Moore, T . ed. 1973, Cognitive development a n d the acquisition of language. New York. Academic Press. Muysken, P. 1982, The acquisition of G e r m a n word order by children and adults. Paper presented at the Second E u r o p e a n - N o r t h - A m e r i c a n W o r k s h o p on Second Language Acquisition Research, G ö h r d e . Nehls, D. ed. 1980, Studies in language acquisition. Heidelberg. G r o o s . Nelson, K. 1977, Facilitating children's syntax acquisition. Developmental Psychology 13, 101-107. Nelson, K. 1978a, Semantic development a n d the development of semantic memory. In: Nelson, K. ed. 1978b, 39-80. Nelson, K. ed. 1978b, Children's language. Vol. I. New York. Erlbaum. Newport, E. 1977, Motherese: the speech of mothers to young children. In: Castellan, N. & Pisoni, D . & Potts, G. eds. 1977. Newport, E. & Gleitman, L. & Gleitman, H. 1977, Mother, I'd rather d o it myself: some effects a n d non-effects of maternal speech style. In: Snow, C. & Ferguson, C. eds. 1977, 109-150. Nicholas, H . & Meisel, J. 1983, Second language acquisition: the state of the art (with particular reference to the situation in West-Germany). In: Felix, S. & W o d e , H. eds. 1983, 63-89. Oerter, R. 1977, Moderne Entwicklungspsychologie. D o n a u w ö r t h . Auer. O m a r , M. 1973, The acquisition of Egyptian Arabic as a native language. The Hague. Mouton. Park, T. 1974, The acquisition of G e r m a n syntax. Ms. Bern. Perkins, K. & Larsen-Freeman, D. 1975, The effect of formal language instruction o n the order of m o r p h e m e acquisition. Language Learning 25, 237-243. Perlmutter, D. 1971, Deep a n d surface structure constraints in syntax. New York.
Competing
Cognitive
Systems
193
Peters, S. & Ritchie, R. 1973, On the generative power of transformational grammars. Information Science 6, 49-83. Phillips, J. 1969, The origins of intellect: Piaget's theory. San Francisco. Freeman. Piaget, J. 1924, Le jugement et le raisonnement chez l'enfant. Paris. Delachoùx & Niestlé. Piaget, J. 1926, La représentation du monde chez l'enfant. Paris. Alcan. Piaget, J. 1952, The origins of intelligence in children. New York. International Universities Press. Piaget, J. 1962, The stages of the intellectual development of the child. Bulletin of the Menninger Clinic 26, 120-145. Piattelli-Palmarini, M. ed. 1980, Language and learning. The debate between Jean Piaget and Noam Chomsky. London. Routledge & Keagan Paul. Pienemann, M. 1980, The second language acquisition of immigrant children. In: Felix, S. ed. 1980b, 41-56. Pienemann, M. 1981, Der Zweitspracherwerb ausländischer Arbeiterkinder. Bonn. Bouvier. Pinker, S. 1982, A theory of the acquisition of lexical-interpretive grammars. In: Bresnan, J. ed. 1982, 655-726. Popper, K. 1969, Die Logik der Sozialwissenschaften. In: Adorno, T. ed. 1969, 103-123. Popper, K. 1970, Normal science and its dangers. In: Lakatos, I. & Musgrave, A. eds. 1970, 51-58. Pratt, C. & Tunmer, W. & Bowey, J. 1984, Children's capacity to correct grammatical violations in sentences. Journal of Child Language 11, 129-141. Putnam, H. 1979, What is innate and why: comments on the debate. In: Piattelli-Palmarini, M. ed. 1980, 287-309. Quine, W. 1960, Word and object. Cambridge, Mass. MIT Press. Quine, W. 1972, Methodological reflections on current linguistic theory. In: Davidson, D. & Harman, G. eds. 1972, 442-454. Radford, A. 1981, Transformational syntax. Cambridge. Cambridge University Press. Ravem, R. 1968, Language acquisition in a second language environment. IRAL 6, 175-185. Ravem, R. 1969, First and second language acquisition. BAAL Seminar on Error Analysis, Edinburgh. Ravem, R. 1974, Second language acquisition. Ph.D.Diss. Essex. Reber, A. 1976, Implicit learning of artificial grammars. Journal of Verbal Learning y i d Verbal Behavior 6, 855-863. Reber, A. & Allen, R. 1978, Analogic and abstraction strategies in syntactic grammar learning: a functionalist interpretation. Cognition 6, 189-221. Reber, A. & Lewis, S. 1977, Implicit learning: an analysis of the form and structure of a body of tacit knowledge. Cognition 5, 333-361. Reinhart, T. 1976, The syntactic domain of anaphora. Ph.D.Diss. MIT. Riemsdijk, H. van, ed. 1976, Green ideas blow up. University of Amsterdam, Publicaties van het Instituut voor Algemene Taalwetenschap, No. 13. Ritchie, W. ed. 1978, Second language acquisition research: issues and implications. New York. Academic Press. Rizzi, L. ed. 1982, Issues in Italian syntax. Dordrecht. Foris. Robins, R. 1964, General linguistics. An introductory survey. London. Longman. Roeper, T. 1978, Linguistic universals and the acquisition of gerunds. In: Goodluck, H. & Solan, L. eds. 1978, 1-36. Ross, J. 1967, Constraints on variables in syntax. Ph.D.Diss. MIT. Ross, J. 1970, Gapping and the order of constituents. In: Bierwisch, M. & Heidolph, K. eds. 1970, 249-259.
194
Cognition and Language
Growth
Roth, F. 1984, Accelerating language learning in young children. Journal of Child Language 11, 89-107. Safir, K. 1982, Inflection, government, and inversion. Linguistic Review 1, 417-466. Sampson, G. 1979, A non-nativist account of language universals. Linguistics and Philosophy 3, 99-104. Sanchez, M. 1968, Features in the acquisition of Japanese grammar. Ph.D.Diss. Stanford. Schatz, D. 1954, The role of context in the perception of stops. Language 30, 47-56. Schiefelbusch, R. & Lloyd, L. eds. 1974, Language perspectives: acquisition, retardation and intervention. Baltimore. University Park Press. Schlesinger, I. 1974, Relational concepts underlying language. In: Schiefelbusch, R. & Lloyd, L. eds. 1974, 129-151. Schlesinger, I. 1977, The role of cognitive development and linguistic input in language acquisition. Journal of Child Language 4, 153-169. Schumann, J. 1975a, Second language acquisition: the pidginization hypothesis. Ph.D.Diss. Harvard. Schumann, J. 1975b, Affective factors and the problem of age in second language acquisition. Language Learning 25, 209-235. Schumann, J. 1978, The pidginization process: a model for second language acquisition. Rowley, Mass. Schumann, J. 1981, Non-syntactic speech in the Spanish-English Basilang. Paper presented at the European-North American Workshop on Second Language Acquisition Research, Lake Arrowhead. Schumann, J. 1982, Art and science in second language acquisition. Paper presented at the TESOL Convention, Honolulu. Scovel, T. 1978, The effect of affect on foreign language learning: a review of anxiety research. Language Learning 21, 129-142. Seliger, H. 1980, First and second language acquisition: the question of developmental strategies. Paper presented at the Second Language Research Forum, Los Angeles. Selinker, L. 1972, Interlanguage. IRAL 10, 209-231. Sinclair, J.M. & Coulthard, R. 1975, Towards an analysis of discourse: the English used by teachers and pupils. London. Oxford University Press. Sinclair-de Zwart, H. 1973, Language acquisition and cognitive development. In: Moore, T. ed. 1973, 9-26. Singh, R. 1981, Where variable rules fail: variably deleted final consonants revisited. Recherches Linguistiques Montréal 17. Slobin, D. 1973, Cognitive prerequisites for the development of grammar. In: Ferguson, C. & Slobin, D. eds. 1973, 175-186. Smith, N. 1973, The acquisition of phonology. A case study. Cambridge. Cambridge University Press. Snow, C. 1979, The role of social interaction in language acquisition. In: Collins, W. ed. 1979, 157-182. Snow, C. & Ferguson, C. eds. 1977, Talking children: language input and acquisition. Cambridge. Cambridge University Press. Solmecke, G. ed. 1976, Motivation im Fremdsprachenunterricht. Paderborn. Schöningh. Stegmüller, W. 1974, Probleme und Resultate der Wissenschaftstheorie und analytischen Philosophie. Bd.l Wissenschaftliche Erklärung und Begründung. Berlin. Springer. Sternefeld, W. 1982, Konfigurationelle und nicht-konfigurationelle Aspekte einer modularen Syntax des Deutschen. Arbeitspapiere des SFB 99, Konstanz. Stowell, T. 1981, Origins of phrase structure. Ph.D.Diss. MIT. Thiersch, C. 1978, Topics in German syntax. Ph.D.Diss. MIT. Tucker, R. & Hamayan, E. & Genesee, F. 1976, Affective, cognitive, and social factors in second language acquisition. Canadian Modern Language Review 32, 214-226.
Competing
Cognitive
Systems
195
Vergnaud, J.-R. 1979, Quelques éléments pour une théorie formelle des cas. Thèse d'état. Paris. de Villiers, J. & de Villiers, P. 1973, A cross-sectional study of the acquisition of grammatical morphemes in child speech. Journal of Psycholinguistic Research 2, 267-278. Wanner, E. & Gleitman, L. eds. 1983, Language acquisition: the state of the art. Cambridge. Cambrigde University Press. Waterson, H. & Snow, C. eds. 1978, The development of communication. New York. Wiley. Wexler, K. & Culicover, P. 1980, Formal principles of language acquisition. Cambridge, Mass. MIT Press. White, L. 1981, The responsibility of grammatical theory to acquisitional data. In: Hornstein, N. & Lightfoot, D. eds. 1981, 241-283. White, L. 1982, Grammatical theory and language acquisition. Dordrecht. Foris. Wode, H. 1977, Four early stages in the development of Ll-negation. Journal of Child Language 4, 87-102. Wode, H. 1981, Learning a second language. Vol. 1. An integrated view of language acquisition. Tübingen. Narr. Wong-Fillmore, L. 1976, The second time around: cognitive and social strategies in second language acquisition. Ph.D.Diss. Stanford.
Index
acquisition
  adult language, 118, 137, 142, 144, 160, 164, 178
  first language, 9, 13, 51, 75, 138, 139, 140, 142, 143, 163, 164, 172, 183
  order of (language), 11, 98, 135, 142, 160, 169
  primary, 141, 183
  second language, 9, 10, 11, 12, 13, 51, 75, 98, 138, 139, 140, 142, 143, 144, 153, 157, 163
acquisitional deficits, 141
adequacy
  explanatory, 40, 46, 54, 78, 79, 163
ambiguity, 17, 53, 153
aphasia, 141
artificial intelligence, 79
base component, 49, 125
base structure, 62
behavior
  rule governed, 78
  social, 68, 69
  speech, 71, 72
  verbal, 30, 38, 73, 171, 177
binding theory, 78, 150
case filter, 88, 105, 134
case theory, 44, 88, 89, 124, 127
central processing unit, 171, 182
cognitive
  complexity, 54, 55, 56, 58, 59, 60, 63
  primacy, 26
cognitivist hypothesis, 11
communication, 27, 64, 69, 77, 100, 101, 135, 137, 143, 164, 168, 170, 171, 176
competence, 25, 27, 29, 39, 40, 41, 43, 44, 45, 46, 47, 55, 68, 69, 70, 73, 77, 78, 79, 100, 102, 110, 118, 138, 140, 141, 144, 156, 162, 163, 170, 171, 176, 177, 178, 179, 183
computation
  serial, 25
  parallel, 26
  mathematical, 43
computational
  device, 41
  capacity, 26, 42
  complexity, 26
  load, 46
  process, 43, 44
constraint
  propositional island, 150
  structure preserving, 126, 127, 177
coreferential(ity), 22, 23, 37, 38, 78
creative construction process, 139, 169
critical period, 141
deep structure, 22
description vs. explanation, 30, 33, 50, 51
developmental sequence, 11, 98, 99, 113, 114, 115, 135, 139, 160, 169
domain
  cognitive, 13, 30, 43, 55, 181
empiricism
errors vs. mistakes, 122, 126
evidence
  conceptual, 26
  empirical, 17, 18, 21, 22, 23, 26, 50, 53, 56, 58, 59, 143, 171
  negative, 79, 84, 86, 104, 134
  non-demonstrative, 21
  psycholinguistic, 26, 112
explanatory theory, 31, 66, 67, 73, 74, 81, 97, 113
extraposition, 61, 62
formal operations, 145, 150, 151, 152, 154, 155, 156, 161, 162, 165, 183
generalization, 14, 22, 72, 84, 85, 86, 87, 88, 89, 95, 96, 111
genetic program, 52, 108, 109, 114, 120, 131, 132
government and binding, 183
grammar
  core, 134
  generative, 22, 26, 40, 41, 45, 47, 78, 79, 82, 96, 105, 106, 113, 125, 149, 178, 183
  optimal, 110, 111, 179, 182
  possible, 45, 46, 48, 119, 127, 180, 183
  transformational, 45, 46
  universal, 52, 82, 105, 106, 107, 108, 111, 112, 113, 114, 115, 116, 119, 120, 123, 124, 125, 126, 127, 131, 132, 133, 135, 154, 158, 177, 178, 180
grammaticality judgements, 18, 28, 53, 70, 71, 77
hemisphere, 141
hypothesis-testing, 101, 104, 157, 170, 183
idealization, 12, 13, 65, 67, 68, 69, 73, 74, 80, 97, 114, 135
inductive generalizations, 85, 86, 95, 102
innate, 11, 15, 96, 97, 108, 109, 114, 115, 170, 183
input system, 171, 172, 176, 182
interaction, 15, 16, 17, 25, 33, 40, 45, 64, 65, 67, 74, 82, 108, 109, 112, 127, 172
interference, 170
knowing how vs. knowing that, 31, 32
knowledge
  acquisition of, 13, 24, 27, 28, 29, 34, 36, 38, 46, 83, 85, 87, 93, 96, 152
  conceptual, 154
  innate, 96, 97, 106, 114
  intuitive, 19, 20, 36, 59, 78
  tacit, 21, 24, 25, 26, 27, 28, 29, 31, 32, 33, 34, 35, 36, 40, 43, 44, 45, 78, 148, 154, 159, 160
language
  acquisition device, 15, 111
  types, 11, 12, 105, 137
  the developmental problem of, 82, 97, 99, 100, 112
  the logical problem of, 46, 81, 82, 83, 84, 87, 94, 95, 97, 106, 107, 112, 132, 134, 137, 183
  universal, 51, 52, 79
  use, 24, 26, 27, 28, 29, 40, 43, 44, 55, 82, 84, 134, 143, 155, 156, 159, 160, 161
language faculty, 12, 52, 102, 114, 158
lateralization, 141, 142
learning theory, 34, 140, 145
maturation(al), 107, 110, 114, 115, 119, 120, 122, 123, 124, 126, 127, 131, 132, 133, 135
maximal projection, 118, 122, 123, 128
mechanism
  cognitive, 38, 104, 105
  filter, 163, 165, 166, 167, 168
  perceptual, 51, 52, 111
memory limitations, 27, 28, 44, 47, 63, 67, 77
mental
  operations, 145, 148, 149, 150, 151, 152
  representation(s), 18, 20, 22, 23, 24, 25, 26, 31, 33, 40, 43, 45, 48, 55, 63, 78, 99, 114, 134, 154, 167
  structures (mind), 10, 11, 34, 83, 97, 104, 153, 158, 163, 174
mental state
  final, 108
  initial, 38, 108, 109
model
  competition, 171, 176, 177, 178
  maturational, 132, 133
  monitor, 171
  parameter, 106, 107, 108, 109, 110, 132, 133
negation, 41, 43, 71, 98, 99, 101, 102, 103, 104, 105, 110, 127, 128, 139, 142
non-verbal communication, 64
one-substitution, 90, 91, 92, 93
organism, 30, 33, 34, 78, 83, 108, 114, 137, 152
parameter, 106, 107, 109, 111, 132, 133
parasitic gaps, 77, 94
perception, 29, 48, 52, 64, 110, 112, 115, 132, 145, 146, 147, 148, 153, 167, 179
performance, 20, 29, 40, 43, 44, 45, 47, 55, 67, 68, 70, 72, 73, 74, 78, 129, 170, 171
phrase structure, 36, 41, 45, 88, 123, 179
poverty-of-stimulus, 78, 133
principle(s)
  A-over-A, 58
  abstract, 34, 36, 38, 39, 69, 73, 74, 106, 149, 150, 158
  adjacency, 135
  case resistance, 134
  cognitive, 10, 15, 27, 140
  empty category, 180
  innate, 108, 109
  non-coreference, 38, 78
  operating, 45, 75, 104, 105
  projection, 105
  structure preserving, 122, 126, 127
  universal, 11, 12, 13, 14, 15, 60, 63, 74, 96, 114, 115, 132, 133, 135, 138, 139
  of Universal Grammar, 107, 111, 112, 113, 114, 115, 116, 119, 120, 123, 124, 125, 126, 127, 131, 132, 135
problem solving, 150, 151, 152, 153, 154, 165, 166, 167, 170, 171, 176, 177, 178, 181, 182
process(ing)
  developmental, 75, 98, 99, 106, 114, 132, 137, 163
  grammar/grammatical, 46, 76, 87, 89, 134
  information, 145, 151
  mental, 13, 14, 18, 25, 26, 44, 48, 85, 148, 155, 157
  real-time, 44
  sentence, 25, 41, 42, 43, 45, 47, 63
  speech, 47
psychological reality, 17, 18, 19, 20, 21, 40, 43, 76
psychology, 11, 26, 29, 30, 33, 34, 134
recoverability (of deletion), 131
representation, 87, 96, 110, 180
rule(s)
  grammatical, 49, 56, 59, 60, 117, 160, 178
  interpretative, 41
  lexical, 23
  phrase structure, 45, 179
  recursive, 39
  structure (in-)dependent, 57, 58, 59, 60, 80, 149, 150
  transformational, 63, 120
  variable, 71, 72, 73, 74
semantic
  component, 22
  interpretation, 22
  relations, 116, 117, 118, 120
specified subject condition, 150, 183
sentential subject condition, 88, 180, 183
sociolinguistic(s), 65, 69, 73
speech
  community, 19, 67, 69, 72, 100
  perception/recognition, 25, 26, 29, 78, 146, 148, 152
  production, 25, 26, 29, 30, 39, 41, 42, 43, 44, 45, 52, 54, 77, 78
stages
  acquisitional, 11, 102, 104, 139, 173, 177, 179, 180
  of (language) development, 133, 135, 138
structure (in-)dependence, 56, 57, 60, 79, 80, 149, 150
subjacency (constraint), 61, 62, 63, 64, 80, 180
surface filter(s), 49, 50, 51, 52, 53, 54
surface structure(s), 22, 35
syntactic variables, 58, 80
transformation(s), 22, 23, 38, 41, 42, 43, 46, 61, 78, 126, 150, 179
transformational component, 46, 49
ultimate attainment, 137, 140, 141
universalist
  approach, 75
  hypothesis, 11
verb-final, 122, 126
verb-second, 120, 122, 126, 177
word order, 34, 38, 120, 121, 122, 123, 124, 125, 126, 127, 136, 177, 178
X-bar schema, 119, 120, 121, 122, 123, 124, 127, 128