Mathematical Models for Speech Technology

Stephen E. Levinson
University of Illinois at Urbana-Champaign, USA
Copyright © 2005 John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex PO19 8SQ, England

Telephone (+44) 1243 779777
Email (for orders and customer service enquiries): [email protected]
Visit our Home Page on www.wiley.com

All Rights Reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except under the terms of the Copyright, Designs and Patents Act 1988 or under the terms of a licence issued by the Copyright Licensing Agency Ltd, 90 Tottenham Court Road, London W1T 4LP, UK, without the permission in writing of the Publisher. Requests to the Publisher should be addressed to the Permissions Department, John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex PO19 8SQ, England, or emailed to [email protected], or faxed to (+44) 1243 770620.

This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold on the understanding that the Publisher is not engaged in rendering professional services. If professional advice or other expert assistance is required, the services of a competent professional should be sought.

Other Wiley Editorial Offices

John Wiley & Sons Inc., 111 River Street, Hoboken, NJ 07030, USA
Jossey-Bass, 989 Market Street, San Francisco, CA 94103-1741, USA
Wiley-VCH Verlag GmbH, Boschstr. 12, D-69469 Weinheim, Germany
John Wiley & Sons Australia Ltd, 33 Park Road, Milton, Queensland 4064, Australia
John Wiley & Sons (Asia) Pte Ltd, 2 Clementi Loop #02-01, Jin Xing Distripark, Singapore 129809
John Wiley & Sons Canada Ltd, 22 Worcester Road, Etobicoke, Ontario, Canada M9W 1L1

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.

Library of Congress Cataloging-in-Publication Data

Levinson, Stephen E.
    Mathematical models for speech technology / Stephen Levinson.
        p. cm.
    Includes bibliographical references and index.
    ISBN 0-470-84407-8 (cloth)
    1. Speech processing systems. 2. Computational linguistics. 3. Applied linguistics – Mathematics. 4. Stochastic processes. 5. Knowledge, Theory of. I. Title.
    TK7882.S65L48 2005
    006.4′54′015118–dc22        2004026215

British Library Cataloguing in Publication Data

A catalogue record for this book is available from the British Library

ISBN 0-470-84407-8

Typeset in 10/12 Times by Laserwords Private Limited, Chennai, India
Printed and bound in Great Britain by Antony Rowe Ltd, Chippenham, Wiltshire
This book is printed on acid-free paper responsibly manufactured from sustainable forestry in which at least two trees are planted for each one used for paper production.
To my parents Doris R. Levinson and Benjamin A. Levinson
Contents

Preface  xi

1  Introduction  1
   1.1  Milestones in the history of speech technology  1
   1.2  Prospects for the future  3
   1.3  Technical synopsis  4

2  Preliminaries  9
   2.1  The physics of speech production  9
        2.1.1  The human vocal apparatus  9
        2.1.2  Boundary conditions  14
        2.1.3  Non-stationarity  16
        2.1.4  Fluid dynamical effects  16
   2.2  The source–filter model  17
   2.3  Information-bearing features of the speech signal  17
        2.3.1  Fourier methods  19
        2.3.2  Linear prediction and the Webster equation  21
   2.4  Time–frequency representations  23
   2.5  Classification of acoustic patterns in speech  27
        2.5.1  Statistical decision theory  28
        2.5.2  Estimation of class-conditional probability density functions  30
        2.5.3  Information-preserving transformations  39
        2.5.4  Unsupervised density estimation – quantization  42
        2.5.5  A note on connectionism  43
   2.6  Temporal invariance and stationarity  44
        2.6.1  A variational problem  45
        2.6.2  A solution by dynamic programming  47
   2.7  Taxonomy of linguistic structure  51
        2.7.1  Acoustic phonetics, phonology, and phonotactics  52
        2.7.2  Morphology and lexical structure  55
        2.7.3  Prosody, syntax, and semantics  55
        2.7.4  Pragmatics and dialog  56

3  Mathematical models of linguistic structure  57
   3.1  Probabilistic functions of a discrete Markov process  57
        3.1.1  The discrete observation hidden Markov model  57
        3.1.2  The continuous observation case  80
        3.1.3  The autoregressive observation case  87
        3.1.4  The semi-Markov process and correlated observations  88
        3.1.5  The non-stationary observation case  99
        3.1.6  Parameter estimation via the EM algorithm  107
        3.1.7  The Cave–Neuwirth and Poritz results  107
   3.2  Formal grammars and abstract automata  109
        3.2.1  The Chomsky hierarchy  110
        3.2.2  Stochastic grammars  113
        3.2.3  Equivalence of regular stochastic grammars and discrete HMMs  114
        3.2.4  Recognition of well-formed strings  115
        3.2.5  Representation of phonology and syntax  116

4  Syntactic analysis  119
   4.1  Deterministic parsing algorithms  119
        4.1.1  The Dijkstra algorithm for regular languages  119
        4.1.2  The Cocke–Kasami–Younger algorithm for context-free languages  121
   4.2  Probabilistic parsing algorithms  122
        4.2.1  Using the Baum algorithm to parse regular languages  122
        4.2.2  Dynamic programming methods  123
        4.2.3  Probabilistic Cocke–Kasami–Younger methods  130
        4.2.4  Asynchronous methods  130
   4.3  Parsing natural language  131
        4.3.1  The right-linear case  132
        4.3.2  The Markovian case  133
        4.3.3  The context-free case  133

5  Grammatical inference  137
   5.1  Exact inference and Gold’s theorem  137
   5.2  Baum’s algorithm for regular grammars  137
   5.3  Event counting in parse trees  139
   5.4  Baker’s algorithm for context-free grammars  140

6  Information-theoretic analysis of speech communication  143
   6.1  The Miller et al. experiments  143
   6.2  Entropy of an information source  143
        6.2.1  Entropy of deterministic formal languages  144
        6.2.2  Entropy of languages generated by stochastic grammars  150
        6.2.3  Epsilon representations of deterministic languages  153
   6.3  Recognition error rates and entropy  153
        6.3.1  Analytic results derived from the Fano bound  154
        6.3.2  Experimental results  156

7  Automatic speech recognition and constructive theories of language  157
   7.1  Integrated architectures  157
   7.2  Modular architectures  161
        7.2.1  Acoustic-phonetic transcription  161
        7.2.2  Lexical access  162
        7.2.3  Syntax analysis  165
   7.3  Parameter estimation from fluent speech  166
        7.3.1  Use of the Baum algorithm  166
        7.3.2  The role of text analysis  167
   7.4  System performance  168
   7.5  Other speech technologies  169
        7.5.1  Articulatory speech synthesis  169
        7.5.2  Very low-bandwidth speech coding  170
        7.5.3  Automatic language identification  170
        7.5.4  Automatic language translation  171

8  Automatic speech understanding and semantics  173
   8.1  Transcription and comprehension  173
   8.2  Limited domain semantics  174
        8.2.1  A semantic interpreter  175
        8.2.2  Error recovery  182
   8.3  The semantics of natural language  189
        8.3.1  Shallow semantics and mutual information  189
        8.3.2  Graphical methods  190
        8.3.3  Formal logical models of semantics  190
        8.3.4  Relationship between syntax and semantics  194
   8.4  System architectures  195
   8.5  Human and machine performance  197

9  Theories of mind and language  199
   9.1  The challenge of automatic natural language understanding  199
   9.2  Metaphors for mind  199
        9.2.1  Wiener’s cybernetics and the diachronic history  201
        9.2.2  The crisis in the foundations of mathematics  205
        9.2.3  Turing’s universal machine  210
        9.2.4  The Church–Turing hypothesis  212
   9.3  The artificial intelligence program  213
        9.3.1  Functional equivalence and the strong theory of AI  213
        9.3.2  The broken promise  214
        9.3.3  Schorske’s causes of cultural decline  214
        9.3.4  The ahistorical blind alley  215
        9.3.5  Observation, introspection and divine inspiration  215
        9.3.6  Resurrecting the program by unifying the synchronic and diachronic  216

10  A speculation on the prospects for a science of mind  219
    10.1  The parable of the thermos bottle: measurements and symbols  219
    10.2  The four questions of science  220
         10.2.1  Reductionism and emergence  220
         10.2.2  From early intuition to quantitative reasoning  221
         10.2.3  Objections to mathematical realism  223
         10.2.4  The objection from the diversity of the sciences  224
         10.2.5  The objection from Cartesian duality  225
         10.2.6  The objection from either free will or determinism  225
         10.2.7  The postmodern objection  226
         10.2.8  Beginning the new science  227
    10.3  A constructive theory of mind  228
         10.3.1  Reinterpreting the strong theory of AI  228
         10.3.2  Generalizing the Turing test  228
    10.4  The problem of consciousness  229
    10.5  The role of sensorimotor function, associative memory and reinforcement learning in automatic acquisition of spoken language by an autonomous robot  230
         10.5.1  Embodied mind from integrated sensorimotor function  231
         10.5.2  Associative memory as the basis for thought  231
         10.5.3  Reinforcement learning via interaction with physical reality  232
         10.5.4  Semantics as sensorimotor memory  234
         10.5.5  The primacy of semantics in linguistic structure  234
         10.5.6  Thought as linguistic manipulation of mental representations of reality  235
         10.5.7  Illy the autonomous robot  235
         10.5.8  Software  237
         10.5.9  Associative memory architecture  238
         10.5.10  Performance  238
         10.5.11  Obstacles to the program  239
    10.6  Final thoughts: predicting the course of discovery  241

Bibliography  243

Index  257
Preface

Evolution

This monograph was written over the past four years to serve as a text for advanced graduate students in electrical engineering interested in the techniques of automatic speech recognition and text-to-speech synthesis. However, the book evolved over a considerably longer period for a significantly broader purpose. Since 1972, I have sought to demonstrate how mathematical analysis captures and illuminates the phenomena of language and mind. The first draft was written in 1975 during my tenure as a J. Willard Gibbs instructor at Yale University. The manuscript grew out of my lecture notes for a graduate course in pattern recognition, the main component of which was a statistical approach to the recognition of acoustic patterns in speech. The connection to language and mind was the result of both incorporating syntactic and semantic information into the statistical decision-theoretic process and observing that the detection and identification of patterns is fundamental to perception and intelligence. The incomplete manuscript was set aside until 1983, at which time an opportunity to resurrect it appeared in the guise of a visiting fellowship in the Engineering Department of Cambridge University. A revised draft was written from lecture notes prepared for another course in pattern recognition for third-year engineering students. This time, topics of syntax and semantics were augmented with several other aspects of linguistic structure and were encompassed by the notion of composite pattern recognition as the classification of complicated patterns composed of a multi-leveled hierarchy of smaller and simpler ones. This second draft also included a brief intellectual history of the collection of ideas designated by what I will later argue is an unfortunate name, artificial intelligence (AI), and a recognition of its role in speech.
Once again the manuscript was set aside until the occasion of my appointment to the Department of Electrical and Computer Engineering at the University of Illinois. In 1997 I began organizing a program of graduate study in speech signal processing that would include both instruction in the existing technology and research to advance it. In its present form, the program comprises three tightly integrated parts: a course devoted to speech as an acoustic signal, another course on the linguistic structure of the acoustic signal, and research directed at automatic language acquisition. The first course required little innovation as there are several texts that provide standard and comprehensive coverage of the material. This book is a modification of my long-dormant manuscript and is now the basis for both the second course covering mathematical models of linguistic structure and the research project studying automatic language acquisition.
Goals and Methods

Linguists, electrical engineers, and psychologists have collectively contributed to our knowledge of speech communication. In recognition of the interdisciplinary nature of the subject, this book is written so that it may be construed as either a mathematical theory of language or an introduction to the technologies of speech recognition and synthesis. This is appropriate since the speech technologies rest on psycholinguistic concepts of the modularity of the human language engine. On the other hand, the models and techniques developed by electrical engineers can quite properly be regarded as the single most comprehensive collection of linguistic knowledge ever assembled. Moreover, linguistic theories can only be applied and tested by embedding them in a mathematically rational and computationally tractable framework. However, mathematical and computational models are useful only to the extent that they capture the essential structure and function of the language engine. To the best of my knowledge, no single text previously existed that both covers all of the relevant material in a coherent framework and presents it in such a multidisciplinary spirit. A course of this nature could, heretofore, have been taught only by using several texts and a collection of old scholarly papers published in a variety of journals. Moreover, when significant portions of the material have been included in books on speech processing, they have been, without exception, presented as the immutable canon of the subject. The unpleasant fact is that while modern speech technology is a triumph of engineering, it falls far short of constructing a machine that is able to use natural spoken language in a manner even approaching normal human facility. There is, at present, only an incomplete science of speech communication supporting a correspondingly limited technology.
Based on the assumption that the shortcomings of our technology are the consequence of gaps in our knowledge rather than pervasive error, it does not seem unreasonable to examine our current knowledge with an eye toward extracting some general principles, thereby providing students with the background required to read the existing literature critically and to forge a strategy for research in the field that includes both incremental improvements and revolutionary ideas. Sadly, the recent literature is almost exclusively about technical refinements. There are several specific pedagogic techniques I have adopted to foster this perspective. Discussions of all of the mathematical models of linguistic structure include their historical contexts, their underlying early intuitions and the mechanisms by which they capture the essential features of the phenomena they are intended to represent. Wherever possible, it is shown how these models draw upon results from related disciplines. Since topics as diverse as acoustics and semantics are included, careful attention has been paid to reconciling the perspectives of the different disciplines, to unifying the formalisms, and to using coherent nomenclature. Another guiding principle of this presentation is to emphasize the meaningful similarities and relationships among the mathematical models in preference to their obvious but superficially common features. For example, not all models that have state spaces or use dynamic programming to explore them serve identical purposes, even if they admit of identical formal descriptions. Conversely, there are some obscure but significant similarities amongst seemingly disparate models. For example, hidden Markov models and stochastic formal grammars are quite different formally yet are similar in the important
sense that they both have an observable process to account for measurements and an underlying but hidden process to account for structure. Finally, students should know what the important open questions in the field are. The orientation of this book makes it possible to discuss explicitly some of the current theoretical debates. In particular, most current research is aimed at transcribing speech into text without any regard for comprehension of the message. At the very least, this distorts the process by placing undue emphasis on word recognition accuracy and ignoring the more fundamental roles of syntax and semantics in message comprehension. At worst, it may not even be possible to obtain an accurate transcription without understanding the message. Another mystery concerns the relative importance of perceptual and cognitive processes. Informed opinion has vacillated from one extreme to the other and back again. There is still no agreement, as different arguments are often organized along disciplinary boundaries. When this book is used as a text for a graduate course on speech technology, Chapters 1 and 2 should be considered a review of a prerequisite course on speech signal processing. Chapters 3 through 8 contain the technical core of the course and Chapters 9 and 10 place the material in its scientific and philosophical context. These last two chapters are also intended as guidance and motivation for independent study by advanced students. Whereas a technical synopsis of the contents of this book is given in Chapter 1, here I shall analyze it in a more didactic manner. The prerequisite material covered in Chapter 2 comprises succinct if standard presentations of the physics of speech generation by the vocal apparatus, methods of spectral analysis, methods of statistical pattern recognition for acoustic/phonetic perception, and a traditional taxonomy of linguistic structure. 
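One of the spectral-analysis prerequisites just mentioned, linear prediction, reduces to solving a Toeplitz system of normal equations for each speech frame. As a minimal sketch (my own illustration, not code from Chapter 2), the Levinson–Durbin recursion solves that system in O(p²) operations:

```python
def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations of linear prediction.

    r: autocorrelation sequence r[0], ..., r[order] of a frame.
    Returns the predictor coefficients a[1..order] and the final
    prediction-error energy.
    """
    a = [0.0] * (order + 1)
    e = r[0]  # zeroth-order prediction error is the frame energy
    for i in range(1, order + 1):
        # reflection (PARCOR) coefficient for stage i
        k = (r[i] - sum(a[j] * r[i - j] for j in range(1, i))) / e
        new_a = a[:]
        new_a[i] = k
        for j in range(1, i):
            new_a[j] = a[j] - k * a[i - j]
        a = new_a
        e *= 1.0 - k * k  # error energy shrinks at every stage
    return a[1:], e
```

For a frame whose autocorrelation decays as r[k] = 0.5^k (a first-order autoregressive signal), the recursion recovers the single predictor coefficient 0.5 and sets the second coefficient to zero.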
From these discussions we extract a few themes that will appear frequently in the succeeding chapters. First, the speech signal is a non-stationary time–frequency distribution of energy. This both motivates the importance of the short-duration amplitude spectrum for encoding the intelligence carried by the signal and justifies the use of the spectrogram, which is shown to be an optimal representation in a well-defined sense. Linear prediction is seen as a particularly useful spectral parameterization because of its close relationship to the geometry and physics of the vocal tract. Second, speech is literate. Thus, the spectral information must encode a small finite alphabet of symbols, the sequencing of which is governed by a hierarchy of linguistic rules. It follows, then, that any useful analysis of the speech signal must account for the representation of structured sequences of discrete symbols by continuous, noisy measurements of a multivariate, non-stationary function of time. This is best accomplished using non-parametric methods of statistical pattern recognition that employ a topological metric as a measure of perceptual dissimilarity. These techniques not only are optimal in the sense of minimum error, but also provide a justification for the direct normalization of time scales to define a metric that is invariant with respect to changes of time scale in the signal. The next six chapters are devoted to a detailed examination of techniques that address precisely these unique properties of the speech signal and, in so doing, capture linguistic structure. We begin with a study of probabilistic functions of a Markov process. Often referred to in the literature as hidden Markov models (HMMs), they have become a ubiquitous yet often seriously misunderstood mathematical object. The HMM owes its widespread application to the existence of a class of techniques for robust estimation of its
parameters from large collections of data. The true value of the HMM, however, lies not in its computational simplicity but rather in its representational power. Not only does it intrinsically capture non-stationarity and the transformation of continuous measurements into discrete symbols, it also provides a natural way to represent acoustic phonetics, phonology, phonotactics, and even prosody. In this book we develop the mathematical theory incrementally, beginning with the simple quantized observation case. We include a standard proof of Baum’s algorithm for this case. The proof rests on the convexity of the log-likelihood function and is somewhat opaque, providing little insight into the reestimation formulas. However, by relating the parameter estimation problem for HMMs to the classical theory of constrained optimization, we are able to give a novel, short, and intuitively appealing geometric proof showing that the reestimation formulas work by computing a finite step in a direction that has a positive projection on the gradient of the likelihood function. We then progress to models of increasing complexity, including the little-known cases of non-stationary observation distributions and semi-Markov processes with continuous probability density functions for state duration. We end the presentation of Chapter 3 with an account of two seminal but often overlooked experiments demonstrating the remarkable power of the HMM to discover and represent linguistic structure in both text and speech. The Cave–Neuwirth and Poritz experiments are then contrasted with the common formulation based on the special case of the non-ergodic HMM as a means of treating piecewise stationarity. As powerful and versatile as it is, the HMM is not the only nor necessarily the best way to capture linguistic structure. We continue, therefore, with a treatment of formal grammars in the Chomsky hierarchy and their stochastic counterparts. 
The latter are seen to be probabilistic functions of an unobservable stochastic process with some similarities to the HMM. For example, we observe that the right-linear grammar is equivalent to the discrete symbol HMM. However, the more complex grammars provide greater parsimony for fixed representational power. In particular, they provide a natural way to model the phonology and syntax of natural language. Based on these formalisms, Chapter 4 approaches the problem of parsing, that is, determining the syntactic structure of a sentence with respect to a given grammar. Despite its central role in linguistics, this problem is usually ignored in the speech processing literature because it is usually assumed that word order constraints are sufficient for transcription of an utterance and the underlying grammatical structure is superfluous. We prefer the position that transcription is only an intermediate goal along the way to extracting the meaning of the message, for which syntactic structure is a prerequisite. Later we advance the idea that, in fact, transcription without meaning is a highly error-prone process. Parsing a spoken utterance is beset by two sources of uncertainty: variability of the acoustic signal and ambiguity in the production rules of the grammar. Here we show that these uncertainties can be accounted for probabilistically in two complementary ways: assigning likelihoods to the words conditioned on the acoustic signal and placing fixed probabilities on the rules of the grammar. Both of these ideas can be efficiently utilized at the first two levels of the Chomsky hierarchy and, in fact, they may be combined. We develop probabilistic parsing algorithms based on the Dijkstra and Cocke–Kasami–Younger algorithms for the right-linear and context-free cases, respectively.
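The combination of rule probabilities and word likelihoods can be made concrete with a small sketch. The following probabilistic Cocke–Kasami–Younger recognizer is my own illustration (the toy grammar below is hypothetical, not an example from Chapter 4); it assumes a grammar in Chomsky normal form and returns the probability of the most likely parse:

```python
from collections import defaultdict

def pcky(words, lexical, binary, start="S"):
    """Probability of the most likely parse of `words` under a
    stochastic context-free grammar in Chomsky normal form.

    lexical: {(A, w): p}     for rules A -> w  with probability p
    binary:  {(A, B, C): p}  for rules A -> B C with probability p
    """
    n = len(words)
    best = defaultdict(float)  # best[(i, j, A)] = max P(A => words[i:j])
    for i, w in enumerate(words):
        for (A, word), p in lexical.items():
            if word == w:
                best[i, i + 1, A] = max(best[i, i + 1, A], p)
    for span in range(2, n + 1):          # widen spans bottom-up
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):     # split point
                for (A, B, C), p in binary.items():
                    q = p * best[i, k, B] * best[k, j, C]
                    if q > best[i, j, A]:
                        best[i, j, A] = q
    return best[0, n, start]
```

Word likelihoods conditioned on the acoustic signal can be folded in by replacing the fixed lexical probabilities with those likelihoods, which is the combination of the two probabilistic devices described above.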
In Chapter 5, we address the inverse of the parsing problem, that of grammatical inference. This is the problem of determining a grammar from a set of possibly well-formed sentences, the syntactic structure of which is not provided. This is a classical problem and is usually ignored by linguists as too difficult. In fact, the difficulty of this problem is regarded by strict Chomskians as proof that the human language engine is innate. We, however, treat the problem of grammatical inference as one simply of parameter estimation. We show that the reestimation formulas for the discrete symbol HMM and the little-known Baker algorithm for stochastic context-free grammars are actually grammatical inference algorithms. Once the stochastic grammars are estimated, their deterministic counterparts are easily constructed. Finally, we show how parsing algorithms can be used to provide the sufficient statistics required by the EM algorithm so that it may be applied to the inference problem. Chapter 6 is a divertimento in which we reflect on some of the implications of our mathematical models of phonology, phonotactics, and syntax. We begin by recalling an instructive experiment of Miller et al. demonstrating quantitatively that human listeners use linguistic structure to disambiguate corrupted utterances. This phenomenon is widely interpreted in the speech literature to mean that the purpose of grammar is to impose constraints on word order and thereby reduce recognition error rates in the presence of noise or other naturally occurring variability in the speech signal. Moreover, this analysis of Miller is the unstated justification for ignoring the grammatical structure itself and using only word order for transcription. The information-theoretic concept of entropy is correctly used in the literature on speech recognition as a measure of the uncertainty inherent in word order, leading to the intuition that recognition error rate rises with increasing entropy. 
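The intuition linking entropy to recognition error rate can be illustrated numerically. For a language whose word sequences are generated by a stationary ergodic Markov chain (a toy stand-in of mine for the stochastic grammars treated in Chapter 6), the entropy rate follows directly from the transition probabilities, with no corpus needed:

```python
from math import log2

def markov_entropy_rate(P):
    """Entropy rate in bits per word of a stationary ergodic Markov
    word source with transition matrix P: H = sum_i pi_i * H(P[i]),
    where pi is the stationary distribution (found by power iteration).
    """
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(1000):  # power iteration; converges for ergodic P
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return sum(pi[i] * sum(-p * log2(p) for p in P[i] if p > 0)
               for i in range(n))
```

A source that chooses between two equally likely successors at every word has an entropy rate of 1 bit per word; a deterministic word cycle has an entropy rate of 0, and a recognizer for it could in principle achieve zero error.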
Entropy is typically estimated by playing the Shannon game of sequential prediction of words from a statistical analysis of large corpora of text or phonetic transcriptions thereof. Here we take a unique approach showing how the entropy of a language can be directly calculated from a formal specification of its grammar. Of course, entropy is a statistical property most easily obtained if the grammar is stochastic. However, we show that entropy can be obtained from a deterministic grammar simply by making some weak assumptions about the distributions of sentences in the language. Taking this surprising result one step further, we derive from the Fano bound a quantitative relationship among the entropy of a language, the variability intrinsic to speech, and the recognition error rate. This result may be used to explain how grammar serves as the error-correcting code of natural language. All of the foregoing material is unified in Chapter 7 into a constructive theory of language or, from the engineer’s perspective, the design of a speech recognition machine. We discuss two basic architectures, one integrated, the other modular. The latter approach is inspired by psycholinguistic models of human language processing and depends crucially on the Cave–Neuwirth and Poritz experiments featured in Chapter 3. We note the use of the semi-Markov model to represent aspects of prosody, phonotactics, and phonology. We also demonstrate the ability of the modular system to cope with words not contained in its lexicon. In evaluating the performance of these systems, we observe that their ability to transcribe speech into text without regard for the meaning of the message arguably exceeds human performance on similar tasks such as recognizing fluent speech in an unknown
language. And yet, this remarkable achievement does not provide speech recognition machines with anything remotely like human linguistic competence. It seems quite natural, then, to try to improve the performance of our machines by providing them with some method for extracting the meaning of an utterance. On the rare occasions when this idea is discussed in the literature, it is often inverted so that the purpose of semantic analysis becomes simply that of improving word recognition accuracy. Of course, this is a very narrow view of human linguistic behavior. Humans use language to convey meaningful messages to each other. Linguistic competence consists in the ability to express meaning reliably, not to simply obtain faithful lexical transcriptions. It is in this ability to communicate that our machines fail. Chapter 8, therefore, is devoted to augmenting the grammatical model with a semantic one and linking them in a cooperative way. We begin with a description of a laboratory prototype for a speech understanding system. Such a system should not simply be a transcription engine followed by a text processing semantic module. We note that such a system would require two separate syntax analyzers. Whereas, if the parsing algorithms described in Chapter 4 are used, the requisite syntactic structure is derived at the same time that the word order constraints are applied to reduce the lexical transcription error rate. The most straightforward approach is to base the understanding system on the simplified semantics of a carefully circumscribed subset of natural language. Such formal artificial languages bear a strong resemblance to programming languages and can be analyzed using compiler techniques. Such systems may be made to carry out dialogs of limited scope with humans. However, the communication process is quite restricted and brittle. Extension of the technique to another domain of discourse is time-consuming because little if any data can be reused. 
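The compiler-like directness of limited-domain semantics, and its brittleness, can both be seen in a caricature (the domain, phrasing, and frame slots here are entirely my own hypothetical illustration, not the system of Chapter 8):

```python
import re

# One pattern per message type maps a sentence straight to a meaning
# frame; anything outside the circumscribed sublanguage is rejected.
PATTERNS = [
    (re.compile(r"show flights from (\w+) to (\w+)"),
     lambda m: {"act": "list", "origin": m.group(1), "dest": m.group(2)}),
    (re.compile(r"book flight (\d+)"),
     lambda m: {"act": "book", "flight": m.group(1)}),
]

def interpret(sentence):
    """Return a meaning frame, or None for out-of-domain input."""
    text = sentence.lower().strip()
    for pattern, to_frame in PATTERNS:
        m = pattern.fullmatch(text)
        if m:
            return to_frame(m)
    return None
```

A slight rephrasing ("list the flights to Denver from Boston") already falls outside the sublanguage, which is exactly the brittleness noted above, and porting the interpreter to a new domain means rewriting every pattern.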
What is required to enable the machine to converse in colloquial discourse is a generalized model of unrestricted semantics. There are many such models, but they all reduce to mathematical logic or searching labeled, directed graphs. The former rests on the intuition that the extraction of meaning is equivalent to the derivation of logically true statements about reality, said statements being expressed formally in first-order logic. The latter model rests on the intuition that meaning emerges out of the properties of and relationships among objects and actions and can be extracted by finding suitable paths in an abstract graph. Such ideas have yet to be applied to speech processing. Thus, Chapter 8 concludes in an unsatisfying manner in that it provides neither theoretical nor empirical validation of a model of semantic analysis. Up to this juncture, the exposition is presented in the customary, turgid scientific style. The mathematics and its application are objective and factual. No personal opinions regarding their significance are advanced. For Chapters 9 and 10, that conservatism is largely discarded as the unfinished work of the first eight chapters deposits us directly on the threshold of some of the very deepest and most vociferously debated ideas in the Western philosophical tradition. We are forced to confront the question of what sort of theory would support the construction of a machine with a human language faculty and we are obliged to assess the role of our present knowledge in such a theory. This profound shift of purpose must be emphasized. In the two concluding chapters, then, the mathematics nearly vanishes and, to preserve some semblance of intellectual responsibility, I employ the first person singular verb form.
It is my strongly held belief that a simulation of the human language engine requires nothing less than the construction of a complete human mind. Although this goal has proved to be utterly elusive, I insist that there is no inherent reason why it cannot be accomplished. There is, however, a cogent reason for our quandary revealed by a critical review of the intellectual history of AI. In a remarkable work entitled Fin-de-Siècle Vienna: Politics and Culture, Carl E. Schorske gives a highly instructive explanation for the unkept promises of AI. He convincingly argues that cultural endeavors stagnate and fail when they become ahistorical by losing contact with both their diachronic history (i.e. their intellectual antecedents) and their synchronic history (i.e. their connections to independently developed but related ideas), and become fixated in the technical details of contemporary thought. Although Schorske did not include science in his analysis, his thesis seems highly appropriate there, too, with AI as a striking instance. Specifically, the loss of history in rapid response to an overwhelming but narrow discovery is made manifest by comparing the work of Norbert Wiener and Alan Turing. The first edition of Wiener’s Cybernetics was published in 1948, the very year that ENIAC, the first electronic, stored-program, digital computer, became operational. From this very early vantage point, Wiener has a fully diachronic perspective and recognizes that from ancient times to the present, metaphors for mind have always been expressed in the high technology of the day. Yet he clearly sees that the emerging computer offers a powerful tool with which to study and simulate information and control in machines and organisms alike. By 1950, Turing, on the other hand, had developed a deep understanding of the implications of his prior work in the foundations of mathematics for theories of mind.
Since the Universal Turing Machine, and, hence, its reification in the form of the digital computer, is capable of performing almost any symbolic manipulation process, it is assumed sufficient for creating a mental model of the real world of our everyday experience. This intuition has evolved into what we today refer to as the “strong theory of AI”. It is an almost exclusively contemporary view and was, in fact, Turing’s preferred interpretation of thought as a purely abstract symbolic process. There is, however, a historical aspect to the remarkable 1950 paper. This is not surprising since the ideas it expresses date from the mid-1930s, at which time the metaphors for mind derived from classical electromechanical devices. In the penultimate paragraph of the paper, Turing offers an astounding and often overlooked alternative to the technical model of thought as symbolic logic. He suggests that the symbols and the relations among them could be inferred from real-world sensory data, a cybernetic and hence, historical view. Unfortunately, the next generation of thinkers following Wiener and Turing fully endorsed the mind–software identity and en route lost all semblance of the historical trajectory. Based on my interpretation of Schorske, I submit that there have been no conceptual advances since then in the AI tradition. There has been some technical progress but no enlightenment. This is a rather frustrating conclusion in light of the elegance of Turing’s theory which seemed to promise the immediate construction of an indisputably mechanical mind. The key to revitalizing research on the theory of mind lies in synthesizing the synchronic and diachronic histories in what I call the cybernetic paradigm. This presently unfashionable mode of interdisciplinary thought unifies Turing’s and Wiener’s work and
comprises much of the material of the first eight chapters of this volume. My synthesis leads to the following constructive theory of brain, mind, and language. The disembodied mind is a fantasy. A well-integrated sensorimotor periphery is required. Thought is almost exclusively the product of associative memory rather than symbolic logic. The memory is highly sensitive to spatiotemporal order and its episodic structure integrates all sensorimotor stimuli. Thus, there are no isolated perceptual or cognitive functions. Memory is built up from instincts by the reinforcement of successful behavior in the real world at large. As a cognitive model of reality is acquired, a linguistic image of it is formed using specialized brain structures. This “language engine” is primarily responsive to semantic information while other levels of linguistic structure exist to make semantics robust to ambiguity. I note in passing that this theory is in direct opposition to the well-known Chomskian view that language is grammar, that is, that the difficulty in language acquisition is precisely the difficulty of learning the acoustic/phonetic, phonological, morphological, prosodic, and syntactic rules that define language. According to the theory described above, by contrast, Chomskian grammar is both an error-correcting code that makes communication reliable and a framework upon which semantics is built. When the language is fully acquired, most mental processes are mediated linguistically and we appear to think in our native language, which we hear as our mind’s voice. Finally, I describe a means of testing this theory of cognition by building an autonomous intelligent robot. For the purposes of this experiment, sensorimotor function includes binaural audio, stereo video, tactile sense, and proprioceptive control of motion and manipulation of objects. Thus, I am able to exploit the synergy intrinsic in the combined sensorimotor signals.
This sensory fusion is essential for the development of a mental representation of reality. The contents of the associative memory must be acquired by the interaction of the machine with the physical world in a reinforcement training regime. The reinforcement signal is a direct, real-time, on-line evaluation of only the success or failure of the robot’s behavior in response to some stimulus. This signal comes from three sources: autonomous experimentation by the robot including imitation, instruction of the robot by a benevolent teacher as to the success or failure of its behavior, and instruction of the robot by the teacher in the form of direct physical demonstration of the desired behavior (e.g. overhauling the robot’s actuators). Such instruction makes no use of any supervised training based on preclassified data. Nor does the robot use any predetermined representation of concepts or algorithms. There is no research known to me which is based on quite the combination of ideas I have described or quite the spirit in which I invoke them. A unique feature of the approach I advocate is the central role of language in the formation of the human mind. As of this writing, my experiments have produced a robot, trained as described above, of sufficient complexity to be able to carry out simple navigation and object manipulation tasks in response to naturally spoken commands. The linguistic competence of the robot is acquired along with its other cognitive abilities in the course of its training. This result is due to the synergistic effect that the behavior of a complex combination of simple parts can be much richer than would be predicted by analyzing the components in isolation. Of course, I make no claim to have built a sentient being and I recognize that my hypotheses are controversial. However, in my best scientific and technical judgment, when a mechanical mind is eventually constructed, it will much more closely resemble
the ideas expressed in the final two chapters than it does those of the previous six which are so vigorously pursued at present. I am, of course, fully aware that those readers who find the technical aspects of this book worthwhile may well regard the final two chapters as a wholly inappropriate flight of fancy. Conversely, those who are intrigued by my metaphysics may judge the plethora of technical detail in the first eight chapters to be hopelessly boring. After 35 years of research on this subject, my fondest hope is that a few will find the presentation, as a whole, a provocative albeit controversial reflection on some significant scientific ideas and, at the same time, an exciting approach to an advanced technology of the future.
Acknowledgments

Since this book evolved over a period of years, it is hardly surprising that many people influenced the thought processes that determined its ultimate form. My interest in automatic speech understanding originated in my doctoral research at the University of Rhode Island under the patient direction of D. W. Tufts. Foremost among the many invaluable contributions he made to this book was that he allowed me the freedom to explore the very ideas that led to this book despite his ambivalence toward them. He also recognized the relevance of and acquainted me with the early pattern recognition literature, much of which was applied to the acoustic patterns of speech. At Yale University, where this book was first conceived, I greatly benefited from my interaction with F. Tuteur and H. Stark who encouraged me in many ways, not the least of which was their faithful attendance of and participation in my first offering of a course on pattern recognition. At the same time I was introduced to the crucial relationship of dynamic programming to parsing by R. J. Lipton and L. Snyder. My 22 years at the Bell Telephone Laboratories, mostly under the supervision of J. L. Flanagan, was an extraordinary learning experience. I still marvel at my good fortune to have been a member of his Acoustics Research Department and Information Principles Research Laboratory. During that entire period, Dr. Flanagan was my most steadfast advocate. I must also single out three colleagues, L. R. Rabiner, A. E. Rosenberg, and M. M. Sondhi, without whose continuous collaboration I would never have understood numerous fundamental concepts nor had access to software and experimental data with which to study them. My first real understanding of the theory of hidden Markov models resulted from my very stimulating association with the mathematicians, A. B. Poritz, L. R. Liporace and J. Ferguson of the Institute for Defense Analyses.
The second partial draft of this book was written as the result of an invitation from F. Fallside to come to the Engineering Department of Cambridge University. Through his efforts and with the generous support of Bell Laboratories, a visiting fellowship sponsored by the Science and Engineering Research Council of the UK was arranged. It is a source of deep regret to me that Prof. Fallside did not live to see this final draft. I finally completed this manifesto as the result of a joint appointment to the Beckman Institute and the Department of Electrical and Computer Engineering at the University of Illinois. To rescue me from a pervasive climate of indifference to my ideas, S. M. Kang, J. Jonas, and T. S. Huang encouraged me to come to the Prairie and supported me in pursuit of the ideas to which I have devoted my entire professional life.
I am especially indebted to my loving wife of 25 years, Dr. Diana Sheets, whose perspectives as a historian sensitized me to the tidal forces that shaped the intellectual and cultural waterfront of the early twentieth century. Her suggested reading list and our many discussions of the effect of generational rebellion in the development of a discipline and the tension between synchronic and diachronic analyses are reflected in the final two chapters. I offer a final note of recognition and thanks to Sharon Collins who expertly typed the manuscript, often from barely legible handwritten drafts, and illustrations drawn in India ink on mylar, and rewarded my frequent complaints with only an occasional barbed reply. Thanks are also due to my colleague Mark Hasegawa-Johnson for his critical reading of Chapters 9 and 10 and to the graduate students at Yale, Cambridge, and Illinois who suffered through the early versions of the course, identified numerous opacities and errors in the text, and made important suggestions to improve the presentation. Unfortunately, one can never truly chronicle the evolution of even his own ideas. Many of mine derive from frequent interactions with colleagues throughout my career. I have been particularly fortunate in the exchange of ideas afforded me by my membership in several IEEE Technical Societies and the Acoustical Society of America. More recently, I have expanded my horizons greatly as a result of discussions with several participants in the 1998 semester-long faculty seminar entitled “Mind, Brain and Language”, sponsored by the Center for Advanced Study at the University of Illinois. Of course, any mathematical blunders or poor judgment in the selection of subject matter I may have made or any intellectual improprieties in which the reader feels I have indulged are attributable not to my aforementioned benefactors but to me alone.
I am gratified by the opportunity to proffer my theories, however incisive or specious they eventually prove to be.

S. E. Levinson
Urbana-Champaign
1 Introduction

1.1 Milestones in the History of Speech Technology

From antiquity, the phenomenon of speech has been an object of both general curiosity and scientific inquiry. Over the centuries, much effort has been devoted to the study of this remarkable process whereby our eating and breathing apparatus is used to transform thoughts in the mind of a speaker into vibrations in the air and back into congruent thoughts in the mind of a listener. Although we still do not have satisfactory answers to many of the questions about speech and language that the ancients pondered, we do have substantial scientific knowledge of the subject and an evolving technology based on it. It is difficult to select a particular event or discovery as the origin of speech technology. Perhaps the speaking machine of W. von Kempelen [153] in the late eighteenth century qualifies. We can, however, safely say that the great body of classical mathematics and physics enabled the invention of the telephone, radio, audio recording, and the digital computer. These technologies gradually became the primary components of the growing global telecommunications network in which the conflicting criteria of high-fidelity and low-bandwidth transmission demanded that attention be focused on the nature of the speech signal. In the 1940s, basic research was conducted at Bell Telephone Laboratories and the Research Laboratory of Electronics at the Massachusetts Institute of Technology in auditory physiology, the psychophysics of acoustic perception, the physiology of the vocal apparatus, and its physical acoustics. Out of this effort a coherent picture of speech communication emerged. New instruments such as the sound spectrograph and the vocoder were devised for analyzing and generating speech signals. Much of this knowledge was encapsulated in the source–filter model of speech production which admitted of both a mathematical formulation and a real electrical implementation.
Building on this foundation, analog circuitry was invented for both narrowband voice transmission and recognition of spoken numbers by classification of acoustic patterns extracted from the speech signal. By the early 1950s, it had been recognized that the digital computer would become the tool of choice for analyzing signals in general and speech in particular. As a result, speech research spent the next two decades or so converting the analog circuitry to its digital equivalent. The relationship between digital signal processing (DSP) and speech analysis was mutually beneficial. Because the bandwidth of the speech signal was well matched to the processing speeds of the early computers, the new DSP techniques proved
to be easy to use, efficient, and effective. Many DSP algorithms, such as those for linear prediction and Fourier analysis, were particularly appropriate for speech and were quickly adopted. This, in turn, resulted in the development of new and more general theories and methods of DSP. The mathematical theories of information and communication, random processes, detection and estimation, and spectral analysis went through similar transformations as they were adapted for digital implementations. One important outcome of this metamorphosis was the development of statistical pattern analysis. Such techniques were precisely what was needed for automatic speech recognition and they were quickly applied. As in the case of DSP, the success of pattern recognition for speech processing led to the development of new general methods of pattern recognition. During this period, another basic new mathematical theory appeared, that of probabilistic functions of a Markov process, commonly known as hidden Markov models (HMMs) [27]. This theory was destined to become the core of most modern speech recognition systems. Concurrently, microelectronic technologies were rapidly developing. In particular, new devices for fast arithmetic and special addressing schemes appeared, making small, lowpower speech processors readily available. These devices were responsible for the debut in the early 1970s of the first of several generations of inexpensive speech recognition systems for industrial applications. The availability of all of these new digital techniques brought about spectacular advances in speech recognition and naturally encouraged research on ever more difficult problems such as recognition of fluent utterances independent of the speaker. The unbounded enthusiasm for these endeavors prompted John Pierce to write his infamous letter entitled “Whither Speech Recognition”. 
Published in 1969, it was a scathing criticism of speech recognition research warning that until cognitive processes were understood and included in speech recognition machines, no progress would be made. Perhaps Pierce was unaware that his concerns were being addressed elsewhere independent of the work in speech recognition. Studies of language were under way. In particular, Zelig Harris [119] and Noam Chomsky [45] had proposed formal specifications of grammar and theories of their role in natural language. Marvin Minsky [222] and his students at the MIT AI Laboratory proposed computational methods for representing the semantics of natural language. Finally, in 1970 Allen Newell and several colleagues [233] drafted a report to the Advanced Research Projects Agency (ARPA) suggesting that formal models of syntax and semantics be incorporated into acoustic pattern recognition algorithms to enable the construction of more sophisticated systems that could understand spoken messages in the context of a simple, well-specified task. The first attempt to realize the goals set forth in the Newell report was the ARPA speech understanding initiative. Under this program several efforts were undertaken to construct a speech recognition system based on the standard, modular model of the human language engine. Naive implementations of this model failed. This was both disappointing and surprising in light of the success of this model in speech synthesis. Although several components and partial systems were built by teams at Carnegie Mellon University [206] and Bolt Beranek and Newman, Inc. [332], they were never effective in speech recognition. While the ARPA project was collapsing, Frederick Jelinek and his colleagues at IBM and, independently, James Baker, then a student at Carnegie Mellon University, introduced the hidden Markov model to speech recognition [20, 147]. The theoretical papers on the
HMM had been written by Leonard Baum and his colleagues at the Institute for Defense Analyses in the late 1960s [25, 26, 27, 28, 29]. Sadly, the applications of their work did not appear in the open literature at that time, which may account for the delay of nearly a decade before the method was used for speech recognition. In the HMM-based systems, all aspects of linguistic structure are integrated into a monolithic stochastic model the parameters of which can be determined directly from a corpus of speech. The architecture also supports an optimal statistical decision-theoretic algorithm for automatic speech recognition. Due to these important properties, the HMM methodology succeeded where the naive linguistic model failed and an important lesson was learned. Chaotic, rule-based implementations of the otherwise useful modular model cannot be optimized since they lack the mathematical rationality and computational tractability of the HMM-based systems. At present, all speech recognition systems use the integrated HMM-based approach. Some versions of it are now commercially available for use on personal computers; however, their performance is not as reliable as one might wish. The success of the HMM-based system focused attention on the transcription of speech into text for use in a voice-operated typewriter or dictation machine. One important aspect of the modular approach that the integrated HMM-based system does not address is that of message comprehension. This is because only word order constraints have computationally tractable implementations that can be naturally fit into the HMM framework. Although the need for semantics and underlying syntactic structure is obvious, the lack of a compatible mathematical formulation makes it less attractive. At the present time, the use of syntactic structure and semantic analysis is still an open question.
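The decision-theoretic algorithm mentioned here amounts to searching for the state sequence that maximizes the likelihood of the observed acoustics under the stochastic model. As a hedged illustration only (the two-state model and its parameters below are invented for this sketch, not taken from any system in the text), the classical Viterbi algorithm computes that most likely path:

```python
import math

def viterbi(A, B, pi, obs):
    """Most likely state path of a discrete-symbol HMM, in the log domain.

    A[i][j]: transition probability i -> j
    B[i][k]: probability that state i emits symbol k
    pi[i]:   initial state probability
    obs:     list of observed symbol indices
    """
    N = len(pi)
    delta = [math.log(pi[i]) + math.log(B[i][obs[0]]) for i in range(N)]
    back = []  # backpointers, one list per time step after the first
    for o in obs[1:]:
        prev, step, new_delta = delta, [], []
        for j in range(N):
            best = max(range(N), key=lambda i: prev[i] + math.log(A[i][j]))
            step.append(best)
            new_delta.append(prev[best] + math.log(A[best][j]) + math.log(B[j][o]))
        back.append(step)
        delta = new_delta
    state = max(range(N), key=lambda s: delta[s])
    path = [state]
    for step in reversed(back):  # trace the best path backwards
        state = step[state]
        path.append(state)
    return path[::-1], max(delta)

# invented two-state toy model: state 0 favors symbol 0, state 1 favors symbol 1
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
pi = [0.5, 0.5]
path, score = viterbi(A, B, pi, [0, 0, 1, 1])
```

In a recognizer the states would be tied to sub-word units and the path would induce a word sequence; the toy model above only shows the dynamic-programming recursion itself.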
Some early speech understanding systems were actually constructed by Raj Reddy [272, 178] and this author [179, 180] based on straightforward application of compiler technology to carefully circumscribed data retrieval tasks. Unlike the HMM-based recognition systems, these experiments remained in the laboratory for considerable time, ultimately appearing in greatly simplified form in some telephone-based applications. On the other hand, there are some simple, commercially successful uses of speech understanding. These limited applications substitute automatic recognition of isolated words and phrases from a limited vocabulary for a small number of single keystrokes on a telephone touch pad. This straightforward exchange allows a speaker to perform some simple functions selected from a carefully constructed menu. Such systems are used by travel agencies and financial institutions over the public telephone network. They are quite robust and well tolerated by the general population. This brief account brings us to the present state of the art. In the sequel, we examine in detail the theories and techniques that brought us to this juncture and we consider how we might advance beyond it.
1.2 Prospects for the Future

The ultimate goal of speech technology is the construction of machines that are indistinguishable from humans in their ability to communicate in natural spoken language. As noted, the performance of even the best existing machines falls far short of the desired level of proficiency. Yet, a variety of human–machine communication tasks have been demonstrated as research prototypes and some of that technology is now available commercially.
Solving the ultimate puzzle is valuable both as an intellectual achievement and for the practical benefits it would confer on society. Eventually, telecommunications will be provided by a vast digital packet switched network the terminal devices of which, whether they be fixed or portable, will be more like computers than telephones and will be on-line continuously. The present-day Internet has provided us enough of a glimpse of this future to know that its value lies in its ability to connect every terminal to every known source of information on the planet. If everyone is to take full advantage of this remarkable resource, it must appear to every network subscriber as if he has his own personal librarian to help him acquire whatever information or service he requires. Since there are not enough trained librarians to go around, the service must be automated. The point-and-click interface of today is inadequate for that purpose, especially for hand-held devices. By contrast, a perfected speech technology would provide universal access to most of the information available on the Internet by means of ordinary conversation. This would greatly improve the ease and efficiency with which a mass society purchases goods and services, maintains financial, medical, and other personal records, and obtains information. An advanced technology could also be a component of prosthetic aids for people afflicted with speech, hearing, and even cognitive disorders. Most practitioners of speech technology believe that this futuristic vision is close at hand. It is commonly supposed that the performance of today’s best experimental systems is only an order-of-magnitude in error rate away from human performance and that even existing technology is commercially viable. It is also a widely held view that the order of magnitude improvement required for human-like performance will be achieved by incremental improvement rather than revolutionary new insights and techniques [185]. 
Regardless of how the technology advances – and there is no reason to suppose it will not – it is reasonable to expect that when the ultimate goal has been achieved, some of the existing technology, imperfect though it may be, will have survived in familiar form. It is prudent, therefore, to study the present state of the art while looking for radical new methods to advance it.
1.3 Technical Synopsis

Modern speech processing technology is usually considered to comprise the three related subfields of speech coding, speech synthesis, and automatic speech recognition. The latter two topics refer to techniques for transforming text into speech and speech into text, respectively. Speech coding is the process of faithful and efficient reproduction of speech usually for communication between humans. We shall not address speech coding here except to note, in passing, that a system composed of a speech recognition device and a speech synthesizer could be made into the ultimate coder in which speech is transcribed into text, transmitted at 50 bits per second and then converted back to speech. Methods for speech recognition and synthesis are more naturally applied to the construction of systems for human–machine communication by voice. It is the theory of such systems to which these pages are largely devoted. We begin with the acoustic signal and proceed level by level through the hierarchy of linguistic structure up to and including the determination of meaning in the context of a conversation. Chapter 2 is a review of the material considered to be prerequisite for our mathematical analysis of the structure of language. This material is presented in highly condensed form
as there are definitive texts for each of the four topics covered. First we review the physics of speech production in the human vocal apparatus. Readers wishing a thorough treatment of this subject are urged to consult Flanagan [86]. The physics of speech generation leads to the source–filter model of Dudley [69] and to the importance of the short-duration amplitude spectrum. Representation of the spectrum using Fourier analysis leads to the optimal formulation of the spectrogram while linear prediction analysis yields a particular parameterization of the filter closely related to the governing physics and geometry. Comprehensive studies of these topics may be found in Riley [274] and Markel and Gray [211], respectively. Fletcher [89] provides a thorough treatment of categorical perception, the process by means of which humans classify acoustic patterns. This function is well described by the theory of statistical pattern recognition. Here, we follow Patrick [241], emphasizing the non-parametric, Bayesian approach. Finally, we review the types of linguistic structure for which we will later develop detailed, faithful, mathematical models. We adopt the broad taxonomy of C. S. Peirce [244] and then refine and augment it with the classical presentation found in Chomsky and Halle [47]. Chapters 3 through 8 provide mathematical models of several aspects of linguistic structure. We begin with two powerful analytical tools, the probabilistic function of a Markov process, otherwise known as the hidden Markov model, and the formal grammar. First the HMM is developed in full mathematical detail, beginning with the basic discrete symbol case. We then proceed to generalize the elementary case to that of elliptically symmetric distributions, of which the Gaussian is a special instance. Then we advance to the universal case of Gaussian mixtures and two special cases, the autoregressive process, related to linear prediction, and the non-stationary autoregressive case. 
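As a concrete anchor for the basic discrete-symbol case described above, the likelihood that an HMM generated an observed symbol sequence is computed by the forward recursion. The following minimal sketch uses invented toy parameters of my own, not a model from the text:

```python
def forward_likelihood(A, B, pi, obs):
    """P(obs | model) for a discrete-symbol HMM via the forward recursion.

    A[i][j]: transition probability from state i to state j
    B[i][k]: probability that state i emits symbol k
    pi[i]:   initial state probability
    obs:     list of observed symbol indices
    """
    N = len(pi)
    # alpha[j] = P(o_1 ... o_t, state_t = j)
    alpha = [pi[j] * B[j][obs[0]] for j in range(N)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(N)) * B[j][o]
                 for j in range(N)]
    return sum(alpha)

# invented two-state, two-symbol toy model
A = [[0.6, 0.4], [0.3, 0.7]]
B = [[0.8, 0.2], [0.1, 0.9]]
pi = [0.5, 0.5]
p = forward_likelihood(A, B, pi, [0, 1])
```

The continuous-density generalizations discussed in the text replace the discrete emission table B with Gaussian, Gaussian-mixture, or autoregressive densities, but the recursion over the hidden states is unchanged.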
Next, turning our attention to the hidden process, we relax the constraint of exponentially decreasing state durations and consider semi-Markov processes and the problem of correlated observations. In a similar manner, we develop the formal grammar by considering the members of the Chomsky hierarchy in order of increasing parsimony of expression. For reasons of computational complexity, the detailed analyses are confined to the right-linear and context-free cases. Finally, we recount two classical experiments based on the HMM demonstrating how these models discover and represent linguistic structure. We show how both models can be used to capture acoustic phonetics, phonology, phonotactics, syntax, and even some aspects of prosody. These mathematical models have desirable properties. They reflect the natural constraints on the order in which words and sounds are allowed to appear and they specify the permissible phrase structures of the well-formed sequences. Phrase structure will later be seen to be important to representation of meaning. In Chapter 4 we develop parsing algorithms that enable both the optimal use of ordering constraints in speech recognition and the determination of the underlying structure for subsequent use as an outline of semantics. The parsing algorithms are seen to be applicable to both deterministic and stochastic specifications of linguistic structure for either right-linear or context-free grammars. The simple right-linear case has an equivalent HMM. Finally, we show how these models may be used to express the syntax of natural language.
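The context-free case mentioned above can be illustrated by the classical CYK algorithm, which recognizes a string under a grammar in Chomsky normal form in cubic time. The grammar and three-word lexicon below are toy examples of my own devising, not drawn from the book:

```python
def cyk(grammar, lexicon, words, start="S"):
    """CYK recognition for a context-free grammar in Chomsky normal form.

    grammar: {A: [(B, C), ...]} binary rules A -> B C
    lexicon: {word: {A, ...}}   preterminal rules A -> word
    """
    n = len(words)
    # table[i][j] holds the nonterminals deriving words[i : i + j + 1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):
        table[i][0] = set(lexicon.get(w, ()))
    for span in range(2, n + 1):            # length of the span
        for i in range(n - span + 1):       # start of the span
            for k in range(1, span):        # split point
                left, right = table[i][k - 1], table[i + k][span - k - 1]
                for A, rules in grammar.items():
                    if any(B in left and C in right for B, C in rules):
                        table[i][span - 1].add(A)
    return start in table[0][n - 1]

# toy grammar: S -> NP VP, VP -> V NP
g = {"S": [("NP", "VP")], "VP": [("V", "NP")]}
lex = {"robots": {"NP"}, "recognize": {"V"}, "speech": {"NP"}}
ok = cyk(g, lex, ["robots", "recognize", "speech"])
```

A stochastic variant attaches a probability to each rule and replaces set union with maximization, which is how the parsing algorithms of Chapter 4 combine ordering constraints with recognition scores.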
Chapter 5 addresses the problem of inference of linguistic structure from data. First we cast this as a generic problem of parameter estimation. The computational requirements of the estimation problem can be reduced by using parsing algorithms to count the occurrences of particular types of structures. This allows us to transform the problem into one of statistical estimation for which the well-known EM algorithm is ideally suited. The classical experiments described in Chapter 3 may now be considered as instances of grammatical inference. Chapter 6 provides an information-theoretic characterization of and explanation for the classical results of a set of experiments carried out by Miller et al. [220]. Their results confirm the intuitively appealing notion that grammar is an error-correcting code. We show how the classical Fano bound can be used to relate the entropy of a formal language, the equivocation of an acoustic pattern recognizer, and the error probability of a speech recognition system. Chapter 7 combines the results of all of the foregoing chapters for the purpose of designing speech recognition systems. We contrast two architectures, the integrated system and the modular system. In the former, which is based on the non-ergodic HMM, all levels of linguistic structure are assimilated into a single stochastic model and recognition is based on an algorithm for evaluating its likelihood function. The latter is based on the ergodic HMM as used in the Poritz [250] experiment but requires different models for each specific aspect of linguistic structure. The individual models operate sequentially according to the traditional conception of the human language engine. Finally, we demonstrate how speech synthesis algorithms can be used to aid in the construction of both of these systems. 
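The Fano-bound relationship sketched above, connecting the equivocation H(W|A) of the acoustic pattern recognizer, the vocabulary size M, and the error probability Pe through Fano's inequality H(W|A) <= h2(Pe) + Pe log2(M - 1), can be made concrete numerically. The scan below is an illustrative implementation of my own, not code from the text:

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def fano_error_floor(equivocation, vocab_size, steps=100000):
    """Smallest error probability Pe consistent with Fano's inequality
    H(W|A) <= h2(Pe) + Pe * log2(M - 1), found by scanning Pe upward.
    (The right-hand side is increasing in Pe for Pe < (M - 1) / M,
    so the first crossing is the bound.)"""
    penalty = math.log2(vocab_size - 1)
    for i in range(steps + 1):
        pe = i / steps
        if h2(pe) + pe * penalty >= equivocation:
            return pe
    return 1.0

# e.g. one bit of equivocation over a 1024-word vocabulary
# forces an error rate of several percent on any recognizer
floor = fano_error_floor(1.0, 1024)
```

The qualitative point of Chapter 6 follows directly: lowering the entropy of the language (i.e. adding grammatical constraint) lowers the achievable equivocation and hence the error floor, which is the sense in which grammar acts as an error-correcting code.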
Whereas Chapter 7 is concerned with systems that recognize speech by transcribing it into ordinary text, Chapter 8 addresses the problem of understanding a spoken message in the sense of executing a command as intended by the speaker. This requires not only the incorporation of semantics – which, for this purpose, is defined as an internal, symbolic representation of reality – but also a mapping from lexical and syntactic structure to meaning. The simplest means to accomplish this is to adapt the semantics of programming languages to building a compiler for a useful subset of natural language. We describe a particularly instructive example of a system of this kind that is capable of performing some of the functions of a travel agent. Unfortunately, this method cannot be extrapolated to encompass unrestricted conversation. We must, then, consider more general models that might be capable of representing the semantics of natural language. There are two such models available, mathematical logic and labeled, directed graph searching. These more general models of semantics have yet to be incorporated into a speech understanding system. Thus, our theory of language and our experiments on human–machine communication by voice are incomplete. In the final two chapters, we take up the challenge of advancing our theories and technologies. Obviously there is a significant component of speculation in so doing. Chapter 9 begins with the premise that communication with machines in natural spoken language requires nothing less than a complete, constructive theory of mind. We carefully examine the two existing theories, the information- and control-theoretic (i.e. cybernetic) perspective of Wiener [330], and the symbolic computation view of Turing [319]. We then offer an explanation why such cogent theories have, thus far, failed to yield the expected results.
The reason is simply that the theories are complementary but have, to date, always been studied independently. In Chapter 10 we propose a new theory of mind and outline an experimental program to test its validity. The theory is a version of the notion of embodied mind which is a synthesis of the cybernetic and computational perspectives. The experimental platform is an autonomous robot that acquires cognitive and linguistic abilities by interacting with the real world. When this approach was first suggested by Turing, it was technologically infeasible. Today it is plausible. In fact, we give a detailed description of our experiments with an autonomous robot that has acquired some limited abilities to navigate visually, manipulate objects and respond to spoken commands. Of course, we have not yet succeeded in building a sentient being. In fact, there are some daunting obstacles to extending our methods to that point. However, at the time of this writing, a community of researchers [1] around the world is organizing itself to pursue this ambitious goal by a variety of approaches all in the same spirit as the one described here. This kind of interdisciplinary research has an impeccable scientific pedigree and it offers the prospect of new insights and corresponding technological advances.
2 Preliminaries

2.1 The Physics of Speech Production

Speech is the unique signal generated by the human vocal apparatus. Air from the lungs is forced through the vocal tract, generating acoustic waves that are radiated at the lips as a pressure field. The physics of this process is well understood, giving us important insights into speech communication. The rudiments of speech generation are given in Sections 2.1.1 and 2.1.2. Thorough treatments of this important subject may be found in Flanagan [86] and Rabiner and Schafer [265].

2.1.1 The Human Vocal Apparatus

Figure 2.1 shows a representation of the midsagittal section of the human vocal tract due to Coker [51]. In this model, the cross-sectional area of the oral cavity A(x), from the glottis, x = 0, to the lips, x = L, is determined by five parameters: $a_1$, tongue body height; $a_2$, anterior/posterior position of the tongue body; $a_3$, tongue tip height; $a_4$, mouth opening; and $a_5$, pharyngeal opening. In addition, a sixth parameter, $a_6$, is used to additively alter the nominal 17-cm vocal tract length. The articulatory vector a is $(a_1, a_2, \ldots, a_6)$. The vocal tract model has three components: an oral cavity, a glottal source, and an acoustic impedance at the lips. We shall consider them singly first and then in combination. As is commonly done, we assume that the behavior of the oral cavity is that of a lossless acoustic tube of slowly varying (in time and space) cross-sectional area, A(x), in which plane waves propagate in one dimension (see Fig. 2.2). Sondhi [303] and Portnoff [252] have shown that under these assumptions, the pressure, p(x, t), and volume velocity, u(x, t), satisfy

$$-\frac{\partial p}{\partial x} = \frac{\rho}{A(x,t)}\,\frac{\partial u}{\partial t} \qquad (2.1a)$$

and

$$-\frac{\partial u}{\partial x} = \frac{A(x,t)}{\rho c^2}\,\frac{\partial p}{\partial t}, \qquad (2.1b)$$
Mathematical Models for Speech Technology. Stephen Levinson 2005 John Wiley & Sons, Ltd ISBN: 0-470-84407-8
Figure 2.1 Coker’s articulatory model
Figure 2.2 The acoustic tube model of the vocal tract and its area function
which express Newton’s law and conservation of mass, respectively. In (2.1), ρ is the equilibrium density of the air in the tube and c is the corresponding velocity of sound. Differentiating (2.1a) and (2.1b) with respect to time and space, respectively, and then eliminating the mixed partials, we get the well-known Webster equation [327] for pressure,

$$\frac{\partial^2 p}{\partial x^2} + \frac{1}{A(x,t)}\frac{\partial A}{\partial x}\frac{\partial p}{\partial x} = \frac{1}{c^2}\frac{\partial^2 p}{\partial t^2}. \qquad (2.2)$$
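As a quick sanity check on (2.2): for a uniform, lossless tube closed at the glottis and open at the lips, the resonances fall at odd quarter-wavelength frequencies. The sketch below assumes c = 350 m/s and the nominal 17-cm tract length mentioned above; both values are illustrative.

```python
# Resonances of a uniform lossless tube (quarter-wavelength model).
# Assumed values: c = 350 m/s, L = 0.17 m (the nominal tract length).
c = 350.0   # speed of sound, m/s
L = 0.17    # vocal tract length, m

formants = [(2 * k + 1) * c / (4 * L) for k in range(3)]
print([round(f, 1) for f in formants])  # -> [514.7, 1544.1, 2573.5] Hz
```

These are close to the canonical 500/1500/2500 Hz formants of a neutral vowel; the losses and boundary conditions introduced below shift and broaden them.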
The eigenvalues of (2.2) are taken as formant frequencies. We elect to use the Webster equation (in volume velocity) to compute a sinusoidal steady-state transfer function for the acoustic tube, including the effects of thermal, viscous, and wall losses. To do so we let $p(x,t) = P(x,\omega)e^{j\omega t}$ and $u(x,t) = U(x,\omega)e^{j\omega t}$, where ω is angular frequency and j is the imaginary unit. When p and u have this form, (2.1a) and (2.1b) become (cf. [252])

$$-\frac{dP}{dx} = Z(x,\omega)\,U(x,\omega) \qquad (2.3a)$$

and

$$-\frac{dU}{dx} = Y(x,\omega)\,P(x,\omega), \qquad (2.3b)$$

respectively. In order to account for the losses we define Z(x, ω) and Y(x, ω) to be the generalized acoustic impedance and admittance per unit length, respectively. Differentiating (2.3b) with respect to x and substituting for −dP/dx and P from (2.3a) and (2.3b), respectively, we obtain

$$\frac{d^2U}{dx^2} = \frac{1}{Y(x,\omega)}\frac{dY}{dx}\frac{dU}{dx} + Y(x,\omega)Z(x,\omega)U(x,\omega), \qquad (2.4)$$
which is recognized as the “lossy” Webster equation for the volume velocity. The sinusoidal steady-state transfer function of the vocal tract can be computed by discretizing (2.4) in space and obtaining approximate solutions to the resulting difference equation for a sequence of frequencies. Let us write $U_i^k$ to signify $U(i\Delta x, k\Delta\omega)$, where the spatial discretization assumes $\Delta x = L/n$ with i = 0 at the glottis and i = n at the lips, as is shown in Fig. 2.3. Similarly, we choose $\Delta\omega = \Omega/N$ and let 0 ≤ k ≤ N. We shall define $A_i$, $Y_i^k$, and $Z_i^k$ in an analogous manner. Approximating second derivatives by second central differences and first derivatives by first backward differences, the finite difference representation of (2.4) is just

$$\frac{U_{i+1}^k - 2U_i^k + U_{i-1}^k}{(\Delta x)^2} = \frac{1}{Y_i^k}\,\frac{Y_i^k - Y_{i-1}^k}{\Delta x}\,\frac{U_i^k - U_{i-1}^k}{\Delta x} + Z_i^k Y_i^k U_i^k, \qquad (2.5)$$

which is easily simplified to the three-point recursion formula

$$U_{i+1}^k + U_{i-1}^k\left[2 - \frac{Y_{i-1}^k}{Y_i^k}\right] = U_i^k\left[3 + (\Delta x)^2 Z_i^k Y_i^k - \frac{Y_{i-1}^k}{Y_i^k}\right]. \qquad (2.6)$$
Given suitable values for $U_0^k$ and $U_1^k$ for 0 ≤ k ≤ N, we can obtain the desired transfer function from (2.6). We shall return to consider the numerical properties of this formula later. First, however, we must find appropriate expressions for Y and Z to account for the losses. Losses arise from thermal effects and viscosity but are primarily due to wall vibrations. A detailed treatment of the wall losses is found in Portnoff [252] and is neatly
Figure 2.3 The discretized acoustic tube model of the vocal tract
summarized by Rabiner and Schafer [265]. Portnoff assumes that the walls are displaced ξ(x, t) in a direction normal to the flow due to the pressure at x only. The vocal tract walls are modeled by a damped spring–mass system for which the relationship between pressure and displacement is

$$p(x,t) = M\frac{\partial^2\xi}{\partial t^2} + b\frac{\partial\xi}{\partial t} + k(x)\,\xi(x,t), \qquad (2.7)$$

where M, b, and k(x) are the unit-length wall mass, damping coefficient, and spring constant, respectively. The displacement of the walls is assumed to perturb the area function about a neutral position according to

$$A(x,t) = A(x) + S(x)\,\xi(x,t), \qquad (2.8)$$

where A(x) and S(x) are the neutral area and circumference, respectively. By substituting (2.1a) into (2.1b) and, ignoring higher-order terms, transforming into the frequency domain, Portnoff goes on to observe that the effect of vibrating walls is to add a term $Y_W$ to the acoustic admittance in (2.3b), where

$$Y_W(x,\omega) = j\omega S(x)\,\frac{[k(x) - \omega^2 M] - j\omega b}{[k(x) - \omega^2 M]^2 + \omega^2 b^2}. \qquad (2.9)$$

The other losses that we wish to consider are those arising from viscous friction and thermal conduction. The former can be accounted for by adding a real quantity $Z_v$ to the acoustic impedance in (2.3a), where

$$Z_v(x,\omega) = \frac{S(x)}{A^2(x)}\left(\frac{\omega\rho\mu}{2}\right)^{1/2}, \qquad (2.10)$$

where μ is the viscosity of air.
The thermal losses have an effect which is described by adding a real quantity $Y_T$ to the acoustic admittance in (2.3b), where

$$Y_T(x,\omega) = \frac{S(x)(\eta - 1)}{\rho c^2}\left(\frac{\lambda\omega}{2C_p\rho}\right)^{1/2}, \qquad (2.11)$$
in which λ is the coefficient of heat conduction, η is the adiabatic constant, and $C_p$ is the heat capacity. All the constants are, of course, for the air at the conditions of temperature, pressure, and humidity found in the vocal tract. In view of (2.1), (2.9), (2.10), and (2.11) it is possible to set

$$Z(x,\omega) = j\omega\rho/A(x) + Z_v(x,\omega) \qquad (2.12)$$

and

$$Y(x,\omega) = j\omega A(x)/\rho c^2 + Y_W(x,\omega) + Y_T(x,\omega). \qquad (2.13)$$
There are two disadvantages to this approach. First, (2.12) and (2.13) are computationally expensive to evaluate. Second, (2.9) requires values for some physical constants of the tissue forming the vocal tract walls. Estimates of these constants are available in [139] and [86]. A computationally simpler empirical model of the losses which agrees with the measurements has been proposed by Sondhi [303], in which

$$Z(x,\omega) = j\omega\rho/A(x) \qquad (2.14)$$

and

$$Y(x,\omega) = \frac{A(x)}{\rho c^2}\left[j\omega + \frac{\omega_0^2}{\alpha + j\omega} + (\beta j\omega)^{1/2}\right]. \qquad (2.15)$$

Sondhi has chosen values for the constants, $\omega_0 = 406\pi$, $\alpha = 130\pi$, $\beta = 4$, which he then shows give good agreement with measured formant bandwidths. Moreover, the form of the model agrees with the results of Portnoff, as becomes clear when we observe that $Y_W(x,\omega)$ in (2.9) will have the same form as the second term on the right-hand side of (2.15) if k(x) ≡ 0 and the ratio of circumference to area is constant. In fact, Portnoff used k(x) = 0, and the second assumption is not unreasonable. The third term on the right-hand side of (2.15) may be seen to be of the same form as (2.10) and (2.11) (under the assumption that the ratio of S to A is constant) by noting that

$$(j\omega)^{1/2} = (1 + j)(\omega/2)^{1/2}. \qquad (2.16)$$
2.1.2 Boundary Conditions

With a description of the vocal tract in hand, we can turn our attention to the boundary conditions. Following Flanagan [86], we have assumed the glottal excitation to be a constant volume source with an asymmetric triangular waveform of amplitude V. Dunn et al. [71] have analyzed such a source in detail. What is relevant is that the spectral envelope decreases with the square of frequency. We have therefore taken the glottal source $U_g(\omega)$ to be

$$U_g(\omega) = V/\omega^2. \qquad (2.17)$$

For the boundary condition at the mouth we use the well-known relationship (Portnoff [252]; Rabiner and Schafer [265]) between sinusoidal steady-state pressure and volume velocity,

$$P(L,\omega) = Z_r(\omega)\,U(L,\omega), \qquad (2.18)$$

where the radiation impedance $Z_r$ is taken as that of a piston in an infinite plane baffle, the behavior of which is well approximated by

$$Z_r(\omega) = j\omega L_r/(1 + j\omega L_r/R). \qquad (2.19)$$

Values of the constants which are appropriate for the vocal tract model are given by Flanagan [86] as

$$R = 128/9\pi^2 \qquad (2.20)$$

and

$$L_r = 8[A(L)/\pi]^{1/2}/3\pi c. \qquad (2.21)$$
It is convenient to solve (2.4) with its boundary conditions (2.17) and (2.18) by solving a related initial-value problem for the transfer function

$$H(\omega) = U(L,\omega)/U(0,\omega). \qquad (2.22)$$

At x = L,

$$-\left.\frac{dU}{dx}\right|_{x=L} = j\omega\,\frac{A(L)}{\rho c^2}\,P(L,\omega), \qquad (2.23)$$

from which the frequency-domain difference equation

$$-\frac{U_n^k - U_{n-1}^k}{\Delta x} = jk\Delta\omega\,\frac{A_n}{\rho c^2}\,P_n^k \qquad (2.24)$$
can be derived. Let

$$U_n^k = 1 \qquad (2.25)$$

and note that, from (2.18),

$$P_n^k = Z_r(k\Delta\omega)\,U_n^k. \qquad (2.26)$$

Substituting (2.25) and (2.26) into (2.24), we see that

$$U_{n-1}^k = 1 + jk\Delta\omega\,\frac{A_n\Delta x}{\rho c^2}\,Z_r(k\Delta\omega). \qquad (2.27)$$
Now solving (2.6) for $U_{i-1}^k$, we get the reversed three-point recursion relation

$$U_{i-1}^k = \frac{1}{Y_{i-1}^k/Y_i^k - 2}\left\{U_{i+1}^k - U_i^k\left[3 + (\Delta x)^2 Z_i^k Y_i^k - \frac{Y_{i-1}^k}{Y_i^k}\right]\right\}, \qquad (2.28)$$

where

$$Z_i^k = jk\Delta\omega\,\rho/A_i \qquad (2.29)$$

and

$$Y_i^k = \frac{A_i}{\rho c^2}\left[jk\Delta\omega + \frac{\omega_0^2}{\alpha + jk\Delta\omega} + (\beta jk\Delta\omega)^{1/2}\right]. \qquad (2.30)$$

It is worthy of note that the ratio of admittances can be simplified and the $Z_i^k Y_i^k$ product is independent of i, so that (2.28) becomes

$$U_{i-1}^k = \left(\frac{A_{i-1}}{A_i} - 2\right)^{-1}\left\{U_{i+1}^k - U_i^k\left[3 + (\Delta x)^2 Z(k)Y(k) - \frac{A_{i-1}}{A_i}\right]\right\}. \qquad (2.31)$$
Given the initial conditions of (2.25) and (2.27), we can compute $U_0^k$ by evaluating (2.31) for i = n − 1, n − 2, ..., 1. Then from (2.22),

$$H(k\Delta\omega) = U_n^k/U_0^k = 1/U_0^k. \qquad (2.32)$$

Finally, we compute the vocal tract output by multiplying the transfer function by the excitation from (2.17) and the radiation load from (2.19),

$$\hat{P}_k = \hat{P}(k\Delta\omega) = H(k\Delta\omega)\,U_g(k\Delta\omega)\,Z_r(k\Delta\omega), \qquad (2.33)$$

for 1 ≤ k ≤ N. Figure 2.4 shows the power spectrum, $10\log_{10}(|\hat{P}(\omega)|^2)$, plotted in dB, and some parameters for the phoneme /ah/. The area function used was obtained from X-ray measurements and appears in Flanagan [86]. Figure 2.4 illustrates the single most important aspect of speech: the intelligence in speech is encoded in the power spectrum of the acoustic pressure wave. Different articulatory configurations result in signals with different spectra, especially different resonance frequencies called formants, which are perceived as different sounds. We shall return to consider how these sounds form the basis of the linguistic code of speech in Section 2.7.
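The reversed recursion of (2.25)–(2.33), with Sondhi's loss model (2.29)–(2.30) and the radiation load (2.19)–(2.21), can be sketched numerically. The uniform 5 cm² area function, the CGS constants, and the closure $A_0 = A_1$ at the glottis below are illustrative assumptions, not the X-ray data behind Fig. 2.4.

```python
import numpy as np

# Sketch of the frequency-domain solution (2.25)-(2.33). The uniform
# area function, CGS constants, and the closure A_0 = A_1 at the
# glottis are assumptions for illustration only.
rho, c = 1.14e-3, 3.5e4                          # air density (g/cm^3), sound speed (cm/s)
w0, alpha, beta = 406 * np.pi, 130 * np.pi, 4.0  # Sondhi's constants, (2.15)
R = 128.0 / (9.0 * np.pi ** 2)                   # radiation constants, (2.20)-(2.21)
A = np.full(20, 5.0)                             # n = 20 sections, 5 cm^2 each
dx = 17.0 / len(A)                               # nominal 17-cm tract
Lr = 8.0 * np.sqrt(A[-1] / np.pi) / (3.0 * np.pi * c)

def transfer(f_hz):
    """H = 1/U_0 via the reversed three-point recursion (2.31)."""
    w = 2.0 * np.pi * f_hz
    Zr = 1j * w * Lr / (1.0 + 1j * w * Lr / R)           # (2.19)
    # Z*Y from (2.29)-(2.30); the product is independent of i.
    ZY = (1j * w / c ** 2) * (1j * w + w0 ** 2 / (alpha + 1j * w)
                              + np.sqrt(beta * 1j * w))
    n = len(A)
    U = np.zeros(n + 1, dtype=complex)
    U[n] = 1.0                                           # (2.25)
    U[n - 1] = 1.0 + 1j * w * (A[-1] * dx / (rho * c ** 2)) * Zr   # (2.27)
    for i in range(n - 1, 0, -1):                        # (2.31)
        Aprev = A[i - 2] if i >= 2 else A[0]             # assume A_0 = A_1
        ratio = Aprev / A[i - 1]
        U[i - 1] = (U[i + 1] - U[i] * (3.0 + dx ** 2 * ZY - ratio)) / (ratio - 2.0)
    return 1.0 / U[0]                                    # (2.32)

freqs = np.arange(100.0, 1000.0, 5.0)
mags = np.abs([transfer(f) for f in freqs])
first_formant = freqs[np.argmax(mags)]   # near c/4L ~ 515 Hz for a uniform tube
```

For this uniform tube the first resonance falls near the quarter-wavelength frequency, with the losses and radiation load broadening it; multiplying by $U_g$ and $Z_r$ as in (2.33) would then yield an output spectrum of the kind shown in Fig. 2.4.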
Figure 2.4 Frequency domain solution of the Webster equation (power spectrum for /ah/: F1 = 667 Hz, BW = 47 Hz; F2 = 1157 Hz, BW = 43 Hz; F3 = 2530 Hz, BW = 141 Hz)
2.1.3 Non-Stationarity

The speech signal, p(t), is the solution to (2.2). Since the function A(x, t) varies continuously in time, the solution, p(t), is a non-stationary random process. Fortunately, A(x, t) is slowly time-varying with respect to p(t). That is,

$$\left|\frac{\partial A}{\partial t}\right| \ll \left|\frac{\partial p}{\partial t}\right|. \qquad (2.34)$$

Equation (2.34) may be taken to mean that p(t) is quasi-stationary or piecewise stationary. As such, p(t) can be considered to be a sequence of intervals within each one of which p(t) is stationary. It is true that there are rapid articulatory gestures that violate (2.34), but in general the quasi-stationary assumption is useful. However, as we shall discuss in Sections 2.3, 2.4, and 2.6, special techniques are required to treat the non-stationarity of p(t).

2.1.4 Fluid Dynamical Effects

Equation (2.2) predicts the formation of planar acoustic waves as a result of air flowing into the vocal tract according to the boundary condition of (2.17). However, the Webster equation ignores any effects that the convective air flow may have on the function p(t). If, instead of (2.1a) and (2.1b), we consider two-dimensional wave propagation, we can write the conservation of mass as

$$\frac{\partial u_x}{\partial x} + \frac{\partial u_y}{\partial y} = -M^2\,\frac{\partial p}{\partial t}, \qquad (2.35)$$
where M is the Mach number. We can also include the viscous and convective effects by observing

$$\frac{\partial u_x}{\partial t} = -\frac{\partial p}{\partial x} - \frac{\partial}{\partial x}(u_x u_y) - \frac{\partial}{\partial x}\left[\frac{1}{N_R}\left(\frac{\partial u_y}{\partial x} + \frac{\partial u_x}{\partial x}\right)\right] - \mu_x\mu_y, \qquad (2.36a)$$

$$\frac{\partial u_y}{\partial t} = -\frac{\partial p}{\partial y} - \frac{\partial}{\partial y}(u_x u_y) - \frac{\partial}{\partial y}\left[\frac{1}{N_R}\left(\frac{\partial u_y}{\partial y} + \frac{\partial u_x}{\partial y}\right)\right] - \mu_x\mu_y. \qquad (2.36b)$$
In (2.36a) and (2.36b) the first term on the right-hand side is recognized as Newton’s law expressed in (2.1a) and (2.1b). The second term is the convective flow. The third term accounts for viscous shear and drag at Reynolds number, NR , and the last term for turbulence. Equations (2.35) and (2.36) are known as the normalized, two-dimensional, Reynolds averaged, Navier–Stokes equations for slightly compressible flow. These equations can be solved numerically for p(t). The solutions are slightly different from those obtained from (2.2) due to the formation of vortices and transfer of energy between the convective and wave propagation components of the fluid flow. Typical solutions for the articulatory configuration of Fig. 2.2 are shown in Figs. 2.4 and 2.5. There is reason to believe that (2.35) and (2.36) provide a more faithful model of the acoustics of the vocal apparatus than the Webster equation does [327].
2.2 The Source–Filter Model

The electrical analog of the physics discussed in Section 2.1 is the source–filter model of Dudley [69], shown in Fig. 2.6. In this model, the acoustic tube with time-varying area function is characterized by a filter with time-varying coefficients. The input to the filter is a mixture of a quasi-periodic signal and a noise source. When the filter is excited by the input signal, the output is a voltage analog of the sound pressure wave p(t). The source–filter model is easily implemented in either analog or digital hardware and is the basis for all speech processing technology.
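The source–filter idea can be sketched in a few lines: a quasi-periodic pulse train drives an all-pole filter. The single resonance at 500 Hz with 100 Hz bandwidth and the 100 Hz pulse rate below are assumed, illustrative values, not parameters from the text.

```python
import numpy as np

# Minimal source-filter sketch: a quasi-periodic pulse train excites an
# all-pole filter. The 500 Hz "formant" and 100 Hz pitch are assumed
# example values for illustration.
Fs = 10000.0                          # sampling rate, Hz
T = 1.0 / Fs
f1, bw = 500.0, 100.0                 # assumed resonance and bandwidth
r = np.exp(-np.pi * bw * T)           # pole radius from bandwidth
theta = 2.0 * np.pi * f1 * T          # pole angle from frequency
a1, a2 = 2.0 * r * np.cos(theta), -r * r   # x[n] = a1 x[n-1] + a2 x[n-2] + source

source = np.zeros(1000)
source[::100] = 1.0                   # 100 Hz "glottal" pulse train

x = np.zeros_like(source)
for n in range(len(source)):          # run the all-pole recursion
    x[n] = source[n]
    if n >= 1:
        x[n] += a1 * x[n - 1]
    if n >= 2:
        x[n] += a2 * x[n - 2]
```

The output is a train of damped 500 Hz oscillations, and its spectrum is the harmonic comb of the source shaped by the filter's resonance, exactly the structure visible in Fig. 2.4.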
2.3 Information-Bearing Features of the Speech Signal

The conclusion to be drawn from the previous two sections is that information is encoded in the speech signal in its short-duration amplitude spectrum [86]. This implies that by estimating the power spectrum of the speech signal as a function of time, we can identify the corresponding sequence of sounds. Because the speech signal x(t) is non-stationary, it has a time-varying spectrum that can be obtained from the time-varying Fourier transform, $X_n(\omega)$. Note that x(t) is the voltage analog of the sound pressure wave, p(t), obtained by solving (2.2).
Figure 2.5 Speech signals and their spectrum obtained by solving the Navier–Stokes equations
Figure 2.6 The source–filter model of speech production
2.3.1 Fourier Methods

The short-time Fourier transform is computed by observing the speech signal x(t) through a finite window, $\{w_n\}_{n=0}^{N-1}$, where $w_n = 0.54 - 0.46\cos(2\pi n/N)$. For computation, x(t) is sampled, yielding the time series $\{x_n\}_{n=0}^{\infty}$, where $x_n = x(nT)$ and T is the sampling interval measured in seconds. The short-time Fourier transform, $X_n(\omega)$, of the signal $x_n$ is given by

$$X_n(\omega) = \sum_{m=-\infty}^{\infty} w_{n-m}\,x_m\,e^{-j\omega m}. \qquad (2.37)$$
Recall that the index, n, refers to time nT , indicating that Xn (ω) is a function of both time and angular frequency, ω. The signal, x(t), can be recovered from its short-time Fourier transform according to
$$x_n = \frac{1}{w_0}\,\frac{1}{2\pi}\int_{-\pi}^{\pi} X_n(\omega)\,e^{j\omega n}\,d\omega. \qquad (2.38)$$

The power spectrum, $S_n(\omega)$, of the signal at time nT is just

$$S_n(\omega) = |X_n(\omega)|^2, \qquad (2.39)$$

and, for 0 ≤ ω ≤ $B_x$ and n = 0, 1, ..., this is called the spectrogram. $S_n(\omega)$ is characteristic of the sound $\{x_n\}_{n=0}^{N-1}$. Equation (2.37) can be conveniently evaluated at frequencies $\omega_k = 2\pi k/N$ by means of the discrete Fourier transform (DFT)

$$X_n(\omega_k) = \sum_{m=0}^{N-1} w_m\,x_{n+m}\,e^{-j2\pi km/N}. \qquad (2.40)$$
The computation of (2.40) is performed using the well-known fast Fourier transform (FFT). Then the spectrogram, $S_k^n$, at frequency $\omega_k$ and time nT is just

$$S_k^n = |X_n(\omega_k)|^2, \qquad (2.41)$$

and $S_k^n$ is an information-bearing feature of x(t). Because $X_n(\omega)$ changes with time, it must be sampled at a rate sufficient to permit the reconstruction of x(t). The bandwidth, $B_x$, of the speech signal x(t) is approximately 5 kHz, so the sampling rate, $F_s$, is 10 kHz. For the Hamming window, $\{w_n\}$, of length N = 100, the bandwidth, B, is

$$B = \frac{2F_s}{N} = \frac{20\,000}{100} = 200\ \text{Hz}. \qquad (2.42)$$

Thus the Nyquist rate for the short-time Fourier transform is 2B = 400 Hz, which at $F_s$ = 10 000 requires a value of $X_n(\omega_k)$ every 25 samples. Since N = 100, the windows should overlap by 75%. A typical spectrogram computed from (2.41) is shown in Fig. 2.7. Time is on the horizontal axis, frequency is on the vertical, and the power is the level of black at a given time and frequency.
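The computation of (2.40)–(2.42) can be sketched directly: a 100-sample Hamming window at 10 kHz, hopped by 25 samples for 75% overlap. The 1 kHz test tone is an assumed stand-in for a speech signal.

```python
import numpy as np

# Spectrogram per (2.40)-(2.41): Hamming window of N = 100 samples at
# Fs = 10 kHz, hop of 25 samples (75% overlap), squared-magnitude DFT.
# The 1 kHz sinusoid is an assumed test signal.
Fs, N, hop = 10000, 100, 25
t = np.arange(3000) / Fs
x = np.sin(2.0 * np.pi * 1000.0 * t)

w = 0.54 - 0.46 * np.cos(2.0 * np.pi * np.arange(N) / N)   # Hamming window
frames = [x[s:s + N] * w for s in range(0, len(x) - N, hop)]
S = np.array([np.abs(np.fft.rfft(f)) ** 2 for f in frames])  # S_k^n of (2.41)

# Each DFT bin spans Fs/N = 100 Hz, so the tone concentrates in bin 10.
print(S.shape)   # (number of frames, N//2 + 1)
```

Plotting S with time on the horizontal axis and frequency on the vertical reproduces the display of Fig. 2.7.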
Figure 2.7 Spectrogram of the sentence “When the sunlight stri(kes)”
2.3.2 Linear Prediction and the Webster Equation

The method of linear prediction provides a particularly appropriate representation of the power spectrum of the speech signal. If we assume that the speech signal at time n is well predicted by a linear combination of p previous samples, we may write

$$x_n = \sum_{k=1}^{p} a_k x_{n-k} + e_n, \qquad (2.43)$$
where the weights, $a_k$, of the linear combination are called the linear prediction coefficients (LPCs) and the error at time n, $e_n$, is small with respect to $x_n$ when averaged over time. A comprehensive treatment of linear prediction of speech is given in Markel and Gray [211]; however, for the purposes of this book, the following summary will suffice. Equation (2.43) is equivalent to the source–filter model in the sense that the source function, $\mu_n$, is

$$\mu_n = G e_n \qquad (2.44)$$

for some constant gain, G. The filter is an all-pole filter with transfer function, H(z), given by

$$H(z) = \frac{G}{1 - \sum_{k=1}^{p} a_k z^{-k}}, \qquad (2.45)$$
where the $a_k$ are just the LPCs from (2.43). As in the case of Fourier analysis, (2.43)–(2.45) hold for short intervals of approximately 10 ms duration. For each such interval, we can obtain an optimal estimate of the $a_k$ by a minimum mean square error (MMSE) technique. The prediction error, $E_n$, is defined by

$$E_n = \sum_{m=0}^{N+p-1} e_{n+m}^2. \qquad (2.46)$$
The MMSE is obtained by solving

$$\nabla_a E_n = 0, \qquad (2.47)$$

which is equivalent to solving the linear system

$$\begin{bmatrix} R(0) & \cdots & R(p-1) \\ \vdots & \ddots & \vdots \\ R(p-1) & \cdots & R(0) \end{bmatrix}\begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_p \end{bmatrix} = \begin{bmatrix} R(1) \\ \vdots \\ R(p) \end{bmatrix}, \qquad (2.48)$$
where R(k) is the autocorrelation function of x(t) at time n, given by

$$R_n(k) = \sum_{m=0}^{N-1-k} w_m w_{m+k}\,x_{n+m}x_{n+m+k}, \qquad (2.49)$$

with N ≅ 100. Because of the Toeplitz property of the correlation matrix and the relationship of the right-hand side of (2.48) to that matrix, there is an efficient algorithm for solving for the $a_k$, due to Durbin [265]. Let $E^{(0)} = R(0)$. Then, for 1 ≤ i ≤ p, compute the partial correlation coefficients (PARCORs) according to
$$k_i = \frac{1}{E^{(i-1)}}\left[R_n(i) - \sum_{j=1}^{i-1} a_j^{(i-1)} R_n(i-j)\right]. \qquad (2.50)$$
Then, for 1 ≤ i ≤ p, compute the LPCs from

$$a_i^{(i)} = k_i \qquad (2.51)$$

and, for 1 ≤ j ≤ i − 1,

$$a_j^{(i)} = a_j^{(i-1)} - k_i\,a_{i-j}^{(i-1)}. \qquad (2.52)$$

Then the residual error is updated from

$$E^{(i)} = (1 - k_i^2)\,E^{(i-1)}. \qquad (2.53)$$

Finally, the desired LPCs for a pth-order predictor are

$$a_j = a_j^{(p)}, \quad \text{for } 1 \le j \le p. \qquad (2.54)$$
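The recursion (2.50)–(2.54) can be sketched directly; its result must agree with solving the Toeplitz system (2.48) by brute force, which the example autocorrelation values (assumed, not measured) confirm.

```python
import numpy as np

def durbin(R, p):
    """Durbin recursion (2.50)-(2.54).

    R holds autocorrelation values R(0)..R(p); returns the LPCs
    a_1..a_p, the PARCORs k_1..k_p, and the residual error E^(p).
    """
    a = np.zeros(p + 1)       # a[j] holds a_j; a[0] is unused
    k = np.zeros(p + 1)
    E = R[0]                  # E^(0) = R(0)
    for i in range(1, p + 1):
        acc = R[i] - sum(a[j] * R[i - j] for j in range(1, i))
        k[i] = acc / E                                  # (2.50)
        prev = a.copy()
        a[i] = k[i]                                     # (2.51)
        for j in range(1, i):
            a[j] = prev[j] - k[i] * prev[i - j]         # (2.52)
        E *= (1.0 - k[i] ** 2)                          # (2.53)
    return a[1:], k[1:], E                              # (2.54)

# Assumed example autocorrelation values (a valid positive-definite case).
R = [2.0, 1.2, 0.9, 0.5]
a, parcors, E = durbin(R, 3)
```

The same a solves (2.48) via a generic linear solver, but the Durbin recursion does it in O(p²) operations by exploiting the Toeplitz structure.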
The PARCORs are the negatives of the reflection coefficients; that is,

$$k_i = -\frac{A_{i+1} - A_i}{A_{i+1} + A_i}. \qquad (2.55)$$
From (2.55) we see that the LPCs are related to the area function, A(x), in the Webster equation. In fact, Wakita [325] has shown that the linear predictor of (2.43) is equivalent to the solution of the lossless Webster equation. Thus the very general method of linear prediction is actually a physical model of the speech signal. In addition, the poles, $z_i$, of H(z) in (2.45) are just the formants or resonances that appear in the solution to the Webster equation, as indicated in Fig. 2.4. Write the poles in the form

$$z_i = |z_i|\,e^{j\theta_i}. \qquad (2.56)$$
The formant frequencies, $f_i$, and bandwidths, $\sigma_i$, are determined by

$$\theta_i = 2\pi f_i T \qquad (2.57)$$

and

$$|z_i| = e^{\sigma_i T}, \qquad (2.58)$$
respectively. The LPCs characterize the power spectrum of the speech signal in the sense that

$$\lim_{p\to\infty} |H(\omega)|^2 = S_n(\omega); \qquad (2.59)$$
thus an alternative to the information-bearing features of the speech signal, $S_k^n$, defined in (2.41), is the set of LPCs defined in (2.54). In practice, the cepstral coefficients, $c_n$, defined by

$$c_n = \frac{1}{2\pi}\int_{-\pi}^{\pi} \log|H(\omega)|\,e^{j\omega n}\,d\omega, \qquad (2.60)$$
are often used in preference to the $a_i$ themselves. Equation (2.60) can be evaluated recursively from

$$c_0 = \log G \qquad (2.61)$$

and, for n = 1, 2, ...,

$$c_n = a_n + \sum_{k=1}^{n-1} \frac{k}{n}\,c_k\,a_{n-k}. \qquad (2.62)$$
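The recursion (2.61)–(2.62) is easy to sketch; the gain G and the LPCs below are assumed example values, not derived from speech data.

```python
import numpy as np

def lpc_to_cepstrum(G, a, n_ceps):
    """Cepstral recursion (2.61)-(2.62): c_0..c_n from gain G and LPCs a_1..a_p."""
    p = len(a)
    c = np.zeros(n_ceps + 1)
    c[0] = np.log(G)                                     # (2.61)
    for n in range(1, n_ceps + 1):
        an = a[n - 1] if n <= p else 0.0                 # a_n = 0 for n > p
        c[n] = an + sum((k / n) * c[k] * a[n - k - 1]    # (2.62)
                        for k in range(max(1, n - p), n))
    return c

# Assumed example: G = 1 with a stable second-order predictor.
c = lpc_to_cepstrum(1.0, [0.5, -0.2], 4)
```

The values agree with the Taylor expansion of $\log H(z)$ in powers of $z^{-1}$, which is the defining property of (2.60).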
Still other feature sets are obtained by taking time derivatives of the cn and by applying a non-linear transformation to the frequency, ω, in (2.60). This modification is called the mel-scale cepstrum [59, 314]. For the purposes of this discussion, all such feature sets will be considered to be equivalent. We will refer to this assumption in Chapter 3.
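Equations (2.56)–(2.58) can likewise be sketched by factoring the prediction polynomial. The second-order predictor below is constructed so that its pole pair sits at an assumed 500 Hz resonance; it is an illustrative example, not measured speech.

```python
import numpy as np

# Formants from LPC poles, (2.56)-(2.58). The pole radius and angle
# below are assumed values placing one resonance at 500 Hz.
T = 1.0 / 10000.0                         # 10 kHz sampling interval
r, theta = 0.97, 2.0 * np.pi * 500.0 * T  # assumed pole radius and angle
a = np.array([2.0 * r * np.cos(theta), -r * r])   # a_1, a_2 of (2.43)

poles = np.roots(np.concatenate(([1.0], -a)))     # zeros of 1 - sum a_k z^-k
poles = poles[poles.imag > 0]                     # keep one of each conjugate pair
f = np.angle(poles) / (2.0 * np.pi * T)           # (2.57): theta_i = 2*pi*f_i*T
sigma = np.log(np.abs(poles)) / T                 # (2.58): |z_i| = e^{sigma_i T}
print(round(float(f[0])))   # -> 500
```

The negative sigma confirms a decaying (stable) resonance; its magnitude sets the formant bandwidth, as in the values listed in Fig. 2.4.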
2.4 Time–Frequency Representations

As discussed in Section 2.1.3, the speech signal is intrinsically non-stationary. Since all feature sets of the signal are derived from time–frequency distributions of its energy, it follows that there is some ambiguity in any such representation of the speech signal. The following analysis serves to explain and quantify the effects of non-stationarity.
In Section 2.3.1 we defined the spectrogram as a method for representing the time variation of energy in the speech signal. Rewriting (2.37) and (2.39) in continuous time, the spectrogram, $S_x(\omega,t)$, of the signal x(t), observed through the window g(τ), becomes

$$S_x(\omega,t) = \left|\int_{-\infty}^{\infty} g(\tau)\,x(t+\tau)\,e^{-j\omega\tau}\,d\tau\right|^2. \qquad (2.63)$$
It is natural to ask whether or not we can find a better time–frequency representation in some well-defined sense. In particular, we seek another representation, $F_x(\omega,t)$, that will give better estimates in both time and frequency of the time variation of the spectrum due to non-stationarity. For example, the Wigner transform, $W_x(\omega,t)$, defined by

$$W_x(\omega,t) = \int_{-\infty}^{\infty} x\!\left(t+\frac{\tau}{2}\right)x^*\!\left(t-\frac{\tau}{2}\right)e^{-j\omega\tau}\,d\tau, \qquad (2.64)$$

will give perfect resolution of the FM chirp, $x_c(t) = e^{j(\omega_0 t + mt^2/2)}$, at the expense of a discontinuous component, $\delta(\omega)\,2\cos(2\omega_0 t)$. In comparison, $S_x(\omega,t)$ will give continuous but poor resolution of $x_c(t)$. Perhaps there is an optimal compromise. We begin by imposing the weak constraint on $F_x(\omega,t)$ that it be shift invariant in both time and frequency. If this condition is satisfied then any $F_x(\omega,t)$ has the form

$$F_x(\omega,t) = \frac{1}{2\pi}\,\phi(\omega,t) ** W_x(\omega,t) \qquad (2.65)$$

for some kernel φ(ω, t). The symbol ** indicates convolution in time and frequency, so that (2.65) may be conveniently evaluated by means of the two-dimensional Fourier transform pair,
$$X(\nu,\tau) = \mathcal{F}\{x(\omega,t)\} = \iint_{-\infty}^{\infty} x(\omega,t)\,e^{-j(\nu t+\omega\tau)}\,d\omega\,dt \qquad (2.66a)$$

and

$$x(\omega,t) = \mathcal{F}^{-1}\{X(\nu,\tau)\} = \frac{1}{2\pi}\iint_{-\infty}^{\infty} X(\nu,\tau)\,e^{j(\nu t+\omega\tau)}\,d\nu\,d\tau. \qquad (2.66b)$$
(2.67)
(ν, τ ) = F {φ(ω, t)}
(2.68a)
Ax (ν, τ ) = F {Wx (ω, t)} .
(2.68b)
where
and
Preliminaries
25
For reasons that will become clear, (ν, τ ) defined in (2.68a) is sometimes called the point-spread function, and Ax (ν, τ ) the ambiguity function. From (2.67) and (2.68) we see that the spectrogram of (2.63) is a special case of (2.65) and can be written as Sx (ω, t) = Wg (ω, t) ∗ ∗Wx (ω, t),
(2.69)
where Wg (ω, t) is understood to be the Wigner transform of the window function, g(τ ). A consequence of (2.69) is that if Fx (ω, t) is shift invariant and positive, then any such distribution can be written as a superposition of spectrograms of the form
∞ Sx (ω, t, gα (τ )) dα, (2.70) Fx (ω, t) = −∞
where the spectrogram is constructed from the window function $g_\alpha(\tau)$ depending only on the parameter α. Any $F_x(\omega,t)$ of the form of (2.70) will have some degree of localization and smoothness. That is, the effect of the kernel function, φ(ω, t), will be to spread out the energy at (ω, t) over an ellipse centered at that point and with semi-axes $\theta_\omega$ and $\theta_t$. Perfect localization corresponds to $\theta_\omega = \theta_t = 0$. We also require that the distribution of energy be smooth, by which is meant that energy at point (ν, τ) in the transform domain is distributed over an ellipse centered at that point and with semi-axes Δν and Δτ. Perfect smoothness corresponds to the condition that Δν = Δτ = 0. Riley [274] has shown that $F_x(\omega,t)$ is governed by an uncertainty principle according to which it is always the case that

$$\sigma_\omega\,\Delta\tau \ge \tfrac{1}{2} \qquad (2.71a)$$

and

$$\sigma_t\,\Delta\nu \ge \tfrac{1}{2}, \qquad (2.71b)$$

with equality if and only if

$$\phi(\omega,t) = \beta\,e^{-t^2/2\sigma_T^2}\,e^{-\omega^2/2\sigma_\Omega^2}, \qquad (2.72)$$

where β, $\sigma_\Omega$, and $\sigma_T$ are constants depending on φ(ω, t). There are two important implications of (2.71) and (2.72). First and foremost, any choice of $F_x(\omega,t)$ will cause some loss in resolution in both time and frequency. We cannot overcome this consequence of non-stationarity. However, we will, in Section 2.6, explore a different approach to the problem of non-stationarity based on (2.34). Second, from (2.69), (2.70), and (2.72) it is clear that a useful $F_x(\omega,t)$ is a spectrogram based on a Gaussian window in time and frequency with $\sigma_T\sigma_\Omega = \tfrac{1}{2}$. The improvement resulting from this distribution may be judged by comparing the conventional spectrogram shown in Fig. 2.8a with the improved spectrogram of Fig. 2.8b. Due to computational complexity, the smoothed spectrogram is not used in practice. As a result, the features typically used may have somewhat higher variances than could otherwise be obtained. It is clear from Fig. 2.8 that the conventional spectrogram does provide reasonable information-bearing features.
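The sharp chirp localization promised by (2.64) can be illustrated with a discrete pseudo-Wigner sketch. The chirp parameters and the half-window length M below are assumed values; the discrete form uses $x(n+\tau)x^*(n-\tau)$, so the peak tracks twice the instantaneous frequency.

```python
import numpy as np

# Discrete pseudo-Wigner sketch of (2.64) for the linear FM chirp
# x_c(t) = exp(j(w0 t + m t^2 / 2)). Parameters are illustrative.
w0, m, M = 0.3, 0.001, 64
n_axis = np.arange(600)
x = np.exp(1j * (w0 * n_axis + 0.5 * m * n_axis ** 2))

def wigner_slice(x, n, M):
    """One time slice of W_x: |FFT over tau of x(n + tau) x*(n - tau)|."""
    taus = np.arange(-M, M + 1)
    prod = x[n + taus] * np.conj(x[n - taus])
    return np.abs(np.fft.fft(prod))

# For the chirp, the product sequence is a pure tone at 2*(w0 + m*n),
# so each slice has a single sharp peak that moves up with time n.
peaks = [int(np.argmax(wigner_slice(x, n, M))) for n in (100, 300, 500)]
```

The monotonically rising peak bin is exactly the "perfect resolution of the FM chirp" noted after (2.64); a spectrogram of the same signal smears each slice over the window bandwidth.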
Figure 2.8 (a) (i) Spectrogram of the word “read” computed from contiguous 8.0 ms speech segments; (ii) pitch synchronous spectrogram of the word “read”. (b) (i) Smoothed pitch synchronous spectrogram of the word “read”; (ii) smoothed pitch synchronous spectrogram of the word “read”
Figure 2.8 (continued)
2.5 Classification of Acoustic Patterns in Speech

Everyday experience confirms the highly variable nature of speech. We are aware of the wide ranges of voices, accents, and speaking styles present in ordinary discourse. Yet virtually all people seem to correctly understand spoken messages effortlessly. One explanation for this remarkable ability is that speech is literate. That is, speech is characterized by a relatively small number of distinct and somehow invariant acoustic patterns. Were this not so, there would be little hope of learning to speak because, as Bruner et al. [40] observe:

[W]ere we to utilize fully our capacity for registering the differences in things and to respond to each event encountered as unique, we would soon be overwhelmed by the complexity of our environment . . . . The learning and utilization of categories represents one of the most elementary and general forms of cognition by which man adjusts to his environment. [40]
The ability of humans to perform the kind of categorical perception described above has long been recognized as essential to our mental abilities. Plato [248] explained the phenomenon in his theory of forms as follows. You know that we always postulate in each case a single form for each set of particular things, to which we apply the same name. Then let us take any set you choose. For example there are many particular beds and tables. But there are only two forms, one of bed and one of table. If you look at a bed, or anything else, sideways or endways or from other angles, does it make any difference to the bed? Isn’t it merely that it looks different without being different? And similarly with other things. . . . The apparent size of an object, as you know, varies with its distance from our eye. So also a stick will look bent if you put it in the
water, straight when you take it out and deceptive differences of shading can make the same surface seem to the eye concave or convex; and our minds are clearly liable to all sorts of confusions of this kind. . . . Measuring, counting and weighing have happily been discovered to help us out of these difficulties and to ensure that we should not be guided by apparent differences of size, quantity and heaviness, but by calculations of number, measurement and weight.
In common parlance, we speak of our human abilities to recognize patterns. If asked, we would almost certainly agree with the proposition that this ability we possess is a significant aspect of our intelligence. Upon further reflection, we would find the conventional meanings of the terms “pattern” and “intelligence” to be vague enough to cause us difficulty in stating precisely how they are related. We call many diverse objects, events and ideas “patterns”. We refer, for example, to sequences of numbers as they occur in puzzles as having a “pattern”. We notice the “pattern” of a word exactly contained in a longer one though the two may be unrelated in meaning. We speak of styles of musical compositions as displaying patterns which render the romantic easily distinguished from the classical. Certainly to solve a puzzle or discover a camouflaged sequence of letters or appreciate music requires intelligence. But what exactly does that entail? The modern explanation of pattern recognition is that the variability of patterns in general and acoustic patterns in particular can be understood by a rigorous appeal to the theory of probability. In Chapter 10 we shall return to the question of pattern recognition and carefully compare the Platonic theory of forms with the modern mathematical theory which we review below.

2.5.1 Statistical Decision Theory

The problem treated in the statistical pattern recognition literature [241] is illustrated in Fig. 2.9. It is customary, though by no means necessary, to choose $R^n$, the n-dimensional Euclidean space, as the vector space the points of which, $x_j$, represent the “objects” under consideration. The vector $x_j$ is the n-tuple $(x_{1j}, x_{2j}, \ldots, x_{nj})$ whose components $x_{kj}$ are called features. Each coordinate axis is a scale on which its corresponding feature is measured, so the space is called the feature space.
In these pages, the features will be Fourier spectra as defined in (2.41), LPCs computed from (2.50)–(2.54), or cepstra derived from (2.60) or (2.62). Based on the argument made in (2.42), vectors of such features are measured approximately every 2.5 ms. Later, in Chapter 3, we will refer to these vectors as observations. In general, we shall be interested in the N-class pattern recognition problem, which is that of devising a decision rule, f, which classifies an unknown "object", x, as a member of at most one of the N classes. We are required to say "at most one class" because it is often desirable to make no assignment of a particular vector x. This choice is referred to as the "rejection option" and is but a technical matter which we shall not consider further here. The patterns are regions of the feature space and will be designated by ω_i for 1 ≤ i ≤ N. The union of the ω_i is called Ω and

⋃_{i=1}^N ω_i = Ω ⊆ R^n.    (2.73)
Preliminaries
29
Figure 2.9  The point–set distance function as a measure of pattern dissimilarity
The decision rule, f , is written as f (x) = ωi ,
(2.74)
meaning that the vector x is assigned to the ith class by the rule f. A rule of the form (2.74) must be constructed on the basis of a training set, Y, comprising the set of vectors {y_j}_{j=1}^M. Most often, though not necessarily, the training set will be labeled, that is, each vector will be correctly marked with its class membership. The subset of a labeled training set corresponding to the ith class will be called Y_i, so that y_j ∈ Y_i → y_j ∈ ω_i. This will be denoted by writing y_j^{(i)}. Obviously

Y = ⋃_{i=1}^N Y_i,    (2.75)

where

Y_i = {y_j^{(i)}}_{j=1}^{m_i},    (2.76)

and from (2.75) it is clear that

M = Σ_{i=1}^N m_i.    (2.77)
Mathematical Models for Speech Technology
30
The performance of a decision rule will be evaluated on the basis of a test set X = {x_l}_{l=1}^L. This set is assumed to be correctly labeled, but we shall omit the superscript indicating class membership when the meaning is unambiguous without it. To get the truest measure of performance of a decision rule,

X ∩ Y = φ    (2.78)

should be strictly observed. The most direct measure of performance is the extent to which

f(y_j^{(i)}) = ω_i,   for 1 ≤ i ≤ N, 1 ≤ j ≤ m_i,    (2.79)

and

f(x_l^{(i)}) = ω_i,   for 1 ≤ l ≤ L.    (2.80)

In practice, (2.79) will more often be satisfied than (2.80). It will often be useful to define a "prototype" of the ith class, y_p^{(i)}. The most common definition is

y_p^{(i)} = (1/m_i) Σ_{j=1}^{m_i} y_j^{(i)},    (2.81)
which gives, in some sense, an "average" exemplar of the pattern ω_i.

2.5.2 Estimation of Class-Conditional Probability Density Functions

As indicated in Figure 2.9, non-parametric decision rules assign an unlabeled vector membership in the pattern to which it is closest in terms of a well-defined distance measure, D(x, ω_i). This distance is the distance from a point to a set, which must be constructed from an ordinary topological metric. Recall that in elementary topology one defines a metric d(x, y) on a vector space, say R^n, as any function satisfying

d(x, y) ≥ 0,
(2.82a)
d(x, y) = d(y, x),
(2.82b)
d(x, z) ≤ d(x, y) + d(y, z),
(2.82c)
the positivity, symmetry and triangle inequality conditions. In practice, conditions (2.82) are often not strictly observed. Some well-known metrics are the following:

d(x, y) = { 0 if x = y;  1 otherwise }.    (2.83)
This seemingly trivial metric can be extremely useful in problems involving a finite number of attributes which are either present or not. In continuous feature spaces the Minkowski p-metrics,

d_p(x, y) = ( Σ_{i=1}^n |x_i − y_i|^p )^{1/p},    (2.84)

are often used. There are three special cases of (2.84) in common usage: p = 1, p = 2, and p = ∞, giving rise to the Hamming, Euclidean and Chebyshev metrics, respectively. For p = ∞,

d(x, y) = max_i {|x_i − y_i|}.    (2.85)
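The special cases of (2.84) and (2.85) are simple to compute directly. The following sketch illustrates the three metrics; the function name and test vectors are ours, not the text's.

```python
# Illustrative sketch of the Minkowski p-metrics of (2.84)-(2.85).
def minkowski(x, y, p):
    """d_p(x, y) = (sum_i |x_i - y_i|^p)^(1/p); p = inf gives (2.85)."""
    diffs = [abs(xi - yi) for xi, yi in zip(x, y)]
    if p == float('inf'):
        return max(diffs)                    # Chebyshev metric (2.85)
    return sum(d ** p for d in diffs) ** (1.0 / p)

x, y = (0.0, 0.0), (3.0, 4.0)
print(minkowski(x, y, 1))             # p = 1: 7.0
print(minkowski(x, y, 2))             # p = 2 (Euclidean): 5.0
print(minkowski(x, y, float('inf')))  # p = inf (Chebyshev): 4.0
```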
In the next section we shall see the importance of metrics of the form

d(x, y) = (x − y) T (x − y)′,    (2.86)

where T is any positive definite n × n matrix and the prime denotes vector transpose. Any of the metrics (2.83)–(2.86), or others still, may be used to define point–set distances. Perhaps the simplest of these is the distance to the prototype,

D(x, ω_i) = d(x, y_p^{(i)}),    (2.87)

where y_p^{(i)} is defined by (2.81). The family of "nearest-neighbor" distances is, for reasons which will be seen later, highly effective. Here we let

D(x, ω_i) = d(x, x_[k]^{(i)}),    (2.88)
where x_[k]^{(i)} is the kth nearest neighbor to x in the set ω_i. The kth nearest neighbor is usually found by sorting the distances d(x, y_j^{(i)}) for 1 ≤ j ≤ m_i so that

d(x, y_{j_1}^{(i)}) ≤ d(x, y_{j_2}^{(i)}) ≤ … ≤ d(x, y_{j_{m_i}}^{(i)}).    (2.89)
The training vector in the kth term in the sequence (2.89) is, of course, x_[k]^{(i)}. There is a "rule of thumb" which states that one should set k ≤ √m_i. There is an unusual distance which leads to what, for obvious reasons, is called the "majority vote rule":

D(x, ω_i) = (k_i)^{−1},    (2.90)

where k_i is given by

k_i = | Y_i ∩ {x_[ℓ]}_{ℓ=1}^K |.    (2.91)
We use |S| to denote the cardinality of the set S. Hence (2.91) defines k_i as the number of members of Y_i which are among the K nearest neighbors of x without respect to class. In a spirit similar to that of (2.88), one may define

D(x, ω_i) = (1/K) Σ_{k=1}^K d(x, x_[k]^{(i)}),    (2.92)

which is just the average distance from x to the K nearest members of Y_i. Alternatively, we may weight the distance by the number of samples of Y_i which are within the distance (2.92) of x. Thus let

D(x, ω_i) = (1/ℓ) Σ_{k=1}^K d(x, x_[k]^{(i)}),    (2.93)

where

ℓ = max { j | d(x, x_[j]^{(i)}) ≤ (1/K) Σ_{k=1}^K d(x, x_[k]^{(i)}) }.    (2.94)
From (2.87)–(2.94) we get the family of non-parametric decision rules f (x) = ωi ⇐⇒ D(x, ωi ) ≤ D(x, ωj ),
1 ≤ j ≤ N.
(2.95)
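A minimal sketch of the prototype distance (2.87) and the nearest-neighbor distance (2.88), each plugged into the decision rule (2.95), using the Euclidean metric. The data points and function names are invented for illustration.

```python
# Sketch of the non-parametric rules (2.87), (2.88), and (2.95).
import math

def euclid(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def prototype(train):
    """y_p^(i) of (2.81): componentwise mean of the class's training vectors."""
    n = len(train[0])
    return tuple(sum(v[k] for v in train) / len(train) for k in range(n))

def D_prototype(x, train):            # point-set distance (2.87)
    return euclid(x, prototype(train))

def D_knn(x, train, k):               # (2.88): distance to the kth nearest neighbor
    return sorted(euclid(x, y) for y in train)[k - 1]

def classify(x, classes, D):
    """Rule (2.95): assign x to the class of smallest point-set distance."""
    return min(classes, key=lambda i: D(x, classes[i]))

classes = {1: [(0.0, 0.0), (0.0, 1.0)], 2: [(5.0, 5.0), (6.0, 5.0)]}
print(classify((1.0, 1.0), classes, D_prototype))                   # -> 1
print(classify((5.5, 5.0), classes, lambda x, t: D_knn(x, t, 1)))   # -> 2
```

Note that "training" here is just storing the labeled vectors, which is why such rules are easy to implement but costly to evaluate in high-dimensional spaces.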
Rules of the form (2.95) are closest to our intuitive sense of classification. They are easy to implement since the training process consists simply in collecting and storing the labeled feature vectors. These rules often outperform all others in terms of classification accuracy. They may, however, be quite costly to operate, especially in high-dimensional spaces. Thus far, we have simply stated the rules, justifying them only by an appeal to intuition. We defer, until the end of the next section, a more rigorous analysis of their underlying principles.

Parametric Decision Rules

The primary method of classification to be considered is that in which an unknown vector is said to belong to the class of highest probability based on the values of the observed features. This is denoted by

f(x) = ω_i ⇐⇒ P(ω_i | x) ≥ P(ω_j | x),
1 ≤ j ≤ N.
(2.96)
We can construct a classifier by treating feature vectors as random variables. A consequence of Bayes' law is that

P(ω_i | x) = p(x | ω_i) P(ω_i) / p(x),    (2.97)
where P(ω_i) is the "prior probability" of the ith class, that is, the probability of a vector coming from ω_i before it is observed; p(x) is the probability density function of the feature vectors without respect to their class; and p(x | ω_i) is called the ith class-conditional probability density function, by which term is meant that p(x | ω_i) is the probability density function of feature vectors which correspond to the ith pattern only. Since the factor p(x) is common to P(ω_i | x) for all i, and assuming for a moment that P(ω_i) = 1/N for all i, making the patterns equally likely a priori, we can rewrite (2.96) in view of (2.97) as

f(x) = ω_i ⇐⇒ p(x | ω_i) ≥ p(x | ω_j),
for 1 ≤ j ≤ N.
(2.98)
The decision rule (2.98) is called the maximum a posteriori probability (MAP) rule. The MAP rule is sometimes augmented by prior probabilities and a loss or cost function to derive the "minimum risk rule". The total risk of deciding ω_i at x, R_i, is the loss summed over all classes, which is computed from

R_i = Σ_{j=1}^N L_ij p(x | ω_j) P(ω_j) / p(x),    (2.99)
where the loss function Lij is the penalty incurred for misclassifying ωj as ωi . It is customary to let Lij = cij (1 − δij ),
(2.100)
where c_ij is a fixed cost for the ω_j-to-ω_i classification error and δ_ij is the Kronecker delta function. In this case we can substitute f(x) for ω_i in (2.99) and solve it by finding that f(x) which minimizes R. The solution

f(x) = ω_i ⇐⇒ Σ_{j=1}^N L_ij P(ω_j) p(x | ω_j) ≤ Σ_{j=1}^N L_kj P(ω_j) p(x | ω_j),   for 1 ≤ k ≤ N,    (2.101)
is called the “minimum risk” decision rule. If we set cij = 1, then the risk becomes Pe , the probability of classification error. For this particular loss function, called the zero–one loss function, (2.101) becomes f (x) = ωi ⇐⇒ p(x | ωi )P (ωi ) ≥ p(x | ωj )P (ωj ),
for 1 ≤ j ≤ N.
(2.102)
Rule (2.102), therefore, minimizes P_e, meaning that if the p(x | ω_i) and P(ω_i) are known exactly, then no other classification scheme relying only on the feature vectors Y can yield a lower rate of incorrect classifications as the number of trials tends to infinity. A standard proof of the optimality of (2.102) is given by Patrick [241]. The practical meaning of this theoretical result is that we assume that p(x | ω_i) can be estimated to any desired degree of accuracy and that good estimates will provide asymptotically optimal performance in the limit as M gets large (see (2.77)).
This important result can be demonstrated as follows. Let us define the point risk of a decision rule, r(f(x)), by

r(f(x)) = Σ_{j≠i} p(x | ω_j) P(ω_j) / p(x)
        = 1 − p(x | ω_i) P(ω_i) / p(x) = P_e.    (2.103)

From (2.103) it is clear that P_e is minimized by the decision rule of (2.101), which we shall designate as f*(x). The global risk of a decision rule, R(f(x)), is defined as

R(f(x)) = E_{R^n}{r(f(x))} = ∫_{R^n} r(f(x)) p(x) dx
        = ∫_{R^n} Σ_{i=1}^N L(f(x), i) p(x | ω_i) dx.    (2.104)
Now note that all terms in the integrand of (2.104) are positive, so it must be minimized pointwise. Since we know from (2.103) that the point risk is minimized by f*(x), so is the global risk. Thus f*(x) is the optimal decision rule. In many cases of practical importance, the physics of the process under study will dictate the form of p(x | ω_i) to be a member of a parametric family of densities. The values of the parameters can then be estimated from the training data. Perhaps the most useful parametric form is the multivariate normal or Gaussian density function,

p(x | ω_i) = (1 / ((2π)^{n/2} |U_i|^{1/2})) exp{ −(1/2) (x − µ_i) U_i^{−1} (x − µ_i)′ },    (2.105)
where µ_i is the mean vector defined by

µ_i = E{x^{(i)}}    (2.106)

and U_i is the covariance matrix whose jkth entry u_{ijk} is given by

u_{ijk} = E{ (x_j^{(i)} − µ_{ij}) (x_k^{(i)} − µ_{ik}) }.    (2.107)

The expectation operator, E, appearing in (2.106) and (2.107) computes the expected value with respect to the density p(x) from

E{g(x)} = ∫_{R^n} g(x) p(x) dx.    (2.108)
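As an illustration of rule (2.102) with Gaussian class-conditional densities of the form (2.105), the sketch below restricts U_i to diagonal matrices so that (2.105) factors into a product of univariate normals. All parameter values and names are invented.

```python
# Sketch of the minimum-error rule (2.102) with diagonal-covariance
# Gaussian class-conditional densities (2.105).
import math

def gauss_pdf(x, mu, var):
    """Product of univariate normals: (2.105) with a diagonal U_i."""
    p = 1.0
    for xi, mi, vi in zip(x, mu, var):
        p *= math.exp(-0.5 * (xi - mi) ** 2 / vi) / math.sqrt(2 * math.pi * vi)
    return p

def map_classify(x, params, priors):
    """f(x) = argmax_i p(x | w_i) P(w_i), i.e. rule (2.102)."""
    return max(params, key=lambda i: gauss_pdf(x, *params[i]) * priors[i])

params = {1: ((0.0, 0.0), (1.0, 1.0)),    # (mu_i, diagonal of U_i)
          2: ((4.0, 4.0), (1.0, 1.0))}
priors = {1: 0.5, 2: 0.5}
print(map_classify((0.5, 0.2), params, priors))   # -> 1
print(map_classify((3.6, 4.1), params, priors))   # -> 2
```

In practice the means and covariances would be estimated from the labeled training set via (2.106) and (2.107).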
The utility of the Gaussian density function stems from two facts. First, quite often the actually observed features are really linear combinations of independent non-normal random variables which are either unknown to us or not directly measurable. The well-known central limit theorem tells us that such features are well described as Gaussian random variables in the following sense. If the x_k are independent and identically distributed random variables for 1 ≤ k ≤ K having finite mean µ and variance σ², then the sum S_K = Σ_{k=1}^K x_k is a random variable such that

lim_{K→∞} P( (S_K − Kµ) / (σ√K) ≤ θ ) = (1/√(2π)) ∫_{−∞}^θ e^{−y²/2} dy.
As in the standard HMM, there is an initial guess for each of the parameters. Then each parameter is updated until the likelihood appears to have converged, that is, until an ad hoc criterion has been met (e.g., the increase in the likelihood value is less than some number ε). For this discussion, a fixed number of iterations is used instead, because it can be seen from the plots of the likelihood functions when the ad hoc criterion has been reached. Although the ad hoc criterion or a fixed number of iterations is easier to test for, it is not valid in general: the likelihood function may appear to have converged while the parameter values are still changing. The true convergence criterion should depend on the parameter values and not the likelihood values, and it is met when the parameter values no longer change. When the true convergence criterion has been met, the set of values found corresponds to the point where the gradient of the Q function with respect to each of the different parameters is 0. This is explained below.

Auxiliary Function

The reestimation formulas are derived from the auxiliary function

Q(λ, λ̄) = Σ_x p_λ(Y, x) log p_λ̄(Y, x).    (3.201)
As noted in Section 3.1.2, we have:

1. Q(λ, λ̄) > Q(λ, λ) ⇒ p_λ̄(Y) > p_λ(Y).
2. Under mild orthodoxy conditions on Y, Q(λ, λ̄) has a unique global maximum in λ̄. This global maximum is a critical point and coincides with λ if and only if λ is a critical point of p_λ(Y).
Mathematical Models of Linguistic Structure
105
3. The point λ̄ at which Q(λ, λ̄) is maximized is expressible in closed form in terms of the inductively computable quantities {α_τ(j, d), β_τ(j, d)}.

Statement 1 says that increasing the value of the auxiliary Q function with respect to the new parameters increases the probability density, or likelihood, of Y as well. Statement 2 says that the new parameter values λ̄ will always give a likelihood at least as large as that of the current parameters, unless λ is a critical point: with every update, the new parameters give a better or equal likelihood value. Statement 3 says that it is practicable to determine the new parameter values because they are expressible in closed form in terms of inductively computable quantities; without such a closed-form solution it might be impossible to find the new parameters.

Outline of Derivation of Reestimation Formulas

The derivations are very long and will not be given in detail. However, a basic outline is provided. In essence, the gradient of the Q function with respect to each of the five parameters is taken, subject to the stochastic constraints. First, the auxiliary function Q(λ, λ̄), given earlier in terms of p_λ(Y, x) and log p_λ̄(Y, x), is written in terms of the five parameters by substituting (3.181), (3.183), and (3.184) into (3.180), then imposing the following stochastic constraints for some of the parameters in the new parameter set λ̄:

Σ_{j=1}^S ā_ij = 1,   i = 1, …, S,    (3.202)

and

Σ_{d=1}^D P̄(d|j) = 1,   j = 1, …, S.    (3.203)
The first constraint (3.202) states that the transition probabilities leaving a state should sum to 1, and the second constraint (3.203) states that the duration probabilities for any state should sum to 1. Next, Lagrange multipliers are used to incorporate these constraints into the Q function to give another function Q*(λ, λ̄). Thus the new auxiliary function, with the constraints, is

Q*(λ, λ̄) = Q(λ, λ̄) − Σ_{i=1}^S θ_i ( Σ_{j=1}^S ā_ij − 1 ) − Σ_{j=1}^S ε_j ( Σ_{d=1}^D P̄(d|j) − 1 ),    (3.204)

where the θ_i and ε_j are Lagrange multipliers. Then the gradient of Q*(λ, λ̄) with respect to each of the five new parameters is set equal to 0 (as in the standard procedure for finding the critical point of a function). For example, ∂Q*(λ, λ̄)/∂ā_ij = 0 is solved. From the above discussion, the critical point will correspond to a global maximum of Q. Since Q has increased with respect to the new parameters, the likelihood with respect to the new parameters has also increased, that is, p_λ̄(Y) > p_λ(Y).
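The mechanics of this constrained maximization can be seen in miniature. The terms of Q involving a row of ā_ij have the form Σ_j c_j log ā_j for non-negative expected counts c_j; setting the gradient of Σ_j c_j log ā_j − θ(Σ_j ā_j − 1) to zero gives ā_j = c_j/θ, and the constraint fixes θ = Σ_k c_k, so ā_j = c_j / Σ_k c_k. The c_j below are invented stand-ins used only to check this claim numerically.

```python
# Numerical check that maximizing F(a) = sum_j c_j log a_j subject to
# sum_j a_j = 1 is achieved at a_j = c_j / sum_k c_k (the Lagrange solution).
import math
import random

c = [3.0, 1.0, 6.0]                          # invented "expected counts"
a_star = [cj / sum(c) for cj in c]           # closed-form Lagrange solution

def F(a):
    return sum(cj * math.log(aj) for cj, aj in zip(c, a))

random.seed(0)
for _ in range(1000):                        # random points on the simplex
    r = sorted(random.random() for _ in range(len(c) - 1))
    a = [b - x for x, b in zip([0.0] + r, r + [1.0])]
    if all(ai > 0 for ai in a):
        assert F(a) <= F(a_star) + 1e-9      # no point beats the solution

print([round(x, 3) for x in a_star])         # -> [0.3, 0.1, 0.6]
```

This count-ratio shape is exactly the form taken by the reestimation formulas.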
Reestimation Formulas

The variance for the first M data samples does not require reestimation:

σ² = (1/M) Σ_{t=1}^M y_t².    (3.205)
The rest of the parameters do require reestimation. The reestimation formulas for â_ij, P̂(d|j), and σ̂_j² are as follows:

â_ij = [ Σ_{τ=M+1}^T Σ_{d=1}^D α_{τ−d}(i, d) a_ij β*_τ(j) ] / [ Σ_{τ=M+1}^T Σ_{d=1}^D α_{τ−d}(i, d) β_{τ−d}(i, d) ],   i, j = 1, …, S,    (3.206)

P̂(d|j) = [ Σ_{τ=M+1}^T α_τ(j, d) β_τ(j, d) ] / [ Σ_{τ=M+1}^T Σ_{δ=1}^D α_τ(j, δ) β_τ(j, δ) ],   j = 1, …, S,  d = 1, …, D,    (3.207)

σ̂_j² = [ Σ_{τ=M+1}^T Σ_{d=1}^D α_τ(j, d) β_τ(j, d) Σ_{t=τ}^{τ+d−1} (y_t − µ_j(t, τ))² ] / [ Σ_{τ=M+1}^T Σ_{d=1}^D d α_τ(j, d) β_τ(j, d) ],   j = 1, …, S.    (3.208)
In the above formulas, α_τ(j, d), β_τ(j, d), β*_τ(j) and µ_j(t, τ) have already been defined. The reestimates for the autoregressive coefficients are calculated as follows. For each j = 1, …, S,

Σ_{m=1}^M Σ_{n=0}^N R^j_{mqnr} ĉ^j_{mn} = R^j_{0q0r},   q = 1, …, M,  r = 0, …, N,    (3.209)

where

R^j_{mqnr} = Σ_{τ=M+1}^T Σ_{d=1}^D α_τ(j, d) β_τ(j, d) Σ_{t=τ}^{τ+d−1} y_{t−m} y_{t−q} u_n(t − τ) u_r(t − τ).    (3.210)

Letting M = 2, N = 1, for j = 1, …, S, (3.209) can be written out explicitly as

⎡ R^j_1100  R^j_1110  R^j_2100  R^j_2110 ⎤ ⎡ ĉ^j_10 ⎤   ⎡ R^j_0100 ⎤
⎢ R^j_1101  R^j_1111  R^j_2101  R^j_2111 ⎥ ⎢ ĉ^j_11 ⎥ = ⎢ R^j_0101 ⎥    (3.211)
⎢ R^j_1200  R^j_1210  R^j_2200  R^j_2210 ⎥ ⎢ ĉ^j_20 ⎥   ⎢ R^j_0200 ⎥
⎣ R^j_1201  R^j_1211  R^j_2201  R^j_2211 ⎦ ⎣ ĉ^j_21 ⎦   ⎣ R^j_0201 ⎦
which can be written compactly as
R^j ĉ^j = r^j.
(3.212)
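Since R^j is symmetric and, when positive definite, admits a Cholesky factorization, a system of the form (3.212) can be solved by two triangular substitutions. The sketch below, on an invented 2 × 2 system, shows the idea; it is an illustration, not the book's implementation.

```python
# Sketch of solving R c = r via Cholesky factorization (R symmetric
# positive definite), avoiding an explicit matrix inverse.
def cholesky(R):
    """Return lower-triangular L with R = L L'."""
    n = len(R)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = (R[i][i] - s) ** 0.5
            else:
                L[i][j] = (R[i][j] - s) / L[j][j]
    return L

def solve_spd(R, r):
    """Solve R c = r by forward/back substitution on L L' c = r."""
    n = len(R)
    L = cholesky(R)
    y = [0.0] * n                       # forward: L y = r
    for i in range(n):
        y[i] = (r[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    c = [0.0] * n                       # backward: L' c = y
    for i in reversed(range(n)):
        c[i] = (y[i] - sum(L[k][i] * c[k] for k in range(i + 1, n))) / L[i][i]
    return c

R = [[4.0, 2.0], [2.0, 3.0]]            # invented symmetric positive-definite R
r = [10.0, 8.0]
print(solve_spd(R, r))                  # -> approximately [1.75, 1.5]
```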
The R^j matrix is symmetric; this can be seen from (3.210), since it is symmetric in n and r, as well as in m and q. Therefore, it is only necessary to compute the values in the upper triangle of this matrix. As always, when using a computer to solve (3.211), one should avoid finding the inverse of R^j to find ĉ^j because this operation may be unstable
numerically and also takes a lot of computation. Rather, other more numerically stable and faster methods for solving for the regression coefficients ĉ^j can be used, which do not involve matrix inversion. As in Section 3.1.3, if R^j is positive definite, since it is already symmetric, Cholesky factorization can be used [265]. The computational complexity of this model renders it difficult to implement and costly to apply. Tan [309] has produced a successful Monte Carlo simulation of the algorithm but, despite its obvious appropriateness, it has yet to be applied to the speech signal.

3.1.6 Parameter Estimation via the EM Algorithm

The parameter estimation problem is analogous to the statistical approach to a seemingly very different problem, that of estimation from incomplete data. Dempster et al. [62] indicate that in the 1950s, statisticians were thinking of a random process for which only incomplete data was available as a doubly stochastic process. The output of the "true" but hidden process x was thought of as passing through a second process y which censored the input and produced the observed data. Both processes were considered to be parameterized in λ and the problem was to determine λ from the observables. The statistician's solution to this problem has come to be known as the EM algorithm: E for expectation and M for maximization. The algorithm is succinctly described as follows. Let τ(x) be any sufficient statistic for x, meaning that, in a precise sense, τ contains complete information about x. Suppose that we have an estimate of the parameter λ. Then the so-called E-step of the algorithm calls for the estimation of τ(x) from

τ̄ = E{τ(x) | y, λ},    (3.213)
where E{·|·} is the expectation operator. Thus solving (3.213) gives a new estimate of τ(x), τ̄, conditioned on the present estimate of the parameter λ and the observations y. This is followed by the M-step, in which we solve

E{τ(x) | λ̄} = τ̄    (3.214)
to get a new estimate, λ̄, of the parameter. Iterative applications of (3.213) and (3.214) yield better estimates of λ. In general, there are no closed-form solutions for (3.213) and (3.214), so the Bayesian solution is used, yielding λ̄ as the maximum likelihood estimator of the parameter given λ and τ̄. Generalizations and proof of convergence of the EM algorithm are given in [62]. Baum and his colleagues [27–29] recognized that the parameter estimation problem for the HMM could be considered an interesting special case of a doubly stochastic process, whereupon they were able to derive a particular solution to (3.213) and (3.214).

3.1.7 The Cave–Neuwirth and Poritz Results

We now turn our attention to two important empirical results based on the theory of hidden Markov models described in Sections 3.1.1 through 3.1.6. The two experiments clearly demonstrate the remarkable power of the HMM to both discover and represent aspects of linguistic structure as manifest in text and speech. This is possible because,
like linear prediction described in Section 2.4.2 and time scale normalization discussed in Section 2.6.2, the HMM is a general model of a time series that is particularly appropriate for speech and naturally captures acoustic phonetics, phonology, and phonotactics. Markov [212] used the stochastic model that now bears his name to analyze the text of Eugene Onegin. More recently, Cave and Neuwirth [43] have given a modern interpretation of his experiments in a highly instructive way. Using ordinary text from a newspaper as an observation sequence, they estimated the parameters of a family of HMMs. They then showed how the parameter values can be used to infer some significant linguistic properties of English text. The observation data for the experiment comprised T = 30 000 letters of ordinary text from the New York Times. The text was preprocessed to remove all symbols except the 26 letters of the English alphabet and a delimiter (blank or space) used to separate words. Thus, M was fixed at 27. The observation sequence, O, was then the entire text. A family of models corresponding to N = 1, 2, 3, . . . , 12 was generated by applying the algorithm of (3.11)–(3.13) to the entire observation sequence until convergence was reached. The state sequence for each model was determined from (3.7) and put into correspondence with the observation sequence. Many of the entries in the A and B matrices converged to zero, indicating impossible state transitions and impossible observations for a given state, respectively. These four quantities make possible the formulation of two kinds of rules governing English text. First are those rules that identify a particular state with a linguistic property of an observation. Conversely, we have rules that determine which state generated a particular observation based on the linguistic properties of the observation. The case for N = 6 provides a good example. In this case, it was discovered that only vowels were produced by state 2. 
This is an example of the first type of rule. Whenever a blank was present in the observation sequence, the model was in state 6. This is an example of the second type of rule. In addition, it was found that state 3 was associated exclusively with consonants, state 1 was a vowel successor, and state 5 was a consonant successor. Some more complicated rules were also discovered; for example, word-final letters could come only from states 1, 2, or 4 whereas word-initial letters could only arise in states 2, 3, or 5. Of course, the state transition matrix determines the allowable sequences of letters having specific properties. It is remarkable that the HMM discovers these important properties of text (for which linguistic categories had already been named) without any information other than that which is implicit but camouflaged in the text. The Poritz [250] result comes from an experiment which was, no doubt, inspired by the Cave–Neuwirth result. The experiment is identical in spirit, with speech as the observable process instead of text. In this case, the autoregressive model of Section 3.1.3 was derived from readings of short paragraphs of about 40 seconds in duration. A five-state model was estimated and then the state sequence corresponding to the speech signal was determined. By listening to those intervals of the signal that correspond to each of the five states, their identities were easily determined. State 1 produced only vowels. State 2 indicated silence. State 3 was reserved for nasals; state 4, plosives; and state 5, fricatives. Moreover, the state transition matrix determined the phonotactic rules of these five broad phonetic categories. The spectra corresponding to the state-dependent autoregressive processes were exactly those that would be predicted by solutions to the Webster equation
for the vocal tract geometry appropriate to the phonetic category. In a manner analogous to that of the Cave–Neuwirth result, the Poritz result shows how broad phonetic categories and phonotactics can be discovered in the speech signal without recourse to known linguistic categories. These two results are the best available demonstration of the appropriateness of the HMM formalism for extraction and representation of linguistic structure. These results form the empirical basis for the speech recognition systems which we will study in Chapter 7.
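The Cave–Neuwirth experiment is notable precisely because the HMM discovers categories such as "vowel" and "consonant" on its own. For contrast, a few lines in the spirit of Markov's original supervised tabulation — with the classes fixed by hand rather than learned — estimate class-transition probabilities from text. The function names and sample sentence are ours.

```python
# Supervised tabulation of letter-class transition probabilities, as a
# (much weaker) counterpoint to the HMM's unsupervised discovery of classes.
from collections import defaultdict

def letter_class(ch):
    if ch == ' ':
        return 'space'
    return 'vowel' if ch in 'aeiou' else 'consonant'

def transition_matrix(text):
    counts = defaultdict(lambda: defaultdict(int))
    seq = [letter_class(c) for c in text.lower() if c.isalpha() or c == ' ']
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1
    return {a: {b: n / sum(row.values()) for b, n in row.items()}
            for a, row in counts.items()}

P = transition_matrix("the rain in spain stays mainly in the plain")
print(P['consonant'])   # empirical distribution of classes following a consonant
```

The HMM experiment inverts this: it is given only the letter sequence and must infer both the classes and their transition structure.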
3.2 Formal Grammars and Abstract Automata

We may think of a language as a (possibly infinite) list of sentences, each composed of a sequence of words. Allowable sequences of words are produced according to a finite set of grammatical rules. We shall call such a set of rules a formal grammar G, and the list generated by it, the language, L(G). The grammar G is a mathematical object composed of four parts and is designated by G(V_N, V_T, S, R). V_N signifies a finite set of non-terminal symbols disjoint with respect to V_T, a finite set of terminal symbols. The terms "terminal" and "non-terminal" refer to whether or not, respectively, the symbols may appear in a sentence. Non-terminals are traditionally designated by upper-case letters, terminals by lower-case. Thus a sentence will contain only lower-case letters. The distinguished non-terminal S is called the start symbol since all sentences in the language are derived from it. The kernel of the grammar is the set R of rewriting or production rules, a member of which is customarily indicated by α → β, where α and β stand for elements of the set {V_N ∪ V_T}*, and the notation L* is taken to mean the set of all finite-length strings of symbols of L. A rewriting rule allows the replacement of the arbitrary string appearing on the left-hand side of the rule by the arbitrary string on the right wherever the left-hand member appears in any other string. A special case is that for which β = φ, the null symbol. The effect of this rule is to cause the string α to vanish. A grammar generates sentences in a language by the following operations. A string γ is said to derive the string δ, written γ ⇒ δ, if and only if γ = η₁αη₂, δ = η₁βη₂, and ∃ α → β ∈ R, where η₁ and η₂ are arbitrary strings. The transitive closure of the derivation operator is γ ⇒* δ, read γ ultimately derives δ, meaning that γ = ζ₀, δ = ζ_T, and ζ_{t−1} ⇒ ζ_t for 1 ≤ t ≤ T. Then
L(G) = {W ∈ V_T* | S ⇒* W},
(3.215)
meaning that a language comprises all those strings of terminal symbols that S ultimately derives under the set R. From (3.215) one may thus infer that if G imposes any constraint on word order then L(G) ⊂ VT∗ . A simple example will serve to clarify the notation. Let VN = {S, A}, VT = {a, b}, and R be defined by S → aS, S → bA, A → bA, A → φ.
(3.216)
According to (3.216), the start symbol can be transformed into a sequence of arbitrarily many a's followed by another sequence of any number of b's. Thus we may write

L(G) = {a^m b^n | m, n > 0}.
(3.217)
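A brute-force sketch that enumerates the short sentences generated by the rules of (3.216); the function names and the length bound are ours. Note that, as the rules are written, S → bA is available immediately, so strings with no a's at all are also derivable.

```python
# Breadth-first enumeration of the sentences of the regular grammar (3.216):
# S -> aS, S -> bA, A -> bA, A -> phi (the empty string).
from collections import deque

RULES = {'S': ['aS', 'bA'], 'A': ['bA', '']}

def language(max_len):
    """All derivable terminal strings with at most max_len letters."""
    out, queue = set(), deque(['S'])
    while queue:
        s = queue.popleft()
        nt = next((c for c in s if c.isupper()), None)   # pending non-terminal
        if nt is None:
            out.add(s)
            continue
        for rhs in RULES[nt]:
            t = s.replace(nt, rhs, 1)
            if len(t.replace('S', '').replace('A', '')) <= max_len:
                queue.append(t)
    return out

print(sorted(language(3)))   # -> ['aab', 'ab', 'abb', 'b', 'bb', 'bbb']
```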
3.2.1 The Chomsky Hierarchy

The Chomsky hierarchy [46] is a particularly significant taxonomy of formal grammars according to their complexity and representational powers. Depending upon the form of R, grammars are classified as either regular, context-free, context-sensitive, or phrase-structure, in increasing order of complexity. Regular grammars, of which (3.216) is an example, have rules of the form A → a or A → aB. Context-free grammars, so called because their production rules may be applied independent of the symbols surrounding the non-terminal on the left-hand side, have rules of the form A → α. Naturally, context-sensitive grammars have rules of the form αAβ → αγβ which map the non-terminal A onto the string γ only in the left and right context of α and β, respectively. Finally, the phrase-structure grammars have the unrestricted rules α → β. Each class in the hierarchy is properly included in the one above it. That this differentiation among formal grammars is deeply meaningful requires a proof which goes too far afield for the purposes of this exposition. The reader interested in pursuing these ideas should consult [112], [120], and [133]. Since, however, that fact has implications for the algorithms we are about to discuss, the following examples should lend it some credence. Note that in (3.217) there is no relationship between m and n. If we require, for example, m = n for any m, then no set of regular rules will suffice. However, the context-free rules S → aSb, S → φ will generate exactly the language L(G) = {a^n b^n | n > 0}. In other words, regular grammars cannot count while context-free grammars can count the elements of one set, that is, the length of the string of b's. Similarly, if we wish to append another string of a's, then no set of context-free rules is powerful enough. The context-sensitive rules S → ABSa, BA → AB, S → φ, Aa → aa, Ab → ab, Bb → bb, Ba → ba do, in fact, generate the language L(G) = {a^n b^n a^n | n > 0}.
Thus whereas context-free grammars can count one set, context-sensitive grammars can count two. Finally, we note, without offering any justification, that, in a certain sense, any computational process can be expressed as a phrase-structure grammar [319]. In (3.219) we derive the sentence a³b³c³ for the context-sensitive case according to the rewrite rules of (3.218). The rules are numbered, and the rule used for a particular step in the derivation is indicated by the number of the rule in parentheses.

(1) S → ABSc
(2) S → λ
(3) BA → AB
(4) Ab → ab
(5) Aa → aa
(6) Bb → bb
(7) Bc → bc
(8) A → a    (3.218)
     S
(1)  ABSc
(1)  ABABScc
(1)  ABABABSccc
(2)  ABABABccc
(7)  ABABAbccc
(3)  ABAABbccc
(6)  ABAAbbccc
(3)  AABAbbccc
(3)  AAABbbccc
(6)  AAAbbbccc
(4)  AAabbbccc
(5)  Aaabbbccc
(8)  aaabbbccc    (3.219)
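The derivation (3.219) can be checked mechanically. The sketch below always rewrites the leftmost occurrence of a rule's left-hand side, which forces a slightly different rule order than the one printed in (3.219) (where rules may be applied at arbitrary positions), but it reaches the same sentence. Rule numbering follows (3.218).

```python
# Replaying a derivation of a^3 b^3 c^3 with the rewrite rules (3.218),
# always rewriting the leftmost occurrence of the left-hand side.
RULES = {1: ('S', 'ABSc'), 2: ('S', ''), 3: ('BA', 'AB'), 4: ('Ab', 'ab'),
         5: ('Aa', 'aa'), 6: ('Bb', 'bb'), 7: ('Bc', 'bc'), 8: ('A', 'a')}

def apply_rule(s, n):
    lhs, rhs = RULES[n]
    assert lhs in s, f"rule {n} not applicable to {s!r}"
    return s.replace(lhs, rhs, 1)    # replace leftmost occurrence only

s = 'S'
for n in (1, 1, 1, 2, 3, 3, 3, 7, 6, 6, 4, 5, 5):
    s = apply_rule(s, n)
print(s)   # -> 'aaabbbccc'
```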
Properties and Representations

The languages generated by formal grammars have some useful computational properties and representations. A particularly interesting case is the finite language for which

|L(G)| = N < ∞.
(3.220)
Any finite language can be generated by a regular grammar, from which we can count the sentences in the language. Let the matrix C have elements c_ij defined by

c_ij = |{A_i → vA_j}|    (3.221)
for any v ∈ V_T and 1 ≤ i, j ≤ |V_N|. Powers of C have a particular significance, namely, the number, N_k, of sentences of length k is given by

N_k = e_s C^k e_f,    (3.222)

where e_s is the vector (1, 0, 0, …, 0) and e_f is the vector (0, 0, 0, …, 1)′. These vectors correspond to an arrangement of the grammar such that the start symbol is the first non-terminal and the null symbol is the last. The generating function

P(Z) = Σ_{k=1}^K N_k Z^k    (3.223)

will be important in the discussion of grammatical constraint in Section 6.2.4. In (3.223), K is the length of the longest sentence, which must be finite since

Σ_{k=1}^K N_k = N,    (3.224)
which, from (3.220), is assumed finite. Any context-free grammar can be written in a canonical form called Chomsky normal form (CNF) in which all rules are either of the form A → a or A → BC. The CNF is
Figure 3.11  Parse tree for a sentence in a context-free language
naturally related to a binary tree as shown in Fig. 3.11. The root of the tree is the start symbol. All other nodes of the tree are labeled with non-terminals. Each node, except for the leaves of the tree and their predecessors, has two successors corresponding to the left and right non-terminals on the right-hand side of A → BC, respectively. The CNF of a context-free grammar gives rise to the binary tree structure of Fig. 3.11, which is analogous to the directed graph structure for regular grammars shown in Fig. 3.12. To convert a grammar to CNF, a rule

A → α = B_1 B_2 … B_m,   B_i ∈ {V_N ∪ V_T},    (3.225)

becomes

A → C_1 C_2 … C_m,   where C_i → B_i ⇔ B_i ∈ V_T,    (3.226)

and then

A → C_1 D_1,
D_1 → C_2 D_2,
D_2 → C_3 D_3,
⋮
D_{m−3} → C_{m−2} D_{m−2},
D_{m−2} → C_{m−1} C_m.    (3.227)

An example of the construction of (3.225)–(3.227) is given below. Consider the CF production rules
a) S → bA,     e) S → aB,
b) A → a,      f) B → b,
c) A → aS,     g) B → bS,
d) A → bAA,    h) B → aBB.   (3.228)
The rules (3.228) generate the language

L(G) = { a^{n1} b^{m1} a^{n2} b^{m2} · · · a^{nt} b^{mt} · · · a^{nT} b^{mT} | Σ_{t=1}^{T} nt = Σ_{t=1}^{T} mt }.   (3.229)
Note that (3.228b) and (3.228f) are already in CNF. Then (3.228a) and (3.228e) become

S → C1 A,   C1 → b,   (3.230)

and

S → C2 B,   C2 → a,   (3.231)

respectively. Similarly, (3.228c) and (3.228g) are transformed into

A → C3 S,   C3 → a,   (3.232)

and

B → C4 S,   C4 → b,   (3.233)

respectively.
Finally, (3.228d) and (3.228h) first become A → C5AA and B → C7BB, with C5 → b and C7 → a; binarization then gives the CNF rules

A → C5 C6,   C5 → b,   C6 → A A,   (3.234)

and, similarly,

B → C7 C8,   C7 → a,   C8 → B B.   (3.235)
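The construction of (3.226)–(3.227) is mechanical enough to sketch in code. The fragment below is a sketch only: the tuple representation of rules is ours, and fresh non-terminals are simply numbered in order of creation, so they need not coincide with the C1, . . . , C8 of the worked example.

```python
# Sketch of the CNF construction of (3.226)-(3.227).
# A rule is (lhs, rhs): lhs is a non-terminal name, rhs a tuple of symbols;
# lower-case strings are terminals. Generated names "C1", "C2", ... are ours.

def to_cnf(rules, terminals):
    """Convert context-free rules to Chomsky normal form."""
    cnf, counter = [], [0]

    def fresh():
        counter[0] += 1
        return f"C{counter[0]}"

    term_map = {}  # terminal -> non-terminal Ci with rule Ci -> terminal
    for lhs, rhs in rules:
        if len(rhs) == 1:                 # A -> a is already in CNF
            cnf.append((lhs, rhs))
            continue
        # Step (3.226): replace each terminal in a long rule by Ci, Ci -> terminal
        syms = []
        for s in rhs:
            if s in terminals:
                if s not in term_map:
                    term_map[s] = fresh()
                    cnf.append((term_map[s], (s,)))
                syms.append(term_map[s])
            else:
                syms.append(s)
        # Step (3.227): binarize A -> X1 X2 ... Xm into a chain of pair rules
        lhs_cur = lhs
        while len(syms) > 2:
            d = fresh()
            cnf.append((lhs_cur, (syms[0], d)))
            lhs_cur, syms = d, syms[1:]
        cnf.append((lhs_cur, tuple(syms)))
    return cnf

# The grammar (3.228)
rules = [("S", ("b", "A")), ("A", ("a",)), ("A", ("a", "S")), ("A", ("b", "A", "A")),
         ("S", ("a", "B")), ("B", ("b",)), ("B", ("b", "S")), ("B", ("a", "B", "B"))]
cnf = to_cnf(rules, {"a", "b"})
assert all(len(rhs) in (1, 2) for _, rhs in cnf)
```

Every resulting rule is either a terminal rule of length one or a pair of non-terminals, as the CNF requires.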
3.2.2 Stochastic Grammars

Another approach to the use of formal syntax in speech recognition is one based upon stochastic grammars, by means of which we shall be able to uncover some interesting interrelationships among the algorithms we have discussed thus far. Stochastic grammars are similar to the deterministic ones we have been examining except that their production
rules have associated probabilities. The stochastic grammar Gs(VN, VT, S, Rs, θ) has non-terminal, terminal, and start symbols over which its stochastic productions Rs are defined to have the form

pαβ: α → β,   (3.236)
where pαβ > 0 is understood to be the probability of applying the rule α → β. The characteristic grammar of the stochastic grammar Gs is just the deterministic grammar formed by removing the probabilities from all rules in Rs. Thus stochastic grammars are assigned to classes of the Chomsky hierarchy according to the classes of their respective characteristic grammars. If W ∈ L(Gs), then W is in the language of the characteristic grammar and has the probability P(W). If Gs is unambiguous, that is, there exists exactly one derivation for each W ∈ L(Gs), then P(W) is just the product of the probabilities associated with the rules from which S ⇒∗ W. That is,

P(W) = Π_t pαt−1αt,   (3.237)

where the product runs over the rules applied in the derivation S ⇒∗ W.
If

Σ_{W ∈ L(Gs)} P(W) = 1,   (3.238)
the grammar Gs is said to be consistent. All regular stochastic grammars are consistent. The conditions under which stochastic context-free grammars are consistent are known [34]. Consistency conditions for more complex grammars are not known. A summary of the theory of stochastic grammars is available in [94]. In the case of stochastic languages, W ∈ L(Gs) if and only if P(W) > θ. This determination can be made for the regular and context-free classes by the algorithms discussed in Chapter 4.

3.2.3 Equivalence of Regular Stochastic Grammars and Discrete HMMs

There is an important relationship between doubly stochastic processes and stochastic grammars. First, discrete-symbol HMMs are equivalent to regular stochastic grammars. Production rules of the form pij: Ai → vAj account for the hidden Markov chain in that the non-terminal symbols Ai, Aj correspond to states qi, qj. Productions of the form pjk: Aj → vk correspond to the observable process, so that pjk is equivalent to bjk = bj(Ot) when Ot = vk ∈ VT. Note that the likelihood function of the discrete-symbol HMM for the observation sequence O = O1 O2 . . . OT can be written as

L(O|λ) = Σ_{q∈Q^T} Π_{t=1}^{T} a_{qt−1 qt} b_{qt}(Ot).   (3.239)
If we identify the sentence W ∈ L(G) with the observation sequence, O, meaning that Ot ∈ VT, then we can compare (3.236) and (3.239) to see that

pαtαt+1 = aqtqt+1 bqt+1(Ot+1).   (3.240)
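The correspondence (3.240) can be checked numerically. The sketch below uses an invented two-state, two-symbol model (all probabilities are illustrative); it scores a string as a discrete HMM via (3.239) and as the equivalent stochastic regular grammar via (3.237), with rule probabilities assigned as p = a · b.

```python
import itertools

# Toy discrete HMM: 2 hidden states, 2 symbols. All numbers are illustrative.
a = [[0.7, 0.3], [0.4, 0.6]]   # a[i][j] = P(state j | state i)
b = [[0.9, 0.1], [0.2, 0.8]]   # b[j][k] = P(symbol k | state j)
pi = [0.5, 0.5]                # initial state probabilities

# Equivalent stochastic regular grammar: rule A_i -> v_k A_j has probability
# p[i][k][j] = a[i][j] * b[j][k], following (3.240).
p = [[[a[i][j] * b[j][k] for j in range(2)] for k in range(2)] for i in range(2)]

def hmm_likelihood(obs):
    """(3.239): sum over all state paths of products of a and b terms."""
    total = 0.0
    for path in itertools.product(range(2), repeat=len(obs)):
        prob = pi[path[0]] * b[path[0]][obs[0]]
        for t in range(1, len(obs)):
            prob *= a[path[t - 1]][path[t]] * b[path[t]][obs[t]]
        total += prob
    return total

def grammar_probability(obs):
    """(3.237) summed over derivations: products of rule probabilities p."""
    total = 0.0
    for path in itertools.product(range(2), repeat=len(obs)):
        prob = pi[path[0]] * b[path[0]][obs[0]]   # initial rule emits the first symbol
        for t in range(1, len(obs)):
            prob *= p[path[t - 1]][obs[t]][path[t]]
        total += prob
    return total

W = [0, 1, 1, 0]
assert abs(hmm_likelihood(W) - grammar_probability(W)) < 1e-12
```

The two scores agree term by term, which is the content of the equivalence; only the factoring of p into a and b is non-unique, as noted above.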
Going in the other direction has no unique solution: if the pαβ are known, then the aij and bjk can be assigned any values such that their product is pαβ. A stochastic context-free grammar is not equivalent to any HMM, but it has a related structure. Rules of the form pijk: Ai → Aj Ak constitute a hidden stochastic process, and rules of the form Ai → vk an observable one.

3.2.4 Recognition of Well-Formed Strings

Given any sequence of terminal symbols, W ∈ VT∗, we would like to determine whether or not W ∈ L(G). In the cases of regular and context-free grammars, this question is answered by abstract automata called finite-state automata (FSA) and push-down automata (PDA), respectively. An FSA is a labeled, directed graph in which the vertices (states) are labeled with non-terminal symbols, and the edges or state transitions are labeled with terminal symbols. For every rule of the form A → aB, the FSA contains a state transition labeled a joining the two states labeled A and B. This is depicted in Fig. 3.12. A state, labeled A, may be designated as a final state if there is a rule of the form A → a. Then W ∈ L(G) if and only if there is a path from state S to a final state whose edges are labeled, in order, with the symbols of W.
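The FSA construction just described is easy to sketch: each rule A → aB contributes a transition labeled a from state A to state B, and each rule A → a a transition into an added final state. The sketch below uses the right-linear rules of (3.228); the final-state marker "#" is our own device, not the book's notation.

```python
# Minimal FSA membership test for a regular grammar.
# delta maps (state, symbol) -> set of successor states; "#" is an added final state.

FINAL = "#"
rules = [("S", "b", "A"), ("S", "a", "B"), ("A", "a", None), ("A", "a", "S"),
         ("B", "b", None), ("B", "b", "S")]   # None marks a rule A -> a

delta = {}
for lhs, sym, nxt in rules:
    delta.setdefault((lhs, sym), set()).add(nxt if nxt is not None else FINAL)

def accepts(w, start="S"):
    """True iff some path labeled by w runs from start to the final state."""
    states = {start}
    for sym in w:
        states = {s for q in states for s in delta.get((q, sym), ())}
    return FINAL in states

assert accepts("ba") and accepts("ab")
assert not accepts("aa")
```

Because the grammar is non-deterministic (B → b and B → bS share a label), the sketch tracks a set of active states, which is the usual subset trick for such automata.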
Figure 3.12 The relation between right-linear production rules and finite-state automata
For the context-free case, the PDA can construct a tree such as the one shown in Fig. 3.11 whenever the string, W, is well formed. For the regular and context-free cases, the FSA and PDA, respectively, are minimal automata in the sense that they are the least complex machines that can answer the membership question. As such they are rather inefficient for determining the structure (e.g. derivation) of W ∈ L(G). For that reason, we will not consider them further. In Chapter 4 we will again address these questions with computationally efficient algorithms not based on either the FSA or PDA.

3.2.5 Representation of Phonology and Syntax

Formal grammars were developed, at least in part, to provide an abstract representation of the grammar of natural language. The definitions given above can be made more intuitively appealing by emphasizing that motivation. Context-sensitive grammars are ideally suited for describing phonological phenomena such as the way the pronunciation of a given sound changes because of its phonetic environment. Rules of the form αAβ → αγβ exactly capture this effect since they may be interpreted to mean that the phones derived from the non-terminal A are rendered as the
Figure 3.13 A finite state automaton for a restricted subset of natural language
specific phones derived from the string γ (thereby allowing for insertions and deletions) when the preceding and succeeding sounds derive from α and β, respectively. Cohen and Mercer [50] have compiled a comprehensive grammar for American English. Regular languages are appropriate for the representation of carefully circumscribed subsets of natural language applicable to a particular limited domain of discourse. In Fig. 3.13 we have used ordinary vocabulary words for the terminal symbols. We can thus generate an English sentence by starting at node 1 and following any path to either node 5 or node 6, reading the associated labels as we traverse each edge. For obvious reasons, such a directed graph is often called a state-transition diagram of the grammar. For context-free rules, an approximation of a natural language may be obtained. For example, non-terminal symbols can be thought of as generalized parts of speech. In particular, the start symbol S represents the part of speech <sentence>. (The angle brackets are
Figure 3.14 The non-ergodic HMM used for speech recognition: (a) phonetic; (b) base form; (c) trigram
used to distinguish the word they enclose from the same vocabulary word.) The production rule <sentence> → <subject> <predicate> may be interpreted to mean that a sentence may have two components, called subject and predicate, which appear in that order. A grammar of English might also contain the rules <subject> → <adjective> <noun> and <predicate> → <verb> <adverb>. If we also include the rules <adjective> → white, <noun> → horses, <verb> → run, and <adverb> → fast, then we can produce the sentence "white horses run fast". Note that the absence of the angle brackets signifies terminal symbols which are, in this case, ordinary members of the English lexicon. If we now add further rules which invite the replacement of the names of the parts of speech by specific vocabulary words of that type, then the rules given above can be used to produce many sentences having the same grammatical structure, namely <adjective> <noun> <verb> <adverb>. The context-free rule discussed in Section 3.2.1 of the form S → aSb is, for obvious reasons, called a "center embedding" rule. This rule exists in ordinary English. Start with the sentence "The rat ate the cheese", which is easily derived from the context-free rules above. Now use the center embedding rule to add a modifier of "rat", giving the sentence "The rat the cat chased ate the cheese". Similarly, adding a modifier for "cat", we get an unusual but well-formed sentence, "The rat the cat the dog bit chased ate the cheese". Even more unwieldy sentences can be created by recursive application of center embedding. Stochastic grammars can also be used to represent ordinary syntax. This is usually done in the form of a Markov chain, as shown in Fig. 3.14c. The word order probability P(Wn | Wn−1, Wn−2) is equivalent to a set of production rules of the form p1: A1 → Wn−2 A2, p2: A2 → Wn−1 A3, and p3: A3 → Wn A4. We will return to a more detailed consideration of these ideas in Section 4.3.
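The correspondence between word-order probabilities and chained production rules can be made concrete in a few lines of code. The toy corpus below is invented for illustration; its maximum-likelihood trigram estimates play the role of the rule probabilities p1, p2, p3.

```python
from collections import defaultdict

# Trigram estimates as production-rule probabilities. Corpus is invented.
corpus = "white horses run fast . black horses run slowly . white horses walk fast .".split()

counts = defaultdict(lambda: defaultdict(int))
for w2, w1, w in zip(corpus, corpus[1:], corpus[2:]):
    counts[(w2, w1)][w] += 1          # context (w_{n-2}, w_{n-1}) -> next word

def p(w, w1, w2):
    """Maximum-likelihood estimate of P(w | w1, w2)."""
    total = sum(counts[(w2, w1)].values())
    return counts[(w2, w1)][w] / total if total else 0.0

def sentence_probability(words):
    """Product of trigram 'rule' probabilities, conditioning on the first two words."""
    prob = 1.0
    for w2, w1, w in zip(words, words[1:], words[2:]):
        prob *= p(w, w1, w2)
    return prob

assert p("run", "horses", "white") == 0.5   # "white horses" continues as run or walk
```

Each factor here corresponds to applying one rule of the form Ai → Wn Ai+1, so the product is exactly the derivation probability of (3.237) for the chain grammar.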
4 Syntactic Analysis

4.1 Deterministic Parsing Algorithms

A particular problem addressed by formal language theory that is directly relevant to our discussion is that of parsing sentences in a language. Specifically, given G and W ∈ VT∗, we wish to know whether or not W ∈ L(G) and, if so, by what sequences of rules S ⇒∗ W. Recall from Section 3.2.4 that these questions can be answered by different kinds of automata for languages in the different complexity classes of the Chomsky hierarchy. In particular, right-linear or regular languages can be analyzed by finite-state automata, and context-free languages by push-down automata. Unfortunately, the conceptual simplicity of these machines makes them inefficient analyzers of their respective languages in the most general cases. There are, however, optimally efficient parsers that can be implemented on more complex machines such as general-purpose computers or other Turing-equivalent models of computation. We shall consider the significance of this fact in Chapter 9.

4.1.1 The Dijkstra Algorithm for Regular Languages

The optimal general parser for regular languages is obtained by casting the parsing problem as an optimization problem. Suppose W ∈ VT∗, W = w1 w2 . . . wn. Each wj ∈ W corresponds to the interval from tj−1 to tj in the speech signal and has the cost C(wj), and the sentence W has total cost C(W) given by
C[W] = min_{V ∈ VT∗} Σ_{j=1}^{|W|} C[vj | tj−1, tj]   (4.1)
(see Fig. 4.1). Parsing symbol strings has no explicit notion of time. Only word order is required. Hence, the interval (tj−1, tj) can be replaced by the word order index, j. If we let C(vj) = 0 if and only if vj = wj and 1 otherwise, then C(W) = 0 if and only if W ∈ L(G). In the process of computing C(W), we will get S ⇒∗ W as a by-product.

Mathematical Models for Speech Technology. Stephen Levinson 2005 John Wiley & Sons, Ltd ISBN: 0-470-84407-8
Figure 4.1 Lexical segments of a speech signal
The solution to this problem can be obtained by a dynamic programming algorithm due to Dijkstra [65]. In the case of regular grammars, let ψj(B) be the prefix of length j of some W ∈ L(G) having minimum cost, and denote by φj(B) its cost. Initially ψ0(B) = λ for all B ∈ VN, and φ0(B) = 0 if and only if B = S, and ∞ otherwise. Then for 1 ≤ j ≤ |W| and all B ∈ VN,

φj(B) = min_{A→aB} { φj−1(A) + C[a | tj−1, tj] }   (4.2)

and

ψj(B) = ψj−1(A) ⊗ a,   (4.3)

where

a = arg min_{A→aB} { φj(B) }.   (4.4)

Then Ŵ = ψ|W|(λ) and C[Ŵ] = φ|W|(λ). For notational convenience we have assumed that λ ∈ VN. Note that (4.2) and (4.3) are similar to the Viterbi algorithm (3.7) and (3.8) except that the set over which the optimization occurs is different. Note also that in both algorithms the required number of operations is proportional to |W| · |VN|.
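With the 0/1 cost assignment described above, (4.2)–(4.4) reduce to a membership test that also recovers the nearest well-formed sentence. The following sketch uses a small invented grammar; the rule format and the null-symbol marker "~" are ours.

```python
import math

# DP parser of (4.2)-(4.4) with 0/1 word costs: C(v_j) = 0 iff v_j = w_j.
# A rule (A, a, B) means A -> aB; B = LAMBDA stands for a terminating rule A -> a.

LAMBDA = "~"
rules = [("S", "the", "N"), ("N", "rat", "V"), ("N", "cat", "V"),
         ("V", "ran", LAMBDA), ("V", "ate", LAMBDA)]
nonterminals = {"S", "N", "V", LAMBDA}

def parse(w):
    """Return (cost, best word string); cost 0 iff w is in L(G)."""
    phi = {B: (0.0 if B == "S" else math.inf) for B in nonterminals}
    psi = {B: [] for B in nonterminals}
    for wj in w:
        phi_new = {B: math.inf for B in nonterminals}
        psi_new = {B: [] for B in nonterminals}
        for A, a, B in rules:
            cost = phi[A] + (0.0 if a == wj else 1.0)   # (4.2)
            if cost < phi_new[B]:
                phi_new[B] = cost                        # minimize over rules A -> aB
                psi_new[B] = psi[A] + [a]                # (4.3): extend the best prefix
        phi, psi = phi_new, psi_new
    return phi[LAMBDA], " ".join(psi[LAMBDA])

cost, best = parse(["the", "rat", "ran"])
assert cost == 0.0 and best == "the rat ran"
```

Feeding an out-of-grammar word returns cost 1 together with the closest in-grammar sentence, which is the by-product derivation mentioned in the text.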
4.1.2 The Cocke–Kasami–Younger Algorithm for Context-free Languages

The context-free case is based on the general context-free parsing algorithm of Younger [338]; cf. [101]. There is an added complication in this case since the grammar must first be transformed into Chomsky normal form [133, pp. 51ff.] so that the rules are of either the form A → BC or A → a (see Section 3.2.1). Let ψij(A) be the string, α, spanning the ith to the jth word position in W of minimum cost such that A ⇒∗ α, and let φij(A) be its cost. Initially,

φii(A) = min_{A→a} { C[a | ti−1, ti] }   (4.5)

and

ψii(A) = a = arg min_{A→a} { φii(A) }   (4.6)

for 1 ≤ i ≤ |W|. All other values of φ and ψ are undefined at this point. Then for 1 ≤ i, j ≤ |W|,

φij(A) = min_{A→BC} min_{i≤l<j} { φil(B) · φl+1,j(C) }   (4.7)

… H(L(G)), the uncertainty in Ŵ. Thus the two curves are ordered as predicted by (6.71) at any fixed value of σ, which is inversely related to the signal-to-noise ratio.
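In the 0/1-cost case, the recursion (4.5)–(4.7) is the familiar boolean CKY membership test. The sketch below runs it on the CNF rules (3.230)–(3.235), together with A → a and B → b from (3.228b) and (3.228f); the grammar recognizes strings with equal numbers of a's and b's, as in (3.229).

```python
# Boolean CKY membership test, the 0/1-cost special case of (4.5)-(4.7).
unit = [("A", "a"), ("B", "b"), ("C1", "b"), ("C2", "a"), ("C3", "a"),
        ("C4", "b"), ("C5", "b"), ("C7", "a")]
binary = [("S", "C1", "A"), ("S", "C2", "B"), ("A", "C3", "S"), ("B", "C4", "S"),
          ("A", "C5", "C6"), ("C6", "A", "A"), ("B", "C7", "C8"), ("C8", "B", "B")]

def cky(w):
    n = len(w)
    # chart[i][j] = set of non-terminals deriving w[i..j] inclusive
    chart = [[set() for _ in range(n)] for _ in range(n)]
    for i, sym in enumerate(w):                       # (4.5): the diagonal
        chart[i][i] = {A for A, a in unit if a == sym}
    for span in range(2, n + 1):                      # (4.7): longer spans
        for i in range(n - span + 1):
            j = i + span - 1
            for l in range(i, j):                     # split point i <= l < j
                for A, B, C in binary:
                    if B in chart[i][l] and C in chart[l + 1][j]:
                        chart[i][j].add(A)
    return "S" in chart[0][n - 1]

assert cky("ab") and cky("aabb")
assert not cky("aab")
```

The chart-filling order mirrors the recursion exactly: short spans first, then every split of each longer span, so the running time is cubic in |W|.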
7 Automatic Speech Recognition and Constructive Theories of Language

The theories and methods described in Chapters 3, 4, and 5 can be combined and used to form a constructive theory of language and a technology for automatic speech recognition. We will consider two approaches to the "language engine", one integrated and the other modular. We will then briefly describe the way that our mathematical models can be applied to the problems of speech synthesis, language translation, language identification, and low-bit-rate speech communication.
7.1 Integrated Architectures

The first approach to the "language engine" is one in which several levels of linguistic structure are captured and compiled into a single monolithic model that can then be used to automatically transcribe speech into text. This model was introduced by Jelinek et al. [146] and has been refined over the past three decades. It is the state of the art in automatic speech recognition and the basis for most commercial speech recognition machines. The basic model is shown in Fig. 7.1, in which the dashed lines indicate that all representations of linguistic structure are analyzed by a single process. Acoustic phonetics and phonology are represented by an inventory of sub-word models. Typically there are several allophonic variants of each of the phones listed in Table 2.2. Each phone is represented by a three-state HMM of the type illustrated in Fig. 3.5, in which the Markov chain is non-ergodic and the observable processes are Gaussian mixtures. Allophonic variation is described by triphone models in which the acoustic properties of a given phone are a function of both the preceding and following phones. That is, each phone appears in as many different forms as are required to account for all of the phonetic contexts in which it occurs. The lexicon is simply a pronouncing dictionary in which each word is "spelled" in terms of the triphones of which it is made in citation form. Thus the word v is the sequence of
Figure 7.1 The integrated architecture for automatic speech recognition (from Rabiner [257])
phones

v = f1 f2 . . . fk,   (7.1)

where the ith phone, fi, is the particular allophone (triphone)

fi = (fi−1, fi, fi+1).   (7.2)
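The triphone "spelling" of (7.1)–(7.2) is just a sliding window over a word's phone string. A minimal sketch, with an invented lexicon entry and an invented word-boundary marker "#":

```python
# Triphone spelling as in (7.1)-(7.2): each phone becomes the context-dependent
# unit (left, phone, right). The lexicon and the "#" boundary marker are invented.

def triphones(phones):
    padded = ["#"] + phones + ["#"]          # word-boundary context
    return [(padded[i - 1], padded[i], padded[i + 1])
            for i in range(1, len(padded) - 1)]

lexicon = {"cat": ["k", "ae", "t"]}
model = triphones(lexicon["cat"])
assert model == [("#", "k", "ae"), ("k", "ae", "t"), ("ae", "t", "#")]
```

Concatenating the three-state HMMs indexed by these tuples yields the word model used in the lattice below.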
The intuition behind the triphone model is that each phonetic unit has a target articulatory position characterized by the second state and transitions into and out of it characterized by the first and third states, respectively. The initial transition of phone fi is modified by the preceding phone, fi−1. Similarly, the final transition is affected by fi+1. The idea is illustrated in Fig. 7.2. In the model of Fig. 7.1, the only notion of syntax is that of word order as specified by an n-gram probability. If the sentence, W, is the word sequence

W = v1 v2 . . . vK,
(7.3)
where vk is some entry (word) in the lexicon of the form of (7.1), then word order is specified by p(vk | vk−1, vk−2, . . . , vk−n). If an n-gram has probability 0, then the corresponding word sequence is not allowed. All of the information described above may be compiled into a single, large lattice of the form shown in Figs. 7.3 and 7.4. The large oval states represent words, and the small
Figure 7.2 The triphone model for fi
Figure 7.3 The HMM representation of the phonology of the production rule A → W B

Figure 7.4 The integrated HMM
circular states the quasi-stationary regions of the phones. Decoding the sentence, W, is the process of finding the path through the lattice of maximum likelihood. The dynamic programming algorithm for this process is just

φt(j, v) = max_{j−1 ≤ i ≤ j} { φt−1(i, v) aij^(v) bj^(v)(Ot) },   (7.4)
which accounts for state transitions and observations within the word v. The model for the word v is just the concatenation of all the triphone models listed for the word v in the lexicon. The transitions between words are evaluated according to

φt(1, w) = max_{p(w|v)} { φt−1(1, w), φt−1(N, v) p(w|v) }.   (7.5)
Note that, for convenience, we have written (7.5) to maximize only over bigram probabilities. The algorithm can easily be generalized to account for n-grams. The trellis can also be searched by means of a "best-first" algorithm according to which

φt(q) = φt−1(p) apq^(v) bq^(v)(Ot) L̂,   (7.6)

where the priority queue is initialized according to

φ1(1) = b1^(v)(O1) L̂^(T−1).   (7.7)

L̂ is the heuristic function, p and q are any two states in the trellis that are connected by a transition, and the priority queue, φ, is arranged so that

φti1(pi1) > φti2(pi2) > · · · > φtiN(piN).   (7.8)

The value of L̂ is usually taken to be

L̂t = E{ L(Ot, Ot+1, . . . , OT | λ) }.   (7.9)
In light of (7.8), (7.6) is interpreted to mean that p is the first entry of the queue and is extended to state q, the value of which is then inserted into the queue in the proper position to maintain the ordering of (7.8). This algorithm will yield the same result as that of (7.4). In actual practice the trellis is unmanageably large. If there are 30 000 words in the lexicon, each containing five phones and thus requiring 15 states, then for each frame, Ot, there are of the order of 10^5 nodes in the lattice. If a typical sentence is 5 seconds in duration, then, at 100 frames per second, there are 10^7 nodes in the lattice, each one of which requires the computation of either (7.4) or (7.6). This cannot be accomplished within the constraints of real time. The solution to the real-time problem is to heuristically prune the lattice. There are two methods for doing so. The first method is usually called "beam search", according to which (7.4) is modified to allow only "likely" states by thresholding φt(j, v) according to

φt(j, v) = { φt(j, v),   if φt(j, v) ≥ ε φmax,
           { 0,          otherwise.              (7.10)

According to (7.10), only those nodes of the lattice are evaluated whose likelihood is at least some small factor, ε, times the maximum value.
The best-first search of (7.6) is pruned by limiting the size, N, of the priority queue of (7.8). As a result only the N best nodes will be explored and less likely ones will be dropped from the queue. These heuristics may induce search errors. That is, at some time t, the heuristic may terminate the globally optimal state sequence because it is locally poor. Empirical studies are required to set the values of either ε or N to achieve the desired balance between computation time and rate of search errors.
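The beam criterion of (7.10), applied to one column of the trellis, can be sketched in a few lines; the state names and score values below are invented.

```python
# Beam pruning of one trellis column as in (7.10): keep only nodes whose
# likelihood is within a factor eps of the column maximum. Values are invented.

def beam_prune(scores, eps):
    """Zero out scores below eps * max(scores)."""
    threshold = eps * max(scores.values())
    return {state: (s if s >= threshold else 0.0) for state, s in scores.items()}

column = {"q1": 0.50, "q2": 0.04, "q3": 0.20}
pruned = beam_prune(column, eps=0.1)
assert pruned == {"q1": 0.50, "q2": 0.0, "q3": 0.20}
```

Zeroed states are simply never extended at the next frame, which is what makes the trade-off between ε and the search-error rate an empirical one.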
7.2 Modular Architectures

The modular architecture is based on a completely different model of the "language engine" than is used in the integrated architecture. The modular design uses a separate representation of each level of linguistic structure and analyzes each level independently but in sequence. This design is shown in Fig. 7.5, from which we see that the phonetic and phonological structure is analyzed first with respect to the hidden semi-Markov model of Fig. 7.6, yielding a phonetic transcription. The phone sequence is then analyzed to determine the identities and locations of the words, a process called lexical access, which produces a word lattice. Finally, the syntactic analysis is performed by means of an asynchronous parsing algorithm.

7.2.1 Acoustic-phonetic Transcription

Since each state of the acoustic-phonetic model corresponds to a single phone, a phonetic transcription may be obtained by finding the optimal state sequence using the dynamic programming algorithm

φt(j) = max_{1≤i≤N} max_{τ} { φt−τ(i) aij dij(τ) Π_{θ=0}^{τ−1} bij(Ot−θ) }.   (7.11)

… u10 = F.   (8.9)
An element of U can be computed from the values of other elements, for example, flight time, Tf , determined by point of origin, O, destination, D, arrival time, Ta , and departure
Table 8.2 Elements of the task model

Element   Symbol   Definition
u1        D        Destination city
u2        M        Meals served
u3        Dw       Day of the week
u4        Td       Departure time
u5        Ta       Arrival time
u6        Nf       Flight number
u7        C        Flight class
u8        A        Aircraft type
u9        Ns       Number of stops
u10       F        Fare
u11       Nt       Number of tickets
u12       Np       Telephone number
u13       P        Method of payment
u14       Tf       Elapsed (flight) time
u15       O        Flight origin city
time, Td. Origin and destination supply time zone information, while arrival and departure time give elapsed time. We say, then, that

[u1 = D, u4 = Td, u5 = Ta, u15 = O] =>⊘ u14 = Tf.   (8.10)
Finally, an element of U can be computed from user-supplied information which is not part of a flight description and is not stored as such. For instance, a departure date uniquely specifies a day of the week, Dw, by

[nm nd ny] =>F u3 = Dw,   (8.11)
where nm is the month, nd is the date, ny is the year, and F is a perpetual calendar function. We can now give an example of a complete action. Suppose W was a request for the fare of a previously selected flight. Semantic decoding would enable action no. 14:

a14 = (How much fare, u10 = F ≠ 0, K = 23, u10 = F).   (8.12)
That is, on a fare request, if u10 is some non-zero value, set the response code, K, to 23 and leave u10 unchanged. A value of F = 0 would indicate that a flight had not been selected as illustrated in (8.12), and a different response code would be issued, causing a message so indicating to be generated. The complete ensemble of actions which the system needs to perform its task is composed of 37 4-tuples similar to that of (8.6). This brings us to consideration of the response generation procedure. Responses in the form of English sentences are generated by the context-free grammar, Gs : Gs = [VN , VT , σ0 , P ],
(8.13)
where VN is the set of non-terminal symbols, VT is the set of terminal symbols (a vocabulary of 191 English words), σ0 is the start symbol, and P the set of production rules. The production rules are of two forms:

σ0 → γ ∈ (VN ∪ VT)∗   (8.14)

B → b;  B ∈ VN, b ∈ VT or b = λ,   (8.15)
and where λ is the null symbol. There are 30 productions of the form of (8.14) in P . Each one specifies the form of a specific reply and is designated by a response code, K. There are several hundred productions of the type of (8.15). Their purpose is to insert specific information into the skeleton of a message derived from a production of the other kind. As an example, consider an input requesting to know the number of stops on a specific flight. The appropriate response code is K = 26 and the production rule to which it corresponds is σ0 → THIS FLIGHT MAKES B1 B2 . If u9 = Ns = 2, then the following productions will be applied: B1 → TWO, B2 → STOPS, resulting in the output string of symbols S = THIS FLIGHT MAKES TWO STOPS.
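The two-level production scheme of (8.14)–(8.15) behaves like template filling. A minimal sketch, using only the stops template quoted above and an invented number-word table:

```python
# Response generation in the style of (8.14)-(8.15): a response code K selects
# a sentence skeleton, and slot productions Bi -> b insert specific values.
# Only the stops template from the text is included; number_words is invented.

templates = {26: "THIS FLIGHT MAKES {B1} {B2}"}
number_words = {0: "NO", 1: "ONE", 2: "TWO"}

def generate(code, n_stops):
    """Fill the skeleton for response code `code` from the task-model value."""
    b1 = number_words[n_stops]
    b2 = "STOP" if n_stops == 1 else "STOPS"
    return templates[code].format(B1=b1, B2=b2)

assert generate(26, 2) == "THIS FLIGHT MAKES TWO STOPS"
```

Each (8.14)-style production becomes one skeleton string, and each (8.15)-style production becomes one slot assignment, so the full 30-template, several-hundred-rule system is just a larger version of these two tables.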
In the actual implementation, S is represented in the form of a string of ASCII characters. This is the form accepted by the text-to-speech synthesizer [236, 237] which produces an intelligible speech signal from S by rule.

8.2.2 Error Recovery

The components of the system described in Section 8.2.1 are integrated under a formal control structure shown in the flow chart of Fig. 8.3. It has two modes of operation, a normal mode and one for recovery from some error condition. The former is quite straightforward and is best illustrated by a complete example of the system operation. Consider the input sentence Ŵ = I WANT TO GO TO BOSTON ON TUESDAY MORNING. The state diagram of the sentence is Fig. 8.4, from which we immediately see that the state sequence, q, is q = (1, 2, 3, 7, 33, 11, 12, 13, 14, 15). Four state–word pairs from S apply:

(33, GO) = α1,
(12, BOSTON) = α2,
(14, TUESDAY) = α3,
(15, MORNING) = α5.
Figure 8.3 The error recovery algorithm
The actions invoked are the following:

α1 = (GO, U, 0, U0),
α2 = (BOSTON, U0, 0, U1 ← U0 + u1 ← D),
α3 = (TUESDAY, U1, 0, U2 ← U1 + u3 ← Dw),
α5 = (MORNING, U2, 1, U3 ← U2 + u4 ← Td; U3 =>L C).
Figure 8.4 State sequence for semantic analysis
Table 8.3 State of the task model after processing the input sentence of Fig. 8.4

u1  = 1      Boston
u2  = 0      no meals
u3  = 2      Tuesday
u4  = 1000   ten a.m.
u5  = 1047   ten forty seven a.m.
u6  = 3      flight number three
u7  = 1      coach (by default)
u8  = 208    DC-9
u9  = 0      no stops
u10 = 56     $56.00
u11 = 1      1 seat (by default)
u12 = 0      phone number unknown
u13 = 0      method of payment unknown
u14 = 0      flight time not calculated
u15 = 7      New York (by default)
Action α1 causes the task model in any state to be initialized to state U0 and no response to be made. Next, α2 changes the state from U0 to U1 by fixing the destination; no response is generated. Similarly, α3 causes the day of the week to be defined. Finally, α5 fixes an approximate hour of departure permitting a database lookup which gives a complete flight specification. The response code is set to 1. The state of the task model after the lookup is shown in Table 8.3. The response code, K = 1, causes application of the production rule σ0 → FLIGHT NUMBER B1 B2 LEAVES B3 AT B4 B5 B6 B7 ARRIVES IN B8 AT B9 B10 B11 B12 . From the meaning corresponding to u6 , we have B1 → λ, B2 → THREE. From u15 we get B3 → NEW YORK.
From u4,

B4 → λ,
B5 → λ,
B6 → TEN,
B7 → A.M.

From u1, B8 → BOSTON. And finally, from u5,

B9 → TEN,
B10 → FORTY,
B11 → SEVEN,
B12 → A.M.
Thus, S is FLIGHT NUMBER THREE LEAVES NEW YORK AT TEN A.M. ARRIVES IN BOSTON AT TEN FORTY SEVEN A.M. The voice response unit then utters the sentence. An error condition occurs, putting the system in error recovery mode, for one of two reasons. Either the total distance for one or more content words exceeds a preset rejection threshold, or, due either to an error on the part of the user of the system or a catastrophic failure of the acoustic/syntactic processing, the (q, Wˆ ) pair is inconsistent with the current state of the task model. The error recovery mode is essentially an elaborate heuristic, the purpose of which is to prevent communication from collapsing in the presence of ambiguity. The procedure is shown schematically in the flow chart of Fig. 8.3. The effect of this heuristic is to formulate a response to the input responsible for the error condition which will elicit from the user whatever information is required to resolve the ambiguity. The difficulty of this task is somewhat reduced by the fact that, by construction of the grammar, the appearance of a syntactic ambiguity is impossible. The decision blocks in the flow chart choose the sentential form of the response (i.e. production rules of the form (8.14)) while the processing blocks select the appropriate terminal symbols using rules of the form (8.15). Some examples of operation in this mode are given below. By detaching the semantic processor, we can measure the accuracy with which the syntax-directed level-building algorithm can transcribe sentences. For this purpose, a set of 50 sentences using every vocabulary word and every state transition was constructed. These sentences were then spoken over dialed-up telephone lines by four speakers, two male and two female, at an average rate of 171 words/min. The test sentences ranged in length from 4 to 17 words. 
Two sentences containing telephone numbers were only weakly syntactically constrained, while others requesting or giving flight information were quite stylized. The utterances were band-pass-filtered from 200 to 3200 Hz, digitized at 6.67 kHz sampling rate and stored on disk files which were subsequently input to the syntax-directed level-building algorithm. The results of this test are summarized in Table 8.4. In order to keep response time to a minimum, all on-line tests were performed in the speaker-dependent mode. Even under these conditions it takes about 1 minute to get a
Table 8.4 Effects of syntax on speech recognition

                                    Speaker-trained         Speaker-independent
Speaker       Total      Total      Correct     Correct     Correct     Correct
(sex)         sentences  words      sentences   words       sentences   words
S.E.L. (M)        59       478          56        470           55        470
A.E.R. (M)        50       424          46        417           42        413
K.L.S. (F)        50       424          42        393           43        413
C.E.S. (F)        50       424          35        349           41        408
Totals           209      1750         179       1629          181       1704
                                      (86%)      (93%)        (87%)      (97%)
response to a sentence 2 seconds in length. To determine the correct response rate of the system, it was evaluated on a corpus of 351 sentences spoken by five male and two female speakers. After syntactic processing, 21 sentences contained word recognition errors. However, after semantic analysis 15 of these were correctly understood, despite the errors, and correct responses were made. In the remaining 6 cases, a semantic anomaly was detected, causing the machine to pose a question to the user who could then correct the problem. Thus we achieved a recognition accuracy of more than 98% and, when recognition errors occurred, communication did not break down. These two tests give a statistical characterization of the system as one which is reasonably robust and fails gracefully. It is important, however, to look at some details of performance. For this purpose we constructed several 'problems' for the system to 'solve', each of which was supposed to reveal its ability to perform specific functions. Some sample transactions are listed below. The first is the simplest kind of interchange, a simple flight request.

Machine: THIS IS THE BELL LABORATORIES FLIGHT INFORMATION SYSTEM MAY I HELP YOU
User: I WOULD LIKE TO MAKE A RESERVATION PLEASE
M: PLEASE SPECIFY YOUR TRAVEL PLANS
U: I WANT TO GO TO BOSTON ON TUESDAY MORNING
M: FLIGHT NUMBER THREE LEAVES NEW YORK AT TEN A.M. ARRIVES IN BOSTON AT TEN FORTY SEVEN A.M.
U: I WANT TO LEAVE AT ELEVEN O'CLOCK
M: FLIGHT NUMBER FOUR LEAVES NEW YORK AT ELEVEN A.M. ARRIVES IN BOSTON AT ELEVEN FORTY FIVE A.M.
U: I WILL TAKE FLIGHT NUMBER FOUR
M: YOUR RESERVATION ON FLIGHT NUMBER FOUR TO BOSTON ON TUESDAY IS CONFIRMED THANK YOU

A slightly more interesting conversation is the following:

M:
THIS IS THE BELL LABORATORIES FLIGHT INFORMATION SYSTEM MAY I HELP YOU
Automatic Speech Understanding and Semantics
U: I WOULD LIKE TO GO TO CHICAGO
M: PLEASE SPECIFY THE DAY OF YOUR DEPARTURE
U: I WANT TO LEAVE AT FOUR P.M.
M: PLEASE SPECIFY THE DAY OF YOUR DEPARTURE
U: I WANT TO DEPART ON THE TWO FIVE MAY
M: FLIGHT NUMBER NINE TWO LEAVES NEW YORK AT FOUR P.M. ARRIVES IN CHICAGO AT FIVE TWENTY FIVE P.M.
U: WHAT PLANE IS ON THE FLIGHT
M: THE AIRCRAFT IS A BOEING SEVEN TWENTY SEVEN
U: HOW MUCH IS THE FARE
M: THE ROUND TRIP COACH FARE IS ONE HUNDRED FORTY TWO DOLLARS
U: I WANT A FIRST CLASS SEAT
M: ONE SEAT THANK YOU
U: WHAT IS THE FARE
M: THE ROUND TRIP FIRST CLASS FARE IS ONE HUNDRED EIGHTY FOUR DOLLARS
U: PLEASE REPEAT THE ARRIVAL TIME
M: THE ARRIVAL TIME IS FIVE TWENTY FIVE P.M.
U: I WILL TAKE THE FLIGHT
M: YOUR RESERVATION ON FLIGHT NUMBER NINE TWO TO CHICAGO ON FRIDAY IS CONFIRMED THANK YOU
The following set of user inputs was carefully chosen to give the machine some difficulty:

M: THIS IS THE BELL LABORATORIES FLIGHT INFORMATION SYSTEM MAY I HELP YOU
U: I WANT ONE FIRST CLASS SEAT ON FLIGHT NUMBER THREE ONE TO DENVER ON SUNDAY
M: I AM SORRY THE FLIGHT IS NOT AVAILABLE ON SUNDAY
U: I WANT TO GO ON SATURDAY
M: FLIGHT NUMBER THREE ONE LEAVES NEW YORK AT NINE FORTY FIVE A.M. ARRIVES IN DENVER AT ONE TEN P.M.
U: WHAT IS THE FLIGHT TIME
M: THE FLIGHT TIME IS FIVE HOURS AND TWENTY FIVE MINUTES
U: HOW MANY STOPS ARE THERE ON THE FLIGHT
M: THIS FLIGHT MAKES ONE STOP
U: I WANT A NON-STOP FLIGHT
M: I AM SORRY THERE ARE NO MORNING NON-STOP FLIGHTS AVAILABLE
U: I WANT TO GO AT FOUR P.M.
M: FLIGHT NUMBER ONE SIX LEAVES NEW YORK AT FOUR FIFTY P.M. ARRIVES IN DENVER AT SEVEN P.M.
U: WHAT STOPS ARE THERE ON THE FLIGHT
M: THIS FLIGHT MAKES NO STOPS
U: IS A MEAL SERVED ON THE FLIGHT
M: DINNER IS SERVED
Mathematical Models for Speech Technology
U: I WILL TAKE FLIGHT SIX FIVE
M: I AM SORRY THERE IS NO FLIGHT SIX FIVE TO DENVER
U: I WILL TAKE FLIGHT ONE SIX
M: YOUR RESERVATION ON FLIGHT NUMBER ONE SIX TO DENVER ON SATURDAY IS CONFIRMED THANK YOU
Questions need not occur in a logical sequence, as illustrated below:

M: THIS IS THE BELL LABORATORIES FLIGHT INFORMATION SYSTEM MAY I HELP YOU
U: AT WHAT TIME DOES FLIGHT FOUR FOUR TO LOS ANGELES DEPART
M: THE DEPARTURE TIME IS NINE P.M.
U: WHAT IS THE FARE FROM NEW YORK TO SEATTLE ON SUNDAY MORNING
M: THE ROUND TRIP COACH FARE IS THREE HUNDRED EIGHTY EIGHT DOLLARS
Finally, we observe that the machine can, to some extent, deal with non-sequiturs:

M: THIS IS THE BELL LABORATORIES FLIGHT INFORMATION SYSTEM MAY I HELP YOU
U: I WANT SOME INFORMATION
M: WHAT DO YOU WANT TO KNOW
U: I WILL TAKE THE FLIGHT
M: WHAT DID YOU SAY
U: IS A MEAL SERVED ON THE FLIGHT
M: FOR WHAT FLIGHT ARE YOU REQUESTING INFORMATION
From the above, the reader can easily observe that the dialogs which the system can sustain are neither highly natural nor sophisticated. The fact remains that speech communication, however stilted, has been achieved. Two results of this achievement are significant. First, when one interacts with a system that communicates in even a vaguely natural way, one's perception of the machine changes. Conventional notions of speech recognition accuracy and algorithms for data retrieval assume a secondary importance as attention is sharply focused on the transmission of information. It is quite clear that the state of the art in speech recognition is advanced enough to support research in complete human–machine communication systems. Second, the synergistic effect of integrating several crude components into an interactive system is to produce a machine with greater capacities than might otherwise be expected. As the sophistication of the components increases and as their interaction becomes more complex, their behavior will at some point become a surprise even to their builders. In this last regard, one point should be emphasized. Unlike most systems which are reputed to be intelligent, the response of this one to a given input cannot be predicted, nor can a particular type of behavior be produced on demand. In this sense, the system can create surprises, even for its constructors.
8.3 The Semantics of Natural Language

The method of semantic analysis for limited domains discussed in Section 8.2 does not truly capture the semantics of natural language. The meanings of words are restricted to their use as information in the database; the general common-sense meanings are not present. Thus “Boston” is merely a page in the Official Airline Guide, not a city nor any of the things that we ordinarily associate with the notion of “city”. The same is true of the semantics of “going” or “time”. Formalizing the semantics of natural language in all of its generality is a difficult problem to which there is presently no comprehensive solution. Most research on the subject rests on two principles. The first is that semantics depends on syntax. The second is that semantic analysis must generate a symbolic representation of the physical world that allows for predictions of reality by reasoning, is expressive enough to extend to all aspects of reality, and allows for different syntactic structures to generate the same meaning.

Syntax is connected to semantics in two principal ways. First, structure-building rules of the form A −→ BC, A, B, C ∈ VN, provide an abstraction of meaning. For example, the abstract meaning of S −→ NP VP is that a sentence has an actor (NP), an action, and an object acted upon (VP). Second, the syntactico-semantic relationship is captured by lexical assignment rules of the form A −→ w, A ∈ VN, w ∈ VT. Thus the meaning of the word, w, is the real concept to which it refers, and its syntactic role is determined by the part of speech represented by A. By applying lexical semantics to the abstract structure, a specific meaning is obtained.

There are many variations on this idea, all of which fall into two categories: graphical methods and logical methods. In graphical methods, the nodes of a graph represent lexical semantics and the directed, labeled edges of the graph express relationships between the nodes they connect.
Making the edges directed allows the notion of ordering the nodes in time, space or other scales. Logical methods rest on the idea that sentences have meaning when they make true assertions about the world. The truth is verified by formal logical operations. If a sentence can be shown to be true, then its meaning can be derived with the help of syntax.

8.3.1 Shallow Semantics and Mutual Information

Although it does not address the question of semantic structure or the means by which words are related to reality, information theory does give a means of capturing word sense. That is, words have different connotations when they are used in conjunction with other words. The word “bank” refers to different things when we speak of a “river bank” and a financial institution. The deep semantics of these usages is the existence of a common meaning, if there is one. In the integrated architecture, semantics is only weakly represented by mutual information. That is, the word pair v1 v2 has mutual information
I(v1, v2) = log [ p(v1, v2) / (p(v1) p(v2)) ].                  (8.16)
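As an aside, the quantity in (8.16) is straightforward to estimate from corpus counts. The following sketch is illustrative only: the toy corpus and the smoothing-free maximum-likelihood probability estimates are my own choices, not the author's.

```python
import math
from collections import Counter

def mutual_information(corpus):
    """Estimate I(v1, v2) = log[p(v1, v2) / (p(v1) p(v2))] for adjacent
    word pairs, using maximum-likelihood (count-based) probabilities."""
    words = corpus.split()
    unigrams = Counter(words)
    bigrams = Counter(zip(words, words[1:]))
    n_uni, n_bi = len(words), len(words) - 1
    mi = {}
    for (v1, v2), c in bigrams.items():
        p_joint = c / n_bi
        p1, p2 = unigrams[v1] / n_uni, unigrams[v2] / n_uni
        mi[(v1, v2)] = math.log(p_joint / (p1 * p2))
    return mi

# Word pairs that co-occur more often than chance get positive scores.
mi = mutual_information("the river bank flooded while the bank closed the river rose")
print(sorted(mi.items(), key=lambda kv: -kv[1])[:3])
```

In a real system the counts would of course come from a large corpus rather than a single sentence.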
[Figure 8.5 A typical semantic network: the node “dog” is linked to “animal” by a subset edge; “animal” has parts “leg” and has quality “alive”; “dog” has action “bark” and has property “furry”.]
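The structure of Fig. 8.5 can be captured in a few lines of code. In the sketch below the node and edge names are my own rendering of the figure, and the rule that properties are inherited along subset edges is an illustrative assumption, not part of the original figure.

```python
# A toy semantic net in the spirit of Fig. 8.5: each node carries a list
# of (edge label, target) pairs.
net = {
    "dog":    [("subset", "animal"), ("action", "bark"),
               ("has_property", "furry")],
    "animal": [("has_parts", "leg"), ("has_quality", "alive")],
}

def properties(node, visited=None):
    """Collect all edges reachable from a node, inheriting along
    'subset' links (so a dog is alive because an animal is)."""
    visited = visited if visited is not None else set()
    result = []
    for label, target in net.get(node, []):
        if label == "subset" and target not in visited:
            visited.add(target)
            result += properties(target, visited)
        else:
            result.append((label, target))
    return result

print(properties("dog"))
```

A query against the net thus amounts to graph traversal, which is the essential computational content of the graphical methods discussed in this section.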
Values of I(v1, v2) may be used to bias the n-gram probabilities. This is semantic information in the sense that two words that appear together frequently may have similar or complementary meanings.

8.3.2 Graphical Methods

Most graphical interpretations of semantics, of which there are numerous variants, can be traced back to the semantic net of Quillian [254]. In semantic nets, the relationships that label the edges are those of subset, quality, property, conjunction, disjunction, negation, instance, etc. The method is best explained by the diagram of Fig. 8.5 which represents the concept of “dog”. The unstated assumption of this approach is that the required fidelity and expressiveness of this model can be achieved simply by exhaustively and laboriously making graphs of all the many objects in the world and connecting them appropriately. Unfortunately, to date, only toy examples have been implemented to demonstrate the principle.

8.3.3 Formal Logical Models of Semantics

Logical methods of semantic analysis rely on two types of formal logic, the propositional logic and the first-order predicate calculus, the latter being an extension of the former. An important motivation for the development of mathematical logic was to address questions about the foundations of mathematics and particularly the nature of mathematical truth. Although mathematical truth is not the same as psychological truth, there is a strong intuitive sense that there is a close relationship between logical reasoning and how our minds know the “truth” about our quotidian existence and how that knowledge is expressed in natural language. It is this intuition that we will examine now. Then, in Chapter 9, we will return to a consideration of the consequences of formal logic in the development of mathematical models of cognition.
If there is to be a mathematical model of the general semantics of natural language – as opposed to the highly circumscribed domains of discourse exemplified by the method of
Section 8.2 – it must include a complete mental representation of physical reality. Formal logic is well suited to the task in that it has the following desirable properties. First, it provides a verifiable model, the ability of which to represent reality can be empirically evaluated. It allows for a canonical meaning that can be expressed in different syntactic structures. It includes an inference procedure whereby conclusions can be drawn about specific or related ideas and events. Finally, it is expressive enough to encompass a complete model of the world. A brief description of formal logic will suffice to demonstrate how a logical formulation of semantics displays these desirable properties.

We begin with propositional logic, which is concerned with the truth of statements called predicates. Thus in this formalism, there are two constants, T for “true” and F for “false”. There are arbitrarily many predicates, P, Q, R, ..., each of which is either true or false. The predicates are formed according to the following syntax:

atom −→ T
atom −→ F
atom −→ P
atom −→ Q                                                       (8.17)
...
sentence −→ atom
sentence −→ complexsentence
complexsentence −→ ( sentence )
complexsentence −→ sentence op sentence
complexsentence −→ ¬ sentence
op −→ ∧ (logical “and”)
op −→ ∨ (logical “or”)
op −→ ¬ (negation)
op −→ ⇒ (implication)
op −→ ⇐⇒ (if and only if)

Predicates joined by logical operators are evaluated according to the operator precedence ordering ¬, ∧, ∨, ⇒, ⇐⇒ unless enclosed in parentheses, in which case the parenthetical relations must be resolved first. Thus the predicate

¬P ∨ Q ∧ R ⇒ S                                                  (8.18)

is equivalent to

((¬P) ∨ (Q ∧ R)) ⇒ S.                                           (8.19)

Given truth values for P, Q, R, S, the predicate (8.19) can be evaluated as either T or F.
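The evaluation of (8.19) under given truth values can be mechanized directly. The sketch below hand-translates the predicate into Boolean operations; the encoding of implication as ¬a ∨ b is the standard material implication, and the direct translation (rather than a parser for grammar (8.17)) is a simplification of mine.

```python
from itertools import product

def implies(a, b):
    # Material implication: a => b is false only when a is true and b is false.
    return (not a) or b

def formula(P, Q, R, S):
    # ((¬P) ∨ (Q ∧ R)) ⇒ S, i.e. predicate (8.19)
    return implies((not P) or (Q and R), S)

# Tabulate the predicate over all 16 truth assignments.
for P, Q, R, S in product([True, False], repeat=4):
    print(P, Q, R, S, "->", formula(P, Q, R, S))
```

The table makes the precedence rules concrete: whenever the antecedent (¬P) ∨ (Q ∧ R) holds and S is false, the whole predicate is false.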
The propositional logic of (8.17) is not sufficiently rich for a general model of semantics. We can, however, obtain a sufficiently expressive model called the first-order predicate calculus by augmenting (8.17) with additional operators and constants, variables, quantifiers and functions. To accommodate these additions we use a similar syntax:

formula −→ atom                                                 (8.20)
formula −→ formula op formula
formula −→ quant var formula
formula −→ ¬ formula
atom −→ pred (term)
term −→ function (term)
term −→ const
term −→ var
op −→ = (equality)
quant −→ ∀ (universal, i.e. “for all”)
quant −→ ∃ (existential, i.e. “there exists”)
quant −→ ∃! (unique existential)
const −→ A
const −→ B
const −→ C
...
var −→ x
var −→ y
var −→ z
...
function −→ F
function −→ G
function −→ H
...

All other symbols are as defined in (8.17). Notice that in (8.20) both functions and predicates have arguments. A predicate can have a null argument, in which case it is as defined in the propositional logic. Predicates, functions and formulas all take values of either T or F.
As noted earlier, a desirable property of a semantic model is the capacity for inference or reasoning. Both propositional logic and first-order predicate calculus admit of formal procedures for inference. The following rules of inference allow the evaluation of complex formulae. If P is true and (P ⇒ Q) is true, then Q is true; this is the formal expression of “if–then” reasoning. If P1 ∧ P2 ∧ · · · ∧ Pn is true, then Pi is true for i = 1, 2, 3, ..., n. Conversely, if Pi is true for i = 1, 2, ..., n, then P1 ∧ P2 ∧ · · · ∧ Pn is true. If Pi is true for some i, then the disjunction of any subset {Pi | 1 ≤ i ≤ n} is true. Double negation means that ¬(¬P) ⇐⇒ P is always true for any P. Finally, one can use the notion of contradiction in the resolution rule: if ¬P ⇒ Q is true and Q ⇒ R is true, then ¬P ⇒ R is true. In general, these rules of inference are transitive, so that we can use chains of the form P0 ⇒ P1 ⇒ · · · ⇒ Pn. In Section 8.3.4 we will see the way inference is used in semantic analysis.

Of primary importance, however, is the expressiveness of the first-order predicate calculus. The foregoing discussion has been entirely abstract. We now must be specific and consider a particular ontology in order to apply the abstraction. That is, we need to choose a set of predicates and functions that will allow us to symbolically represent nearly all aspects of reality in natural language. We start with lexical semantics, which is just the meaning of isolated words or, alternatively, a mapping from a word to the object, action or idea to which it refers. For example, the predicate book(x) is true if and only if the variable x is, in reality, a book. Similarly, the verb “give” is represented by the function give(P, Q, R), where P = object(x), Q = donor(y) and R = recipient(z). The function give(P, Q, R) is defined to mean that the object x is transferred from the donor, y, to the recipient, z.
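The if–then rule and its transitive chaining described above can be sketched as a simple forward-chaining loop. The encoding of propositions as strings and of implications as pairs is my own illustrative choice.

```python
def forward_chain(facts, implications):
    """Repeatedly apply modus ponens: if P is true and (P => Q) is known,
    conclude Q. 'implications' is a list of (premise, conclusion) pairs."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for p, q in implications:
            if p in facts and q not in facts:
                facts.add(q)
                changed = True
    return facts

# A chain P0 => P1 => P2: from P0 alone, P1 and P2 follow transitively.
print(forward_chain({"P0"}, [("P0", "P1"), ("P1", "P2")]))
```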
Also included amongst the necessary predicates are those that represent time by means of verb tense and aspect (i.e. event time relative to message time). In addition, there should be ways to indicate beliefs and imagination as putative but not necessarily real entities. An example of the use of the first-order predicate calculus to express natural language is the following. Consider the sentence “Every dog has his day”, rendered logically as follows:

∀d dog(d) ⇒ ∃a day(a) ∧ owns(d, a) ∧ has(d, a).                 (8.21)
A direct translation of (8.21) is: For all d, where d is a dog, it is the case that there exists a day, a, and d owns a and the function has(d, a) is true, that is, d possesses a. Two issues are immediately apparent. First, the symbolic representation implicit in the examples given above does not include any method for making the mapping between symbol and referent. The symbols are utterly abstract. We will consider this problem further in Chapter 10. Second, as a purely practical matter, in order to make this method expressive, we need to exhaustively enumerate all elements of the ontology of common reality and devise a predicate or function for each in a way that allows for consistent verifiability. If we construe the ontology in a more circumscribed way, we are back to limited domain semantics and the logic might just as well be replaced by a finite-state machine. The CYC project of Lenat [177] is an attempt to codify the common-sense knowledge of lexical semantics. The project has met with dubious success for written language only. In Chapter 10, we shall also suggest a means to avoid the problem of exhaustive enumeration.
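A formula like (8.21) can be checked mechanically over a small finite domain, which makes the quantifier semantics concrete. In the sketch below the entities and relations are invented for illustration, and the conjuncts owns(d, a) and has(d, a) are folded into a single owns relation for brevity.

```python
# A toy finite model for (8.21). The domain and relations are invented.
entities = {"fido", "rex", "monday", "tuesday"}
dog = {"fido", "rex"}
day = {"monday", "tuesday"}
owns = {("fido", "monday"), ("rex", "tuesday")}

def every_dog_has_his_day(entities, dog, day, owns):
    # ∀d dog(d) ⇒ ∃a day(a) ∧ owns(d, a)
    return all((d not in dog) or
               any(a in day and (d, a) in owns for a in entities)
               for d in entities)

print(every_dog_has_his_day(entities, dog, day, owns))
```

Of course, this sidesteps the very problem the text identifies: a real system would need such tables for the entire ontology of common reality.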
8.3.4 Relationship between Syntax and Semantics

Syntactic structure has a strong effect on meaning. Figure 8.6 shows two different parses for the sentence “John saw the man with the telescope”. In the first case the adjectival prepositional phrase “with the telescope” is attached to the direct object “man”, and the sentence is interpreted to mean that the man was carrying the telescope. In the second case the prepositional phrase is adverbial, modifying the verb “saw”, yielding the interpretation that John used the telescope to see the man.

The syntactico-semantic connection is best illustrated by the famous example due to Chomsky [45]. He proposes that the sentence “Colorless green ideas sleep furiously” is syntactically well formed but semantically anomalous, that is, meaningless. It is easy to verify the syntactic validity of the sentence. It can be parsed with respect to the grammar of Section 4.3.3, yielding a single parse tree. However, application of the logical inference technique of Section 8.3.3 would discover the logical contradiction between the predicates colorless(x) and green(x), which cannot both be true for a given x. Furthermore, the
[Figure 8.6 Syntactic structure affects meaning: two parse trees for “John saw the man with the telescope”. In the first, the prepositional phrase “with the telescope” attaches to the object noun phrase “the man”; in the second, it attaches to the verb phrase headed by “saw”.]
function sleep(animal (x)) would be undefined since animal(ideas) would be false. Also, furiously(action) would be undefined since action = sleep(x) is an invalid argument of the function furiously(action). Thus the sentence is logically false and/or undefined, hence meaningless. However, there is a semantically valid interpretation of the sentence. Suppose we take “colorless” to be the opposite of “colorful”, thus colorless(x) = uninteresting(x). Also let green(x) = naive(x). Under these definitions “colorless” and “green” are not logically inconsistent so green(ideas) would be true. Moreover, an idea can be “dormant” so sleep(ideas) = dormant(ideas) is well defined. Finally, furiously(sleep) could be interpreted to mean that the ideas were forced into dormancy and when they awake, they will do so resentfully. This is perhaps a poetic interpretation of the sentence, but it is far from absurd and it illustrates the difficulty of designing an exhaustive symbolic ontology. Having considered lexical semantics, we must now examine the meaning of sentences. It is here that syntax becomes important. First, note that all lexical items are assigned a part-of-speech label. This is the role of the lexical assignment rules of Section 3.2.1. For example, we have production rules such as verb −→ run or noun −→ boy. The lexical assignment rules make a strong connection to semantics because nouns are specific objects or ideas that play the role of either agents or entities acted upon. Verbs are actions, functions of the agents. Adjectives are qualities or properties of nouns. Adverbs are qualities or properties of verbs, adjectives or other adverbs. Prepositions provide location, orientation or direction in space and time. These syntactico-semantic constituents are combined according to a predicate argument structure, the most basic of which is the subject–verb–object (SVO) rule S −→ NP VP = (agent, action, object).
(8.22)
The significance of (8.22) is that the basic syntactic rule S −→ NP VP maps onto the semantic interpretation (SVO). Then the complete syntactico-semantic analysis proceeds as follows. First the syntactic structure for the sentence “John gave the book to Mary” shown in Fig. 8.7 is obtained from a parser and grammar as explained in Sections 4.1.2 and 4.3.3. The parse tree is built up from the lexical entries to the root, S, of the tree. Then the λ-calculus is used to verify that the sentence is well formed. In this notation, the term λx simply means that the argument, x, of the function, F(x, y), has not been bound to a particular value. When the value of x is determined, say A, then λxF(x, y) is replaced by F(A, y). The result of this operation is shown in Fig. 8.8. A second pass through the parse tree from the lexical entries to the root resolves all of the λ, meaning that the sentence is semantically valid. Had there been an unresolved λ, then the sentence would be semantically anomalous. For the example as given, all arguments are bound and we get a logical expression of the meaning, shown at the root of Fig. 8.8. In this simple case, the predicate is unambiguous so there is no need to use inference procedures to check for consistency. In general that is not the case, as was indicated in Chomsky’s example.
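The λ-binding of Fig. 8.8 can be mimicked with nested functions. In this sketch the representation of predicates as tuples and of λ-terms as Python lambdas is my own rendering; the verb denotes λx λy λz gave(x, y, z), the VP binds the object and recipient, and the final application binds the donor.

```python
# λ-style composition for "John gave the book to Mary", after Fig. 8.8.
gave = lambda x: lambda y: lambda z: ("gave", x, y, z)

obj = ("object", "book")
donor = ("donor", "John")
recipient = ("recipient", "Mary")

# The VP binds the object (x) and the recipient (z), leaving λy open;
# applying the subject NP then binds the donor (y).
vp = lambda y: gave(obj)(y)(recipient)
s = vp(donor)
print(s)  # → ('gave', ('object', 'book'), ('donor', 'John'), ('recipient', 'Mary'))
```

An unresolved λ would correspond here to a function object left over after composition, signalling a semantically anomalous sentence.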
8.4 System Architectures

At the time that the first speech understanding systems were built [180, 272], speech recognition was based on recognizing whole words as the fundamental acoustic patterns.
[Figure 8.7 Syntactic framework for semantics: parse tree of “John gave the book to Mary”, with S −→ NP VP, the VP containing the verb “gave”, the object NP “the book”, and the prepositional phrase “to Mary”.]

[Figure 8.8 Semantic analysis using the λ-calculus: the same parse tree annotated with λ-expressions. The verb contributes λx λy λz gave(x, y, z); the object NP binds x to object(book), the PP binds z to recipient(Mary), and the subject NP binds y to donor(John), yielding S ≡ gave(object(book), donor(John), recipient(Mary)).]
One consequence of this was that only small vocabularies – a few hundred words – could be reliably recognized, and consequently only highly stylized utterances pertaining to a highly circumscribed topic of conversation could be understood. From the early experiments, it soon became obvious that speech understanding systems needed larger vocabularies to demonstrate truly interesting behavior. The techniques of whole word recognition could not be extended to large vocabularies because they relied on a crude and implicit method of capturing the lower levels of linguistic structure in word- or phrase-length acoustic templates. To attain the desired versatility, explicit representations of phonetics, phonology and phonotactics were required. That is, recognition had to be based on the inventory of fundamental sounds (phonetics) in speech as the primitive acoustic patterns to be recognized. Then rules about how these sounds change in different phonetic contexts (phonology) had to be applied and finally, rules specifying the order in which phonetic units can appear in sequences (phonotactics) had to be imposed. These aspects of linguistic structure were well understood and carefully documented by Chomsky [45], Chomsky and Halle [47] and others [238, 239]. However, the number and subtlety of these rules made their simple, direct incorporation in a computer program very cumbersome and fragile. The solution to the conflicting requirements of large vocabularies and robust principles for representing known linguistic structure emerged from studies in the application of hidden Markov models to speech recognition (Chapters 3 and 7). Speech understanding systems are usually conceived as having two functionally separate parts: a “front end” which performs the signal processing and pattern recognition and a “back end” which takes the transcription produced by the “front end” and derives from it the intended meaning of the utterance. 
The “front end” may be thought of as the speech recognition part and the “back end” as the speech understanding part. The “back end” is based on the methods described in this chapter. The only connection between the two parts is a transcription of the speech input into text. As we shall see in Chapters 9 and 10, this is a weak model.
8.5 Human and Machine Performance

The foregoing discussion completes our consideration of the entire speech chain from acoustics through semantics. The section on general semantics is, of necessity, incomplete. While the methodology is straightforward, the details are absent. This is because they depend critically on an exhaustive implementation of all necessary predicates and functions. No doubt, different individuals would make different choices of ontologies and produce different representations of them. Even the best of such compilations would have, at best, a tenuous hold on reality, but there are no known procedures to automatically generate the semantic constituents from data, as was done for all other aspects of linguistic structure. As a result, there are no general speech understanding systems that have a natural language ability even remotely comparable to human competence. Unfortunately, this remains an open problem. Chapters 9 and 10 offer some thoughts about how to solve it.
9 Theories of Mind and Language

9.1 The Challenge of Automatic Natural Language Understanding

The progression of mathematical analyses of the preceding pages may be used to construct machines that have a useful but limited ability to communicate by voice. It is now appropriate to ask what is required to advance the technology so that machines can engage in unrestricted conversation. The conventional answer is incremental improvement of existing methods. In these final two chapters, I offer an alternative. I suggest that machines will be able to use language just as humans do when they have the same cognitive abilities as humans possess, that is, they have a mind. After centuries of study, the mind remains one of the most elusive objects of scientific inquiry. Among the many mysteries are how the mind develops and functions, how it engenders intelligent behavior, and how it is related to language. Such questions have been formulated in different ways and have been addressed from different perspectives. No general agreement amongst different schools of thought has yet emerged. Nonetheless, I wish to enter the fray. My answer is in two parts, a brief historiography and an experimental investigation derived from it. The essential feature of my brief intellectual history is that it comprises both a diachronic and a synchronic analysis. That is, the subject is considered with respect to both its evolution across historical epochs and its development within a particular period. In the case of the sciences of mind, both perspectives are known to philosophers and historians but are often ignored by scientists themselves. They, and even those readers who are already steeped in the historical facts, may wish to endure yet another interpretation and the significance I ascribe to it.
9.2 Metaphors for Mind

The diachronic view of mind is best expressed by the founder of cybernetics, Norbert Wiener. In the introduction to his seminal 1948 treatise, Cybernetics, Wiener observed that over the entire history of Western thought, metaphors for mind have always been expressed in terms of the high technology of the day. He says:
At every stage of technique since Daedalus or Hero of Alexandria, the ability of the artificer to produce a working simulacrum of a living organism has always intrigued people. This desire to produce and to study automata has always been expressed in terms of the living technique of the age. [330]
Wiener devised a particular metaphor for the mind which he regarded as spanning the entire historical trajectory. He explained: Cybernetics is a word invented to define a new field of science. It combines under one heading the study of what in a human context is sometimes loosely described as thinking and in engineering is known as control and communication. In other words, cybernetics attempts to find the common elements in the functioning of automatic machines and of the human nervous system, and to develop a theory which will cover the entire field of control and communication in machines and living organisms. The word cybernetics is taken from the Greek kybernetes, meaning steersman. If the 17th and early 18th centuries were the age of clocks, and the later 18th and 19th centuries the age of steam engines, the present time is the age of communication and control. [330]
Thus it is clear that Wiener construed mental function as the cooperation of many kinds of processes which he characterized as information flow and control and which he called cybernetics. For example, he generalized the very technical notion of the negative feedback principle to include the adaptation of complex systems and living organisms to changing environments. Similarly, he expanded the definition of stability to apply to any process, including cognition, used to maintain biological homeostasis. These ideas, he reasoned, would lead to an understanding of the incredible reliability of human cognitive functions such as perception, memory and motor control. Although the synchronic history is best expressed in rigorous mathematical terms, it, too, can be faithfully summarized. The discipline with the unfortunate name of artificial intelligence – unfortunate because the word “artificial” cannot be cleared of its pejorative connotation of “fake” – was constructed out of the remains of the attempt to establish mathematics on an unassailable foundation. The failure of this effort led to a specific model of mental function. In 1937, Turing [319] resolved the decidability problem posed by Hilbert [273] by means of a model of computation which, despite its abstract nature, made a powerful appeal to a physical realization. Exactly when he came to appreciate the implications of his result is a subject of some debate [118, 128]. However, by 1950, his universal computer emerged as a “constructive” metaphor for the mind, allowing him to clearly set down what we today refer to as the strong theory of AI. This, in principle, could be experimentally verified by taking an agnostic position on the question of what the mind really is and requiring only that its behavior be indistinguishable from that of a computational mechanism by a human observer. 
In his seminal paper of that year which established the foundations for the field of AI, Turing asserted:

The original question, “Can machines think?” I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted. [320]
The adventure starting from the crisis in the foundations of mathematics, around 1910, and ending with the emergence of the strong theory of AI is a fascinating journey which helps to explain why the Turing machine was so seductive an idea that it caused a revolution in the philosophy of mind.

9.2.1 Wiener's Cybernetics and the Diachronic History

I shall begin by elaborating upon Marshall's commentary [213] on Wiener's definition of cybernetics cited above. In so doing, I take the liberty of summarizing the entire history of thought about mind in the single chart of Figure 9.1. Each row of Fig. 9.1 is a coarsely quantized time line on which the evolution of one of four mechanical metaphors for mind is traced by selecting specific examples. Each one represents a diachronic history that is simply an expansion of the notion expressed by Wiener about the significant inventions that became metaphors for mind in different centuries. Each individual entry in the chart represents an isolated history of the specific invention. There is also a migration of ideas, a synchronic history, as one proceeds along the columns. The right-hand column composes what I shall later describe in detail as the cybernetic paradigm.

The metaphor of “control” begins with the invention, perhaps too ancient to attribute, of the rudder. I know of no specific philosophical theory of mind that finds its roots in the nautical steering mechanism; however, allusions to rudders permeate our language in such expressions as “steering a course to avoid hazards”, presumably by thinking. This suggests that the control of motion was recognized as a natural analogy to intellectual activity, a spirit controlling a body. Equally plausible is the connection between mental function and the Archimedean science of hydraulics. The motivation might have been the need for sanitation in growing metropolitan areas.
To early philosophers such as Herophilus, the control of flow of fluids through ducts provided a striking image of thoughts and emotions coursing through the channels of the human mind.
                  Pre-industrial period   Industrial period      Information age      Mathematical abstraction

  Control         Rudder                  Governor, thermostat   Feedback amplifier   Control theory
  Communication   Hydraulic systems       Telegraph              Internet             Information theory
  Classification  Wax tablets             Photographic plates    AUDREY               Pattern classification theory
  Computation     Clocks                  Analytical engine      ENIAC                Theory of computation

Figure 9.1 Metaphors for mind
In Section 2.5 we cited the Platonic theory of forms as a precursor to statistical pattern recognition. One could view early written symbols as designations for forms made by stylized indentations in wax or stone. Similarly, mental images were likened to impressions on wax tablets created by impinging sensory stimuli and manipulated by thought processes. The ability to count and mechanical aids for that activity were also known in ancient times. The notion of generalized mechanical calculation seems not to have emerged until later, when European scholars, fascinated by the wizardry of the French and German horologists, envisioned the brain as a vast clockwork. In particular, Leibniz proposed his theory of monads as the fundamental calculators of the universe.

The industrial period witnessed a significant advance of the cybernetic paradigm toward a more recognizable and cogent form. Control mechanisms such as the governor and the thermostat, which tamed the temperamental steam engines, provided a vivid image of thought as the control of motion and power. Also in this period, thoughts ceased to be imagined as fluid flows controlled by pumps and valves and came instead to be seen as electrical messages transmitted by telegraph wires. Wax tablets were discarded as an embodiment of memory in favor of the more refined photographic plate. And Babbage carried the identification of thought with clockwork to its mechanical extreme in his analytical engine.

In our own modern information age, the mastery of electromagnetic and electronic phenomena, enabled by the impressive power of classical mathematical analysis, resulted in the recasting of the cybernetic paradigm in electronics rather than mechanics. Control over steam engines was extended to intricate servomechanisms according to the principles of the feedback amplifier.
Telegraphy evolved into the telephone and then into the modern global communication network in which information, whether it be text, image, audio, or video, is digitally encoded, packetized, multiplexed, routed, and reconstructed at its designated destination. The internet is so complex that it is often likened to the human central nervous system. While photography is able to record visual images in exquisite detail, it is, by itself, incapable of analyzing the content of an image. It thus accounts for memory but not perception. Electrical circuits, however, offered the possibility of analyzing and identifying patterns that were stored in a memory. One of the earliest examples of this class of devices was AUDREY, which was capable of reliably recognizing spoken words [70]. The principles on which AUDREY was designed were easily generalized and extended to other problems of automatic perception. While Babbage apparently did not recognize the full generality of his analytical engine, Turing did indeed understand the universality of his computer. Once again, modern electronic technologies provided an embodiment of an abstract theory. In the late 1940s and early 1950s, ENIAC [107], EDVAC [107], and several other computers were constructed, ushering in the era of the “electronic brain” and allowing, for the first time in history, serious entertainment of the notion of building a thinking machine.

Today, these histories are encapsulated in four mathematical disciplines: control theory, information theory, statistical decision theory, and automata theory. Collectively, these areas of applied mathematics formalize and rationalize problems of control, communication, classification, and computation. I call the unification of these ideas the cybernetic paradigm or, making an acronym of its components, C4. I assert that the cybernetic paradigm is the ideal tool for the eventual construction of an artificial intelligence. It
provides a quantitative, rational means for exploring the several intuitively appealing metaphors for mind that have persisted in Western culture. At the risk of overstating the argument, I remind the reader that C4 is a union of different but related theories. Its utility derives from this collective property. One cannot hope to emulate human intelligence by considering only a part of the collection. It is worth considering the components of the cybernetic paradigm in slightly more detail in their modern abstract form. This will enable me to describe how the constituent theories are related to each other, how they may be used to explain processes occurring in the human organism and why they are uniquely appropriate for the design of an intelligent machine. The kinds of machines to which the four theories are applicable are represented by the following four canonical diagrams. Figure 9.2 shows the prototypical control system. The plant is required to maintain the response, y(t), close to a dynamic command, x(t). This is accomplished by computing a control signal, z, and comparing it with the input. The plant may be something as simple as a furnace, and its controller a thermostat. At the other extreme, the plant might be a national economy, and the controller the policies imposed upon it by a government. Figure 9.3 is the famous model due to Shannon [295] of a communication system. A message represented by the signal, x(t), is transmitted through some medium and is received as a signal, y(t), which is decoded and interpreted as the intended message. To be useful, y(t) should be similar, in a well-defined sense, to x(t). The model applies to a spacecraft sending telemetry to earth or two people having a conversation. Figure 9.4 depicts the canonical pattern recognition machine. The signal, x(t), is measured and its relevant features extracted. 
These values are compared with prototypical values stored in memory and a decision is made by selecting the reference pattern, y(t), to which the input signal is most similar. Once again, the range of processes fitting into
Figure 9.2 The feedback control system (command x(t), noise n(t), summing junction, plant, feedback control)

Figure 9.3 The classical model of a communication system, after Shannon (input x(t), noise n(t), channel, received signal x̂(t))
Figure 9.4 The classical pattern recognition system (input x(t), noise n(t), feature extraction, memory, decision)

Figure 9.5 The canonical model of a computer, after Turing (input x(t), finite-state logic, memory, output y(t))
this schema is quite large. It encompasses simple problems such as automatic recognition of barcodes and subtle ones such as identifying an artist from his paintings. Finally, Fig. 9.5 shows an abstract digital computer. Data and/or programs, x(t), are read into a finite-state logic element that operates on x(t) and produces an output, y(t). Such devices may be highly specialized, such as a four-function arithmetic calculator, or very general in purpose, as in the case of a mainframe computer.

These four systems are interchangeable in some cases. For example, a control system can be used to perform pattern recognition if the output, y(t), is quantized and is always the same for a well-defined class of command signals, x(t). A pattern recognizer may be thought of as a communication channel whose fidelity is measured by the probability of a correct classification. And, as I shall remind the reader in the next section, a general-purpose computer can be programmed to simulate the other three systems. It is also the case that implementations of any of these systems in electronic hardware have, as subsystems, one or more of the other systems. Thus, the control circuit in a servomechanism can be a microcomputer. Conversely, the auxiliary memory of a computer system, such as a “hard drive”, may contain several servomechanisms for fetch and store operations. The connection between the logic unit and the memory of a computer is a communication channel, while sophisticated communication channels usually contain computers to perform the coding functions. In pattern recognition systems, the features may be measured by instruments that are stabilized by feedback control systems, and feature extraction may be accomplished by numerical computation. These are but a few examples of the interrelations among the four pillars of the cybernetic paradigm.
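As a concrete illustration of the control loop of Fig. 9.2, the short sketch below simulates a proportional controller regulating a simple first-order “furnace” plant, the thermostat example mentioned above. The plant model, the gain, and all numerical constants are invented for illustration only:

```python
# Minimal sketch of the feedback loop of Fig. 9.2: a proportional
# controller holds the plant response y(t) near the command x(t).
# The plant model and every constant here are invented for illustration.

def simulate_thermostat(setpoint=20.0, y0=10.0, steps=200, k_p=0.5):
    """Drive a first-order 'furnace' plant toward the setpoint."""
    y = y0
    for _ in range(steps):
        error = setpoint - y        # summing junction: x(t) - y(t)
        z = k_p * error             # control signal from the controller
        # plant dynamics: heating proportional to z, heat loss toward
        # a 5-degree ambient temperature
        y += 0.1 * (z - 0.05 * (y - 5.0))
    return y

final = simulate_thermostat()
# A pure proportional controller settles near, but not exactly at, the
# setpoint (steady-state droop): here about 18.6 for a command of 20.0.
```

Even this toy loop exhibits the essential property of the figure: the response is continuously compared with the command, and the discrepancy itself generates the corrective action.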
Figures 9.2 through 9.4 all show the symbol, n, signifying an unwanted signal called noise because any physical measurement has some uncertainty associated with it. All of the systems must be designed to function in the presence of corrupting noise. The computer depicted in Fig. 9.5 is not so afflicted, nor does it have any provision for dealing with ambiguity. I shall return to this important distinction when I discuss the role of the cybernetic paradigm in the mind.
Theories of Mind and Language
205
Not only are the systems of Figs. 9.2 through 9.5 present in all of our modern machinery, they are also essential to the proper function of the human organism. Homeostasis and locomotion are accomplished by means of electrochemical feedback control systems. Neural transduction of the electrically encoded messages necessary for locomotion, and distribution through the circulatory system of the biochemically encoded commands required for homeostasis, are well described as communication systems. Sensory perception is presumed to be achieved by exquisite feature extractors and decision mechanisms. And, of course, control of the entire organism in thought and action is assumed to be a computational process. Wiener’s historical perspective insists that intelligence is not simply a process of abstract thought but rather of the harmonious functioning of all aspects of our physical being and, as such, it requires all aspects of the cybernetic paradigm.

9.2.2 The Crisis in the Foundations of Mathematics

The isolated history of AI is the history associated with the single box in the lower right-hand corner of Fig. 9.1. It is the story of an insight which resulted from the glorious failure of an attempt to establish mathematics on an unimpeachable theoretical foundation once and for all. Although cracks in the structure of mathematics had been visible since Hellenic times (e.g. Euclid’s inability to eliminate the troublesome parallel postulate), the crisis did not become acute until the late nineteenth century as a result of thinking about large numbers encountered in the summation of infinite series. A brief outline of events is the following:

1. Cantor’s theory of transfinite numbers [42] leads to controversy over the continuum hypothesis.
2. The ensuing debate causes a threefold schism of mathematics into the intuitionist, formalist, and logical schools.
3. Intuitionists, such as Brouwer [39], deny the existence of infinite numbers and accept only arbitrarily large numbers.
4. Logicians, such as Russell and Whitehead [285], trace the problem to impredicative (i.e. self-referential) sets.
5. The formalists, such as Hilbert [273], assert that all mathematical questions can be resolved by proofs based on an appropriate set of axioms. There can be no “ignorabimus”.
6. Gödel [104] proves the incompleteness theorem and thereby invalidates the formalist approach.
7. Turing [319] proves the undecidability theorem, thereby strengthening Gödel’s result.
8. The mechanism of Turing’s proof is a universal computer in the sense that it can emulate any possible computation.
9. Church [48] derives a similar result using the lambda calculus.
10. The Church–Turing hypothesis is recognized as the theoretical foundation of AI.

A comprehensive treatment of the many aspects of this outline is well beyond the scope of this short essay. Such a discussion would require consideration of everything from the common thread of mathematical reasoning using the technique of diagonalization
to the philosophical conflicts between realism and spiritualism and between free will and determinism. However, a bit more detail is required to support the position I am advocating.

Let me begin my rendition of the story with Cantor and his ideas about different degrees of infinity. Cantor was the son of a Lutheran minister who wanted his boy to pursue a life of service to God. Cantor, however, was not prepared to abandon his work in mathematics. He rationalized the conflict away by professing his hope that the beauty of his mathematics would attest to the glory of God and thus both appease his father and satisfy his intellectual aspirations.

Cantor’s transfinite numbers are the cardinalities of different infinite sets. He began by comparing the set of “natural numbers” (i.e. positive integers) with the set of rational numbers. He observed that there are infinitely many naturals but, since the rationals are pairs of naturals, there should, in some sense, be more of them. But Cantor demonstrated by means of a diagonalization argument that this intuitively appealing notion is, after careful examination, flawed. The essence of his argument is illustrated in Fig. 9.6. By tracing along the indicated path, it is clear that there is a one-to-one mapping from naturals onto rationals. This means that the naturals and rationals have the same infinite cardinality, which Cantor called the first transfinite number, aleph null (ℵ0).

Cantor then asked whether or not such a map exists from naturals onto reals. To answer the question, he constructed the matrix of Fig. 9.7 in which each row is a power series representation of a rational number, and the columns of the matrix correspond to the integral powers. Call the ijth entry in the matrix dij. Now consider the real number whose nth digit is dnn + 1. In the example it would be 2.64117 . . . . This number cannot be a row of the matrix because, by construction, it differs from the nth row in at least the nth column.
Thus there cannot be a one-to-one mapping from naturals onto reals. This is interpreted to mean that there are more reals than naturals. This argument is particularly important because it invokes the summation of a power series, such as a Fourier series, which was the source of the argument about infinite numbers. The cardinality of the reals is represented by the symbol c, designating the continuum. In the sense of isomorphism illustrated in Figs. 9.6 and 9.7, c > ℵ0. Then there must be some number, ω, that lies between the two. Cantor proposed that ω is a number greater than every integer and, using it, he constructed an entire, linearly ordered number system such that

ℵ0 < ω < ω1 < ω2 < . . . < c,        (9.1)

        1     2     3     4     5    . . .

  1     1    1/2   1/3   1/4   1/5   . . .
  2     2     1    2/3   2/4   2/5   . . .
  3     3    3/2    1    3/4   3/5   . . .
  4     4     2    4/3    1    4/5   . . .
  5     5    5/2   5/3   5/4    1    . . .
  .
  .
  .

Figure 9.6 Mapping the rationals onto the integers
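The pairing behind Fig. 9.6 is entirely mechanical: walking the matrix along its anti-diagonals (one convenient version of the indicated path) assigns every positive rational a finite index. A minimal sketch, with function names of my own choosing:

```python
from fractions import Fraction
from itertools import islice

def rationals():
    """Enumerate the matrix of Fig. 9.6 along its anti-diagonals.

    Every positive rational i/j appears at some finite index; repeated
    values such as 2/4 = 1/2 are simply visited again, as in the figure.
    """
    d = 2  # i + j is constant along an anti-diagonal
    while True:
        for i in range(1, d):
            yield Fraction(i, d - i)
        d += 1

first = list(islice(rationals(), 6))
# → [1, 1/2, 2, 1/3, 1, 3]: the first few cells along the path
```

That every rational is reached after finitely many steps is exactly the one-to-one correspondence with the naturals that Cantor observed.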
        1    2    3    4    5    6   . . .

  1     1.   0    0    0    0    0   . . .
  2     0.   5    0    0    0    0   . . .
  3     0.   3    3    3    3    3   . . .
  4     0.   2    5    0    0    0   . . .
  5     0.   2    0    0    0    0   . . .
  6     0.   1    6    6    6    6   . . .
  .
  .
  .

Figure 9.7 The reals cannot be mapped onto the integers
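The diagonal construction is mechanical enough to execute directly. The sketch below applies the rule dnn + 1 to the rows of Fig. 9.7 (adding a wrap of 9 to 0, which the text’s example happens not to need) and reproduces the number 2.64117 . . . that cannot appear anywhere in the matrix:

```python
# Rows of Fig. 9.7: leading digit plus the first five decimal digits of
# 1, 1/2, 1/3, 1/4, 1/5, 1/6.
rows = [
    [1, 0, 0, 0, 0, 0],
    [0, 5, 0, 0, 0, 0],
    [0, 3, 3, 3, 3, 3],
    [0, 2, 5, 0, 0, 0],
    [0, 2, 0, 0, 0, 0],
    [0, 1, 6, 6, 6, 6],
]

def diagonal_number(rows):
    """Return the digits d_nn + 1, which differ from row n in column n."""
    return [(row[n] + 1) % 10 for n, row in enumerate(rows)]

digits = diagonal_number(rows)  # → [2, 6, 4, 1, 1, 7], i.e. 2.64117...
for n, row in enumerate(rows):
    assert digits[n] != row[n]  # the new number cannot equal any row
```

The same few lines work for any enumeration of digit sequences, which is why the argument defeats every proposed mapping of naturals onto reals, not just this one.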
which he called the transfinite number system. Cantor was convinced that the beauty of his creation was a tribute to God’s own handiwork, but his colleagues did not even accept its validity, let alone its sanctity. A bitter controversy ensued causing, or at least contributing to, Cantor’s lapse into insanity. The controversy centered on what came to be known as the “continuum hypothesis”, which denies the existence of the system of (9.1). Intuitionists were opposed on the grounds that there is no infinite number, let alone orders of infinite numbers. It was equally clear to the logicians that the whole enterprise made no mathematical sense because implicit in the diagonalization argument is the notion of impredicative sets, which are based on self-reference. That is, an infinite set, the integers, is a subset of itself, the rationals. This kind of construction must lead to inconsistencies, which they called antinomies. As we shall soon see, they were partially correct.

The formalists, led by Hilbert, took a more sympathetic approach. They said that the continuum hypothesis should admit of a definitive resolution; it should be possible to prove that either transfinite numbers exist or they do not. In fact, the formalists generalized their position on the continuum hypothesis, asserting that it must be possible to prove every true statement within a well-defined, suitably rich axiomatic (i.e. formal) system without recourse to some absolute physical interpretation, intuitive appeal, or lack thereof. This was a bold position, since classical mathematics was inspired by physics. Moreover, the formalists’ doctrine declared that there should be no constraint on mathematical thought as long as logic and an appropriate set of axioms are not violated. Under these conditions, there can be no ignorabimus. Nevertheless, no proof of the existence of Cantor’s strange mathematical objects was forthcoming.
In 1931, Kurt Gödel destroyed Hilbert’s hopes of establishing absolute certainty within any “interesting” axiomatic system by proving the incompleteness theorem.

Theorem 1 (Gödel). Any axiomatic system whose structure is rich enough to express arithmetic on the natural numbers cannot be both complete and consistent.

An axiomatic system is complete if all true statements within it can be proven true and all false statements can be proven false. An axiomatic system is consistent if a statement and its negation cannot both be true.
This remarkable theorem is proven by contradiction. In essence, the proof produces a version of the liar’s paradox in which one is given a card the front of which asserts that the statement on the back of the card is false. The back of the card, however, states that the statement on the front of the card is true. Thus both statements are true if and only if they are false. Generating a formal contradiction of this type within an ordinary arithmetic system is arduous, and a complete rendering of the proof cannot be given here. However, we can give a sketch of the proof in a manner that will provide a good intuition for the main ideas. The reader interested in a detailed rendering of the incompleteness theorem should consult Nagel and Newman [229].

To gain a better appreciation of Gödel’s basic argument, consider the well-known Richard paradox. All arithmetic statements can be written, however awkwardly, in ordinary English. The statements can then be linearly ordered, in a manner analogous to the construction of Fig. 9.7, by simply arranging them in lexicographic order. Thus, every arithmetic statement has an integer associated with it. Next we define a special property of arithmetic statements. We will say that a statement is Richardian if and only if the statement is not true of its own ordinal number. For example, if the statement “N is prime” were to occur in the Nth position of the list of statements where N is not a prime number, then the statement would be Richardian. Alternatively, if N were, in fact, prime, then the statement would not have the Richardian property. Notice that the Richardian property is, by definition, a statement about arithmetic and it obviously can be expressed in English. Eventually the following statement will appear on our lexicographically ordered list with ordinal number Nr: “The sentence with number Nr is Richardian.” By the definition of Richardian, statement number Nr is Richardian if and only if it is not Richardian.
In fact, this is not a paradox at all because the method used for generating the statements, namely English, has not been defined in the arithmetic system. However, the structure of the argument is useful if the Richardian property can be replaced by some other property that can be strictly defined within the arithmetic system. Cantor’s diagonalization technique, as it appears in the Richard paradox, can be used to prove the incompleteness theorem. To do so, the error of the Richard paradox must be avoided by making explicit that performing arithmetic operations and proving theorems about arithmetic are actually different encodings of the same fundamental symbolic process. Gödel devised an ingenious method for mapping statements about the integers onto the integers without going outside arithmetic itself. The method has come to be known as Gödel numbering or indexing.

Gödel’s coding scheme begins with the primary operations of arithmetic shown in Fig. 9.8. This is essentially the logical system described in Section 8.3.3. The numbering system shown in Fig. 9.8 can be used to form the Gödel number for any arithmetic statement. Consider the example of

(∃x)(x = Sy),        (9.2)
which asserts the existence of some number, x, that is the successor of some other number, y. First we assign an integer to each symbol according to the table of Fig. 9.8. Thus (9.2) is represented by the sequence of integers

8 4 11 9 8 11 5 7 13 9        (9.3)
  symbol            integer                  example or meaning

  ~                 1                        negation
  ∨                 2                        logical OR
  ⊃                 3                        implication
  ∃                 4                        existence
  =                 5                        equality
  0                 6                        zero
  S                 7                        successor
  (                 8                        left bracket
  )                 9                        right bracket
  ,                 10                       delimiter

  x, y, z, . . .    11, 13, 17, . . .        numerical variables, represented by
                                             primes greater than 10

  p, q, r, . . .    11^2, 13^2, 17^2, . . .  arithmetic propositions such as x = y
                                             or p = q, represented by squares of
                                             primes greater than 10

  P, Q, R, . . .    11^3, 13^3, 17^3, . . .  logical predicates such as prime or
                                             composite, represented by cubes of
                                             primes greater than 10

Figure 9.8 Gödel numbering system
from which we construct the Gödel number, N, for (9.2) by using the integers in the sequence (9.3) as powers of successive primes. That is,

N = (2^8)(3^4)(5^11)(7^9)(11^8)(13^11)(17^5)(19^7)(23^13)(29^9),        (9.4)
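The encoding is entirely mechanical, as the short sketch below illustrates for (9.2). The symbol table follows Fig. 9.8, and the decoding step shows that the original symbol sequence can be recovered from N by factoring, which is what makes the numbering an index rather than a mere checksum. Function names are my own:

```python
# Gödel numbering per Fig. 9.8: encode a symbol sequence as a product of
# successive primes raised to the symbols' code numbers.

SYMBOL_CODES = {'~': 1, '∨': 2, '⊃': 3, '∃': 4, '=': 5,
                '0': 6, 'S': 7, '(': 8, ')': 9, ',': 10,
                'x': 11, 'y': 13, 'z': 17}

def primes():
    """Yield 2, 3, 5, 7, ... by trial division (fine for short formulas)."""
    n = 2
    while True:
        if all(n % p for p in range(2, int(n ** 0.5) + 1)):
            yield n
        n += 1

def godel_number(formula):
    exponents = [SYMBOL_CODES[s] for s in formula]  # e.g. (9.3)
    n = 1
    for p, e in zip(primes(), exponents):
        n *= p ** e
    return n

def decode(n):
    """Recover the symbol sequence from N by counting prime factors."""
    codes = {v: k for k, v in SYMBOL_CODES.items()}
    out, p_gen = [], primes()
    while n > 1:
        p, e = next(p_gen), 0
        while n % p == 0:
            n //= p
            e += 1
        out.append(codes[e])
    return ''.join(out)

N = godel_number('(∃x)(x=Sy)')   # exponents 8 4 11 9 8 11 5 7 13 9, as in (9.3)
assert decode(N) == '(∃x)(x=Sy)'
```

By unique factorization, distinct formulas always receive distinct Gödel numbers, so arithmetic statements really are mapped one-to-one into the integers.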
which is a large but finite number.

The next step is the definition of a sequence of predicates ending with Dem(x, y), which means that the sequence of arithmetic statements having Gödel number x is a proof of the statement with Gödel number y. The lengthy construction of this predicate is omitted here. The important point is that this predicate allows the formation of a statement reminiscent of the Richard paradox: “The theorem, G, with Gödel number N is not provable”. Once we have this statement, we can, in principle, form a matrix in which each statement is evaluated for every integer and shown to be either true or false. This enumeration procedure will, in principle, eventually lead to the diagonal element of the matrix corresponding to G being evaluated on its own Gödel number, N. This leads to the paradox that some G is provable if and only if it is not provable. Thus it must be the case that there is some G
that is not provable or else there is some G that is both provable and not provable. In other words, the system is either incomplete or inconsistent.

A corollary to the incompleteness theorem is that there is no escape from its effect. Suppose one were to find a true theorem that is not provable. Such a theorem could simply be added to the list of axioms, but that would merely postpone the agony because Gödel’s result ensures that another unprovable theorem would be created. In fact, to bring this long discussion back to Cantor, in 1963 Cohen [49] showed that the continuum hypothesis is independent of the axioms of set theory and can be appended or not, either choice leading to some other unprovable result. And so Hilbert’s formalist goal was shown to be unattainable.

It is interesting to note that the failure to make mathematical reasoning absolute came quickly on the heels of a similar failure in physics to vindicate the Enlightenment philosophy by constructing an absolute interpretation of reality. Early twentieth-century physics was stricken by confusion resulting from thinking about small masses moving at high velocities. The confusion was resolved by the 1905 theory of special relativity and the 1927 theory of quantum mechanics with its intrinsic principle of uncertainty. Together these theories denied the possibility of absolute frames of reference and of exact positions and momenta. This failure in physics has recently been seriously misinterpreted in a way that has an impact on theories of mind. I will return to this problem in the next chapter. First, however, I must finish this history with an account of its most important event.

9.2.3 Turing’s Universal Machine

Although the formalists were devastated by Gödel’s theorem, there was still hope.
Hofstadter [130] explains the loophole, noting that in 1931 one could have imagined that there were only a few anomalous unprovable theorems or, better still, that there existed a formal procedure whereby one could decide whether or not any theorem was provable. Were that the case, then a weaker formalism could still be pursued in which all provable theorems could be proven. Unfortunately for the formalists, Turing resoundingly quashed this hope in 1936 with his undecidability theorem. Although Church [48] proved the same result beautifully and elegantly using his recursive function theory, it was Turing’s method, with its decidedly mechanical flavor, which led to the digital computer and AI.

Turing proposed the machine shown in Fig. 9.9. It comprises three main parts: a tape or memory arranged in cells in each of which one symbol may be written, a head or sensor capable of reading the symbols written on the tape, and a finite-state controller. The operation of the Turing machine is completely specified by a set of instructions of the form

I = {< qi , aj , qk , d, al >}.        (9.5)
A single instruction is a term of (9.5) enclosed in angle brackets and is understood to mean that when the machine is currently in state qi and the head is reading symbol aj, the state changes to qk, the head moves as specified by d, and the symbol al is written on the tape. The states, qi, are members of a finite set, Q. The symbols are selected from a finite alphabet, A, and the head movements are restricted, allowing d to assume values of only +1, 0, or −1, corresponding to movement one cell to the right, no movement, or one cell to the left, respectively.
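The quintuple semantics of (9.5) translate almost line for line into code. The sketch below is a minimal simulator; the names and the example machine are my own, and I adopt the common write-then-move convention for applying al and d:

```python
# A minimal Turing machine simulator for quintuple instructions
# <q_i, a_j, q_k, d, a_l>: in state q_i reading a_j, enter state q_k,
# write a_l, and move the head by d in {-1, 0, +1}.

def run(instructions, tape, state='q0', halt='qH', max_steps=1000):
    # The tape is held as a dict from cell index to symbol; 'B' is blank.
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = cells.get(head, 'B')
        state, d, write = instructions[(state, symbol)]
        cells[head] = write
        head += d
    return ''.join(cells[i] for i in sorted(cells)).strip('B')

# Example machine: append one mark to a unary numeral (the successor).
successor = {
    ('q0', '1'): ('q0', +1, '1'),   # skip over existing marks
    ('q0', 'B'): ('qH', 0, '1'),    # write a mark on the first blank, halt
}

run(successor, '111')  # → '1111'
```

The instruction table is the entire machine: change the table and the same half-dozen lines of control logic compute a different function, which is precisely the observation that leads to the universal machine below.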
Figure 9.9 The Turing machine: an infinite tape (memory) whose cells hold symbols aj ∈ A, a movable read/write head, and a logic unit with internal states qi ∈ Q
Ordinary Turing machines are those for which Q, A, and I are chosen so that the machine computes a specific function. We can then consider the entire ensemble of such machines indexed by their Gödel numbers, defined as follows. Let [−1] = 3; [0] = 5; [+1] = 7; [ai] = 9 + 4i; [qj] = 11 + 4j. Then the jth instruction < q, a, r, d, b > has the Gödel number

gj = (2^[q])(3^[a])(5^[r])(7^[d])(11^[b]).        (9.6)

Finally, a Turing machine with instructions {I1, I2, . . . , In} has Gödel number G defined by

G = p1^g1 p2^g2 · · · pn^gn,        (9.7)
where pj is the jth prime and gj is the Gödel number of the jth instruction computed from (9.6).

Then Turing makes the stunning observation that one need not construct a special-purpose machine to evaluate a specific function because there is a universal machine that will emulate the behavior of any other Turing machine. One need only know the Gödel number of the desired machine. If one writes that Gödel number on the tape of the universal machine, it will exactly compute the corresponding function and write the answer on the tape just as the indexed machine would do. In modern parlance, we would call the Gödel number of the desired machine a program for the universal machine, now known as a digital computer. We shall return to this crucial idea in a moment. Let us first examine how this universal machine was used to prove the undecidability theorem.

        1    2    3    4    5   . . .

  1    a11  a12  a13  a14  a15  . . .
  2    a21  a22  a23  a24  a25  . . .
  3    a31  a32  a33  a34  a35  . . .
  .
  .
  .

Figure 9.10 Matrix of the digits of computable numbers

In the matrix of Fig. 9.10, the ijth element, aij, is the jth symbol of the result calculated by the machine with Gödel number Gi. Thus the nth row of the matrix, {anm}, m = 1, 2, 3, . . . , is a computable number or a decidable theorem. Now form the number {ann + 1} for n = 1, 2, 3, . . . , N. By definition it is not in the matrix constructed thus far, hence we do not know whether or not it is computable. So we continue generating the matrix and we either find the number or not. If we do not find it, we still cannot
decide whether or not it is computable. If we do find it, it gives rise to a new number, constructed as before, which is not in the matrix. Thus we will never be able to test all the numbers, and there must always be uncomputable ones. Hence we cannot separate the theorems into provable and unprovable classes. In order to carry out this enumeration and diagonalization, we must have a universal machine to carry out all possible computations. The aij are computed by giving the universal machine the Gödel numbers Gi from (9.7). There is no machine more powerful than the universal machine in the sense that the addition of states, symbols, tapes, or initial data on the tape will not enable the augmented machine to enlarge the class of computable numbers it generates.

9.2.4 The Church–Turing Hypothesis

A technically correct if somewhat narrow interpretation of Turing’s work is that it killed the formalist school of mathematics by depriving it of any sense of perfectibility. A slightly broader interpretation is that it dispatched the Enlightenment tradition, already left moribund by the successive shocks of relativity, uncertainty, and incompleteness. Indeed, undecidability might be seen as the fourth and final blow that rid the world, once and for all, of any illusion of perfectibility. Nevertheless, an optimist will be quick to remind us that, in effect, Turing freed mathematics from the onerous burden of discovering God’s one and only true mathematics and allowed mathematicians to invent new objects and theories having an aesthetic of their own as well as the possibility of future utility. The terror of being cut adrift from some of our most cherished moorings of intellectual security is also mitigated by the success of relativity and quantum mechanics. I should like to propose, however, that the most far-reaching effect of undecidability was that it made possible the development of a constructive theory of mind.
I choose my words carefully here because, though very tempting, it is, I shall soon argue, a serious error to make an immediate identification of the universal Turing machine and the human mind. To understand the long and tortuous chain of reasoning stretching from the Turing machine to mind, we must first recall that there were other theories, contemporaneous with Turing’s, from which the undecidability result might have emerged. Earlier we alluded to Church’s lambda calculus, and there was also the Kleene [157] formalism and the McCulloch–Pitts networks [215]. What distinguishes Turing’s work from that of his contemporaries is that Turing’s theoretical vehicle was blatantly mechanical
and fairly screamed for a physical embodiment, whereas the equivalent approaches of his colleagues were studiously abstract and not at all physically compelling. In fact, Hodges [128] suggests that it was Turing’s practical and mechanistic world view that fostered his almost palpable approach to the seemingly obscure problems of meta-mathematics.

His biographer’s interpretation notwithstanding, Turing’s universal machine, however physically appealing, might have remained, in another era, a purely mental construct. In the early decades of the twentieth century, however, automatic reading and writing of symbols on tapes were well known, as were devices that could rapidly switch between two or more stable states. Thus, one could easily imagine building a universal Turing machine capable of performing in an acceptably short interval the vast number of operations required for its utility.

Even so, there is yet another intuition required to bring forth a new metaphor for mind. Turing surely noticed that he had devised a single “machine” that could be instructed to emulate any member of an infinite class of “machines”. It has been argued by Hamming [118] that Turing originally thought of his machine as merely a numerical calculator and did not appreciate its more general ability to manipulate abstract symbols in accordance with abstract rules. I regard this speculation as arrogant nonsense. Turing’s letters [128] indicate that even as a schoolboy he had been captivated by mechanical analogies to human physiological functions. It is this rational, tractable versatility that lends credence to the notion of a mechanism of thought. The leap from the Turing machine to mind is expressed in the Church–Turing hypothesis, which contends that any process admitting of a formal specification called an effective procedure can be effected by a universal Turing machine.
The motivation for this hypothesis lies in the range of activities in which we humans engage and which are generally considered to be indicative of our intelligence. We can imagine that we could, if necessary, write out a sequence of directions describing how any given activity is performed. We also observe that human skills can be taught by formal instruction. We deduce from these observations that intelligent behavior, the consequence of mental function, can be formally specified and hence, in principle, realized by a suitably programmed universal Turing machine. The only missing piece of the puzzle is the Gödel number for the mind. This is the objective of the modern constructive theory of mind.
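The universality at the heart of this argument, a single fixed machine that executes any other machine supplied to it as data, can be made concrete in a few lines of code. The simulator and the sample machine below are hypothetical illustrations of the idea, not anything drawn from Turing's papers:

```python
# A minimal sketch of Turing universality (hypothetical illustration).
# One fixed interpreter, run(), plays the role of the universal machine:
# it executes any particular "machine" handed to it as data, here a
# transition table mapping (state, symbol) -> (new state, write, move).

def run(program, tape, state="start", blank="_", max_steps=10_000):
    cells = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, write, move = program[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# One member of the infinite class of machines run() can emulate:
# binary increment. The tape holds bits least-significant digit first,
# so a carry ripples to the right.
INCREMENT = {
    ("start", "1"): ("start", "0", "R"),  # flip 1 to 0 and keep carrying
    ("start", "0"): ("halt",  "1", "R"),  # absorb the carry and stop
    ("start", "_"): ("halt",  "1", "R"),  # ran off the end: new high bit
}

print(run(INCREMENT, "110"[::-1])[::-1])  # 110 + 1 = 111 in binary
```

Swapping in a different transition table changes what the machine computes without touching the interpreter, which is the intuition behind one machine being instructed to emulate any member of an infinite class.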
9.3 The Artificial Intelligence Program 9.3.1 Functional Equivalence and the Strong Theory of AI The history that culminates in the Church–Turing hypothesis is the basis for the modern discipline of artificial intelligence. The hypothesis is restated in the so-called strong theory of AI which I paraphrase as “The mind is a program running on the computer called the brain”. The clear implication is that thought processes in our minds are functionally equivalent to the manipulation of abstract symbols by a suitably constructed computer program.
214
Mathematical Models for Speech Technology
9.3.2 The Broken Promise When it first appeared, this new theory of mind was quite shocking. In fact, one could easily argue that it was more socially dislocating than the undecidability result that generated it. After all, it is one thing to imagine the mind as some immense clockwork, all the while secure in the knowledge that such a machine could never actually be built. It is quite a different matter to propose that an obviously constructible machine be capable of thought. Yet, it appeared quite likely that the new idea would succeed. Turing himself expended a substantial effort in convincing the scientific community and the public alike that an intelligent machine could now be built. The strong theory of AI has some weaknesses. The term “effective procedure” as applied to human behavior is not rigorously defined, making it impossible to determine that there is an equivalent program for the universal machine. Then, even if “effective procedure” were well defined, the Church–Turing hypothesis still requires a proof. And then, even if a proof were available, it would still be required to constructively demonstrate that there are “effective procedures” for at least a large number of thought processes. Finally and most importantly, how shall we select those thoughts, skills, and actions that constitute intelligent behavior? Is intelligence just thinking or is there more? On the other hand, the strong theory has a powerful intuitive appeal. Even if there are substantial gaps in the chain of reasoning, it is still the case that the digital computer is capable of performing an infinite number of different symbol manipulation processes and should, therefore, be sufficient for creating a mental model of the world and thereby displaying intelligent behavior. 
It is easy to understand how the first generation of scientists and engineers who developed the computer were easily persuaded that the intuitions were essentially correct and that the creation of a true thinking machine was not only inexorable but also close at hand. Unfortunately, the theory was far more difficult to reduce to practice than anyone had imagined. The incipient field of AI found itself perpetually making ever more extravagant promises and irresponsible claims followed by rationalizations after regularly failing to realize them. It is not unfair to say that AI has shed a great deal of light on computation while doing little more than heat up debates on mind. One naturally wonders why the promise remains unfulfilled to this day. 9.3.3 Schorske’s Causes of Cultural Decline In his magnum opus on European intellectual history, Schorske [292] gives a highly instructive explanation for the unkept promises of AI. He convincingly argues that cultural endeavors stagnate and fail when they lose contact with their intellectual antecedents (i.e. their diachronic and synchronic histories) and become fixated in the technical details of contemporary thought. In the fields of greatest importance to my concern – literature, politics, art history, philosophy – scholarship in the 1950’s was turning away from history as its basis for self understanding . . . . In one professional academic field after another, then, the diachronic line, the cord of consciousness that linked the present pursuits of each to its past concerns, was either cut or fraying. At the same time as they asserted their independence of the past, the academic disciplines became increasingly independent of each other as well. . . . The historian seeks rather to locate and interpret the artifact temporally in a field where two lines
intersect. One line is . . . diachronic, by which he establishes the relation of a text or system of thought to previous expressions in the same branch of cultural activity. . . . The other is . . . synchronic; by it he assesses the relation of the content of the intellectual object to what is appearing in . . . a culture at the same time. The diachronic thread is the warp, the synchronic one is the woof in the fabric of cultural history.
Although Schorske did not include science in his analysis, his thesis seems highly appropriate there, too, with AI as a striking instance. I submit that the chart of Fig. 9.1 is faithful to Schorske’s notion, expressed in the excerpt above, of the intellectual cloth into which the history of mechanical metaphors for mind is woven. Applying Schorske’s argument to AI, I conclude that the dismal outcome of the early experiments in AI can be attributed to the loss of the relevant synchronic and diachronic histories. The digital computer is the technological tour de force of our age. In keeping with Wiener’s observation, our high technology quickly generated a new metaphor for mind. In fact, soon after their commercial introduction, computers were publicly called “electronic brains” and became the heroes and villains of popular cinema. The new metaphor was, indeed, more justifiable than any of its predecessors. The computer was versatile, reliable and fast, operating at electronic speeds capable of performing overwhelmingly lengthy calculations in heretofore unimaginably brief times. Surely, this was a thinking machine and we were utterly seduced by its power and beauty. 9.3.4 The Ahistorical Blind Alley The difficulty arose only when, due to a subtle misinterpretation of a developing theory of computation, we fell into the trap of hubris from which we seemed unable to extricate ourselves. The result was that proper consideration was not accorded to history. Because we knew that the computer could carry out any computation, we concluded that we could concentrate exclusively on symbolic computation. This was an unfortunate but not unreasonable error. 
Since, according to the new metaphor, intelligence was understood to be the result of manipulating symbols and since the computer is the ultimate symbolic calculator, the simulation of intelligence must be reducible to entering the appropriate symbols into the computer and executing the appropriate programs which operate on those symbols. Everything that intelligence entails can be contained in the computer in an abstract, symbolic representation. 9.3.5 Observation, Introspection and Divine Inspiration To many practitioners of the new art, the argument was incontrovertible. As we shall see, it is far from specious. The fallacy is discovered only when one asks where the symbols and programs originate. The obvious answer was that the programmer would choose the symbols and processes based on careful observation, introspection, argumentation, extreme cleverness, and divine inspiration. It was perfectly clear that the mind is representational, so why not simply discover the representations by any available means? This methodology is inherently restricted to a particular moment in time. It has only two technical concerns, programs and data, which, in an automata-theoretic sense, are
Aspect           Computational Model    Cybernetic Model
representation   symbols                signals
coding           discrete               continuous
memory           local                  distributed
stimuli          independent            integrated
focus            syntax                 semantics
design           synthetic              adaptive
strategy         static                 dynamic

Figure 9.11 Aspects of mind and language
equivalent. Thus, attention is focused on knowledge representation and organization. Programs are procedural knowledge, whereas data is declarative knowledge, and both are entirely symbolic. AI systems that are built on this premise fall under the rubric of rule-based designs. As such, they have two distinct components. First, they utilize the rigorous techniques of computing for both algorithms and data structures. For example, algorithms may be based on fundamental principles of searching, sorting, parsing, and combinatorial optimization. Data structures often employ binary trees, directed graphs, linked lists, queues, and formal grammars. Sometimes, however, it is not clear how to use such techniques to simulate some aspects of intelligence, thus giving rise to the second component, heuristics. In this case, the programmer constructs rules on an ad hoc basis. Often expressed as logical predicates, the rules take the form: if the predicate P is true then execute function F . (Compare this with the explanation of semantics given in Chapter 7.) Both the predicates and the actions can be very complicated. Such rules often operate on lists having no discernible order. In instances when quantification is required, values on completely arbitrary scales are assigned. The only connection that rule-based systems have to physical reality is their author. Thus, intelligence is treated as a disembodied, abstract process. As such, it must be based on a priori choices of symbols representing whatever properties the programmer can detect in the real objects they signify and a loose collection of rules describing relations among the objects. The methods by means of which symbols are created, numerical values assigned, and rules inferred are largely subjective. This often results in inconsistent definitions, arithmetic absurdities and the competition – or even contradiction – of many rules to explain the same phenomenon. 
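The rule form just described, if predicate P is true then execute function F, together with its ad hoc scores and competing rules, can be caricatured in a short sketch. Everything here (the symbols, the predicates, the confidence values) is a hypothetical illustration invented for the example, which is precisely the methodological point:

```python
# A toy rule-based system (hypothetical illustration). Each rule pairs a
# predicate over symbolic "world" data with an action and an arbitrary
# confidence score. Note that the symbols, rules, and scores all originate
# with the programmer rather than with any measurement of reality.

world = {"temperature": 2, "precipitation": True, "season": "winter"}

rules = [
    # (predicate P, action F, score on a completely arbitrary scale)
    (lambda w: w["precipitation"] and w["temperature"] <= 0,
     lambda w: "snow", 0.9),
    (lambda w: w["precipitation"] and w["temperature"] > 0,
     lambda w: "rain", 0.8),
    (lambda w: w["season"] == "winter",
     lambda w: "snow", 0.3),   # competes with the rules above
]

def infer(world, rules):
    # Fire every rule whose predicate holds; resolve the competition
    # among fired rules by taking the highest ad hoc score.
    fired = [(score, action(world))
             for pred, action, score in rules if pred(world)]
    return max(fired)[1] if fired else None

print(infer(world, rules))  # prints "rain": two rules fired, scores broke the tie
```

When two rules disagree, nothing grounds the choice between 0.8 and 0.3; this is the arithmetic arbitrariness and rule competition criticized above.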
The extreme example of this approach is the CYC project of Lenat [177], and it should come as no surprise that after decades of exhaustingly detailed work, this and other such systems have met with little success. 9.3.6 Resurrecting the Program by Unifying the Synchronic and Diachronic At this point it is instructive to compare the components of mind in the different perspectives espoused by Turing and Wiener. The simple chart of Fig. 9.11 suffices for the purpose. The most common interpretation of the ENIAC box (in Fig. 9.1) and the one favored by Turing himself is an implementation in which the machine engages only in abstract thought and communicates with the real world via only a keyboard. However, in the
penultimate paragraph of the 1950 paper Turing [320] suddenly offers an alternative approach from a more nearly Schorskian perspective that is much more aligned with the cybernetic paradigm: We may hope that machines will eventually compete with men in all purely intellectual fields. But which are the best ones to start with? Even this is a difficult decision. Many people think that a very abstract activity, like the playing of chess, would be best. It can also be maintained that it is best to provide the machine with the best sense organs that money can buy and then teach it to understand and speak English. This process could follow the normal teaching of a child. Things would be pointed out and named, etc. Again, I do not know what the right answer is but I think both approaches should be tried.
I argue that although the ahistorical approach could, in principle, succeed – we could guess the Gödel number for mind – it is highly unlikely. This approach ignores one of the most important purposes of intelligence, that of ensuring the survival of the relatively slow, relatively weak, relatively small human animal in a complex and often hostile environment. While it is true that some of the representations necessary for survival could be transmitted genetically or culturally, many critical behaviors are acquired by each individual through long periods of interaction with his environment. In order to acquire sufficient knowledge about his surroundings and the skills to function well within them, that is, to define symbols and build programs, sensorimotor function is required. This is the domain of the cybernetic paradigm, and thus I advocate an approach based on a synthesis of the synchronic and diachronic histories in the spirit of Turing’s alternative.
10 A Speculation on the Prospects for a Science of Mind 10.1 The Parable of the Thermos Bottle: Measurements and Symbols The story is told of a conversation that ensued when a small group of scientists representing different disciplines met over lunch. The discussion wandered politely, becoming intense upon reaching the matter of important unanswered questions. A physicist spoke passionately about the quest for a unified field theory. A biologist reminded his colleagues about the debate over the causes and timing of evolutionary development. A psychologist lamented the confusion over the nature of consciousness. And an anthropologist raised the issue of the environmental and genetic determinants of culture. An AI researcher who had been writhing uncontrollably in his desire to participate in the scientific braggadocio finally managed to insinuate himself into the conversation. “My colleagues and I”, he ventured, “have been seeking to understand the thermos bottle.” When this evinced only quizzical stares from his companions, he became impatient. “Well, look,” he urged, “in the winter you put hot tea into a thermos and it stays hot.” Puzzlement turned to incredulity. With frustration rising in his voice he persisted, “But in the summer you fill it with iced tea and it stays cold.” Bemused silence descended. In total exasperation he cried, “Well. . . how does it know!!?” What some, perhaps, will find deliciously and maliciously funny about this anecdote is its caricature of a far too prevalent proclivity to ascribe all phenomena to cognitive process rather than physical law. Buried beneath the perhaps mean-spirited nature of the parable, there lies a profound insight. Mental activity entails both physics and computation in complementary roles. 
To say, as I did in Chapter 9, that the mind is to be understood by a synthesis of its synchronic and diachronic histories is to recognize that while abstract, symbolic representation of the world is required, reality is presented to us only in the form of continuous, sensorimotor measurement. That is, the symbols, which are the basic elements of the contemporary perspective, derive from the measurements that are the province of the historical viewpoint. Cognition depends on symbolic representation, whereas measurement relies on physical sensors and actuators. Measurements may be transformed into symbols. Unless this transformation is well understood, there
Mathematical Models for Speech Technology. Stephen Levinson 2005 John Wiley & Sons, Ltd ISBN: 0-470-84407-8
can be no hope of formulating a sophisticated theory of mind without which, I argue, it is impossible to develop a useful technology for human–machine communication by voice. Most engineers working on human–machine communication would argue that exactly the reverse of my argument is true. That is, if an advanced technology depends on understanding the mind, then there is little hope of achieving the goal. The mind is properly relegated to the realm of philosophy, and thus it can never be understood in a way that has any scientific basis or technological implications. I utterly reject this point of view. Just as the mathematical models presented in the first eight chapters of this volume are both a theory of language and the basis of speech technology, so can a rigorous theory of mind support a more advanced language processing technology. However, the new science we seek will not evolve from armchair research. Before I offer a concrete, experimental approach to the problem, I am obliged to make a brief digression to consider whether or not it is even sensible to search for a science of mind.
10.2 The Four Questions of Science In order to speculate on how a science of mind might develop, it is helpful to consider other questions addressed by the sciences. As I suggested in the parable of the thermos bottle, science comprises four areas of inquiry that, taken together, completely cover its legitimate domain. Science asks about cosmos, life, mind, and society. As listed, the subjects of science are arranged in order of increasing complexity. That is, life results from the intricate composition of many small, inorganic, physical objects. As living organisms evolve to higher orders of complexity, minds emerge. Finally, societies may be considered to be large organized collections of minds. 10.2.1 Reductionism and Emergence Each level of the scientific hierarchy emerges out of its predecessor in the sense that the science at one level must be distinct from but consistent with that of its predecessors. One might view this constraint as a generalization of reductionism in which successful theories at one level need not be directly expressed in terms of the essential constituents of the preceding levels but cannot result in contradictions of the theories that govern them. Thus, biological life develops from sufficiently complex combinations of inanimate matter, the behavior of which is well described by physics. Yet, it may not be useful to describe biological phenomena in terms of the behavior of the simplest physical objects involved in them. For example, it is possible, in principle, to explain the behavior of large biomolecules directly by solving the Schrödinger equation. Up to the present, this has only been accomplished for simple atoms, but there can be no doubt that the necessary solutions exist even if the techniques required to compute them are presently lacking [134, 131].
It appears, however, that even if such solutions were available to us, it is far more useful to consider the interactions of specific biomolecules such as proteins and amino acids to understand living organisms, remaining secure in the knowledge that these larger objects conform to the laws of physics the detailed expressions of which need not concern biologists. Not only do we obtain more parsimonious descriptions, but also one hopes that the relevant dynamics of biological systems are expressed in terms of a set of state variables that depend in a complicated way on physics but interact with each other in a simple
and essential way for the characterization of biological laws. For example, this effect is evident within physics itself in the case of gases. The kinetic-molecular theory assures us that gases are composed of particles in continuous motion. In principle, the behavior of gases could be determined from the laws of mechanics governing the interactions of point masses. Due to the large numbers of particles involved, a trivial measure of complexity, it is, at best, impractical to study gases from this perspective. However, the state variables of temperature, pressure, and volume provide elegant descriptors of an ideal gas and we can understand, in principle, how they are related to the underlying kinetic-molecular theory without resort to knowledge of the dynamics of individual particles. Unfortunately, no such theories yet exist in biology. The phenomenon of emergence as described above avoids, I believe, some of the difficulties that arise from theories of consilience [57], of which Wilson’s sociobiology [331] is an instance. According to Wilson and his followers, human social behavior is genetically determined. While I am sympathetic to the idea that there are genetic influences in both personal and social interactions, the proposal of a direct genetic encoding of all behavior violates the spirit of emergence. That is, it ignores the effects of the specific, and as yet unknown, psychological and sociological dynamics. However, the principles of emergence do offer the possibility of a rigorous understanding of social behavior with which Wilson might well be satisfied. Emergence also seems to provide a better way to study mind than does the blatantly reductionist theory proposed by Penrose [245], who views mind and consciousness as a result of quantum mechanics. Like consilience, the “quantum mind” skips two stages of emergence. Unlike consilience, there is absolutely no reason to suppose that mind is a quantum effect. 
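The ideal gas example mentioned above can be stated compactly. These are the standard results of the kinetic-molecular theory, quoted here only to illustrate how emergent state variables summarize microscopic dynamics: for $N$ particles of mass $m$ with mean-square speed $\langle v^2 \rangle$ in volume $V$,

```latex
P = \frac{1}{3}\,\frac{N}{V}\,m\,\langle v^{2}\rangle ,
\qquad
\frac{3}{2}\,kT = \frac{1}{2}\,m\,\langle v^{2}\rangle ,
\qquad\text{hence}\qquad
PV = NkT .
```

Pressure and temperature appear only as statistical summaries of particle motion; no individual particle has a temperature, which is the sense in which the macroscopic law emerges from, yet cannot contradict, the microscopic one.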
I propose to elaborate on emergence as it relates to a science of mind in Section 10.3.1. 10.2.2 From Early Intuition to Quantitative Reasoning I raised the issue of sociobiology only for the purpose of contrasting its premises with those of emergence. I do not intend to address sociological questions further. Rather, I would like to address the somewhat less ambitious question of what a sufficiently advanced psychology (i.e. science of mind) would have to be in order to support a technology of natural human–machine communication. To do so, I need to make a digression explaining how I construe science. I subscribe to the doctrine of scientific realism which asserts, first and foremost, that science seeks to discover objective reality. This quest has a long tradition beginning with what Holton [132] calls the “Ionian Enchantment”, Thales’ idea that the world is comprehensible by means of a small number of natural laws. The subjects of these laws are assumed to be absolutely real. Thus, electrons, genes, and mental representations, listed in ascending order on my scale of scientific complexity, are assumed to exist even though they are not directly observable. A contemporary rendering of this theory is given by Margenau [210] whose “constructual plane” shows how observation and reasoning establish that all matter and process is real and there is no place for the supernatural. This does not mean that everything is knowable or that all processes are possible. Prohibitions against perpetual motion or arbitrarily high velocities are not problematic, for example, because they violate established principles. The extent to which the universe is not knowable is knowable. The
incompleteness of a theory is easily recognizable when there is widespread controversy about it. The means by which science seeks explanations of objective reality are well known to every schoolchild. The so-called scientific method proceeds from observation to testable hypothesis leading to experimental evaluation. Although not considered part of the basic method, its motivation springs from curiosity and astute perception leading to early intuition, a critical form of reasoning in which relationships are postulated even though no causal argument can be formulated. The origins of early intuitions will be considered in Section 10.4.2. What is often omitted from considerations of the scientific method is the requirement of quantitative reasoning expressed in a mathematical formalism. Many intellectual pursuits employ reasoning akin to the scientific method but they make no pretense of using mathematics. The relationship between science and mathematics is rarely explored by historians of science who leave the task to practitioners of science such as Hadamard [114], Poincaré [249], and Wigner, who observed: The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift which we neither understand nor deserve. [328]
Dirac elucidated the role of mathematics in science, noting that: The physicist, in his study of natural phenomena, has two methods of making progress: (1) the method of experiment and observation, and (2) the method of mathematical reasoning. The former is just the collection of selected data; the latter enables one to infer results about experiments that have not been performed. There is no logical reason why the second method should be possible at all, but one has found in practice that it does work and meets with remarkable success. This must be ascribed to some mathematical quality in Nature, a quality which the casual observer of Nature would not suspect, but which nevertheless plays an important role in Nature’s scheme. One might describe the mathematical quality in Nature by saying that the universe is so constituted that mathematics is a useful tool in its description. However, recent advances in physical science show that this statement of the case is too trivial . . . The dominating idea in this application of mathematics to physics is that the equations representing the laws of motion should be of a simple form. The whole success of the scheme is due to the fact that equations of simple form do seem to work. The physicist is thus provided with a principle of simplicity which he can use as an instrument of research. [66]
Einstein made an even stronger claim for the role of mathematics in scientific discovery: [A]ny attempt to derive the fundamental laws of mechanics from elementary experience is destined to fail. [77]
After asserting that the axiomatic foundations of physics cannot be inferred from experience but that they can be correctly deduced by mathematical reasoning, he admits the need for experiment to guide the mathematics: [N]ature actualizes the simplest mathematically conceivable ideas. It is my conviction that through purely mathematical construction we can discover these concepts and the necessary
connections between them that furnish the key to understanding the phenomena of nature. Experience can probably suggest the mathematical concepts, but they most certainly cannot be deduced from it. Experience, of course, remains the sole criterion of mathematical concepts’ usefulness for physics. Nevertheless, the real creative principle lies in mathematics. Thus in a certain sense I regard it true that pure thought can grasp reality, as the ancients dreamed. [77]
Although science may have humble origins in curiosity regarding common experiences and intuitive explanations of them, much more is required. Science reaches its maturity only when it expresses laws of nature in mathematics, tests them against experiments based on quantitative measurements and finds them consistent with all other known laws. For example, one is tempted to say that the citation at the beginning of Section 2.5 shows that Plato understood pattern recognition. Upon further reflection, however, it becomes clear that this conclusion can only be reached by reading the ancient text with twenty-first century eyes. Looking back, we interpret the words with respect to our modern theories. But Plato had no way to quantify his ideas. He had no rigorous theory of probability. He did not even have the analytic geometry required to define a feature space. Plato had an early intuition about patterns but no mathematical expression of it and, hence, no science. At this time, only physics is mature. As Hopfield notes, biology is maturing: [B]iology is becoming much more quantitative and integrated with other sciences. Quantification and a physical viewpoint are important in recent biology research – understanding how proteins fold, . . . how contractile proteins generate forces, how patterns can be spontaneously generated by broken symmetry, how DNA sequences coding for different proteins can be arranged into evolutionary trees, how networks of chemical reactions result in “detection”, “amplification”, “decisions”. This list could be a lot longer. [134]
Another important branch of mathematics relevant to biology is developing, the theory of dynamical systems operating far from equilibrium. Life may be seen as a collection of mechanisms preventing the descent into equilibrium or death by homogeneity. Psychology and sociology are in their infancy but there is no reason to believe that they will not someday mature in the same sense that physics has done and biology is now doing. In fact, we have already had a glimpse of the mathematics needed to express the natural laws of biology, psychology, and sociology. The rudiments of a general theory of emergence are beginning to appear. One theme will certainly be that of properties of stochastic processes. Note that we have appealed to such mathematics in Chapters 3–6. General theories of complex systems will also borrow from thermodynamics, statistical mechanics, and information theory such notions as state variables, phase transitions, multivariate nonlinear dynamics – especially those operating far from equilibrium, and complexity measures. I shall return to this point in Section 10.5. 10.2.3 Objections to Mathematical Realism My characterization of science will, no doubt, raise many objections. I cannot completely subdue them but I propose to comment as best I can on the more common ones.
The Objection from the Diversity of the Sciences This argument rests on the proposition that biology, psychology, and sociology are intrinsically different from physics and need not follow the same path to success. The citation from Hopfield casts some doubt on the proposition as it applies to biology. The cases for psychology and sociology are not so easily made. In fact, none other than Wiener has expressed considerable pessimism in this regard. I mention this matter because of the considerable, and I think false, hopes which some of my friends have built for the social efficacy of whatever new ways of thinking this book may contain. They are certain that our control over our material environment has far outgrown our control over our social environment and our understanding thereof. Therefore, they consider that the main task of the future is to extend to the fields of anthropology, of sociology, of economics, the methods of the natural sciences, in the hope of achieving a like measure of success in the social fields. From believing this is necessary, they come to believe it possible. In this, I maintain, they show an excessive optimism, and a misunderstanding of the nature of all scientific achievement. All the great successes in precise science have been made in fields where there is a certain high degree of isolation of the phenomena from the observer. [330]
Wiener invokes the Maxwell demon problem, in which the observer has such a profound effect on the observed that no regularities can be reliably obtained, for the social sciences. While it is impossible to dismiss this argument entirely, it is essentially an argument from pessimism, the antidote to which is given by von Neumann and Morgenstern: It is not that there exists any fundamental reason why mathematics should not be used in economics. The arguments often heard that because of the human element, of the psychological factors etc., or because there is – allegedly – no measurement of important factors, mathematics will find no application, can all be dismissed as utterly mistaken. Almost all of the objections have been made, or might have been made, many centuries ago in fields where mathematics is now the chief instrument of analysis. [324]
It is worthwhile to consider a specific instance of the general argument. The geocentric theory of the solar system was gradually replaced with a heliocentric one by Copernicus, Galileo, and Kepler. It was not, however, until Newton and Leibniz introduced the calculus that the new theory could be completely vindicated. The problem with the geocentric theory was only partially due to its reliance on naive observation. The deeper difficulty was that, as a purely geometric theory, it concerned itself simply with the apparent locations of the sun and planets. There was no explanation of what caused the planets to move. The motions might just as well have been the result of the planets being bolted to some giant clockwork. Until Newton, there was simply no mathematics to analyze quantities that change in time with respect to other quantities and thus no way to explain forces that might hold the planets in their orbits and determine their motions. The Ptolemaic solar system was adequate for timing religious rituals. With its epicycles upon epicycles it was quite accurate in its limited predictive abilities. But it could not support even the most elementary physics. That would have to wait until the completely new calculus vastly extended the existing static geometry. Centuries later, with the invention of several new branches of mathematics, we have come to understand that accurate measurements of time and position are impossible without a mature physics. Today, accurate navigation relies
A Speculation on the Prospects for a Science of Mind
on relativistic corrections in the computation of satellite orbits. It takes a certain lack of imagination to believe that there will be no new mathematics that opens up biology, psychology, and sociology just as the calculus revealed physics.

The Objection from Cartesian Duality

The mind–body dichotomy is usually attributed to Descartes, who believed that humans alone are endowed with both a physical body and an incorporeal soul infused in it by God. The Greek word psyche is, in modern translations, rendered as either mind or soul, indicating some identification of our mental function with the supernatural. In essence, the objection is theological, placing the sacred soul, the seat of the mind, beyond the reach of science. This is, of course, in polar opposition to my definition of scientific realism, and the conflict cannot be resolved. Even if one were to construct a machine indistinguishable in its mental abilities from a human, a believer in dualism would reject it as nothing but a superficial simulation of mind and perhaps a blasphemy.

The Objection from Either Free Will or Determinism

There are two other, contradictory theological arguments that decry any exact science as a challenge to the essence of God. The conflict was framed as early as 1820 by Laplace, who wrote:

An intelligent being who knew for a given instant all the forces by which nature is animated and possessed complete information on the state of matter of which nature consists – providing his mind were powerful enough to analyze these data – could express in the same equations the motion of the largest bodies of the universe and the motion of the smallest atoms. Nothing would be uncertain for him and he would see the future as well as the past at one glance. [172]
If such an idea is odious to the theologians when expressed only as a claim about physics, imagine how blasphemous it would be if extended to include mental activity. A science of mind must be viewed as an assault on the sanctity of the soul and thus as a sacrilegious violation of ethics. But there is a contradictory aspect to determinism. Along with a soul, God has given man free will. If all of his actions are predetermined, then volition is an illusion. We can perhaps escape from the paradox by asserting the existence, as a fundamental property of reality, of thermodynamic and quantum randomness. Thus the vision of Laplace must be amended to include the notion of stochastic process as a model for limitations on the precision of measurements and other phenomena that admit of only a probabilistic explanation. There is some debate about the legitimacy of probabilistic theories in scientific realism. Perhaps the best-known rejection of probability as a valid scientific explanation comes from Einstein:

Quantum mechanics is very impressive. But an inner voice tells me that it is not yet the real thing. The theory produces a good deal but hardly brings us closer to the theory of the Old One. I am at all events convinced that He does not play dice. [75]
Mathematical Models for Speech Technology
Einstein was never able to satisfactorily resolve the issue. Quantum mechanics survives as a highly successful theory, and the mathematical theory of random processes permeates the cybernetic paradigm. Indeed, the several theories of linguistic structure discussed in Chapters 3–6 of this volume are stochastic models. No matter which side of the debate about free will and determinism one chooses, he runs afoul of religious doctrine. Perhaps Einstein has provided the solution with his enlightened faith:

I believe in Spinoza’s God who reveals himself in the orderly harmony of what exists, and not in a God who concerns himself with the fates and actions of human beings. [76]
Dennett [64] has argued that thermodynamic and quantum randomness do not account for free will because, at the level of psychological and social behavior, these random fluctuations have zero mean and, hence, no effect. This does not diminish the value of stochastic models of behavior. It simply says that free will is not a consequence of a stochastic process. We certainly feel as if we have free will. A better explanation of the phenomenon is that it is a consequence of our conscious minds (see Section 10.4). That is, we are able to decide amongst many possible courses of action, and we hold ourselves responsible for making these choices. Thus the world is, if not perfectly deterministic, at least predictable up to some statistically characterizable limits. We consciously use this fact to evaluate the consequences of our actions and make decisions accordingly. The conclusion is that neither free will nor determinism prohibits a science of mind.

The Postmodern Objection

Postmodernism traces its origins to French and German philosophy and is largely a reaction to the discontents of modern society. As a literary device, it is used to portray the confusion and wreckage of our era. In this role it can be quite effective. As a philosophy that denies the possibility of objective truth, insisting instead that truth flows from persuasion and coercive power, it is a strident opponent of scientific realism. Perhaps the postmodern school is an expression of a profound disappointment at our seeming inability to establish a sane society. Perhaps it resulted from a misinterpretation of either the intrinsic limitations on our knowledge of physics and mathematics as described in Section 9.2.4, or the empiricists’ interpretation of science as merely a consistent explanation of reality. In any case, postmodernism mocks those claims it supposes science to be making with the counterclaim that all science is simply a social construction.
Thus it should be no surprise that postmodernism reserves special contempt for definitive sciences of mind and society. The postmodern perspective is so alien to the scientist that it is difficult to imagine that anyone would actually advance such ideas. But these notions have, in fact, been seriously proposed. Latour is a leading advocate of the social construction of science. He begins with disingenuous praise for the goals of science, only to end by devaluing them as outmoded:

We would like science to be free of war and politics. At least we would like to make decisions other than through compromise, drift and uncertainty. We would like to feel that
somewhere, in addition to the chaotic confusion of power relations, there are rational relations . . . surrounded by violence and disputation we would like to see clearings – whether isolated or connected – from which would emerge incontrovertible, effective actions . . . The Enlightenment is about extending these clearings until they cover the world . . . Few people still believe in the advent of the Enlightenment . . . [173]
Next he dismisses the validity of scientific reasoning:

We neither think nor reason. Rather we work on fragile materials – texts, inscriptions, traces, or paints – with other people. These materials are associated or dissociated by courage or effort; they have no meaning, value or coherence outside the narrow [social/political] network that holds them together for a time. [173]
He then goes on to deny the possibility of any universal scientific principle:

Universality exists only “in potentia”. In other words it does not exist unless we are prepared to pay a high price of building and maintaining costly and dangerous [social/political] liaisons. [173]
Aronowitz extends Latour’s notion to mathematics, noting that “[N]either logic nor mathematics escapes the contamination of the social” [7]. In particular, Campbell asserts that the contaminating social influences are capitalism, patriarchy and militarism, saying that “[M]athematics is portrayed as a woman whose nature desires to be the conquered Other” [218]. The postmodern attack on science is disingenuous. Its intellectual veneer is but a subterfuge for its virulent, if bankrupt, political agenda, which abhors anything it regards as authoritarian and exalts all forms of cultural and intellectual relativism. It rejoices in the replacement of the canon of the Great Books with an anti-intellectual eclecticism. What could possibly be more authoritarian than a science that claims objective truth and requires of its practitioners the mastery of an imposing canon? Hence its unbridled derision of science. Politics may be adversarial, but science is definitely not. Unlike a politician, nature does not change her design of the universe when science seems close to understanding aspects of it. Nor are the principles of scientific investigation subject to change even when nature consistently frustrates the efforts of science to discover her secrets. As Einstein once remarked [74]: “Subtle is the Lord but malicious He is not.” Of course, his reference to the Lord is intended to mean the God of Spinoza, as cited above. From the scientific perspective, postmodernism has been completely discredited by the recent Sokal hoax, in which the journal Social Text published, as a serious scholarly paper, a parody entitled “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity” [302]!

Beginning the New Science

I am certain that the answers to a few of the usual objections to my new positivism will not satisfy my critics. I fear that not even the realization of my proposal will accomplish that.
However, I am not at all discouraged by the daunting task of producing the required new science. I am convinced that the ideas expressed above will eventually generate
natural laws that we cannot yet even imagine. I must be content to be a participant in the early phases of development of psychology and sociology. I take comfort in the fact that physics needed more than three centuries to mature and it is not yet a finished product. While I cannot hope to invent the new mathematics that I posit exists, I can offer a working hypothesis, an experimental method, and a means of evaluating a novel constructive theory of mind.
10.3 A Constructive Theory of Mind

Proceeding on the assumption that a mature, quantitative science of mind can be discovered, but painfully aware that neither the mathematics nor the epistemology to support it yet exists, I can only offer a proposal to explore the terrain by means of a constructive theory. I use the term “constructive” not to mean helpful but rather to imply that in the process of building a machine with the desired behavior, an analytical theory of that behavior will become evident. I propose to build a stochastic model of mind with sufficiently many degrees of freedom that its detailed structure can be estimated from measurements and optimized with respect to a fidelity criterion. Then, a post-optimization interpretation of the model will, I predict, yield the desired analytical theory. The Poritz experiment recounted in Section 3.1.7 is, by this definition, a successful constructive theory of broad-category acoustic phonetics and phonotactics. If the original model is severely underdetermined, there is the risk that it will capture artifacts present in the measurements. Fortunately, the real world is characterized by regularities too prominent and too consistent to be accidental. Wherever possible, the model should be constrained to reflect these regularities, just as Poritz did (refer to Section 3.1.7).

10.3.1 Reinterpreting the Strong Theory of AI

As we noted in Section 9.3.2, the preferred demonstration of the strong theory of AI is a direct synthesis of a discrete symbolic model perfectly isomorphic to reality and unaffected by any uncertainty in the measurement of the physical correlates of the putative symbols. This method avers that the sensorimotor periphery may be safely ignored, but it requires the full computational power of the universal Turing machine to be effective.
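The constructive method of Section 10.3, in which a richly parameterized stochastic model is estimated from measurements and an analytical account is then read off its optimized structure, can be illustrated in miniature, in the spirit of the Poritz experiment. The following is my own toy sketch, not code from this book: a two-state discrete hidden Markov model is trained by Baum–Welch on a synthetic symbol stream of long “voiced-like” and short “unvoiced-like” runs, and the learned emission matrix is then inspected to see whether the hidden states have organized themselves around the two broad categories.

```python
# Toy sketch (my own illustration, not the book's code): train a 2-state
# discrete HMM with Baum-Welch and interpret the optimized parameters.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: long runs of symbol 0 ("voiced-like") alternating with
# short runs of symbol 1 ("unvoiced-like").
obs = np.array(([0] * 20 + [1] * 5) * 40)

def baum_welch(obs, n_states=2, n_symbols=2, n_iter=50):
    A = rng.dirichlet(np.ones(n_states), size=n_states)   # transition matrix
    B = rng.dirichlet(np.ones(n_symbols), size=n_states)  # emission matrix
    pi = np.full(n_states, 1.0 / n_states)
    T = len(obs)
    for _ in range(n_iter):
        # Scaled forward pass
        alpha = np.zeros((T, n_states))
        c = np.zeros(T)
        alpha[0] = pi * B[:, obs[0]]
        c[0] = alpha[0].sum(); alpha[0] /= c[0]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
            c[t] = alpha[t].sum(); alpha[t] /= c[t]
        # Scaled backward pass
        beta = np.zeros((T, n_states))
        beta[-1] = 1.0
        for t in range(T - 2, -1, -1):
            beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / c[t + 1]
        # State posteriors
        gamma = alpha * beta
        gamma /= gamma.sum(axis=1, keepdims=True)
        # Expected transition counts
        xi = np.zeros((n_states, n_states))
        for t in range(T - 1):
            x = alpha[t][:, None] * A * (B[:, obs[t + 1]] * beta[t + 1])[None, :]
            xi += x / x.sum()
        # Re-estimation
        A = xi / xi.sum(axis=1, keepdims=True)
        for k in range(n_symbols):
            B[:, k] = gamma[obs == k].sum(axis=0)
        B /= B.sum(axis=1, keepdims=True)
        pi = gamma[0]
    return A, B

A, B = baum_welch(obs)
# Post-optimization interpretation: with luck, each state now emits one
# symbol almost exclusively, i.e. the hidden states have recovered the
# two broad categories present in the data.
print(np.round(B, 2))
```

With strongly structured data like this, the learned emission matrix typically concentrates each state on one symbol; reading that structure off the optimized model is precisely the kind of post-optimization interpretation the constructive theory calls for.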
In contrast, Turing’s alternative as described in Section 9.3.6 uses the power of the universal machine only to simulate the physical processes from which the symbols, defined by probability distributions, derive. Thus cognitive function emerges from the solution to the presently unknown equations of motion underlying the mechanisms of mind. This will be the legitimate solution to the problem of the thermos bottle.

10.3.2 Generalizing the Turing Test

As noted in Section 9.2, Turing proposed that an artificial intelligence could, in principle, be evaluated experimentally. Taking an agnostic position on what mind really is, he suggested that the requirement should be only that the behavior of the machine be indistinguishable from that of a human by a human judge. This method of discrimination has come to be known as the “Turing test”. Turing envisioned that the “imitation game” would be conducted in the domain of abstract mental activity and communication would
be only via a teletypewriter. If, however, we are to follow Turing’s alternative discussed in Section 9.3.6, then the test must be appropriately modified. Turing’s alternative advocates connecting the machine to the real world via sense organs, and in Section 10.4 I shall propose augmenting the sensory function with a motor function. This implies a mechanical mind that is no longer restricted to engage in abstract thought alone. Rather, the machine is embodied, interactive, and adaptive. Embodiment allows for the symbols to be grounded in perception, locomotion, proprioception, and manipulation. Interaction suggests that the machine will continually both respond to and cause changes in a real physical environment. Adaptation refers to the ability of the machine to alter its perceptions and actions in response to observed changes in its environment in order to make its behavior as successful as possible. Under Turing’s alternative, the criterion for winning the imitation game must be modified accordingly. I propose that the criterion be one of interesting behavior in which cognitive process is evident. In particular, the acquisition and subsequent use of spoken language should be considered essential. By itself, language acquisition is not sufficient to conclude that there is a mind supporting it. However, if the experiment is conducted properly, the machine can be carefully instrumented, thereby allowing for the observation of internal changes corresponding to the development of significant and identifiable mental states. Thus, the generalized Turing test should evaluate both observable and internal behavior. I will return to this idea in Section 10.5.
10.4 The Problem of Consciousness

No serious treatment of a science of mind can long avoid the problem of consciousness, so before proceeding on to my proposed experiment based on Turing’s alternative, I am obliged to comment on it. The literature on consciousness has a long history and is far too vast to even survey here. There are six recent books on the subject that give, at least, a modern perspective. Readers who are intrigued by this subject may wish to indulge in works by Damasio [56], Edelman and Tononi [73], McGinn [216], Tomasello [315], Fodor [92], and Penrose [245]. Collectively they comprise some 1300 pages and present quite different although related perspectives, including neurophysiology, connectionism, robotics, philosophy, psychology, and physics. For reasons that will become clear, I propose to summarize the problems in a few brief paragraphs. Scholars differ on the definition of consciousness. There are several explanations based on information-theoretic models of perception, pattern recognition, sensory fusion, development and identification of mental states, attention, volition, and the awake–asleep distinction. Most philosophers agree that these concepts are closely related to consciousness but are not its essence. The crux of the issue is the experience, that is, the visceral feeling of our contacts with physical reality. There is, philosophers insist, a difference between function and experience. Thus, when a stimulus impinges on our sensory organs and we feel the sensation of color, sound, smell, or touch, something more than simple information processing is occurring. Moreover, it will be impossible to comprehend mind, let alone simulate it, without accounting for experience as a consequence of function and separate from it.
In fact one of the best-known arguments against the symbolic computation theory of AI is due to Searle [293], who opines that a symbol processor, no matter how sophisticated it might appear, can never be conscious in the experiential sense.
Such a machine would be, to use the vernacular, a zombie, going through the motions of everyday activity without conscious awareness. If one accepts this definition of consciousness and its implications for a theory of mind, one has few alternatives. One can become embroiled in questions of duality. One can either deny the problem or label it beyond our capacity to solve. Or one can postulate the existence of some additional natural process to explain it. The difficulty with these alternatives is that they all spring directly from the armchair. As such, they are the result of the same fallacy that plagued the symbolic computation approach to AI. The definition of consciousness is based on an informal description of a subjective process analogous to the introspective determination of the symbolic representations of reality. It is also an instance of the thermos bottle problem in which it is asked not “How does it know?” but rather “How does it experience?” while, of course, ignoring the underlying objective aspects of mind. Because of the intrinsic subjectivity of the conventional definition, there is no way to test its validity and, hence, no possible resolution of the issue. There is a simple three-part answer to all of these questions. First, consciousness is nothing other than the mind’s self-awareness. Regardless of the computations the mind is performing, it has a symbolic representation for itself that can enter into any or all of them. Second, experience is epiphenomenal. As such, it is an observable but irrelevant artifact of the particular machine in which the mind is implemented. According to my doctrine of functional equivalence, different machines will produce different, but still intelligent, behavior and will have different physical manifestations of experience. In the human and probably higher animals, the feeling of experience is a by-product of the electrochemical signals that constitute thought. 
Third, experience, like the mind that generates it, is emergent. When signals that mediate the complexity of an appropriately organized physical mechanism are great enough, the conscious mind emerges from them. Jaynes [145] argues that the emergence of mind was a behavior learned socially millennia ago that has been culturally transmitted since then. Jaynes argues further that the emergence was marked by the recognition that the internally audible thought process was actually one’s own and did not originate outside one’s body. This idea was the focus of acrimonious controversy. Because Jaynes made the tactical error of placing a date on the initial emergence of consciousness based on a literary analysis, he was criticized on historical grounds. Unfortunately, the issues of timing completely overshadowed the intriguing proposal that consciousness was acquired. I find this a fascinating conjecture because of its compatibility with the embodied, interactive, adaptive, emergent mind of the cybernetic paradigm. The problem then, as I argued in Section 9.3.6, is to find those quantities and the relationships among them that form the basis for mind.
10.5 The Role of Sensorimotor Function, Associative Memory and Reinforcement Learning in Automatic Acquisition of Spoken Language by an Autonomous Robot

It is tempting to say that I was inspired to undertake this research simply as a result of reading Turing’s 1950 paper. Although I actually read it in an undergraduate psychology course, the original motivation is much more mundane. The methodology I am advocating here is based on a few early intuitions which arose from the difficulties I encountered in my research on speech recognition based on the material in Chapters 2–8. What I
and many of my colleagues observed was that the signals we were analyzing seemed to have huge variabilities. Yet humans perceive these very signals as invariant. Humans rarely misunderstand a sentence or misclassify a visual scene, even a moving one. The obvious conclusion is that humans and machines are using quite different pattern recognition techniques based on quite different learning mechanisms. Machines rely on statistical models optimized with respect to preclassified data. The models are fixed upon completion of training, whereas humans optimize performance and are continuously adapting their strategies. Machines can achieve useful levels of performance only on artificially constrained tasks, whereas humans must achieve successful behavior in a world constrained only by the laws of nature. Therefore, the first intuition is that it may well be useful to try to simulate these aspects of human behavior. These early intuitions motivating my experiments were first outlined in [184]. The following six short sections explain my working hypothesis about mind and language. Section 10.5.7 then describes my experimental method for developing and testing my hypothesis.

10.5.1 Embodied Mind from Integrated Sensorimotor Function

The experiment described in Section 10.5.7 is predicated on the assertion that there is no such thing as a disembodied mind. While the mind is representational, and thought processes are well described as computational operations on abstract symbolic representations, no such symbolic representations can arise without the sensorimotor function of the body. Theories of “the embodied mind” have existed since Turing himself proposed the idea in his seminal 1950 paper, which described the first mathematically rigorous computational model of intelligence. More recently, Johnson [151], Lakoff and Johnson [167], and Jackendoff [141, 142] have given comprehensive treatments of the idea from the perspective of psycholinguistics.
There have been other speculations on this subject, but they have not been in the mainstream of AI. More importantly, there has been very little experimental work on the “embodied mind”. One research project that clearly recognizes the importance of combined sensorimotor function in intelligent behavior is the COG project of Rodney Brooks at MIT [83]. There is also significant support for the importance of integrated sensorimotor abilities from the psycholinguistic studies of Tanenhaus et al. [311, 312], showing the relationship of vision and language, and from new work in neurophysiology, surveyed by Barinaga [23], demonstrating the existence of neural pathways from the motor areas of the brain to the cognitive areas. The importance of the integration of all sensory and motor signals is revealed by thinking about perception and pattern recognition. If one examines the signals from individual sensory modalities independently, they appear to have large variances and hence overlapping distributions, leading to large classification error probabilities. This is exactly because the signals have been projected onto a space of lower dimension from the space of higher dimension in which the integration of all sensorimotor modalities is properly represented. In that higher-dimensional space of much larger volume, the signals are widely separated and robust classification can be achieved.

10.5.2 Associative Memory as the Basis for Thought

The mind is an associative pattern recognition engine that measures the proximity of one signal to another. For general pattern recognition, proximity is a measure of similarity.
For vision, proximity implies continuity or connection. For reasoning, proximity leads to causality and prediction. I submit that the primary mechanism of cognition is an associative memory that has the following properties. First, it must be able to relate input stimuli to desirable behavior. Second, the contents of the memory must be reliably retrieved when some stimuli are missing or corrupted. Third, the memory must be content addressable so that the presentation of any single stimulus will evoke all the related stimuli and their associated responses. The associative memory is capable of storing representations of stimuli of complex structure, thereby allowing for fusion of sensory modalities and motor functions and the recognition of intricate sequences thereof. The latter is particularly important for the grammar (i.e. phonology, morphology, and syntax) of language, although other complex memories may also be encoded in the same manner. However, rules governing the formation of such sequences are merely the code in which memories are represented. They serve two particular purposes, namely, to afford the memories some immunity to corruption and to make semantic processing robust and efficient. There are many ways to implement an associative memory with the desired properties. The simplest is the nonparametric maximum likelihood pattern recognition algorithm, discussed in Section 2.5. According to this model, the extracted sensory features are simply time-stamped and concatenated in a long vector. This method has the advantage of computational simplicity but suffers from its inability to represent relationships among stimuli. The preferred approach is to use a stochastic model that captures both probability distributions of stimuli and their underlying structure. 
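Even the simplest implementation mentioned above, time-stamped features concatenated into one long vector and matched by a nearest-neighbor rule, already exhibits the second and third properties: retrieval from partial or corrupted cues, and content addressability. The sketch below is my own toy illustration; the feature vectors and the missing-component convention are hypothetical, not from the book.

```python
# Toy content-addressable memory (my own illustration): stored items are
# concatenated feature vectors; retrieval matches only the components the
# query provides (None = missing stimulus), so a partial or corrupted cue
# still evokes the complete stored pattern.
def recall(memory, query):
    def dist(item):
        # Compare only on the components present in the query.
        return sum((q - x) ** 2 for q, x in zip(query, item) if q is not None)
    return min(memory, key=dist)

memory = [
    (1.0, 0.0, 3.0, 2.0),   # e.g. [audio | vision] features of one event
    (0.0, 1.0, 0.0, 5.0),   # features of another event
]

# A query with the "vision" half missing still retrieves the full event.
print(recall(memory, (0.9, 0.1, None, None)))
```

Presenting any single stimulus thus evokes the whole stored pattern, which is the content-addressability property required above; what this toy cannot do is represent relationships among the stimuli.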
Any of the mathematical models discussed in Chapter 3 are suitable for this purpose, but the most directly applicable is the hidden Markov model, in which the observation densities are used to capture the statistics of the sensory data and the hidden state transitions become associated with structure. Details of the implementation are given in Section 10.5.7.

10.5.3 Reinforcement Learning via Interaction with Physical Reality

The content of the associative memory is acquired by reinforcement. I define “reinforcement” to mean that the probabilities of forming associations amongst stimuli and responses are increased when the responses are useful or successful and decreased otherwise. Initially, all associations except for a few instincts have probability zero. These probabilities can be changed based on perceived reinforcement signals. Reinforcement is provided in three ways. First, in the absence of any outside stimulus, the robot goes into autonomous exploration mode. In this mode, it moves randomly about the environment, scanning for events of interest. The pace of its motions is governed by a timer simulating attention span. While scanning, the robot compares its sensory inputs to memory. When good matches are found, the associated actions are carried out and the memory updated as required. If no relevant memories are retrieved, the robot looks for high correlations amongst its sensory inputs and stores the corresponding events in memory. The stability of the robot should allow this mode to continue safely for indefinite periods. This operation, which I call the cognitive cycle, is shown in Figure 10.1. The most important mode of reinforcement is that of interactive instruction by a teacher. In this mode, the instructor will give explicit hard-wired reinforcement signals to the robot. Following the procedure established in the earlier pilot experiments, the instructor will
Figure 10.1 The basic cognitive cycle. (Block diagram relating the outside world, sensory/motor inputs, processed inputs, reflexes and instincts, motor rotations and positions, the associative memory with its labeled inputs and associations, a central decision maker, action decisions, and output to actuators and voice.)
give positive reinforcement by lightly flexing the robot’s whiskers and negative feedback by preventing the motion in progress, thus stalling one or more of its actuators. Reinforcement of this type can be given while the robot is in autonomous exploration mode; however, its primary purpose is for language learning. Used this way, the instructor will initiate a sequence of verbal inputs, responses, and reinforcements. The robot’s responses will be drawn from memory if an appropriate one exists. Failing that, a response will be generated at random. Correct setting of audio interest instincts and span of attention will facilitate this behavior. The third form of reinforcement is direct demonstration. In this mode, the instructor will overhaul one or more of the robot’s actuators, causing it to make some desired motion. The robot will record the sequence of positions and associate them with other sensory stimuli present during the operation, including speech. The intent is for the robot to learn verbs. For example, the instructor could turn the steering motors to the left while saying “turn left”. Another example would be to turn the robot toward a red object while saying the word “red”. It is of utmost importance to distinguish this process from simple Skinnerian behaviorism. Although the robot is trained based on stimuli and responses to them, the mapping is not a simple one. The memory is symbolic, representational, and capable of learning complex relationships such as those needed to acquire and use language. Once the mechanics of reinforcement are established, the main work of the project, the learning experiments, can begin. It should be obvious that significant ambiguity is present in the training as described above. Of course, we do not expect the robot to learn from single examples as humans often do. At least initially, instruction of the robot will require careful selection of stimuli and will result in slow progress. 
We expect that order of presentation will be important and we plan to experiment to see which sequences result in the most versatile, stable behavior.
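The reinforcement regime defined above, with association probabilities that start at zero except for a few instincts, rise on positive reinforcement (a whisker flex), fall on negative reinforcement (a stalled actuator), and yield random responses when memory offers none, might be caricatured as follows. All names, the learning rate, and the additive update rule are my own hypothetical choices, not the book's implementation.

```python
# Toy sketch (my own, not the book's code): association strengths between
# a stimulus and candidate responses, nudged up on positive reinforcement
# and down on negative reinforcement.
import random

class AssociationTable:
    def __init__(self, responses, instincts=None, lr=0.2):
        # All associations start at zero except a few "instincts".
        self.p = {r: 0.0 for r in responses}
        self.p.update(instincts or {})
        self.lr = lr

    def respond(self):
        # Draw from memory if any association is established;
        # otherwise respond at random, as in the training regime above.
        total = sum(self.p.values())
        if total == 0:
            return random.choice(list(self.p))
        x, acc = random.uniform(0, total), 0.0
        for r, w in self.p.items():
            acc += w
            if x <= acc:
                return r
        return r  # guard against floating-point round-off

    def reinforce(self, response, positive):
        # Positive reinforcement raises the association probability;
        # negative reinforcement lowers it, clamped to [0, 1].
        delta = self.lr if positive else -self.lr
        self.p[response] = min(1.0, max(0.0, self.p[response] + delta))

table = AssociationTable(["turn_left", "turn_right", "stop"])
for _ in range(5):
    table.reinforce("turn_left", positive=True)  # teacher rewards "turn left"
print(table.p["turn_left"])
```

A real implementation would of course condition the table on the stimulus and store structured associations rather than a flat list of responses; the point here is only the shape of the reinforcement update.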
10.5.4 Semantics as Sensorimotor Memory

Semantics is exactly the memorization of the correlation of sensorimotor stimuli of different modalities. A sufficiently large collection of such memories constitutes a mental model of the world. In particular, language is acquired by memorizing the associations between the acoustic stimulus we call speech and other sensorimotor stimuli. Once acquired, language enables the symbolic manipulation of most, but not all, of the mental model. The semantics of language is thus a symbolic representation of reality. However, the symbols and the relations among them are not predetermined by a human creator but, rather, acquired and memorized in the course of interaction with the surroundings. The reinforcement regime will cause the contents of memory to converge to a configuration in which the acquired symbols reflect the observed regularities in the environment. It is important to note that sensorimotor function forms the basis for much of semantics. In addition to concepts such as color, which can only be understood visually, temperature and pain, which are essentially tactile, force, which is haptic, and time, which can only be understood spatially, the meanings of many common words are derived from morphemes for direction or location concatenated with morphemes denoting specific physical actions. Such words can only be understood by direct appeal to a well-developed spatial sense and motor skills. As the associative memory grows, words formed in this manner can be associated with mental activities, yielding meanings for abstract operations, objects, and qualities. For example, the sensations of force and balance, which are first defined in terms of their sensorimotor correlates, can later be used in a nonphysical sense to mean “persuade” or “compare”, respectively. Obviously, the abstract words acquire their meanings only by analogy with their primary sensorimotor definitions.
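The claim that semantics is memorized cross-modal correlation can be caricatured with a toy word-grounding scheme: co-occurrence counts between a heard word and whatever sensor features were active at the time, as in the “red” example of the preceding section. The feature names and the counting rule below are my own hypothetical illustration, not the book's implementation.

```python
# Toy cross-modal grounding (my own illustration): count co-occurrences
# between heard words and concurrently active sensor features; after a few
# paired presentations, the word "red" is most strongly associated with
# the visual red feature.
from collections import Counter, defaultdict

counts = defaultdict(Counter)

def observe(word, sensor_features):
    # sensor_features: names of features active while the word was heard.
    for f in sensor_features:
        counts[word][f] += 1

def meaning(word):
    # The acquired "meaning" is the most frequently co-occurring feature.
    return counts[word].most_common(1)[0][0]

observe("red", ["vision:red", "motor:turning"])   # robot turned toward a red object
observe("red", ["vision:red", "vision:bright"])
observe("red", ["vision:red"])
print(meaning("red"))
```

Incidental features such as the turning motion appear in some presentations but not all, so repeated exposure lets the genuinely correlated feature dominate, a crude analogue of the convergence of memory to the regularities of the environment described above.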
Regardless of how the memory is implemented and trained, an important aspect of these experiments is to take frequent snapshots of it in order to analyze the development of memory associated with specific functions. Such data could serve as valuable diagnostics for the training procedure. I expect that as training progresses and the robot’s behavior becomes more interesting, I might be able to identify the emergence of concept-like constructs.

10.5.5 The Primacy of Semantics in Linguistic Structure

Modern linguistics is dominated by the generative paradigm of Chomsky [45], which views grammar (i.e. acoustic phonetics, phonology, phonotactics, morphology, prosody, and syntax) as the core of language. Grammar is considered to be a complex system of deterministic rules, unlike the probabilistic models studied in this book. In a now classic example (see Section 8.2), Chomsky considers the sentence “Colorless green ideas sleep furiously”, which is grammatically well formed but semantically anomalous. This sentence and the ungrammatical word sequence “Ideas colorless sleep furiously green” are both said to have zero probability of occurrence and thus cannot be distinguished on that basis. What must, therefore, be important is the grammatical structure, or lack thereof, of the two sequences. It is this phenomenon to which traditional linguistics attends.

My working hypothesis is inherently non-Chomskian in its characterization of language. As defined in Section 10.5.3, successful behavior is the goal of reinforcement learning. Successful behavior requires intelligence, which is just the procedure for extracting meaning from the environment and communicating meaningful messages. Thus language is a critical aspect of intelligence, and semantics is the primary component of language. All
A Speculation on the Prospects for a Science of Mind
235
other aspects of linguistic structure are simply the mechanisms for encoding meaning and are present in service to the primary function of language: to convey meaning and to make it robust in the presence of ambiguity.

10.5.6 Thought as Linguistic Manipulation of Mental Representations of Reality

It is important to emphasize that the primary mechanism of thought is mnemonic, not logical. This implies that the full computational power of the universal Turing machine is not required. The universal machine is used to implement the “fetch”, “store”, and “compare” operations of the associative memory. It is true that humans learn to reason, and this ability can contribute to successful behavior. However, logical reasoning is a very thin appliqué learned later in life, used only infrequently and then with great difficulty. It is not a native operation of the mind and, even when learned, it is based on memory. Humans simulate the formal logic of computers by analogic reasoning in which the unknown behavior of an object or organism is deduced from the memorized behavior of a similar object or organism. Such reasoning is error-prone. But, even in competent adults, most cognitive function derives from associative memory. Memory is built up from instincts by the reinforcement of successful behavior in the real world at large. As a cognitive model of reality is formed using appropriate computational mechanisms, a structure-preserving linguistic image of it is formed. When the language is fully acquired, most mental processes are mediated linguistically, and we appear to think in our native language, which we hear as our mind’s voice.

10.5.7 Illy the Autonomous Robot

The experimental vehicle with which the ideas outlined above are explored is a quasi-anthropomorphic, autonomous robot, affectionately named Illy by her creators. However, before I undertook to build Illy, I did a pilot study with a simple device.
In order to keep the engineering problems to a minimum, I used a child’s toy called “Petster”, a battery-operated platform in the form of a cat. Locomotion is provided by two wheels, each turned independently by a motor. The cat has photosensors, not cameras, for eyes, microphones in its ears, a piezoelectric sonar device on its collar, a loudspeaker in its head, and an RS232 communication port. Most important, however, are the microswitch in its tail, a touch sensor on the back of its neck, and accelerometers on the motors. By connecting the port to a computer, one can send instructions to the on-board microcontroller to make the motors work and generate five different sounds through the speaker. The cat was connected to a computer equipped with a speech recognizer with a 2000-word vocabulary. That is, it had acoustic phonetic models for 2000 words but did not have a syntactic or semantic model. I built an associative memory using an information-theoretic multilayer perceptron described by Gorin [124] and used it to relate speech and other stimuli to actions. I trained the cat using the following reinforcement regime. I spoke to the machine in isolated words or short phrases. If I used words the recognizer knew, the computer would send a corresponding code to the cat, which would then perform some action. If the action were appropriate, I signaled by patting the cat on the neck, and the perceptron weights would be changed accordingly. If the action were wrong, I signaled either by pulling the cat’s tail or by preventing it from moving, thus stalling out the motors. The activity would then stop, but no change would be made in the memory. After about five minutes of patient
training, the cat could learn a number of words and phrases which would cause it to move forward, backward, left, or right, to stop, and to make a specific one of five available sounds (e.g. purr, meow, hiss). I used any words or phrases I chose and various ways to express the same command. Despite this variation, the learning behavior was repeatable. Details of the memory architecture and training algorithms are given in Henis and Levinson [124]. Obviously, this was not a very sophisticated experiment. It did, however, show that lexical semantics and a simplistic word-order syntax could be acquired from virtually unconstrained speech in a real-time, on-line reinforcement training scheme when phonetics, phonology, and phonotactics are prespecified. The experiment also provided a better appreciation of the theoretical issues and practical difficulties that would be encountered in advancing this kind of investigation. Based on Petster, a more precise expression of the fundamental ideas was formulated [186, 187]. The Petster device, however, was inadequate because it had neither a visual sense nor arms and hands. Therefore, a new robot and its control software were built in my laboratory.

Illy

The new machine, Illy (see Fig. 10.2), is based on the Trilobot platform manufactured by the Arrick Robotics company. My students and I have added audio [194, 195] and video [341, 342, 343] capabilities to the platform and have connected it to a network of small computers and workstations with large memories [186]. The robot communicates with the network via wireless ethernet [186]. I mounted two electret microphones and two small color-video cameras on the robot’s movable head to provide for binaural hearing and binocular vision. The robot also has a single arm and hand with two degrees of freedom, allowing for shoulder and thumb movement and enabling both lifting and gripping. A simple microprocessor and some auxiliary circuits govern all sensory and control functions [186].
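The Petster training regime described above can be sketched in a few lines. This is a hypothetical toy reconstruction, not Gorin’s information-theoretic perceptron: the action list, the tabular weights, and the training loop are illustrative assumptions. The one property it does preserve from the account above is that memory changes only on reward; a tail pull or a stalled motor merely stops the activity and leaves the weights untouched.

```python
import random
from collections import defaultdict

class RewardOnlyLearner:
    """Sketch of the Petster-style regime: the mapping from recognized
    words to actions is adjusted only when the trainer rewards the
    action; wrong actions stop the activity but leave memory unchanged."""

    ACTIONS = ["forward", "backward", "left", "right", "stop", "purr"]

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.weights = defaultdict(float)  # (word, action) -> learned strength

    def act(self, word):
        # Choose the strongest learned action; explore randomly if nothing is learned yet.
        best, action = max((self.weights[(word, a)], a) for a in self.ACTIONS)
        return action if best > 0 else self.rng.choice(self.ACTIONS)

    def train(self, word, correct_action, trials=200):
        for _ in range(trials):
            action = self.act(word)
            if action == correct_action:          # trainer pats the neck: reinforce
                self.weights[(word, action)] += 1.0
            # tail pull / stalled motors: no memory change, just try again

learner = RewardOnlyLearner()
learner.train("go", "forward")
print(learner.act("go"))  # "forward" once the association is reinforced
```

Once a rewarded association exists it dominates subsequent choices, which is the repeatable learning behavior reported for the real device, in miniature.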
Figure 10.2  Three versions of Illy: Illy-I (center), Illy-II (left), and Illy-III (right)

A Pentium PC with video, audio, and ethernet cards is mounted on-board. To meet the significantly larger power budget of the added hardware, the original power supply was replaced by a 12-volt motorcycle battery. Illy is equipped with 14 sensors, including an array of touch-sensitive whiskers, a compass, a tilt sensor, a thermometer, an odometer, a battery charge sensor, a tachometer, and a steering
angle indicator. There are two motors, one for each wheel, together providing both steering and locomotion. The status of all of these instruments is transmitted to the network via the radio link. There are provisions for several other user-provided control signals, including a standard RC servomotor, and ample channel capacity to communicate them along with all the other data. The on-board control system accepts commands to operate the motors and read the sensors in the form of a simple alphanumeric machine language. Instructions are transmitted to the controller via the radio link. In a similar fashion, codes indicating the results of the instructions and the status of the controller are returned to the network. Instructions may be combined to form programs.

Software

The control system for Illy is a distributed programming environment in which processes reside transparently on any of the networked computers or the on-board computer. The system allows for real-time, on-line operation at a rate of three complete executions of the cognitive cycle of Fig. 10.1 per second. It is robust enough to allow for long periods of reliable operation in the autonomous exploration mode, which is essential for learning to take place. The main program in this framework is called IServer. For a particular data stream, IServer sets up a ring buffer in shared memory. The top left of Fig. 10.3 shows an example of a source process, which reads audio data from the sound card and writes it to the ring buffer, and a sink process for sound source localization, which accesses the audio data and uses it to determine the direction from which a sound is coming. Because of the demanding requirements of the input processing and the limited computing power available on Illy, much of the processing must take place on the networked computers.
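The IServer ring buffer can be sketched as follows. This is a minimal single-process sketch, substituting Python threading locks for the actual shared-memory semaphores; the class and field names are invented for illustration, but the per-segment locking (a writer never collides with a reader) and the segment header carrying a byte count, time stamp, and sequence number mirror the scheme used by IServer.

```python
import threading

class RingBuffer:
    """Sketch of an IServer-style ring buffer: a fixed number of
    segments, each protected so a writer never overwrites a segment
    being read and a reader never sees a half-written one."""

    def __init__(self, num_segments=8):
        self.segments = [None] * num_segments
        self.locks = [threading.Lock() for _ in range(num_segments)]
        self.seq = 0

    def write(self, data, timestamp):
        i = self.seq % len(self.segments)
        with self.locks[i]:                    # blocks while a reader holds this segment
            self.segments[i] = {"bytes": len(data), "time": timestamp,
                                "seq": self.seq, "data": data}
        self.seq += 1

    def read(self, seq):
        i = seq % len(self.segments)
        with self.locks[i]:
            seg = self.segments[i]
            # None means the requested frame was never written or has been overwritten
            return seg if seg and seg["seq"] == seq else None

buf = RingBuffer()
buf.write(b"audio-frame-0", timestamp=0.0)
frame = buf.read(0)
print(frame["data"])  # b'audio-frame-0'
```

A networked sink, in this sketch, would simply be another reader that forwards each segment over a socket to a corresponding source process on a remote machine.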
To support this, the IServer program includes a special sink process whose sole purpose is to take the data in the ring buffer and send it across the network. On another machine, a corresponding source process receives this data and writes it to the ring buffer on its machine. A sink process on this other machine then accesses the data in exactly the same manner as if it were on the original machine. The lower right part of Fig. 10.3 demonstrates this process, again using audio processing as an example. In this case, the sound source localization program is running on Illy and accesses audio from the ring buffer as before. A speech recognition program is running on another machine and needs access to the same audio data. For this to happen, an audio server running on Illy takes data from the ring buffer and sends it to the audio source process, which writes it to its ring buffer just like any other source of audio data. The speech recognition program reads the data in the same manner as before. The ring buffer on a networked machine may also have an audio server which sends the audio data to other machines. The ring buffer is divided into segments, the total number and size of which depend on the data type. Each segment is protected by a locking semaphore, so that a source process will not write to any block that is being read from, and a sink process will not read from a block that is being written to. Each segment of data includes a generic header specifying the byte count, a time stamp, and a sequence number. This system, along with some general distributed shared memory and a separate server which manages connections among machines, forms the basis for our software framework,
Figure 10.3  The IServer distributed computing architecture (on-board part, top left; remote part, bottom right)
upon which the aforementioned cognitive model is implemented, including a central controller and a common centralized associative memory. The system has allowed us to take many disparate components and make them work together in a seamless fashion.

Associative Memory Architecture

The associative memory, the function of which is described in Section 10.5.2, is designed as follows. There is a separate group of HMMs for each modality – a set of HMMs for auditory inputs, a set of HMMs for visual inputs, and so on. The states of these HMMs are used as the inputs to the next layer of HMMs. The structure is illustrated in Fig. 10.4. The benefits of this arrangement are that the initial layer of input processing can be reused by multiple models on the next level, and that each level of states has a particular meaning. In the example in Fig. 10.4, the states of the auditory HMMs are used by both the auditory-tactile HMM and the audio-visual HMM. The state of the audio-visual HMM might represent the simultaneous stimuli of the word “apple” and the image of an apple. Temporal sequences of the states of the HMMs are used as inputs to higher-level HMMs for more complex recognition. For example, we use sequences of phonemes or allophones as inputs to the multi-modality models at higher levels.

Performance

The autonomous exploration mode depends for its operation on the following programmed instinctual behaviors. First, Illy is irritable; that is, she will always make some response to
Figure 10.4  Associative memory architecture
every stimulus. Illy gets hungry when her battery is nearly drained; when that condition is detected she will sound an alarm and seek the nearest human to help her. Illy has some self-defense behaviors: she avoids collisions with large objects but will delay for a short time before initiating evasive action. She has a preference for brightly colored objects, medium-intensity wideband noises, and localized rapid motions. She avoids obstacles in her path of motion, dark shadows, and high temperatures. Illy has a sleep instinct in which she is physically inactive but executes a clustering program to compress and combine data stored in memory; this idea is based on an interpretation of Robins and McCallum [275]. Finally, Illy has an instinct to imitate both speech and gestures, including arm, hand, and head motions. Some of the more complex behaviors we have been able to demonstrate include sound source localization [194], object recognition and manipulation [342], navigation in response to voice command [202], visual object detection, identification, and location [343], visual navigation of a maze [197], and spoken word and phrase imitation [158], all of which, except for acoustic source localization, are learned behaviors. Illy is shown running a maze in Fig. 10.5. While these functions hardly qualify Illy as a sentient being, I regard them as a firm foundation on which to support far richer cognitive activity. I anticipate the time when Illy will do something surprising and make a good showing on the generalized Turing test outlined in Section 10.3.2.

Obstacles to the Program

I am sure that critics can offer many possible reasons for what they foresee as an inevitable failure of the experiment described above. There are several aspects of my
Figure 10.5 Illy-I runs a maze
working hypothesis that some consider suspect. My theory of functional equivalence is open to objection from those who believe that only biological brains, or at the very least their mathematical isomorphs, can give rise to mind. My notions of associative memory, reinforcement, and semantics may also raise some eyebrows. I regard all of these criticisms as the same in the sense that they all assert that some essential part of my hypothesis is inadequate or absolutely incorrect. I cannot provide any better support for my ideas than I have already done. I cannot refute any competing theories any more than I have already done. However, such armchair debates will never resolve the differences of opinion, and I see no purpose in prolonging them. There is a particular class of objections, however, that concerns me deeply. These are problems that would result from a usefully correct working hypothesis but an incomplete or inadequate experimental expression of it. I can imagine four such catastrophic errors. The first pitfall might result from a gross underestimate of the threshold of complexity required for the emergence of significant mental activity. The brain has some hundred billion neurons connected by tens of trillions of synapses; perhaps that number of components is required to effect the kinds of behavior I hope to simulate. The second problem might arise from a misunderstanding of the relative importance of adaptation on evolutionary time scales and learning on somatic time scales. Perhaps the billions of years of evolution were required to produce specific brain structures for specific cognitive purposes. Perhaps one cannot compensate for a lack of this optimization by clever adaptive processes over the relatively short period available for reinforcement learning. A third issue might arise from a failure to appreciate the importance of acute perception and skilled motor control. Humans and animals are possessed of exquisite sensors and actuators.
Perhaps these organs are required to bring perception to a level from which mental models of reality can be produced. Perhaps the cameras, microphones, servomotors, and other devices used in Illy are too crude to support intelligent behavior.
Finally, there is the ubiquitous problem known as the curse of dimensionality. Perhaps there are too many degrees of freedom and too little time to collect data for reinforcement learning to be effective. I have no cogent replies to these objections. Nor is there any proof that they are fatal. Perhaps the experiments will inform the argument and indicate whether or not my working hypothesis and my enthusiasm for it are justified. Highly ambitious projects, like the early attempts at flight, are a venerable strategy for progress.
10.6 Final Thoughts: Predicting the Course of Discovery

The current euphoria about automatic speech recognition is based on the characterization of progress in the field as a “paradigm shift”. This use of the term is inappropriate and misleading. The phrase was first used by Kuhn [163] to describe scientific revolution. As applied to automatic speech recognition, it casts incremental, technical advances as profound, conceptual scientific progress. The difference is best understood by example. The change from a geocentric to a heliocentric model of the solar system alluded to in Section 10.2.3 is a Kuhnian “paradigm shift”. Placing the sun rather than the earth at the center of the solar system may seem like a radical idea. Although it is counterintuitive to the naive observer, it does not, by itself, constitute a paradigm shift. The revolutionary concept arises from the consideration of another aspect of the solar system besides planetary position. The Ptolemaic epicycles do predict the positions of the planets as a function of time. In fact, they do so more accurately than the crude elliptical orbits postulated by Kepler. Indeed, the incremental improvement made by compounding epicycles on epicycles allows the incorrect theory to appear more accurate than the coarse but correct one. Clearly, heliocentricity alone is not the paradigm shift. If, however, one asks what force moves the planets on their observed regular paths and how it accounts for their velocities and accelerations, the geocentric theory stands mute while the mechanical foundation of the heliocentric model turns eloquent. This, then, is the paradigm shift, and its consequences are enormous. Epicycles may be acceptable for making ritual calendars, but Newtonian mechanics not only opens new vistas but also, upon refinement, becomes highly accurate. There is a very close analogy between astronomy and automatic speech recognition.
At the present moment, we think of speech recognition as transcription from speech to some standard orthography. This decoding process corresponds to the computation of celestial location only. It ignores the essence of speech, its capacity to convey meaning, and is thus incomplete. The paradigm shift needed in our discipline is to make comprehension rather than transcription the organizing principle, just as force replaced location as the central construct in celestial mechanics. If one can calculate the forces acting on the planets, one can determine their orbits, from which the positions are a trivial consequence. Similarly, if one can extract meaning from an utterance, the lexical transcription will result as a by-product. Unfortunately, as we noted in Chapter 8, the process is often inverted by the attempt to use meaning to improve transcription accuracy rather than making meaning the primary aspect. I view the experiment described in Section 10.5 as an attempt to instigate a paradigm shift in speech technology. This is an ambitious goal which runs counter to the prevailing
ethos. We live in an age in which technique is prized over concept. This is not unreasonable because our daily lives depend on a technological cornucopia that has been slowly and steadily improving for several decades. It is not surprising that even the inventors are enchanted by the magic they themselves have wrought. Consequently, when technocrats predict the future of a promising new technology, they tend to be overly optimistic for the near term and overly pessimistic for the long haul. This happens because technical forecasting is always based on extrapolating what is presently known based on incremental improvement without regard for the possibilities of unproven, speculative approaches. Thus it is not surprising that the prediction for automatic speech recognition is that the existing performance deficits will be soon overcome by simply refining present techniques. My prediction is that advances in our understanding of speech communication will come painfully slowly but eventually, perhaps many decades hence, automatic speech recognition at human performance levels will be ubiquitous. In the near term, incremental technical advances will result in a fragile technology of small commercial value in special markets, whereas major technological advances resulting from a true paradigm shift in the underlying science will enable machines to display human linguistic competence. This, in turn, will create a vast market of incalculable social and commercial value. It is, of course, entirely possible that the technocrats are correct, that a diligent effort resulting in a long sequence of incremental improvements will yield the desired perfected automatic speech recognition technology. It is also possible that this strategy will come to grief because of the “first step fallacy” of Dreyfus [67] who warns of the impossibility of reaching the moon by climbing a tree. 
Such a strategy appears initially to head in the right direction but soon progress stops abruptly or tragically, far short of the goal, when the top of the tree is reached or the small upper limbs will no longer support the climber’s weight. It seems obvious to me that the prudent plan is to openly acknowledge the risks of incrementalism and devote some effort to the plausible speculative approaches. Perhaps more important, however, is recognition of the uniqueness of our particular technological goal. Unlike all other technologies that are integral parts of our daily lives because they provide us with capabilities otherwise unattainable, speech technology promises to increase the utility of a function at which we are already exquisitely proficient. Since using the present state of the art requires a serious diminution of our natural abilities and since we presently cannot leap the performance chasm between humans and machines, it seems only prudent that we should invest more in fundamental science in the expectation that it will eventually lead not only to a mature speech technology but also to many other things as yet unimagined. This strategy would, of course, alter the existing balance among science, technology, and the marketplace more in favor of precommercial experimentation while reducing the emphasis on immediate profit. There is good reason to believe, however, that ultimately this strategy will afford the greatest intellectual, social, and financial reward.
Bibliography

[1] Proc. Int. Conf. on Development and Learning. Michigan State University, 2000.
[2] M. Abramowitz and I. A. Stegun, editors. Handbook of Mathematical Functions. Dover Publications, New York, 1965.
[3] A. V. Aho, T. G. Szymanski, and M. Yannakakis. Enumerating the cartesian product of ordered sets. In Proc. 14th Annual Conf. on Information Science and Systems, Princeton, NJ, 1980.
[4] B. Aldefeld, S. E. Levinson, and T. G. Szymanski. A minimum distance search technique and its application to automatic directory assistance. Bell Syst. Tech. J., 59:1343–1356, 1980.
[5] J. B. Allen. Cochlear micromechanics – a mechanism for transforming mechanical to neural tuning within the cochlea. J. Acoust. Soc. Amer., 62:930–939, 1977.
[6] R. Alter. Utilization of contextual constraints in automatic speech recognition. IEEE Trans. Audio Electroacoust., AU-16:6–11, 1968.
[7] S. Aronowitz. Science as Power: Discourse and Ideology in Modern Society. University of Minnesota Press, Minneapolis, 1988.
[8] B. S. Atal. Private communication, 1981.
[9] B. S. Atal, J. J. Chang, M. V. Mathews, and J. W. Tukey. Inversion of articulatory-to-acoustic transformation in the vocal tract by a computer-sorting technique. J. Acoust. Soc. Amer., 63:1535–1555, 1978.
[10] B. S. Atal and M. R. Schroeder. Predictive coding of speech and subjective error criteria. IEEE Trans. Acoust. Speech Signal Process., ASSP-27:247–254, 1979.
[11] L. R. Bahl, J. K. Baker, P. S. Cohen, A. G. Cole, F. Jelinek, B. L. Lewis, and R. L. Mercer. Automatic recognition of continuously spoken sentences from a finite state grammar. In Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, pages 418–421, Washington, DC, 1979.
[12] L. R. Bahl, J. K. Baker, P. S. Cohen, N. R. Dixon, F. Jelinek, R. L. Mercer, and H. F. Silverman. Preliminary results on the performance of a system for the automatic recognition of continuous speech. In Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, pages 425–429, April 1976.
[13] L. R. Bahl, J. K. Baker, P. S. Cohen, F. Jelinek, B. L. Lewis, and R. L. Mercer. Recognition of a continuously read natural corpus. In Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, pages 442–446, Washington, DC, 1979.
[14] L. R. Bahl, R. Bakis, P. S. Cohen, A. G. Cole, F. Jelinek, B. L. Lewis, and R. L. Mercer. Further results on the recognition of a continuously read natural corpus. In Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, pages 872–875, Denver, CO, 1980.
[15] L. R. Bahl, A. Cole, F. Jelinek, R. L. Mercer, A. Nadas, D. Nahamoo, and M. Picheny. Recognition of isolated word sentences from a 5000 word vocabulary office correspondence task. In Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, pages 1065–1067, Boston, MA, 1983.
[16] L. R. Bahl and F. Jelinek. Decoding for channels with insertions, deletions and substitutions with applications to speech recognition. IEEE Trans. Inform. Theory, IT-21:404–411, 1975.
[17] L. R. Bahl, F. Jelinek, and R. L. Mercer. A maximum likelihood approach to continuous speech recognition. IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-5:179–190, 1983.
[18] L. R. Bahl, R. Bakis, P. S. Cohen, A. G. Cole, F. Jelinek, B. L. Lewis, and R. L. Mercer. Recognition results with several experimental acoustic processors. In Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, pages 249–251, Washington, DC, 1979.
[19] J. K. Baker. The Dragon system: an overview. IEEE Trans. Acoust. Speech Signal Process., ASSP-23, 1975.
[20] J. K. Baker. Stochastic modeling for automatic speech understanding. In D. R. Reddy, editor, Speech Recognition, pages 521–542. Academic Press, New York, 1975.
[21] J. K. Baker. Trainable grammars for speech recognition. In J. J. Wolf and D. H. Klatt, editors, Speech Commun. Papers of the 97th Meeting of the Acoust. Soc. Amer., pages 547–550, 1979.
[22] G. H. Ball and D. J. Hall. Isodata: an interactive method of multivariate analysis and pattern classification. In Proc. IFIPS Congr., 1965.
[23] M. Barinaga. The cerebellum: movement coordinator or much more? Science, 272:482–483, 1996.
[24] S. L. Bates. A hardware realization of a PCM-ADPCM code converter. Unpublished thesis, 1976.
[25] L. E. Baum. An inequality and associated maximization technique in statistical estimation for probabilistic functions of a Markov process. Inequalities, III:1–8, 1972.
[26] L. E. Baum and J. A. Eagon. An inequality with applications to statistical estimation for probabilistic functions of a Markov process and to a model for ecology. Bull. Amer. Math. Soc., 73:360–363, 1967.
[27] L. E. Baum and T. Petrie. Statistical inference for probabilistic functions of finite state Markov chains. Ann. Math. Statist., 37:1559–1563, 1966.
[28] L. E. Baum, T. Petrie, G. Soules, and N. Weiss. A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains. Ann. Math. Statist., 41:164–171, 1970.
[29] L. E. Baum and G. R. Sell. Growth transformations for functions on manifolds. Pacific J. Math., 27:211–227, 1968.
[30] G. von Békésy. Experiments in Hearing. McGraw-Hill, New York, 1960.
[31] R. E. Bellman. Dynamic Programming. Princeton University Press, Princeton, NJ, 1957.
[32] W. H. Beyer, editor. Handbook of Tables for Probability and Statistics. Chemical Rubber Co., Cleveland, OH, 1968.
[33] R. Billi. Vector quantization and Markov source models applied to speech recognition. In Proc. Int. Conf. on Acoustics, Speech, and Signal Processing, pages 574–577, Paris, France, 1982.
[34] T. L. Booth and R. A. Thompson. Applying probability measures to abstract languages. IEEE Trans. Comput., C-22:442–450, 1973.
[35] H. Bourlard, J. Wellekens, and H. Ney. Connected digit recognition using vector quantization. In Proc. Int. Conf. on Acoustics, Speech, and Signal Processing, pages 26.10.1–26.10.4, San Diego, CA, 1984.
[36] M. Braun. Differential Equations and their Applications: An Introduction to Applied Mathematics. Springer-Verlag, New York, 1975.
[37] J. S. Bridle and M. D. Brown. Connected word recognition using whole word templates. In Proc. Inst. Acoust. Autumn Conf., pages 25–28, 1979.
[38] R. Brooks et al. The COG project: building a humanoid robot. In C. L. Nehaniv, editor, Computation for Metaphors, Analogy and Agents. Springer-Verlag, Berlin, 1998.
[39] L. E. J. Brouwer. Intuitionism and formalism. Amer. Math. Soc. Bull., 20:81–96, 1913.
[40] J. S. Bruner, J. J. Goodnow, and G. A. Austin. A Study of Thinking. Wiley, New York, 1956.
[41] M. Bunge. Treatise on Basic Philosophy. D. Reidel, Dordrecht and Boston.
[42] G. Cantor. Transfinite Arithmetic. Dover, New York, 1980.
[43] R. L. Cave and L. P. Neuwirth. Hidden Markov models for English. In J. D. Ferguson, editor, Proc. Symp. on the Application of Hidden Markov Models to Text and Speech, pages 16–56, Princeton, NJ, 1980.
[44] D. S. K. Chan and L. R. Rabiner. An algorithm for minimizing roundoff noise in cascade realizations of finite impulse response digital filters. Bell Syst. Tech. J., 52:347–385, 1973.
[45] N. Chomsky. Syntactic Structures. Mouton, The Hague, 1957.
[46] N. Chomsky. On certain formal properties of grammars. Inform. Contr., 2:137–167, 1959.
[47] N. Chomsky and M. Halle. The Sound Pattern of English. Harper and Row, New York, 1968.
Bibliography
[48] A. Church. The Calculi of Lambda Conversion, Ann. Math. Stud. 6. Princeton University Press, Princeton, NJ, 1951.
[49] P. J. Cohen. The independence of the continuum hypothesis. Proc. Nat. Acad. Sci. USA, 50:1143–1148, 1963.
[50] P. S. Cohen and R. L. Mercer. The phonological component of an automatic speech recognition system. In D. R. Reddy, editor, Speech Recognition, pages 275–320. Academic Press, New York, 1975.
[51] C. H. Coker. A model of articulatory dynamics and control. Proc. IEEE, 64:452–460, 1976.
[52] C. H. Coker, N. Umeda, and C. P. Browman. Automatic synthesis from ordinary English text. IEEE Trans. Audio Electroacoust., AU-21:293–298, 1973.
[53] T. M. Cover and P. E. Hart. Nearest neighbor pattern classification. IEEE Trans. Inform. Theory, IT-13:21–27, 1967.
[54] R. E. Crochiere and A. V. Oppenheim. Analysis of linear digital networks. Proc. IEEE, 63:581–595, 1975.
[55] P. Cummiskey, N. S. Jayant, and J. L. Flanagan. Adaptive quantization in differential PCM coding of speech. Bell Syst. Tech. J., 52:1105–1118, 1973.
[56] A. Damasio. The Feeling of What Happens. Harcourt Brace, New York, 1999.
[57] A. R. Damasio et al., editors. Unity of Knowledge: The Convergence of Natural and Human Science. New York Academy of Sciences, New York, 2001.
[58] K. H. Davis, R. Biddulph, and S. Balashek. Automatic recognition of spoken digits. J. Acoust. Soc. Amer., 24:637–642, 1952.
[59] S. B. Davis and P. Mermelstein. Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences. IEEE Trans. Acoust. Speech Signal Process., ASSP-28:357–366, 1980.
[60] R. DeMori. A descriptive technique for automatic speech recognition. IEEE Trans. Audio Electroacoust., AU-21:89–100, 1973.
[61] R. DeMori, P. Laface, and Y. Mong. Parallel algorithms for syllable recognition in continuous speech. IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-7:56–69, 1985.
[62] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. J. Roy. Statist. Soc. Ser. B, 39:1–38, 1977.
[63] P. B. Denes and M. V. Mathews. Spoken digit recognition using time frequency pattern matching. J. Acoust. Soc. Amer., 32:1450–1455, 1960.
[64] D. Dennett. Consciousness Explained. Little Brown, Boston, 1991.
[65] E. W. Dijkstra. A note on two problems in connexion with graphs. Numer. Math., 1:269–271, 1959.
[66] P. A. M. Dirac. The relation between mathematics and physics. Proc. Roy. Soc. Edinburgh, 59:122–129, 1939.
[67] H. L. Dreyfus. What Computers Can't Do: A Critique of Artificial Reason. Harper and Row, New York, 1972.
[68] R. O. Duda and P. E. Hart. Pattern Classification and Scene Analysis. Wiley, New York, 1973.
[69] H. Dudley. The vocoder. J. Acoust. Soc. Amer., 1939.
[70] H. Dudley and S. Balashek. Automatic recognition of phonetic patterns in speech. J. Acoust. Soc. Amer., 30:721–739, 1958.
[71] H. K. Dunn, J. L. Flanagan, and P. J. Gestrin. Complex zeros of a triangular approximation to the glottal wave. J. Acoust. Soc. Amer., 34, 1962.
[72] K. M. Eberhard, M. J. Spivey-Knowlton, J. C. Sedivy, and M. Tanenhaus. Eye movements as a window into real-time spoken language comprehension in natural contexts. J. Psycholinguistic Res., 24:409–436, 1995.
[73] G. M. Edelman and G. Tononi. A Universe of Consciousness. Basic Books, New York, 2000.
[74] A. Einstein. The Meaning of Relativity. Princeton University Press, Princeton, NJ, 1921.
[75] A. Einstein. Unpublished letter to Max Born, December 4, 1926.
[76] A. Einstein. Unpublished letter to Herbert Goldstein, New York, April 25, 1929.
[77] A. Einstein. On the Method of Theoretical Physics. Oxford University Press, New York, 1933.
[78] L. D. Erman, R. D. Fennell, R. B. Neely, and D. R. Reddy. The Hearsay-I speech understanding system: An example of the recognition process. IEEE Trans. Comput., C-25:422–431, 1976.
[79] F. Fallside, R. V. Patel, and H. Seraji. Interactive graphics technique for the design of single-input feedback systems. Proc. IEE, 119(2):247–254, 1972.
[80] K. Fan. Les fonctions définies-positives et les fonctions complètement monotones. Gauthier-Villars, Paris, 1950.
[81] R. M. Fano. Transmission of Information: A Statistical Theory of Communications. Wiley, New York, 1961.
[82] G. Fant. Acoustic Theory of Speech Production. Mouton, The Hague, 1970.
[83] G. Fant. Speech Sounds and Features. MIT Press, Cambridge, MA, 1973.
[84] J. D. Ferguson. Variable duration models for speech. In J. D. Ferguson, editor, Proc. Symp. on the Application of Hidden Markov Models to Text and Speech, pages 143–179, Princeton, NJ, 1980.
[85] H. L. Fitch. Reclaiming temporal information after dynamic time warping. J. Acoust. Soc. Amer., 74, suppl. 1:816, 1983.
[86] J. L. Flanagan. Speech Analysis Synthesis and Perception. Springer-Verlag, New York, 2nd edition, 1972.
[87] J. L. Flanagan. Computers that talk and listen: Man–machine communication by voice. Proc. IEEE, 64:405–415, 1976.
[88] J. L. Flanagan, K. Ishizaka, and K. L. Shipley. Synthesis of speech from a dynamic model of the vocal cords and vocal tract. Bell Syst. Tech. J., 54:485–506, 1975.
[89] H. Fletcher. Speech and Hearing in Communication. Van Nostrand, Princeton, NJ, 1953.
[90] R. Fletcher and M. J. D. Powell. A rapidly convergent descent method for minimization. Comput. J., 6:163–168, 1963.
[91] J. A. Fodor. The Language of Thought, pp. 103ff. Crowell, New York, 1975.
[92] J. Fodor. The Mind Doesn't Work That Way. MIT Press, Cambridge, MA, 2000.
[93] K. S. Fu. Sequential Methods in Pattern Recognition and Machine Learning. Academic Press, New York, 1968.
[94] K. S. Fu. Syntactic Methods in Pattern Recognition. Academic Press, New York, 1974.
[95] K. S. Fu. Syntactic Pattern Recognition. Academic Press, New York, 1974.
[96] K. S. Fu. Syntactic Pattern Recognition and Applications. Prentice Hall, Englewood Cliffs, NJ, 1982.
[97] K. S. Fu, editor. Digital Pattern Recognition. Springer-Verlag, Berlin, 1976.
[98] K. S. Fu and T. L. Booth. Grammatical inference – introduction and survey. IEEE Trans. Syst. Man Cybern., SMC-5:95–111 and 409–422, 1975.
[99] O. Fujimura, S. Kiritani, and H. Ishida. Computer controlled radiography for observation of articulatory and other human organs. Comput. Biol. Med., 3:371–384, 1973.
[100] T. Fujisaki. A stochastic approach to sentence parsing. In Proc. 10th Int. Conf. on Computational Linguistics, pages 16–19, Stanford, CA, 1984.
[101] L. W. Fung and K. S. Fu. Syntactic decoding for computer communication and pattern recognition. IEEE Trans. Comput., C-24:662–667, 1975.
[102] M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman, San Francisco, 1979.
[103] R. A. Gillman. A fast frequency domain pitch algorithm. J. Acoust. Soc. Amer., 58:562, 1975.
[104] K. Gödel. Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme. Monatsh. Math. Phys., 38:173–198, 1931.
[105] E. M. Gold. Language identification in the limit. Inform. Control, 10:447–474, 1967.
[106] A. Goldberg. Constructions: A Construction Grammar Approach to Argument Structure. University of Chicago Press, Chicago, 1995.
[107] H. Goldstine. The Computer from Pascal to von Neumann. Princeton University Press, Princeton, NJ, 1972.
[108] R. G. Goodman. Analysis of languages for man–machine voice communication. Technical report, Department of Computer Science, Carnegie Mellon University, May 1976.
[109] I. S. Gradshteyn and I. M. Ryzhik. Tables of Integrals, Series and Products. Academic Press, New York, 1980.
[110] A. H. Gray and J. D. Markel. Distance measures for speech processing. IEEE Trans. Acoust. Speech Signal Process., ASSP-24:380–391, 1976.
[111] A. H. Gray and J. D. Markel. Quantization and bit allocation in speech processing. IEEE Trans. Acoust. Speech Signal Process., ASSP-24:459–473, December 1976.
[112] S. Greibach. Formal languages: Origins and directions. In Proc. 20th Annual Symp. on Foundations of Computer Science, pages 66–90, 1979.
[113] S. Grossberg, editor. The Adaptive Brain. North-Holland, Amsterdam, 1987.
[114] J. Hadamard. Scientific Creativity. Dover, New York, 1960.
[115] E. H. Hafer. Speech analysis by articulatory synthesis. Master's thesis, Northwestern University, 1974.
[116] J. L. Hall. Two-tone suppression in a non-linear model of the basilar membrane. J. Acoust. Soc. Amer., 61:802–810, 1977.
[117] M. Halle and K. N. Stevens. Speech recognition: A model and a program for research. IRE Trans. Inform. Theory, IT-8:155–159, 1962.
[118] R. W. Hamming. We would know what they thought when they did it. In N. Metropolis, J. Howlett, and G. Rota, editors, A History of Computing in the Twentieth Century. Academic Press, New York, 1980.
[119] Z. Harris. A Grammar of English on Mathematical Principles. Wiley, New York, 1982.
[120] M. A. Harrison. Introduction to Formal Language Theory. Addison-Wesley, Reading, MA, 1979.
[121] J. A. Hartigan. Clustering Algorithms. Wiley, New York, 1975.
[122] J. P. Haton and J. M. Pierrel. Syntactic-semantic interpretation of sentences in the Myrtille-II speech understanding system. In Proc. Int. Conf. on Acoustics, Speech, and Signal Processing, pages 892–895, Denver, CO, 1980.
[123] D. O. Hebb. The Organization of Behavior. Wiley, New York, 1949.
[124] E. A. Henis and S. E. Levinson. Language as part of sensorimotor behavior. In Proc. AAAI Symposium, Cambridge, MA, November 1995.
[125] M. R. Hestenes. Optimization Theory. Wiley, New York, 1975.
[126] J. Hieronymus. Automatic language identification from acoustic phonetic models. Bell Laboratories technical memorandum, 1995.
[127] Y. C. Ho and A. K. Agrawala. On pattern classification algorithms: introduction and survey. Proc. IEEE, 56, 1968.
[128] A. Hodges. Alan Turing, the Enigma. Simon and Schuster, New York, 1983.
[129] A. L. Hodgkin and A. F. Huxley. A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol., 117:500–544, 1952.
[130] D. R. Hofstadter. Gödel, Escher, Bach: An Eternal Golden Braid. Basic Books, New York, 1979.
[131] P. Hohenberg and W. Kohn. Inhomogeneous electron gas. Phys. Rev., 136:B864–B871, 1964.
[132] G. Holton. Einstein, History and Other Passions. AIP Press, Woodbury, NY, 1995.
[133] J. E. Hopcroft and J. D. Ullman. Formal Languages and Their Relation to Automata. Addison-Wesley, Reading, MA, 1969.
[134] J. J. Hopfield. Form follows function. Physics Today, pages 10–11, November 2002.
[135] M. J. Hopper, editor. Harwell Subroutine Library: A Catalog of Subroutines, volume 55. AERE Harwell, Oxfordshire, England, 1979.
[136] J. Huang. Computational fluid dynamics for articulatory speech synthesis. Unpublished Ph.D. dissertation, University of Illinois at Urbana-Champaign, 2001.
[137] E. B. Hunt. Artificial Intelligence. Academic Press, New York, 1975.
[138] M. J. Hunt, M. Lennig, and P. Mermelstein. Experiments in syllable based recognition of continuous speech. In Proc. Int. Conf. on Acoustics, Speech, and Signal Processing, pages 880–883, Denver, CO, 1980.
[139] K. Ishizaka, J. C. French, and J. L. Flanagan. Direct determination of vocal tract wall impedance. IEEE Trans. Acoust. Speech Signal Process., ASSP-23:370–373, 1975.
[140] F. Itakura. Minimum prediction residual principle applied to speech recognition. IEEE Trans. Acoust. Speech Signal Process., ASSP-23:67–72, 1975.
[141] R. Jackendoff. Parts and boundaries. Cognition, 41:9–45, 1991.
[142] R. Jackendoff. The architecture of the linguistic–spatial interface. In P. Bloom, M. A. Peterson, L. Nadel, and M. F. Garrett, editors, Language and Space: Language, Speech and Communication, pages 1–30. MIT Press, Cambridge, MA, 1996.
[143] R. Jakobson. Observations on the phonological classification of consonants. In Proc. 3rd Int'l Congress of Phonetic Sciences, pages 34–41, 1939.
[144] W. James. Talks to Teachers on Psychology and to Students on Some of Life's Ideals. Holt, New York, 1899.
[145] J. Jaynes. The Origin of Consciousness in the Breakdown of the Bicameral Mind. Houghton Mifflin, Boston, 1976.
[146] F. Jelinek. A fast sequential decoding algorithm using a stack. IBM J. Res. Devel., 13:675–685, 1969.
[147] F. Jelinek. Continuous speech recognition by statistical methods. Proc. IEEE, 64:532–556, 1976.
[148] F. Jelinek, L. R. Bahl, and R. L. Mercer. Design of a linguistic statistical decoder for the recognition of continuous speech. IEEE Trans. Inform. Theory, IT-21:250–256, 1975.
[149] F. Jelinek, L. R. Bahl, and R. L. Mercer. Continuous speech recognition: Statistical methods. In P. R. Krishnaiah and L. N. Kanal, editors, Classification, Pattern Recognition, and Reduction in Dimensionality, Handbook of Statistics 2. North-Holland, Amsterdam, 1982.
[150] F. Jelinek and R. L. Mercer. Interpolated estimation of Markov source parameters from sparse data. In E. Gelsema and L. Kanal, editors, Pattern Recognition in Practice, pages 381–397. North-Holland, Amsterdam, 1980.
[151] M. Johnson. The Body in the Mind: The Bodily Basis of Meaning, Imagination and Reason. University of Chicago Press, Chicago, 1987.
[152] B. H. Juang, S. E. Levinson, and M. M. Sondhi. Maximum likelihood estimation for multivariate mixture observations of Markov chains. IEEE Trans. Inform. Theory, IT-32:307–310, March 1986.
[153] W. von Kempelen. The Mechanism of Speech Follows from the Description of a Speaking Machine. J. V. Degen, Vienna, 1791.
[154] J. Kittler, K. S. Fu, and L. F. Pau, editors. Pattern Recognition Theory and Application. D. Reidel, Dordrecht, 1982.
[155] D. H. Klatt. Review of the ARPA speech understanding project. J. Acoust. Soc. Amer., 62:1345–1366, 1977.
[156] D. H. Klatt and K. N. Stevens. On the automatic recognition of continuous speech: Implications from a spectrogram reading experiment. IEEE Trans. Audio Electroacoust., AU-21:210–217, 1973.
[157] S. C. Kleene. Introduction to Metamathematics. Van Nostrand, Princeton, NJ, 1952.
[158] M. Kleffner. A method of automatic speech imitation via warped linear prediction. Master's thesis, University of Illinois at Urbana-Champaign.
[159] D. E. Knuth. The Art of Computer Programming. Volume 1: Fundamental Algorithms. Addison-Wesley, Reading, MA, 1968.
[160] D. E. Knuth. The Art of Computer Programming. Volume 3: Sorting and Searching. Addison-Wesley, Reading, MA, 1973.
[161] T. Kohonen. Self-Organization and Associative Memory. Springer-Verlag, Berlin, 1988.
[162] T. Kohonen, H. Rittinen, M. Jalanko, E. Reuhkala, and S. Haltsonen. A thousand word recognition system based on learning subspace method and redundant hash addressing. In Proc. 5th Int. Conf. on Pattern Recognition, pages 158–165, Miami Beach, FL, 1980.
[163] T. S. Kuhn. The Structure of Scientific Revolutions, 2nd ed. University of Chicago Press, Chicago, 1970.
[164] S. Kullback and R. A. Leibler. On information and sufficiency. Ann. Math. Statist., 22:79–86, 1951.
[165] G. Lakoff. Women, Fire and Dangerous Things. University of Chicago Press, Chicago, 1987.
[166] G. Lakoff. Moral Politics. University of Chicago Press, Chicago, 1996.
[167] G. Lakoff and M. Johnson. Metaphors We Live By. University of Chicago Press, Chicago, 1980.
[168] G. Lakoff and M. Johnson. Philosophy in the Flesh: The Embodied Mind and its Challenges to Western Thought. Basic Books, New York, 1999.
[169] G. Lakoff and R. Núñez. Where Mathematics Comes From: How the Embodied Mind Brings Mathematics into Being. Basic Books, New York, 2000.
[170] B. Landau and R. Jackendoff. "What" and "where" in spatial language and spatial cognition. Behavioral and Brain Sciences, 16:217–265, 1993.
[171] R. W. Langacker. Foundations of Cognitive Grammar. Stanford University Press, Stanford, CA, 1986.
[172] P. Laplace. A Philosophical Essay on Probabilities. Dover, New York, 1951.
[173] B. Latour. The Pasteurization of France. Harvard University Press, Cambridge, MA, 1988.
[174] E. Lawler. Combinatorial Optimization. Holt, Rinehart and Winston, New York, 1976.
[175] W. A. Lea, M. F. Medress, and T. E. Skinner. A prosodically guided speech understanding strategy. IEEE Trans. Acoust. Speech Signal Process., ASSP-23:30–38, 1975.
[176] W. A. Lea and J. E. Shoup. Gaps in the technology of speech understanding. In Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, pages 405–408, Tulsa, OK, 1978.
[177] D. B. Lenat. CYC: A large-scale investment in knowledge infrastructure. Comm. ACM, 38(11):33–38, 1995.
[178] V. R. Lesser, R. D. Fennell, L. D. Erman, and D. R. Reddy. Organization of the Hearsay-II speech understanding system. IEEE Trans. Acoust. Speech Signal Process., ASSP-23:11–24, 1975.
[179] S. E. Levinson. An artificial intelligence approach to automatic speech recognition. In Proc. IEEE Conf. on Systems, Man and Cybernetics, pages 344–345, Boston, MA, November 1973.
[180] S. E. Levinson. The vocal speech understanding system. In Proc. 4th Int. Conf. on Artificial Intelligence, pages 499–505, Tbilisi, USSR, 1975.
[181] S. E. Levinson. Cybernetics and automatic speech understanding. In Proc. IEEE ICISS, pages 501–506, Patras, Greece, August 1976.
[182] S. E. Levinson. The effects of syntactic analysis on word recognition accuracy. Bell Syst. Tech. J., 57:1627–1644, 1978.
[183] S. E. Levinson. Improving word recognition accuracy by means of syntax. In Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, Hartford, CT, May 1977.
[184] S. E. Levinson. Implications of an early experiment in speech understanding. In Proc. AAAI Symposium, pages 36–37, Stanford, CA, March 1987.
[185] S. E. Levinson. Speech recognition technology: A critique. In D. B. Roe and J. G. Wilpon, editors, Voice Communication between Humans and Machines, pages 159–164. National Academy Press, Washington, DC, 1994.
[186] S. E. Levinson. The role of sensorimotor function, associative memory and reinforcement learning in automatic speech recognition. In Proc. Conf. on Machines that Learn, Snowbird, UT, April 1996.
[187] S. E. Levinson. Mind and language. In Proc. Int'l Conf. on Development and Learning, Michigan State University, April 2000.
[188] S. E. Levinson, A. Ljolje, and L. G. Miller. Large vocabulary speech recognition using a hidden Markov model for acoustic/phonetic classification. Speech Technology, pages 26–32, 1989.
[189] S. E. Levinson, L. R. Rabiner, A. E. Rosenberg, and J. G. Wilpon. Interactive clustering techniques for selecting speaker independent reference templates for isolated word recognition. IEEE Trans. Acoust. Speech Signal Process., ASSP-27:134–140, 1979.
[190] S. E. Levinson, L. R. Rabiner, and M. M. Sondhi. An introduction to the application of the theory of probabilistic functions of a Markov process to automatic speech recognition. Bell Syst. Tech. J., 62:1035–1074, 1983.
[191] S. E. Levinson and A. E. Rosenberg. Some experiments with a syntax directed speech recognition system. In Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing, pages 700–703, 1978.
[192] S. E. Levinson and A. E. Rosenberg. A new system for continuous speech recognition – preliminary results. In Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing, pages 239–243, 1979.
[193] S. E. Levinson and K. L. Shipley. A conversational mode airline information and reservation system using speech input and output. Bell Syst. Tech. J., 59:119–137, 1980.
[194] D. Li. Computational models for binaural sound source localization and sound understanding. PhD thesis, University of Illinois at Urbana-Champaign, 2003.
[195] D. Li and S. E. Levinson. A Bayes rule-based hierarchical system for binaural sound source localization. In Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing, Hong Kong, April 2003.
[196] K. P. Li and T. J. Edwards. Statistical models for automatic language identification. In Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing, pages 884–887, Denver, CO, 1980.
[197] R.-S. Lin. Learning vision-based robot navigation. Master's thesis, University of Illinois at Urbana-Champaign, 2002.
[198] Y. Linde, A. Buzo, and R. M. Gray. An algorithm for vector quantizer design. IEEE Trans. Commun., COM-28:84–95, 1980.
[199] L. A. Liporace. Linear estimation of non-stationary signals. J. Acoust. Soc. Amer., 58:1288–1295, December 1975.
[200] L. A. Liporace. Maximum likelihood estimation for multivariate observations of Markov sources. IEEE Trans. Inform. Theory, IT-28:729–734, September 1982.
[201] R. J. Lipton and L. Snyder. On the optimal parsing of speech. Research Report 37, Dept. of Computer Science, Yale University, New Haven, CT, 1974.
[202] Q. Liu. Interactive and incremental learning via a multisensory mobile robot. PhD thesis, University of Illinois at Urbana-Champaign, 2001.
[203] S. P. Lloyd. Least squares quantization in PCM. IEEE Trans. Inform. Theory, IT-28:129–136, 1982.
[204] D. O. Loftsgaarden and C. P. Quesenberry. A nonparametric estimate of a multivariate density function. Ann. Math. Statist., 36:1049–1051, 1965.
[205] G. G. Lorentz. The 13th problem of Hilbert. In F. E. Browder, editor, Mathematical Developments Arising from Hilbert Problems. American Mathematical Society, Providence, RI, 1976.
[206] B. T. Lowerre. The HARPY speech understanding system. PhD thesis, Carnegie Mellon University, Pittsburgh, PA, 1976.
[207] J. MacQueen. Some methods for classification and analysis of multivariate observations. In L. LeCam and J. Neyman, editors, Proc. 5th Berkeley Symposium on Mathematical Statistics and Probability, volume 1, pages 281–298. University of California Press, Berkeley, 1967.
[208] J. Makhoul. Linear prediction: A tutorial review. Proc. IEEE, 63:561–580, 1975.
[209] M. P. Marcus. A Theory of Syntactic Recognition for Natural Language. MIT Press, Cambridge, MA, 1980.
[210] H. Margenau. The Nature of Physical Reality. Yale University Press, New Haven, CT, 1958.
[211] J. D. Markel and A. H. Gray. Linear Prediction of Speech. Springer-Verlag, New York, 1976.
[212] A. A. Markov. An example of statistical investigation in the text of "Eugene Onyegin" illustrating coupling of "tests" in chains. In Proc. Acad. Sci., volume 7, pages 153–162, St. Petersburg, 1913.
[213] J. C. Marshall. Minds, machines and metaphors. Soc. Stud. Sci., 7:475–488, 1977.
[214] T. R. McCalla. Introduction to Numerical Methods and FORTRAN Programming. Wiley, New York, 1967.
[215] W. S. McCulloch and W. Pitts. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys., 5:115–133, 1943.
[216] C. McGinn. The Mysterious Flame. Basic Books, New York, 1999.
[217] W. S. Meisel. Computer Oriented Approaches to Pattern Recognition. Academic Press, New York, 1972.
[218] C. Merchant. The Death of Nature: Women, Ecology and the Scientific Revolution. Harper and Row, New York, 1980.
[219] G. Mercier, A. Nouhen, P. Quinton, and J. Siroux. The KEAL speech understanding system. In J. C. Simon, editor, Spoken Language Generation and Understanding, pages 525–544. D. Reidel, Dordrecht.
[220] G. A. Miller, G. A. Heise, and W. Lichten. The intelligibility of speech as a function of the context of the test materials. J. Exp. Psych., 41:329–335, 1951.
[221] M. Minsky and S. Papert. Perceptrons: An Introduction to Computational Geometry. MIT Press, Cambridge, MA, 1969.
[222] M. L. Minsky. Matter, mind and models. In Semantic Information Processing, pages 425–432. MIT Press, Cambridge, MA, 1968.
[223] P. M. Morse and K. U. Ingard. Theoretical Acoustics. McGraw-Hill, New York, 1968.
[224] C. S. Myers and S. E. Levinson. Speaker independent connected word recognition using a syntax directed dynamic programming procedure. IEEE Trans. Acoust. Speech Signal Process., ASSP-30:561–565, 1982.
[225] C. S. Myers and L. R. Rabiner. Connected digit recognition using a level building DTW algorithm. IEEE Trans. Acoust. Speech Signal Process., ASSP-29:351–363, 1981.
[226] C. S. Myers and L. R. Rabiner. A level building dynamic time warping algorithm for connected word recognition. IEEE Trans. Acoust. Speech Signal Process., ASSP-29:284–297, 1981.
[227] A. Nadas. Hidden Markov chains, the forward-backward algorithm and initial statistics. IEEE Trans. Acoust. Speech Signal Process., ASSP-31:504–506, April 1983.
[228] A. Nadas. Estimation of probabilities in the language model of the IBM speech recognition system. IEEE Trans. Acoust. Speech Signal Process., ASSP-32:859–861, 1984.
[229] E. Nagel and J. R. Newman. Gödel's Proof. New York University Press, New York, 1958.
[230] G. Nagy. State of the art in pattern recognition. Proc. IEEE, 56:836–862, 1968.
[231] R. Nakatsu and M. Kohda. An acoustic processor in a conversational speech recognition system. Rev. ECL, 26:1505–1520, 1978.
[232] W. L. Nelson. Physical principles for economies of skilled movements. Biol. Cybern., 46:135–147, 1983.
[233] A. Newell, J. Barnett, J. Forgie, C. Green, D. Klatt, J. C. R. Licklider, J. Munson, R. Reddy, and W. Woods. Speech Understanding Systems – Final Report of a Study Group. North-Holland, Amsterdam, 1973.
[234] H. Ney. The use of a one stage dynamic programming algorithm for connected word recognition. IEEE Trans. Acoust. Speech Signal Process., ASSP-32:263–271, 1984.
[235] N. Nilsson. Problem-Solving Methods in Artificial Intelligence. McGraw-Hill, New York, 1971.
[236] J. P. Olive. Rule synthesis of speech from dyadic units. In Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, pages 568–570, Hartford, CT, 1977.
[237] J. P. Olive. A real time phonetic synthesizer. J. Acoust. Soc. Amer., 66:663–673, 1981.
[238] B. T. Oshika. Phonological rule testing of conversational speech. In Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, page 577, Philadelphia, PA, 1976.
[239] B. T. Oshika, V. W. Zue, R. V. Weeks, H. Nue, and J. Auerbach. The role of phonological rules in speech understanding research. IEEE Trans. Acoust. Speech Signal Process., ASSP-23:104–112, 1975.
[240] D. S. Passman. The Jacobian of a growth transformation. Pacific J. Math., 44:281–290, 1973.
[241] E. A. Patrick. Fundamentals of Pattern Recognition. Prentice Hall, Englewood Cliffs, NJ, 1972.
[242] E. A. Patrick and F. P. Fischer. A generalized k-nearest neighbor rule. Inform. Control, 16:128–152, 1970.
[243] A. Paz. Introduction to Probabilistic Automata. Academic Press, New York, 1971.
[244] C. S. Peirce. Collected Papers of Charles Sanders Peirce, C. Hartshorne and P. Weiss, editors. Harvard University Press, Cambridge, MA, 1935.
[245] R. Penrose. The Emperor's New Mind. Oxford University Press, 1990.
[246] G. Perennou. The ARIAL II speech recognition system. In J. P. Haton, editor, Automatic Speech Analysis and Recognition, pages 269–275. Reidel, Dordrecht, 1982.
[247] G. E. Peterson and H. L. Barney. Control methods used in a study of the vowels. J. Acoust. Soc. Amer., 24:175–185, 1952.
[248] Plato. The Republic. Penguin, Harmondsworth, 1955.
[249] H. Poincaré. Discovery in Mathematical Physics. Dover, New York, 1960.
[250] A. B. Poritz. Linear predictive hidden Markov models. In J. D. Ferguson, editor, Proc. Symp. on the Application of Hidden Markov Models to Text and Speech, pages 88–142, Princeton, NJ, 1980.
[251] A. B. Poritz. Linear predictive hidden Markov models and the speech signal. In Proc. Int. Conf. on Acoustics, Speech, and Signal Processing, pages 1291–1294, Paris, France, 1982.
[252] M. R. Portnoff. A quasi-one-dimensional digital simulation for the time varying vocal tract. Master's thesis, MIT, 1973.
[253] R. K. Potter, G. A. Kopp, and H. G. Kopp. Visible Speech. Dover, New York, 1968.
[254] R. Quillian. Semantic nets. In M. Minsky, editor, Semantic Information Processing. MIT Press, Cambridge, MA, 1968.
[255] L. R. Rabiner. On creating reference templates for speaker independent recognition of isolated words. IEEE Trans. Acoust. Speech Signal Process., ASSP-26:34–42, 1978.
[256] L. R. Rabiner, A. Bergh, and J. G. Wilpon. An improved training procedure for connected digit recognition. Bell Syst. Tech. J., 61:981–1001, 1982.
[257] L. R. Rabiner and B.-H. Juang. Fundamentals of Speech Recognition. Prentice Hall, Englewood Cliffs, NJ, 1993.
[258] L. R. Rabiner, B. H. Juang, S. E. Levinson, and M. M. Sondhi. Recognition of isolated digits using hidden Markov models with continuous mixture densities. AT & T Tech. J., 64:1211–1234, 1985.
[259] L. R. Rabiner and S. E. Levinson. Isolated and connected word recognition – theory and selected applications. IEEE Trans. Commun., COM-29:621–659, 1981.
[260] L. R. Rabiner and S. E. Levinson. A speaker independent syntax directed connected word recognition system based on hidden Markov models and level building. IEEE Trans. Acoust. Speech Signal Process., ASSP-33(3):561–573, 1985. [261] L. R. Rabiner, S. E. Levinson, A. E. Rosenberg, and J. G. Wilpon. Speaker-independent recognition of isolated words using clustering techniques. IEEE Trans. Acoust. Speech Signal Process., ASSP27:336–349, 1979. [262] L. R. Rabiner, S. E. Levinson, and M. M. Sondhi. On the application of vector quantization and hidden Markov models to speaker independent isolated word recognition. Bell Syst. Tech. J., 62:1075–1105, 1983. [263] L. R. Rabiner, S. E. Levinson, and M. M. Sondhi. On the use of hidden Markov models for speaker independent recognition of isolated words from a medium-size vocabulary. AT & T Bell Lab. Tech. J., 63:627–642, 1984. [264] L. R. Rabiner, A. E. Rosenberg, and S. E. Levinson. Considerations in dynamic time warping for discrete word recognition. IEEE Trans. Acoust. Speech Signal Process., ASSP-26:575–582, 1978. [265] L. R. Rabiner and R. W. Schafer. Digital Processing of Speech Signals. Prentice Hall, Englewood Cliffs, NJ, 1978. [266] L. R. Rabiner and C. E. Schmidt. Application of dynamic time warping to connected digit recognition. IEEE Trans. Acoust. Speech Signal Process., ASSP-28:337–388, 1980. [267] L. R. Rabiner, M. M. Sondhi, and S. E. Levinson. A vector quantizer combining energy and LPC parameters and its application to isolated word recognition. AT & T Bell Lab. Tech. J., 63:721–735, 1984. [268] L. R. Rabiner and J. G. Wilpon. Application of clustering techniques to speaker-trained word recognition. Bell Syst. Tech. J., 5:2217–2233, 1979. [269] L. R. Rabiner and J. G. Wilpon. Considerations in applying clustering techniques to speaker independent word recognition. J. Acoust. Soc. Amer., 66:663–673, 1979. [270] L. R. Rabiner and J. G. Wilpon. 
A simplified robust training procedure for speaker trained isolated word recognition systems. J. Acoust. Soc. Amer., 68:1271–1276, 1980. [271] L. R. Rabiner, J. G. Wilpon, and J. G. Ackenhusen. On the effects of varying analysis parameters on an LPC-based isolated word recognizer. Bell Syst. Tech. J., 60:893–911, 1981. [272] D. R. Reddy. Computer recognition of connected speech. J. Acoust. Soc. Amer., 42:329–347, 1967. [273] C. Reid. Hilbert. Springer-Verlag, Berlin, 1970. [274] M. D. Riley. Speech Time Frequency Representations. Kluwer Academic, Boston, 1989. [275] A. Robins and S. McCallum. The consolidation of learning during sleep: Comparing pseudorehearsal and unlearning accounts. Neural Networks, 12:1191–1206, 1999. [276] D. B. Roe and R. Sproat. The VEST spoken language translation system. In Proc. ICASSP-93, May 1993. [277] J. B. Rosen. The gradient projection method for nonlinear programming – Part I: Linear constraints. J. Soc. Indust. Appl. Math., 8:181–217, 1960. [278] A. E. Rosenberg and F. Itakura. Evaluation of an automatic word recognition over dialed-up telephone lines. J. Acoust. Soc. Amer., 60, Suppl. 1:512, 1976. [279] A. E. Rosenberg, L. R. Rabiner, J. G. Wilpon, and D. Kahn. Demisyllable based isolated word recognition system. IEEE Trans. Acoust. Speech Signal Process., ASSP-31:713–726, 1983. [280] F. Rosenblatt. Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms. Spartan, Washington, DC, 1962. [281] L. H. Rosenthal, L. R. Rabiner, R. W. Schafer, P. Cummiskey, and J. L. Flanagan. A multiline computer voice response system using ADPCM coded speech. IEEE Trans. Acoust. Speech Signal Process., ASSP22:339–352, 1974. [282] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning internal representations by error propagation. In D. E. Rumelhart and J. L. McClelland, Parallel Distributed Processing: Explorations in the Microstructure of Cognition. MIT Press, 1986. [283] D. E. Rumelhart and J. L. McClelland. 
Parallel Distributed Processing. MIT Press, Cambridge, MA, 1986.
[284] G. Ruske and T. Schotola. The efficiency of demisyllable segmentation in the recognition of spoken words. In J. P. Haton, editor, Automatic Speech Analysis and Recognition, pages 153–163. Reidel, Dordrecht, 1982.
Bibliography
[285] B. Russell and A. N. Whitehead. Principia Mathematica. Cambridge University Press, Cambridge, 1962.
[286] M. J. Russell and R. K. Moore. Explicit modeling of state occupancy in hidden Markov models for automatic speech recognition. In Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, pages 5–8, Tampa, FL, March 1985.
[287] M. J. Russell, R. K. Moore, and M. J. Tomlinson. Some techniques for incorporating local timescale information into a dynamic time warping algorithm for automatic speech recognition. In Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, pages 1037–1040, Boston, MA, 1983.
[288] H. Sakoe. Two level DP-matching – a dynamic programming based pattern matching algorithm for connected word recognition. IEEE Trans. Acoust. Speech Signal Process., ASSP-27:588–595, 1979.
[289] H. Sakoe and S. Chiba. A dynamic programming approach to continuous speech recognition. In Proc. 7th Int. Congr. on Acoustics, volume 3, pages 65–68, Budapest, 1971.
[290] H. Sakoe and S. Chiba. Dynamic programming algorithm optimization for spoken word recognition. IEEE Trans. Acoust. Speech Signal Process., ASSP-26:43–49, 1978.
[291] C. Scagliola. Continuous speech recognition without segmentation: Two ways of using diphones as basic speech units. Speech Commun., 2:199–201, 1983.
[292] C. E. Schorske. Fin-de-Siècle Vienna: Politics and Culture. Knopf, New York, 1979.
[293] J. R. Searle. Minds, brains and programs. Behavioral and Brain Sciences, 3:417–457, 1980.
[294] G. Sebestyen. Decision Making Processes in Pattern Recognition. Macmillan, New York, 1962.
[295] C. E. Shannon. A mathematical theory of communication. Bell Syst. Tech. J., 27:379–423, 1948.
[296] J. F. Shapiro. Mathematical Programming Structures and Algorithms. Wiley, New York, 1979.
[297] K. Shikano and M. Kohda. A linguistic processor in a conversational speech recognition system. Trans. Inst. Elec. Commun. Eng. Japan, E61:342–343, 1978.
[298] D. W. Shipman and V. W. Zue.
Properties of large lexicons: Implications for advanced isolated word recognition systems. In Proc. Int. Conf. on Acoustics, Speech, and Signal Processing, pages 546–549, Paris, France, 1982.
[299] J. E. Shore and R. W. Johnson. Axiomatic derivation of the principle of maximum entropy and the principle of minimum cross entropy. IEEE Trans. Inform. Theory, IT-26:26–36, 1980.
[300] H. F. Silverman and N. R. Dixon. A parametrically controlled spectral analysis system for speech. IEEE Trans. Acoust. Speech Signal Process., ASSP-23:369–381, 1974.
[301] J. C. Simon. Patterns and Operators. McGraw Hill, New York, 1986.
[302] A. D. Sokal. Transgressing the boundaries: Towards a transformative hermeneutics of quantum gravity. Social Text, 46/47:217–252, 1996.
[303] M. M. Sondhi. Model for wave propagation in a lossy vocal tract. J. Acoust. Soc. Amer., 55:1070–1075, 1974.
[304] M. M. Sondhi. Estimation of vocal-tract areas: The need for acoustical measurements. IEEE Trans. Acoust. Speech Signal Process., ASSP-27:268–273, 1979.
[305] M. M. Sondhi and S. E. Levinson. Computing relative redundancy to measure grammatical constraint in speech recognition tasks. In Proc. Int. Conf. on Acoustics, Speech, and Signal Processing, pages 409–412, Tulsa, OK, 1978.
[306] H. W. Sorenson and D. L. Alspach. Recursive Bayesian estimation using Gaussian sums. Automatica, 7:465–479, 1971.
[307] P. F. Stebe. Invariant functions of an iterative process for maximization of a polynomial. Pacific J. Math., 43:765–783, 1972.
[308] K. N. Stevens. Acoustic correlates of some phonetic categories. J. Acoust. Soc. Amer., 68:836–842, 1980.
[309] I. Tan. Unpublished MSEE thesis, University of Illinois at Urbana-Champaign, June 2002.
[310] E. Tanaka and K. S. Fu. Error correcting parsers for formal languages. IEEE Trans. Comput., C-27:605–615, 1978.
[311] M. Tanenhaus, M. J. Spivey-Knowlton, K. Eberhard, and J. C. Sedivy. Integration of visual and linguistic information in spoken language comprehension.
Science, 268:1632–1634, 1995.
[312] M. Tanenhaus, M. J. Spivey-Knowlton, K. Eberhard, and J. C. Sedivy. Using eye movements to study spoken language comprehension: Evidence for visually mediated incremental interpretation. In T. Inui and J. L. McClelland, editors, Attention and Performance 16: Information Integration in Perception and Communication, pages 457–478. MIT Press, 1996.
[313] C. C. Tappert. A Markov model acoustic phonetic component for automatic speech recognition. Int. J. Man-Machine Studies, 9:363–373, 1977.
[314] Y. Tohkura. Features for speech recognition. In Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, New York, NY, May 1983.
[315] M. Tomasello. The Cultural Origins of Human Cognition. Harvard University Press, Cambridge, MA, 1999.
[316] M. Tomita. Efficient Parsing for Natural Language. Kluwer Academic, Boston, 1985.
[317] J. T. Tou and R. C. Gonzalez. Pattern Recognition Principles. Addison-Wesley, Reading, MA, 1974.
[318] J. M. Tribolet, L. R. Rabiner, and M. M. Sondhi. Statistical properties of an LPC distance measure. IEEE Trans. Acoust. Speech Signal Process., ASSP-27:550–558, 1979.
[319] A. M. Turing. On computable numbers, with an application to the Entscheidungsproblem. Proc. London Math. Soc., 42:230–265, 1937.
[320] A. M. Turing. Computing machinery and intelligence. Mind, 59:433–460, 1950.
[321] V. M. Velichko and N. G. Zagoruyko. Automatic recognition of 200 words. Int. J. Man-Machine Studies, 2:223–234, 1969.
[322] T. K. Vintsyuk. Recognition of words of oral speech by dynamic programming methods. Kibernetika, 81(8), 1968.
[323] A. J. Viterbi. Error bounds for convolutional codes and an asymptotically optimum decoding algorithm. IEEE Trans. Inform. Theory, IT-13, April 1967.
[324] J. Von Neumann and O. Morgenstern. Theory of Games and Economic Behavior. Princeton University Press, Princeton, NJ, 1950.
[325] H. Wakita. Direct estimation of the vocal tract shape by inverse filtering of acoustic speech waveforms. IEEE Trans.
Audio Electroacoust., AU-21:417–427, 1973.
[326] D. E. Walker. The SRI speech understanding system. IEEE Trans. Acoust. Speech Signal Process., ASSP-23:397–416, 1975.
[327] A. G. Webster. Acoustical impedance and the theory of horns. In Proc. Nat. Acad. Sci., volume 5, pages 275–282, 1919.
[328] E. P. Wigner. The unreasonable effectiveness of mathematics in the natural sciences. Comm. Pure Appl. Math., 13(1):1–14, 1960.
[329] R. Weinstock. Calculus of Variations with Applications to Physics and Engineering. Dover, New York, 1974.
[330] N. Wiener. Cybernetics: or Control and Communication in the Animal and the Machine. MIT Press, Cambridge, MA, 2nd edition, 1961.
[331] E. O. Wilson. Consilience: The Unity of Knowledge. Knopf, New York, 1998.
[332] J. J. Wolf and W. A. Woods. The HWIM speech understanding system. In Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, pages 784–787, Hartford, CT, 1977.
[333] W. A. Woods. Transition network grammars for natural language analysis. Commun. ACM, 13:591–602, 1970.
[334] W. A. Woods. Motivation and overview of SPEECHLIS: An experimental prototype for speech understanding research. IEEE Trans. Acoust. Speech Signal Process., ASSP-23:2–10, 1975.
[335] W. A. Woods. Syntax, semantics and speech. In D. R. Reddy, editor, Speech Recognition. Academic Press, New York, 1975.
[336] W. A. Woods. Optimal search strategies for speech understanding and control. Artificial Intelligence, 18:295–326, 1982.
[337] W. A. Woods, M. A. Bates, B. C. Bruce, J. J. Colarusso, C. C. Cook, L. Gould, J. A. Makhoul, B. L. Nash-Webber, R. M. Schwartz, and J. J. Wolf. Speech understanding research at BBN. Technical Report 2976, Bolt, Beranek and Newman, 1974. Unpublished.
[338] D. H. Younger. Recognition and parsing of context-free languages in time n³. Inform. Control, 10:189–208, 1967.
[339] P. L. Zador. Topics in the asymptotic quantization of continuous random variables. Technical report, Bell Labs, 1966.
[340] P. L. Zador. Asymptotic quantization error of continuous signals and the quantization dimension. IEEE Trans. Inform. Theory, IT-28:139–148, March 1982.
[341] W. Zhu and S. E. Levinson. Edge orientation-based multiview object recognition. In Proc. IEEE Int. Conf. on Pattern Recognition, volume 1, pages 936–939, Barcelona, Spain, 2000.
[342] W. Zhu and S. E. Levinson. PQ-learning: An efficient robot learning method for intelligent behavior acquisition. In Proc. 7th Int. Conf. on Intelligent Autonomous Systems, pages 404–411, Marina del Rey, CA, March 2002.
[343] W. Zhu, S. Wang, R. Lin, and S. E. Levinson. Tracking of object with SVM regression. In Proc. IEEE Int. Conf. on Computer Vision and Pattern Recognition, pages 240–245, Hawaii, USA, 2001.
[344] V. W. Zue. The use of phonetic rules in automatic speech recognition. Speech Commun., 2:181–186, 1983.
Index

accepting state, 124
acoustic pattern, 27
acoustic tube, 10
acoustic-phonetic model, 162
active constraint, 70
adiabatic constant, 13
allophone, 158
allophonic variation, 157
ambiguity function, 25
area function, 22
ARPA, 2
ARPAbet, 53
articulation
  manner of, 52, 54
  place of, 52, 54
articulator, 52
articulatory configuration, 58
articulatory mechanism, 99
articulatory synthesis, 169
Artificial Intelligence, 213
associative memory, 230, 232, 238
asynchronous parsing methods, 130
autocorrelation function, 22
autocorrelation matrix, 87
autonomous robot, 235
autoregressive coefficients, 106
autoregressive process, 87
auxiliary function, 83, 104
average sentence length, 148
backward likelihood, 91
backward probability, 59
Baker's algorithm, 140
Baum algorithm, 122, 137, 166
  geometry of, 67
Baum, L., 3, 63
Baum-Welch algorithm, 62
Baum-Welch reestimation, 65
Bayes law, 32
Bayesian decision, 50
beam search, 160
behaviorism, 233
best-first algorithm, 160
binary tree, 112
binomial distribution, 37
boundary conditions, 14
broad phonetic categories, 52, 109
Cantor, G., 205, 206
categorical perception, 27
Cave–Neuwirth experiment, 107, 109
center embedding, 118
central limit theorem, 35
cepstral coefficients, 23
characteristic grammar, 114
Cholesky factorization, 107
Chomsky hierarchy, 110
Chomsky normal form, 111
Chomsky, N., 2, 110, 234
Church-Turing hypothesis, 205, 212
class membership, 30
class-conditional probability, 30
classification error, 33
clustering, 42
coarticulation, 54
Cocke-Kasami-Younger algorithm, 12
Mathematical Models for Speech Technology. Stephen Levinson 2005 John Wiley & Sons, Ltd ISBN: 0-470-84407-8
code, 143
combinatorial optimization, 50
communication process, 175
connectionism, 43
connectivity matrix, 145, 150
consciousness, 229
consistent estimator, 37
consistent grammar, 114
constrained optimization, 64
constructive theory of mind, 212, 228
context sensitive language, 110
context-free language, 110
continuous observations, 80
continuum hypothesis, 207, 210
convective air flow, 16
correlated observations, 88
correlation matrix, 22
cost function, 33
covariance matrix, 34
critical point, 66, 69
cross entropy, 83
cybernetic paradigm, 202
cybernetics, 6, 199
CYC project, 216
decidability problem, 200
decision rule, 30
decoding error, 144, 154
density
  beta, 35, 37
  binomial, 35
  Cauchy, 35
  chi-squared, 35
  Laplace, 35
  Poisson, 35
derivation operator, 109
determinism, 225
diagonalization, 206
dialog, 56
digamma function, 96
directed graph, 112
discrete acoustic tube model, 12
discrete Fourier transform, 19
distribution
  gamma, 89
duality, 225
Dudley, H., 5
durational parameter, 95
dynamic programming, 47, 59, 120, 123, 126, 163
dynamic time warping, 49
effective procedure, 214
effective vocabulary, 155
efficiency, 155
eigenvalues, 41, 151
eigenvectors, 41, 151
Einstein, A., 222, 226, 227
elliptical symmetry, 82
EM algorithm, 107, 138
embodied mind, 231
emergence, 220
ENIAC, 216
entropy, 143
epsilon representation, 153
equivocation, 154
Euler's
  constant, 96
  theorem of, 68
Euler-Lagrange equation, 46
event counting, 139
expectation operator, 34, 107
expected value, 34
Fano bound, 154
finite language, 111
finite regular language, 126
finite state automaton, 115
finite training set, 75
first moment matrix, 152
first-order predicate calculus, 190, 192
fixed point, 69
Flanagan, J. L., 9, 14
Fletcher, H., 5
fluent speech, 166
fluid dynamics, 16
formal grammar, 5, 109
formant bandwidths, 13, 23
formant frequencies, 11, 22, 54
forward likelihood, 91
forward probability, 59, 102
forward-backward algorithm, 59
foundations of mathematics, 190
free-will, 225
fricatives, 52
functional equivalence, 213
Gaussian density function, 34
Gaussian mixture, 35, 81, 85
generating function, 111, 152
Gödel numbering, 208, 209
Gödel, K., 205, 207
Gold's theorem, 137
grammar, 52
grammatical constraint, 144, 148, 150
grammatical inference, 137
grammatical rules, 109
growth transformation, 66
heat capacity, 13
Hessian, 70
heuristic function, 160
hidden Markov model, 2, 57
Hilbert, D., 44, 200, 205
Hodgkin-Huxley model, 43
homogeneous polynomial, 59, 66
human-machine communication
  error recovery in, 183
human-machine communication system, 188
impredicative set, 205
incompleteness theorem, 205, 207, 208
inequality constraint, 76
inflected forms, 167
inside probability, 140
instinctual behavior, 238
introspection, 215
Ionian Enchantment, 221
Itakura method, 49
Japanese phonotactics, 132
Jelinek, F., 2
k-means algorithm, 42
Kolmogorov representation theorem, 43
Kuhn–Tucker theorem, 70
Kullback–Leibler statistic, 83, 163
Lagrange multiplier, 41, 64, 65, 149
lambda calculus, 195, 205
language engine
  integrated, 157
  modular, 157
language identification, 157, 170
language translation, 157, 171
left-to-right HMM, 77
letter-to-sound rules, 167
lexical access, 162
lexical assignment rules, 134, 140
lexical semantics, 193
lexicon, 157
liar's paradox, 208
likelihood function, 83, 84
linear prediction, 21
linear prediction coefficients, 21
linear prediction transfer function, 21
linguistic structure, 51
logical inference, 193
logical operators, 191
logical quantifier, 192
loss function, 33
LPC power spectrum, 23
Mahalanobis distance, 36
majority vote rule, 31
manifold, 65
Markov process
  entropy of, 147
  stationary probabilities of, 151
mathematical realism, 223
mathematical truth, 190
maximum a posteriori probability, 33
mean vector, 34
mel-scale cepstrum, 23
membership question, 116
metric, 30
  Chebychev, 31
  Euclidean, 31
  Hamming, 31
Miller, G., 143, 156
mind-body dichotomy, 225
minimax criterion, 43
minimum risk rule, 33
Minkowski p-metric, 31
modular architecture, 161
morphology, 55
multi-layer perceptron, 43
mutual information, 189
n-gram statistics, 133
nasals, 52
Navier-Stokes equations, 17
nearest neighbor distance, 31
nearest neighbor rule, 39
neural network, 43
Newton technique, 70
Newton's method, 96
non-ergodic HMM, 77, 142
non-parametric decision rule, 35
non-parametric estimator, 50
non-stationarity, 16, 25, 57, 99
non-terminal symbol, 109
null symbol, 109
Nyquist rate, 20
observation sequence, 59
ontology, 193, 197
operator precedence, 191
optimal decision rule, 34
optimal decoder, 144
orthogonal polynomial, 35
orthonormality constraint, 41
outside probability, 141
paradigm shift, 241
parameter estimation, 58
parametric decision rule, 35
PARCORs, 22
parse tree, 139
parsing, 119
parts-of-speech, 117
Parzen estimator, 35
pattern recognition, 28
perceptron, 43
philosophy of mind, 201
phonetic inventory, 53
phonology, 52, 116
phonotactics, 52, 55, 109
phrase structure language, 110
Pierce, J., 2
Platonic theory of forms, 27, 202
plosives, 52
point risk, 34
point-set distance, 29
point-spread function, 25
polygamma function, 96
Poritz experiment, 107, 109
Poritz, A., 87
postmodernism, 226
potential function, 35
power spectrum, 15, 19
pragmatic information, 148
pragmatics, 56
predicate
  logical, 191
predicate argument structure, 195
prior probability, 33
priority queue, 160
probability density function, 33
production rules, 109
pronouncing dictionary, 157
propositional logic, 190
prosody, 55
prototype, 30
psychological truth, 190
push-down automaton, 115
quantization, 42, 58
quasi-stationarity, 99, 159
radiation impedance, 14
reasoning, 193
recursion equation
  for Webster equation, 15
reductionism, 220
reestimation formula, 62, 66, 93, 104–106
reflection coefficients, 22
regression coefficient, 99
regular grammar, 111, 112
regular language, 110
  entropy of, 150
reinforcement learning, 230, 232
relative frequency, 145
relative redundancy, 144, 150
residual error, 22
resonance frequencies, 15
rewriting rules, 109
Richard paradox, 208
Riemann zeta function, 96
right-linear language, 119
scaling, 73, 78, 97
Schorske, C., 214
scientific method, 222
self-punctuating code, 148
semantic analysis, 174
semantic category, 180
semantic information, 148
semantic net, 190
semantics, 55, 234
  natural language, 189
semi-Markov process, 88
sensorimotor function, 230
separatrix, 84
Shannon, C., 143, 203
short duration amplitude spectrum, 17
short time Fourier transform, 19
singular matrix, 86
sociobiology, 221
Sokal hoax, 227
sound spectrograph, 1
source-filter model, 1, 17
source-filter model diagram, 19
spectrogram, 19, 20, 24
speech coding, 170
speech understanding, 3, 173
start symbol, 109
state duration, 88
state sequence, 61
state transition diagram, 124, 144
state transition matrix, 58
stationarity, 44
statistical decision theory, 28
stochastic grammar, 113
string derivation, 109
strong theory of AI, 213, 228
structure building rules, 134, 140
sub-word model, 157
sufficient statistic, 107
syntactico-semantic relationship, 189, 194
syntax, 55, 116
terminal symbol, 109
test set, 30
text analysis, 167
text-to-speech synthesis, 167, 168
thermal conduction, 12
thermal losses, 11
thermos bottle parable, 219
time scale normalization, 108
time-frequency resolution, 25
Toeplitz matrix, 22, 87
tolerance region, 37
training set, 29
transfinite numbers, 206
transformation
  information preserving, 39
transitive closure, 109
trellis, 160
triphone, 157, 158
Turing machine, 210
Turing test, 228
Turing, A., 6, 200, 205, 217
Turing-equivalence, 119
two-dimensional Fourier transform, 24
unambiguous grammar, 114, 144
undecidability theorem, 211
universal Turing machine, 213, 235
variable duration model, 91
variational problem, 46
viscous friction, 12
viscous losses, 11
Viterbi algorithm, 74, 120, 127
vocabulary, 59
vocal tract constriction, 52
vocal tract model, 9
vocal tract transfer function, 15
vocoder, 1
voice-operated typewriter, 173
voicing, 53
vowels, 52
Webster equation, 10
Wiener, N., 6, 199
Wigner transform, 24
Wigner, E., 222
word lattice, 131