Finite-State Methods and Natural Language Processing: 8th International Workshop, FSMNLP 2009, Pretoria, South Africa, July 21-24, 2009, Revised ... (Lecture Notes in Computer Science, 6062) 364214683X, 9783642146831

This book constitutes the refereed proceedings of the 8th International Workshop on Finite-State Methods and Natural Language Processing, FSMNLP 2009, held in Pretoria, South Africa, in July 2009.


English Pages 157 [156] Year 2010



Table of contents :
Title
Preface
Organization
Table of Contents
Tutorials
Learning Finite State Machines
Introduction
Some of the Key Ideas
You Can’t ‘Have Learnt’
Identification in the Limit
Complexity Matters
Some Basic Tools
Three Types of States
The Prefix Tree Acceptor (PTA)
Red, Blue and White States
Merge and Fold
Some Techniques and Results
DFA and NFA
Probabilistic Automata
Transducers
Some Open Problems and Challenges
References
Special Theme Tutorials
Developing Computational Morphology for Low- and Middle-Density Languages
What Is Computational Morphology?
Contents of the Tutorial
Links
Invited Papers
fsm2 – A Scripting Language Interpreter for Manipulating Weighted Finite-State Automata
Introduction
Formal Background
Features of fsm2
Supported Semirings
Algebraic Operations
Equivalence Transformations
Symbolic Formalism
Input/Output
Scripts
Special Features
Macros
Language Modeling
Further Directions
References
Appendix
Selected Operations and Applications of n-Tape Weighted Finite-State Machines
Introduction
Definitions
Operations
Simple Operations
Join Operation
Auto-intersection
Compilation of Auto-intersection
Post’s Correspondence Problem
A Class of Rational Auto-intersections
An Auto-intersection Algorithm
Applications
Morphological Analysis of Semitic Languages
Intermediate Results in Transduction Cascades
Induction of Morphological Rules
String Alignment for Lexicon Construction
Acronym and Meaning Extraction
Cognate Search
Conclusion
References
OpenFst
Special Theme Invited Talks
Morphological Analysis of Tone Marked Kinyarwanda Text
Introduction
The Characteristics of Tones
Tone in Computational Morphology
The Structure of This Paper
Characteristics of the Kinyarwanda Language
General Facts
The Underspecific Orthography
Manifestation of Tones in Verbs
Implementation of Tone Handling Rules
Evaluation
Test Corpus
Metrics
Comparison with the Orthographic Analyser
Conclusion
References
Regular Papers
Minimizing Weighted Tree Grammars Using Simulation
Introduction
Trees and Weighted Tree Grammars
Backward Simulation
Forward Simulation
Conclusion
References
Compositions of Top-Down Tree Transducers with ε-Rules
Introduction
Notation
Top-Down Tree Transducers with ε-Rules
Separating Output Symbols
Composition Construction
Conclusions and Open Problems
References
Reducing Nondeterministic Finite Automata with SAT Solvers
Introduction
Related Work
Reduction Algorithm
Initial State Constraints
Final State Constraints
Transition Constraints
Additional Constraints
The Size of the SAT Problem
Example
Reduction v. Minimization
Experimental Results
Conclusion
References
Joining Composition and Trimming of Finite-State Transducers
Introduction
Preliminaries
Composition
Depth-First Composition
Acyclic Case
General Case
Complexity
Experiment
Conclusion
References
Special Theme Extended Abstracts
Porting Basque Morphological Grammars to foma, an Open-Source Tool
Introduction
Basque Morphology
Earlier Work and Current Migration to Open Source
Porting Basque Grammars to Foma
Conversion Procedure
Compatibility and Efficiency
Auxiliary Applications: Spelling Correction
Internal Support for Spelling Correction in Foma
Conclusion
References
Describing Georgian Morphology with a Finite-State System
Introduction
Analyzing the Morphology of Georgian
The Structure of Nouns, Adjectives and Pronouns
The Verb System of Georgian
Finite-State Implementation
Overview
Approach
Evaluation
Future Plans
Supporting the Non-Roman Script
Shallow Parsing
Integration to Other Resources
References
Finite State Morphology of the Nguni Language Cluster: Modelling and Implementation Issues
Introduction
Morphology of the Nguni Cluster of Languages
Overview
Typical Constraints
Approach for Zulu
Modelling and Implementation
Bootstrapping for Xhosa, Swati and Ndebele
Conclusion and Future Work
References
A Finite State Approach to Setswana Verb Morphology
Introduction
Overview of Setswana Verb Morphology
Affixes of the Setswana Verb
Auxiliary Verbs and Copulatives
Formation of Verbs
The Verb Analyser
From Text to Tokens
Two Tokenising Transducers
Application and Results
Conclusion and Future Work
References
Competition Announcements
Zulu: An Interactive Learning Competition
Introduction, Motivations, History and Background
A Brief Overview of Zulu
Other Related Competitions
A More Complete Presentation of Zulu
Registration Phase
Training Phase
Competition Phase
Discussion
References
Author Index

Lecture Notes in Artificial Intelligence Edited by R. Goebel, J. Siekmann, and W. Wahlster

Subseries of Lecture Notes in Computer Science

6062

Anssi Yli-Jyrä András Kornai Jacques Sakarovitch Bruce Watson (Eds.)

Finite-State Methods and Natural Language Processing 8th International Workshop, FSMNLP 2009 Pretoria, South Africa, July 21-24, 2009 Revised Selected Papers


Series Editors
Randy Goebel, University of Alberta, Edmonton, Canada
Jörg Siekmann, University of Saarland, Saarbrücken, Germany
Wolfgang Wahlster, DFKI and University of Saarland, Saarbrücken, Germany

Volume Editors
Anssi Yli-Jyrä, University of Helsinki, Department of Modern Languages, 00014 University of Helsinki, Finland. E-mail: [email protected]
András Kornai, Harvard University, Institute for Quantitative Social Science, 1737 Cambridge St, Cambridge MA 02138, USA; and Computer and Automation Research Institute, Hungarian Academy of Sciences, Kende u 13-17, Budapest 1111, Hungary. E-mail: [email protected]
Jacques Sakarovitch, CNRS and Telecom ParisTech, Laboratoire Traitement et Communication de l'Information, 46, rue Barrault, 75634 Paris Cedex 13, France. E-mail: [email protected]
Bruce Watson, University of Pretoria, FASTAR Research Group, Pretoria 0002, South Africa. E-mail: [email protected]

Library of Congress Control Number: 2010931643
CR Subject Classification (1998): I.2, H.3, F.4.1, I.2.7, F.3, H.4
LNCS Sublibrary: SL 7 – Artificial Intelligence
ISSN: 0302-9743
ISBN-10: 3-642-14683-X Springer Berlin Heidelberg New York
ISBN-13: 978-3-642-14683-1 Springer Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.

springer.com

© Springer-Verlag Berlin Heidelberg 2010
Printed in Germany
Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India
Printed on acid-free paper 06/3180

Preface

This volume of Lecture Notes in Artificial Intelligence is a collection of revised versions of the papers and lectures presented at the 8th International Workshop on Finite-State Methods and Natural Language Processing, FSMNLP 2009. The workshop was held at the University of Pretoria, South Africa, during July 21–24, 2009. This was the first time in the history of the FSMNLP series that the event was not located in Europe.

Like its predecessors, FSMNLP 2009 covered a range of topics around computational morphology, natural language processing, finite-state methods, automata, and related formal language theory. However, a special theme, Finite-State Methods for Under-Resourced Languages, was adopted in recognition of the event's location on the African continent.

The Program Committee was composed of internationally leading researchers and practitioners selected from academia, research labs, and companies. In total, 21 papers underwent a blind refereeing process in which each paper submitted to the workshop was reviewed by at least three Program Committee members, with the help of external referees. Of those papers, the workshop accepted 13 as regular papers and a further 6 as extended abstracts. The papers came from Croatia, Finland, France, Georgia, Germany, Poland, South Africa, Spain and the United States. In addition, the workshop program contained tutorials by Colin de la Higuera, Kemal Oflazer, and Johan Schalkwyk, invited talks by Kenneth R. Beesley, Thomas Hanneforth, André Kempe, Jackson Muhirwe, and Johan Schalkwyk, as well as the Zulu competition announcement presented by Colin de la Higuera.

After the FSMNLP 2009 workshop, the papers were re-reviewed for inclusion in this collection, with the assistance of more referees. As a result, four regular papers and four extended abstracts were selected in their final, revised form.
The collection also includes the abstracts and extended papers of the competition announcement and of almost all invited lectures and tutorials presented at the workshop.

It is a pleasure for the editors to thank the members of the Program Committee and the external referees for reviewing the papers and maintaining the high standard of the FSMNLP workshops. We are grateful to all the contributors to the conference, in particular to the invited speakers and sponsors, for making FSMNLP 2009 a scientific success despite the challenges of the global economic situation and the long flight distances. Last, but not least, we wish to express our sincere appreciation to the local organizers for their tireless efforts.

14 April 2010

A. Yli-Jyrä, A. Kornai, J. Sakarovitch, B. Watson

Organization

FSMNLP 2009 was organized by the Department of Computer Science, University of Pretoria.

Conference Chair
Bruce Watson, University of Pretoria, South Africa

Organizing Committee
Loek Cleophas, University of Pretoria, South Africa (OC Chair)
Derrick Kourie, University of Pretoria, South Africa
Jakub Piskorski, Polish Academy of Sciences, Warsaw, Poland
Pierre Rautenbach, University of Pretoria, South Africa
Bruce Watson, University of Pretoria, South Africa
Anssi Yli-Jyrä, Department of General Linguistics, University of Helsinki, Finland

Program Committee Chairs
Andras Kornai, Budapest Institute of Technology, Hungary and MetaCarta, Cambridge, USA
Jacques Sakarovitch, Ecole nationale supérieure des Télécommunications, Paris, France
Anssi Yli-Jyrä, Department of General Linguistics, University of Helsinki, Finland

Program Committee
Cyril Allauzen, Google Research, New York, USA
Sonja Bosch, University of South Africa, South Africa
Francisco Casacuberta, Instituto Tecnológico de Informática, Valencia, Spain
Damir Cavar, University of Zadar, Croatia
Jean-Marc Champarnaud, Université de Rouen, France
Loek Cleophas, University of Pretoria, South Africa
Maxime Crochemore, King's College London, UK
Jan Daciuk, Gdańsk University of Technology, Poland
Frank Drewes, Umeå University, Sweden
Dafydd Gibbon, University of Bielefeld, Germany
John Goldsmith, University of Chicago, USA


Karin Haenelt, Fraunhofer Gesellschaft and University of Heidelberg, Germany
Thomas Hanneforth, University of Potsdam, Germany
Colin de la Higuera, Jean Monnet University, Saint-Etienne, France
Johanna Högberg, Umeå University, Sweden
Arvi Hurskainen, University of Helsinki, Finland
Lauri Karttunen, Palo Alto Research Center and Stanford University, USA
André Kempe, Cadege Technologies, Paris, France
Kevin Knight, University of Southern California, USA
Derrick Kourie, University of Pretoria, South Africa
Marcus Kracht, University of California, Los Angeles, USA
Hans-Ulrich Krieger, DFKI GmbH, Saarbrücken, Germany
Eric Laporte, Université de Marne-la-Vallée, France
Andreas Maletti, Universitat Rovira i Virgili, Spain
Michael Maxwell, University of Maryland, USA
Stoyan Mihov, Bulgarian Academy of Sciences, Sofia, Bulgaria
Kemal Oflazer, Sabanci University, Turkey
Jakub Piskorski, Polish Academy of Sciences, Warsaw, Poland
Laurette Pretorius, University of South Africa, South Africa
Michael Riley, Google Research, New York, USA
Strahil Ristov, Ruder Boskovic Institute, Zagreb, Croatia
James Rogers, Earlham College, USA
Max Silberztein, Université de Franche-Comté, France
Bruce Watson, University of Pretoria, South Africa
Sheng Yu, University of Western Ontario, Canada
Menno van Zaanen, Tilburg University, The Netherlands
Lynette van Zijl, Stellenbosch University, South Africa

Additional Referees
S. Amsalu, F. Barthelemy, M. Constant, B. Daille, S. Gerdjikov, H. Liang, P. Mitankin, M.-J. Nederhof, S. Pissis, P. Prochazka, M. Silfverberg, N. Smith

Sponsors
FASTAR Research Group, University of Pretoria
University of Pretoria, Faculty of Engineering, Built Environment & IT
Google Research
Microsoft Research
University of South Africa (UNISA)

Table of Contents

Tutorials

Learning Finite State Machines (Colin de la Higuera) . . . . . . 1

Special Theme Tutorials

Developing Computational Morphology for Low- and Middle-Density Languages (Kemal Oflazer) . . . . . . 11

Invited Papers

fsm2 – A Scripting Language Interpreter for Manipulating Weighted Finite-State Automata (Thomas Hanneforth) . . . . . . 13

Selected Operations and Applications of n-Tape Weighted Finite-State Machines (André Kempe) . . . . . . 31

OpenFst (Johan Schalkwyk) . . . . . . 47

Special Theme Invited Talks

Morphological Analysis of Tone Marked Kinyarwanda Text (Jackson Muhirwe) . . . . . . 48

Regular Papers

Minimizing Weighted Tree Grammars Using Simulation (Andreas Maletti) . . . . . . 56

Compositions of Top-Down Tree Transducers with ε-Rules (Andreas Maletti and Heiko Vogler) . . . . . . 69

Reducing Nondeterministic Finite Automata with SAT Solvers (Jaco Geldenhuys, Brink van der Merwe, and Lynette van Zijl) . . . . . . 81

Joining Composition and Trimming of Finite-State Transducers (Johannes Bubenzer and Kay-Michael Würzner) . . . . . . 93

Special Theme Extended Abstracts

Porting Basque Morphological Grammars to foma, an Open-Source Tool (Iñaki Alegria, Izaskun Etxeberria, Mans Hulden, and Montserrat Maritxalar) . . . . . . 105

Describing Georgian Morphology with a Finite-State System (Oleg Kapanadze) . . . . . . 114

Finite State Morphology of the Nguni Language Cluster: Modelling and Implementation Issues (Laurette Pretorius and Sonja Bosch) . . . . . . 123

A Finite State Approach to Setswana Verb Morphology (Laurette Pretorius, Biffie Viljoen, Rigardt Pretorius, and Ansu Berg) . . . . . . 131

Competition Announcements

Zulu: An Interactive Learning Competition (David Combe, Colin de la Higuera, and Jean-Christophe Janodet) . . . . . . 139

Author Index . . . . . . 147

Learning Finite State Machines

Colin de la Higuera

Université de Nantes, CNRS, LINA, UMR6241, F-44000, France
[email protected]

Abstract. The terms grammatical inference and grammar induction both seem to indicate that techniques aiming at building grammatical formalisms when given some information about a language are not concerned with automata or other finite state machines. This is far from true, and many of the more important results in grammatical inference rely heavily on automata formalisms, and particularly on the specific use of determinism that is made. We survey here some of the main ideas and results in the field.

1 Introduction

The terms grammatical inference and grammar induction refer to techniques for rebuilding a grammatical formalism for a language of which only partial information is known. These techniques are inspired by, and have applications in, fields like computational linguistics, pattern recognition, inductive inference, computational biology and machine learning [1,2].

2 Some of the Key Ideas

We first describe some of the ideas that seem important to us.

2.1 You Can't 'Have Learnt'

The question posed by grammatical inference is that of building a grammar or automaton for an unknown language, given some data about this language. But it is essential to explain that, in a certain sense, this question can never be settled. In other words, one cannot hope to be able to state “I have learnt” just because, given the data, we have built the best automaton for some combinatorial criterion (typically the one with the least number of states). Indeed, this would be similar to having used a random number generator, looking at the result, and claiming that “the returned number is random”. In both cases, the issue is about the building process itself [3]. 

This work was partially supported by the IST Programme of the European Community, under the Pascal 2 Network of Excellence, Ist–2006-216886.

A. Yli-Jyrä et al. (Eds.): FSMNLP 2009, LNAI 6062, pp. 1–10, 2010. © Springer-Verlag Berlin Heidelberg 2010

2.2 Identification in the Limit

One essential concept in grammatical inference, used to study the process of learning languages, is due to Gold: identification in the limit. Identification in the limit [4,5] describes a situation where learning is a never-ending process. The learner is given information, builds a hypothesis, receives more information, updates the hypothesis, and so on, forever. This setting may seem unnatural, completely abstract, and far from a concrete learning situation. We argue differently and believe identification in the limit provides several useful insights.

First, identification in the limit requires the notion of a 'target'. A target language, from which the data is extracted, pre-exists the learning process. Therefore the task is really about finding this hidden target.

Second, even if there are many cases where the data is not received in an incremental way (a more typical situation is one where a large sample of data is given all at once), it is still worth studying a learning session as part of a process. It may be that at some particular moment the learning algorithm returned a good hypothesis, but unless we have some completeness guarantee about the data we have got, there is no way we can be sure it is good. In other words, one can study the fact that we are learning a language, but not that we have learnt one.

Third, even knowing that our algorithm does identify in the limit will not give us any guarantee in a specific situation; on the other hand, what we do know is that if the algorithm does not identify in the limit there is necessarily a hidden bias: there is at least one possible language which, for some unknown or undeclared reason, is not learnable. The advantage of using an algorithm that has the property of identifying in the limit is that you can argue, if the algorithm fails: "Don't blame me, blame the data" [3].

Definition 1. Let L be a class of languages. A presentation is a function φ : N → X where X is some set.
This set is chosen according to the task one faces: typically, in some cases X is a set of strings, in others a set of labelled strings. The set of all admitted presentations for L is denoted by Pres(L), which is a subset of N → X, the set of all total functions from N to some set X. In some way these presentations denote languages from L, i.e. there exists a function Yields : Pres(L) → L. If L = Yields(φ) then we will say that φ is a presentation of L. We also denote by Pres(L) the set {φ ∈ Pres(L) : Yields(φ) = L}. There exists a function L whose value, given any grammar from G, is the language recognised, generated or described by this grammar. In words, the setting involves a class of languages (L), each member of which can be generated, recognised or represented by a grammar (or automaton) from some class G. A type of presentation is chosen: each presentation is a function (φ) corresponding to an enumeration of information which, in some chosen sense, corresponds to a unique language (function Yields). We summarise these notions in Figure 1. The general goal of learning (in the framework of identification in the limit) is to find a learning algorithm A such

Fig. 1. The learning setting

that ∀φ ∈ Pres(L), ∃n ∈ N : ∀m ≥ n, L(A(φm)) = Yields(φ). When this is true, we will say that the class L is identifiable in the limit by presentations in Pres(L). Gold [5] proved the first and essential results in this setting:

Theorem 1. Any recursively enumerable class of recursive languages is identifiable in the limit from an informant. No super-finite class of languages is identifiable in the limit from text.

A text presentation of a language is an enumeration (in any order, perhaps with repetitions) of all the strings in the language. An informed presentation is an enumeration (in any order, perhaps with repetitions) of all strings in Σ∗ (with Σ the reference alphabet), each with a label indicating whether or not the string belongs to the language. A super-finite language class is a class that contains all finite languages and at least one infinite language. Of course, the theorem holds for the usual classes of languages from the Chomsky hierarchy, and typically for regular languages.
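As a toy illustration of the definition (our own sketch, not part of the tutorial): the class of *finite* languages is identifiable in the limit from text by the naive learner that simply conjectures the set of strings seen so far. From some point n on, every string of the target has appeared in the presentation, and the hypothesis never changes again.

```python
# Sketch: identification in the limit of finite languages from text.
# learner(phi[:m+1]) plays the role of A(phi_m) in the definition above.

def learner(prefix):
    """Conjecture exactly the set of strings observed so far."""
    return frozenset(prefix)

def identifies(language, presentation):
    """Check that the hypothesis has converged to the target by the
    end of this (finite slice of a) presentation."""
    hypotheses = [learner(presentation[:m + 1])
                  for m in range(len(presentation))]
    return hypotheses[-1] == frozenset(language)

target = {"a", "ab", "ba"}
# A text presentation: an enumeration of the language, repetitions allowed.
phi = ["a", "ab", "a", "ba", "ab", "ba"]
assert identifies(target, phi)
```

Once every string of the target has been enumerated, the conjecture equals the target and all later hypotheses agree with it, which is exactly the ∃n : ∀m ≥ n condition.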

2.3 Complexity Matters

The general setting described in the previous section is close to what is typically called inductive inference. But if we want to apply the theoretical algorithms in practice, it is necessary to come up with algorithms whose resources are bounded in some way: one can (try to) bound the number of examples needed before convergence, in a best case or in a worst case, the number of mind changes the learning algorithm may make, or the number of times the learning algorithm fails to classify the next example it sees [6,7,8]. In fact, no unique notion of complexity covers all the possible learning scenarios.

3 Some Basic Tools

We describe some essential finite state tools that are of great use when devising or studying automata learning algorithms.

3.1 Three Types of States

We shall be dealing here with learning samples composed of labelled strings. Let Σ be an alphabet. An informed learning sample is made of two sets S+ and S− such that S+ ∩ S− = ∅. The sample will be denoted as S = ⟨S+, S−⟩. If S = ⟨S+, S−⟩ is sampled from a language L ⊆ Σ∗, then S+ ⊆ L and S− ∩ L = ∅. A text sample is made of just one set S+. Let us consider the sample S+ = {a, aa}, S− = {ab, aaba}.

Fig. 2. A Dfa

Now if we look at the Dfa from Figure 2, we notice that state qε is reached by no string in the sample. Therefore, this state could just as well be accepting or rejecting, at least if we only have this learning data to decide upon. In order not to take premature decisions, we therefore manage three types of states: the final accepting states (double line), the final rejecting states (thick grey) and the other states, yet to be labelled. In a sense, the proportion of states that remain not final at the end of a learning process is a good indicator of the precision of the obtained Dfa.

3.2 The Prefix Tree Acceptor (Pta)

A prefix tree acceptor (Pta) is a tree-like Dfa built from the learning sample by taking all the prefixes in the sample as states and constructing the smallest Dfa which is a tree: ∀q ∈ Q, |{q′ : ∃a ∈ Σ, δ(q′, a) = q}| ≤ 1. The Pta is useful as a good starting point for state merging algorithms and helps to see the learning problem as a search question [9].
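The construction is easy to sketch in code. The following minimal Python version (the representation and helper names are ours, not from the tutorial) takes the prefixes of the sample as states, derives the tree transitions, and assigns each state one of the three labels of Section 3.1.

```python
def prefixes(w):
    # All prefixes of w, including the empty string and w itself.
    return [w[:i] for i in range(len(w) + 1)]

def build_pta(s_plus, s_minus=()):
    """Build a prefix tree acceptor from an informed sample <S+, S->.
    Labels: +1 = accepting, -1 = rejecting, 0 = not yet labelled."""
    words = set(s_plus) | set(s_minus)
    states = {p for w in words for p in prefixes(w)}
    # Tree transitions: the state for prefix p+a is reached from p by letter a.
    delta = {(p[:-1], p[-1]): p for p in states if p}
    label = {q: 0 for q in states}
    for w in s_plus:
        label[w] = +1
    for w in s_minus:
        label[w] = -1
    return states, delta, label

# The sample of Fig. 3: S+ = {aa, aba, bba}, S- = {ab, abab}.
states, delta, label = build_pta({"aa", "aba", "bba"}, {"ab", "abab"})
assert len(states) == 9          # eps, a, b, aa, ab, bb, aba, bba, abab
assert label["aa"] == 1 and label["ab"] == -1 and label[""] == 0
```

Note how the initial state (the empty prefix) stays unlabelled: it is one of the "third type" of states discussed above.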

Fig. 3. Pta({(aa, 1), (aba, 1), (bba, 1), (ab, 0), (abab, 0)})

Fig. 4. Pta({(aa, 1), (aba, 1), (bba, 1)})

Note that we can also build a Pta from a set of positive strings only. This corresponds to building Pta(⟨S+, ∅⟩). In that case, and for the same sample, we would get the Dfa represented in Figure 4. Many automata learning algorithms take the Pta as starting point and then attempt to reduce the size of the Dfa by means of some merging operations: each merge is likely to generalise the language being recognised. What is at stake is to propose strategies for testing the states that are to be merged.

3.3 Red, Blue and White States

In order not to get lost in the process (and to avoid undoing merges that have been made some time ago) it will be interesting to divide the states into three categories [10]:

- The Red states, which correspond to states that have been analysed and which will not be revisited; they will be the states of the final automaton.
- The Blue states, which are the candidate states: they have not been analysed yet, and it should be from this set that a state is drawn in order to consider merging it with a Red state.
- The White states, which are all the others. They will in turn become Blue and then Red.

We conventionally draw the Red states in dark grey and the Blue ones in light grey as in Figure 5, where Red = {qε, qa} and Blue = {qb, qaa, qab}.

3.4 Merge and Fold

Merging states (starting perhaps from the Pta) is the key to most algorithms that deal with inferring automata. The idea is that by merging states a generalisation of the language is taking place (thus some form of induction). In order to avoid problems with non-determinism, the merge of two states is immediately followed by a folding operation: the merge always occurs between a Red state and a Blue state. The Blue states have the following properties:

– if q is a Blue state, it has exactly one predecessor, i.e. whenever δ(q1, a1) = δ(q2, a2) = q ∈ Blue, then necessarily q1 = q2 and a1 = a2;
– q ∈ Blue is the root of a tree, i.e. if δ(q, u) = δ(q, v) then necessarily u = v.


Fig. 5. Colouring of states: Red = {qε, qa}, Blue = {qb, qaa, qab}, all the other states are White

Algorithm Merge takes as arguments a Red state q and a Blue state q′. It first finds the unique pair (qf, a) such that q′ = δA(qf, a). This pair exists and is unique because q′ is a Blue state and therefore the root of a tree. Merge then redirects δ(qf, a) to q. After that, the tree rooted in q′ (which is therefore disconnected from the rest of the Dfa) is folded (Fold) into the rest of the Dfa. The possible intermediate situations of non-determinism are dealt with during the recursive calls to Fold. This two-step process is shown in Figures 6 to 9. We want to merge state qaa with qε. Once the redirection has taken place (Figure 7), state qaa is effectively merged with qε (Figure 8); dashed lines indicate the recursive merges that are still to be done. Then qaaa is folded into qa, and finally qaab into qb. The result is represented in Figure 9.

Fig. 6. Before merging qaa with qε and redirecting transition δA(qa, a) to qε

Fig. 7. Before folding qaa into qε

Fig. 8. Folding in qaa

Fig. 9. The end of the merge and fold process: (a) folding in qaaa; (b) folding in qaab

Traditional state merging techniques rely on a “merge and determinise” procedure [11]. The presentation given here avoids the cumbersome non-deterministic phase [12].
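The merge-and-fold step can be sketched compactly. Below is our own toy encoding (not the authors' code): the Dfa is a dict of dicts, `delta[q][a] = q'`, and folding grafts the Blue subtree onto the rest of the automaton, recursing whenever a transition already exists — exactly the recursive calls to Fold mentioned above.

```python
# Sketch of Merge and Fold on a DFA encoded as delta[q][a] = q'.
# label[q] is +1 (accepting), -1 (rejecting) or absent/0 (unlabelled).

def merge(delta, label, q_red, q_blue):
    """Merge Blue state q_blue into Red state q_red: redirect the unique
    incoming transition of q_blue, then fold its subtree into the DFA."""
    # q_blue is Blue, hence it has exactly one predecessor (qf, a).
    (qf, a), = [(q, b) for q in delta for b in delta[q]
                if delta[q][b] == q_blue]
    delta[qf][a] = q_red
    fold(delta, label, q_red, q_blue)

def fold(delta, label, q, q_tree):
    """Fold the tree rooted at q_tree into the automaton at state q."""
    if label.get(q_tree, 0) != 0:        # propagate accept/reject labels
        label[q] = label[q_tree]
    for a, child in delta.pop(q_tree, {}).items():
        if a in delta.setdefault(q, {}):
            fold(delta, label, delta[q][a], child)   # resolve nondeterminism
        else:
            delta[q][a] = child                      # graft the subtree

# Toy example: the PTA of {"aa"}, states named by their prefixes.
delta = {"": {"a": "a"}, "a": {"a": "aa"}}
label = {"aa": 1}
merge(delta, label, "", "aa")    # merge Blue state "aa" into Red state ""
# The loop a -> a now closes on the initial state, which became accepting,
# so the hypothesis generalises from {"aa"} to strings of a's of even length.
```

This sketch omits what a real learner must also do: detect label conflicts during folding (an accepting state folded onto a rejecting one), which is the signal to reject the merge.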

4 Some Techniques and Results

4.1 Dfa and Nfa

Deterministic finite automata (Dfa) have been learnt by a variety of techniques and in a number of settings:

– In the case of learning from text (only positive examples are available), theory tells us that identification in the limit cannot be achieved [4,13]. But for certain subclasses, like the k-testable languages [14] or the k-reversible languages [15], algorithms that do identify in the limit have been built.
– When learning from an informant (this time both positive and negative examples are available), the picture is more positive. But if we are concerned with (too) optimistic convergence criteria, like those of Pac learning or of identification with only a polynomial number of implicit prediction errors, we should not hope for success [6,16,17]. If what we hope is to only need polynomial characteristic samples [7], then algorithm Rpni [11] fulfils the conditions.
– Active learning consists in learning from an Oracle to whom queries are submitted. In this setting, Angluin proved that equivalence queries alone, or membership queries alone, are insufficient [18,19]. On the other hand, algorithm L* makes use of a minimally adequate teacher [20] to learn Dfa in polynomial time.
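A key primitive shared by informant-based state-merging learners such as Rpni is checking that a candidate hypothesis still agrees with the sample: every positive string must reach an accepting state, and no negative string may. A minimal version of this check (encoding and names are ours, not from any particular implementation):

```python
def run(delta, q0, w):
    """Follow string w from state q0; return the reached state,
    or None if a transition is undefined."""
    q = q0
    for a in w:
        q = delta.get((q, a))
        if q is None:
            return None
    return q

def consistent(delta, q0, finals, s_plus, s_minus):
    """True iff the DFA accepts all of S+ and none of S-."""
    return (all(run(delta, q0, w) in finals for w in s_plus)
            and all(run(delta, q0, w) not in finals for w in s_minus))

# Two-state DFA accepting strings with an even number of a's.
delta = {("q0", "a"): "q1", ("q1", "a"): "q0"}
assert consistent(delta, "q0", {"q0"}, ["", "aa"], ["a", "aaa"])
```

In a state-merging loop, a merge that makes `consistent` fail is undone and the Blue state is promoted instead.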

4.2 Probabilistic Automata

Another line of research concerns learning probabilistic finite automata. State merging techniques have also been devised in this case. Algorithm Alergia uses the same principles as Rpni and merges two states when the distributions look alike [21]. Algorithm Mdi [22] uses distances between distributions to take the decisions about merging, and algorithm Dsai [23] merges if there is a string of significant weight that appears in the two distributions with very different relative frequencies. Finally, algorithm Dees solves systems of equations and uses multiplicity automata in order to learn probabilistic finite automata that may not be deterministic [24].
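The "distributions look alike" test in Alergia-style merging is typically a Hoeffding bound: two observed frequencies are deemed compatible when their difference stays below a confidence-dependent threshold. A sketch (parameter names are ours; this follows the standard Alergia-style formulation rather than code from the paper):

```python
from math import log, sqrt

def hoeffding_different(f1, n1, f2, n2, alpha=0.05):
    """Return True if frequencies f1/n1 and f2/n2 are statistically
    incompatible at confidence level alpha (Hoeffding-bound test as
    used by Alergia-style state merging)."""
    if n1 == 0 or n2 == 0:
        return False               # nothing observed: cannot reject
    bound = sqrt(0.5 * log(2.0 / alpha)) * (1.0 / sqrt(n1) + 1.0 / sqrt(n2))
    return abs(f1 / n1 - f2 / n2) > bound

# 50/100 vs 10/100: clearly different; 50/100 vs 48/100: compatible.
assert hoeffding_different(50, 100, 10, 100)
assert not hoeffding_different(50, 100, 48, 100)
```

Two states of the candidate automaton are merged only if this test fails to separate their outgoing-transition (and termination) frequencies, recursively over their successors.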

4.3 Transducers

Algorithm Ostia was developed to learn subsequential transducers from examples consisting of pairs: each pair is composed of one string from the original language and its translation [25]. An extension of Ostia to allow the use of extra knowledge (typically about the domain and range of the function) has been proposed [26], and there exists an active learning version, where the goal is to learn from translation queries [27].

5 Some Open Problems and Challenges

The fact that there are a number of existing algorithms for inferring automata and transducers should not deter the potential researcher. Indeed, the many possible types of data from which to learn, the specific conditions posed on the automata we want to learn, and the criteria used to measure the quality of the learning process make it worthwhile to pursue research. As an alternative starting point, one may find some open problems relating to learning finite state machines in [28], and an extensive presentation can be found in [12]. Looking at broader perspectives, one should understand that most machine learning research today deals with statistical methods, making use of the fact that larger and larger samples of data are available. In this context, probabilistic finite state machines are an interesting model to investigate, as they can efficiently model distributions over strings (and also trees): the hope is that the language-theoretic properties associated with the underlying structures will be strong enough to enable researchers to come up with good learning algorithms.

References

1. Sakakibara, Y.: Recent advances of grammatical inference. Theoretical Computer Science 185, 15–45 (1997)
2. de la Higuera, C.: A bibliographical study of grammatical inference. Pattern Recognition 38, 1332–1348 (2005)


3. de la Higuera, C.: Data complexity issues in grammatical inference. In: Basu, M., Ho, T.K. (eds.) Data Complexity in Pattern Recognition, pp. 153–172. Springer, Heidelberg (2006)
4. Gold, E.M.: Language identification in the limit. Information and Control 10(5), 447–474 (1967)
5. Gold, E.M.: Complexity of automaton identification from given data. Information and Control 37, 302–320 (1978)
6. Pitt, L.: Inductive inference, Dfa's, and computational complexity. In: Jantke, K.P. (ed.) AII 1989. LNCS (LNAI), vol. 397, pp. 18–44. Springer, Heidelberg (1989)
7. de la Higuera, C.: Characteristic sets for polynomial grammatical inference. Machine Learning Journal 27, 125–138 (1997)
8. de la Higuera, C., Janodet, J.C., Tantini, F.: Learning languages from bounded resources: the case of the DFA and the balls of strings. In: Clark, A., Coste, F., Miclet, L. (eds.) ICGI 2008. LNCS, vol. 5278, pp. 43–56. Springer, Heidelberg (2008)
9. Dupont, P., Miclet, L., Vidal, E.: What is the search space of the regular inference? In: [29], pp. 25–37
10. Lang, K.J., Pearlmutter, B.A., Price, R.A.: Results of the Abbadingo one DFA learning competition and a new evidence-driven state merging algorithm. In: Honavar, V., Slutski, G. (eds.) ICGI 1998. LNCS (LNAI), vol. 1433, pp. 1–12. Springer, Heidelberg (1998)
11. Oncina, J., García, P.: Identifying regular languages in polynomial time. In: Bunke, H. (ed.) Advances in Structural and Syntactic Pattern Recognition. Series in Machine Perception and Artificial Intelligence, vol. 5, pp. 99–108. World Scientific, Singapore (1992)
12. de la Higuera, C.: Grammatical inference: learning automata and grammars. Cambridge University Press, Cambridge (2010)
13. Angluin, D.: Inductive inference of formal languages from positive data. Information and Control 45, 117–135 (1980)
14. García, P., Vidal, E.: Inference of k-testable languages in the strict sense and applications to syntactic pattern recognition. Pattern Analysis and Machine Intelligence 12(9), 920–925 (1990)
15. Angluin, D.: Inference of reversible languages. Journal of the Association for Computing Machinery 29(3), 741–765 (1982)
16. Angluin, D., Kharitonov, M.: When won't membership queries help? In: Proceedings of the 24th ACM Symposium on Theory of Computing, pp. 444–454. ACM Press, New York (1991)
17. Kearns, M.J., Vazirani, U.: An Introduction to Computational Learning Theory. MIT Press, Cambridge (1994)
18. Angluin, D.: Queries and concept learning. Machine Learning Journal 2, 319–342 (1987)
19. Angluin, D.: Negative results for equivalence queries. Machine Learning Journal 5, 121–150 (1990)
20. Angluin, D.: Learning regular sets from queries and counterexamples. Information and Control 39, 337–350 (1987)
21. Carrasco, R.C., Oncina, J.: Learning stochastic regular grammars by means of a state merging method. In: [29], pp. 139–150
22. Thollard, F., Dupont, P., de la Higuera, C.: Probabilistic Dfa inference using Kullback-Leibler divergence and minimality. In: Proceedings of the 17th International Conference on Machine Learning, pp. 975–982. Morgan Kaufmann, San Francisco (2000)


23. Ron, D., Singer, Y., Tishby, N.: Learning probabilistic automata with variable memory length. In: Proceedings of COLT 1994, pp. 35–46. ACM Press, New Brunswick (1994)
24. Denis, F., Esposito, Y.: Learning classes of probabilistic automata. In: Shawe-Taylor, J., Singer, Y. (eds.) COLT 2004. LNCS, vol. 3120, pp. 124–139. Springer, Heidelberg (2004)
25. Oncina, J., García, P., Vidal, E.: Learning subsequential transducers for pattern recognition interpretation tasks. Pattern Analysis and Machine Intelligence 15(5), 448–458 (1993)
26. Oncina, J., Varó, M.A.: Using domain information during the learning of a subsequential transducer. In: [30], pp. 301–312
27. Vilar, J.M.: Query learning of subsequential transducers. In: [30], pp. 72–83
28. de la Higuera, C.: Ten open problems in grammatical inference. In: Sakakibara, Y., Kobayashi, S., Sato, K., Nishino, T., Tomita, E. (eds.) ICGI 2006. LNCS (LNAI), vol. 4201, pp. 32–44. Springer, Heidelberg (2006)
29. Carrasco, R.C., Oncina, J. (eds.): ICGI 1994. LNCS (LNAI), vol. 862. Springer, Heidelberg (1994)
30. Miclet, L., de la Higuera, C. (eds.): ICGI 1996. LNCS (LNAI), vol. 1147. Springer, Heidelberg (1996)

Developing Computational Morphology for Low- and Middle-Density Languages

Kemal Oflazer

Carnegie Mellon University - Qatar, Education City, PO Box 24866, Doha, Qatar
[email protected]

Abstract. This tutorial will present an overview of the techniques for developing morphological processors for low- and middle-density languages. The developed morphological analyzers and generators can be used in many language processing applications.

1

What Is Computational Morphology?

Words in languages contain pieces of syntactic and semantic information encoded in various ways. Many language processing tasks need either to extract and process the information encoded in words or to synthesize words from available semantic and syntactic information. Computational morphology aims at developing formalisms and algorithms for the computational analysis and synthesis of word forms for use in language processing applications. Applications such as spelling checking and correction, stemming in document indexing, etc. also rely on techniques in computational morphology, especially for languages with rich morphology. Morphological analysis is the process of decomposing words into their constituents. The individual constituents of a word can be used to determine the necessary information about the word as a whole and how it needs to be interpreted in the given context. Such information may range from basic part-of-speech information assigned from a fixed inventory of tags to structural information consisting of the relationships between components of the word, further annotated with various features and their values. Morphological generation synthesizes words by making sure that the components making up a word are combined properly and their interactions are properly handled.
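The analysis/generation distinction can be illustrated with a deliberately tiny toy (hypothetical code; the lexicon, suffix rule and tag names are invented and far simpler than a real morphological processor):

```python
# Toy suffix-based analyzer/generator for a few English nouns.
# The lexicon and tag inventory are invented for illustration only.
LEXICON = {"cat", "dog", "tree"}

def analyze(word):
    """Decompose a word form into lemma plus morphosyntactic tags."""
    if word in LEXICON:
        return [(word, "+Noun", "+Sg")]
    if word.endswith("s") and word[:-1] in LEXICON:
        return [(word[:-1], "+Noun", "+Pl")]
    return []  # unknown form

def generate(lemma, number):
    """Synthesize a surface form from a lemma and a number feature."""
    return lemma if number == "+Sg" else lemma + "s"

analyze("cats")          # [("cat", "+Noun", "+Pl")]
generate("dog", "+Pl")   # "dogs"
```

A real morphological processor replaces the dictionary and the single suffix rule with finite-state lexicons and rule transducers, which is exactly what the rest of this tutorial covers.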

2

Contents of the Tutorial

This tutorial will present an overview of the techniques for developing morphological processors that can be used in part-of-speech tagging, syntactic parsing,

On long-term leave from the Faculty of Engineering and Natural Sciences, Sabancı University, Istanbul, 34956, Turkey.

A. Yli-Jyrä et al. (Eds.): FSMNLP 2009, LNAI 6062, pp. 11–12, 2010. © Springer-Verlag Berlin Heidelberg 2010


text-to-speech, spelling checking and correction, document indexing and retrieval, and many other language processing applications. The tutorial covers the following main topics:

– An overview of aspects of morphology and how morphology encodes information in various languages,
– An overview of computational morphology and how it relates to other tasks in natural language processing,
– A review of the mathematical background: finite state recognizers, regular languages and expressions, finite state transducers, regular relations, combining transducers, extended regular expression operators, etc.,
– Fundamental components of morphological processors: lexicons, morphophonological and morphographemic rules,
– Finite state approaches to computational morphology,
– Parallel constraints approach: two-level morphology,
– Serial transductions: cascaded rules,
– Issues in engineering wide-coverage morphological analyzers.

Depending on the time available, the tutorial may also cover how certain interesting morphological phenomena in certain languages can be computationally handled, and provide a demonstration of various aspects of xfst, the Xerox Finite State Tool, for building morphological processors.

3

Links

The tutorial slides are entitled "Computational Morphology" and are available, e.g., at http://fsmnlp2009.fastar.org/Program.html, or directly from the author.

fsm2 – A Scripting Language Interpreter for Manipulating Weighted Finite-State Automata

Thomas Hanneforth

Dept. for Linguistics, University of Potsdam, Germany

Abstract. The present article describes fsm2, a software program which can be used interactively or as a script interpreter to manipulate weighted finite-state automata with around 100 different commands. fsm2 is based on FSM – an efficient C++ template library to create and algebraically manipulate weighted automata.

1

Introduction

fsm2 is a simple XFST-style interpreter [1] for FSM, an efficient C++ template library to create and algebraically manipulate weighted automata. Like XFST it is based on a stack machine, that is, most fsm2-commands manipulate finite-state automata on a stack. The notable differences to XFST are:

1. The (weighted) regular expression syntax is adopted from AT&T's LexTools [2].
2. All finite-state machines are always (sometimes trivially) weighted with an algebraic weight structure called a semiring (cf. Section 2).
3. All higher-level operations in FSM (on regular expressions, grammars, lexicons etc.) are based on a symbol specification. A symbol specification provides a mapping between symbols and numbers (the latter being used internally). So, in almost every case, the first step within the fsm2-interpreter consists of loading a symbol specification file (cf. Section 3.4).
4. FSM and its interpreter fsm2 are open source software.

The structure of this article is as follows: after recalling some technical preliminaries in Section 2 (additional technical definitions will be introduced in later sections, when needed), Section 3 focuses on the commands for manipulating weighted automata and creating them from symbolic expressions, while Section 4 describes some of the non-standard features of fsm2. Finally, Section 5 discusses features that will be added to the scripting language in later versions of fsm2.

2

Formal Background

Finite-state automata in FSM are weighted with an algebraic structure called a semiring.

A. Yli-Jyrä et al. (Eds.): FSMNLP 2009, LNAI 6062, pp. 13–30, 2010. © Springer-Verlag Berlin Heidelberg 2010


Definition 1 (Semiring). A structure K = ⟨W, ⊕, ⊗, 0, 1⟩ is a semiring [3] if it fulfills the following conditions:

1. ⟨W, ⊕, 0⟩ is a commutative monoid with 0 as the identity element for ⊕.
2. ⟨W, ⊗, 1⟩ is a monoid with 1 as the identity element for ⊗.
3. ⊗ distributes over ⊕:
   ∀x, y, z ∈ W : x ⊗ (y ⊕ z) = (x ⊗ y) ⊕ (x ⊗ z) (left distributivity)
   ∀x, y, z ∈ W : (y ⊕ z) ⊗ x = (y ⊗ x) ⊕ (z ⊗ x) (right distributivity)
4. 0 is an annihilator for ⊗: ∀w ∈ W : w ⊗ 0 = 0 ⊗ w = 0.

In the following, we will identify a semiring K with its carrier set W.

Definition 2 (Semiring properties). Let K = ⟨W, ⊕, ⊗, 0, 1⟩ be a semiring.

– K is called idempotent if ∀a ∈ K : a ⊕ a = a.
– K has the path property if ∀a, b ∈ K : a ⊕ b = a or a ⊕ b = b.
– K is commutative if ∀a, b ∈ K : a ⊗ b = b ⊗ a.
– K is 0-divisor-free if ∀a, b ∈ K with a, b ≠ 0 : a ⊗ b ≠ 0.

With the definition of a semiring at hand, it is possible to define weighted finite-state automata:

Definition 3 (Weighted finite-state acceptor). A weighted finite-state acceptor (henceforth WFSA, [4]) A = ⟨Σ, Q, q0, F, E, λ, ρ⟩ over a semiring K is a 7-tuple with

1. Σ, the finite input alphabet,
2. Q, the finite set of states,
3. q0 ∈ Q, the start state,
4. F ⊆ Q, the set of final states,
5. E ⊆ Q × (Σ ∪ {ε}) × K × Q, the set of transitions,
6. λ ∈ K, the initial weight, and
7. ρ : F → K, the final weight function mapping final states to elements of K.
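The conditions of Def. 1 (and the idempotency property of Def. 2) can be spot-checked numerically for a concrete semiring. The following Python sketch (an illustration only; the sample values are arbitrary) does this for the tropical semiring, where ⊕ = min, ⊗ = +, 0 = +∞ and 1 = 0:

```python
# Spot-check of the semiring laws of Def. 1 for the tropical semiring:
# oplus = min, otimes = +, zero = +inf, one = 0. Illustration only.
INF = float("inf")
oplus = min
otimes = lambda a, b: a + b

for x in (0.0, 1.5, 3.0, INF):
    for y in (0.0, 2.0, INF):
        for z in (0.5, 4.0):
            # left and right distributivity of otimes over oplus
            assert otimes(x, oplus(y, z)) == oplus(otimes(x, y), otimes(x, z))
            assert otimes(oplus(y, z), x) == oplus(otimes(y, x), otimes(z, x))
    # +inf annihilates under otimes, and 0 is the otimes-identity
    assert otimes(x, INF) == INF and otimes(INF, x) == INF
    assert otimes(x, 0.0) == x
    # min is idempotent, so the tropical semiring is idempotent (Def. 2)
    assert oplus(x, x) == x
```

A finite spot-check is of course no proof, but it makes the role of each axiom concrete before weighted automata are built on top of them.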

An extension of WFSAs are weighted finite-state transducers (WFSTs), sometimes also called weighted string-to-string transducers. WFSTs have a second alphabet Γ of output symbols and a slightly extended definition of the transition set: E ⊆ Q × (Σ ∪ {ε}) × (Γ ∪ {ε}) × K × Q. The following definition functionally relates strings accepted (or generated) by a WFSA to semiring weights¹:

Definition 4 (Weight associated with a string). Given a WFSA A, let a path π = t1 t2 . . . tk be a sequence of adjacent transitions. Let Π(q, x, p) for x ∈ Σ∗ be the set of paths from state q ∈ Q to state p ∈ Q such that the concatenation of the input symbols along each π ∈ Π(q, x, p) equals x. Let ω(π) be the ⊗-multiplication of all transition weights in a path π: ω(π) = w[t1] ⊗ w[t2] ⊗ . . . ⊗ w[tk].

¹ For weighted transducers T, the definition is extended to pairs of strings ⟦⟨x, y⟩⟧_T.


The weight associated with an input string x ∈ Σ∗ wrt a WFSA A – denoted by ⟦x⟧_A – is computed by the following equation:

    ⟦x⟧_A = ⊕_{q ∈ F, π ∈ Π(q0, x, q)} λ ⊗ ω(π) ⊗ ρ(q)    (1)

Definition 4 means that weights along an accepting path (leading to a final state) are combined by semiring multiplication. If there are several paths for some input string x – in this case the WFSA is called ambiguous – the weights of all those paths are combined additively.
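Eq. (1) can be read off directly in code. The following Python sketch (an illustration only; the automaton and its weights are invented, this is not FSM/fsm2 code) evaluates the weight of a string in a small ε-free WFSA over the tropical semiring by enumerating its accepting paths:

```python
# Illustrative evaluation of Eq. (1) for a small epsilon-free WFSA over
# the tropical semiring (oplus = min, otimes = +). Invented example.
INF = float("inf")

E = [(0, "a", 1.0, 1),   # transitions: (source, symbol, weight, target)
     (0, "a", 2.0, 2),
     (1, "b", 3.0, 3),
     (2, "b", 1.0, 3)]
rho = {3: 0.5}           # final weight function; q0 = 0, lambda = 0.0

def weight(x, q=0, acc=0.0):
    if not x:            # input consumed: otimes with the final weight
        return acc + rho[q] if q in rho else INF
    # oplus (= min) over all transitions matching the next input symbol
    return min((weight(x[1:], t, acc + w)
                for (s, a, w, t) in E if s == q and a == x[0]),
               default=INF)

weight("ab")   # two accepting paths: min(1+3+0.5, 2+1+0.5) = 3.5
```

The automaton is ambiguous for "ab", so the two path weights are combined by the tropical ⊕, i.e. the cheaper path wins.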

3

Features of fsm2

This section describes the basic commands defined in fsm2. We start with the supported semirings, move on to automata operations, and finally discuss the symbolic formalism offered by fsm2.

3.1

Supported Semirings

Table 1 shows the semirings currently predefined in fsm2². The list can be expanded by providing a new C++ semiring class and registering it in fsm2. The T × T-semiring in Table 1 is a ranked semiring after the following definition (cf. [4], [5]):

Definition 5 (Ranked tuple semiring). Let Ki be idempotent semirings (1 ≤ i ≤ k) for some constant k ≥ 1. Let x = ⟨x1, x2, . . . , xk⟩ and y = ⟨y1, y2, . . . , yk⟩ be two weight tuples, with xi, yi ∈ Ki. Define a new structure K = ⟨K1 × K2 × . . . × Kk, ⊕k, ⊗k, 0k, 1k⟩ where:

1. 0k = ⟨0K1, . . . , 0Kk⟩
2. 1k = ⟨1K1, . . . , 1Kk⟩
3. x ⊗k y = ⟨x1 ⊗K1 y1, . . . , xk ⊗Kk yk⟩
4. x ⊕k y = x if ∃j ≤ k, ∀i, 1 ≤ i < j : (xi = yi) ∧ (xj ⊕Kj yj = xj), and y otherwise

Items 1 to 3 in Def. 5 are straightforward. The definition of the ⊕-operation constitutes a lexicographic natural order over the k-tuples. As can immediately be seen from the definition of ⊕k, the ranked tuple semiring is

² The actually available semirings depend on compilation options in the Makefile of fsm2.

Table 1. Predefined semirings in fsm2

Semiring name       | Symbol | Definition                  | Comments
tropical            | T      | ⟨R+ ∪ {+∞}, min, +, +∞, 0⟩  |
real/probabilistic  | R, P   | ⟨R, +, ·, 0, 1⟩             |
log                 | L      | ⟨R ∪ {∞}, ⊕log, +, ∞, 0⟩    | x ⊕log y = −ln(e^−x + e^−y)
string              | S      | ⟨Σ∗ ∪ {s∞}, ∧, ·, s∞, ε⟩    | p-subsequential; x ∧ y = longest common prefix of x and y; s∞ = "infinite string"
arctic              | A      | ⟨R ∪ {−∞}, max, +, −∞, 0⟩   |
Viterbi             | V      | ⟨[0, 1], max, ·, 0, 1⟩      |
unification         | U      | ⟨FS, ⊕, ⊗, 0, ⊥⟩            | p-subsequential; FS = set of feature structures (fs); ⊕ = generalisation, ⊗ = unification, ⊥ = empty fs, 0 = inconsistent fs
string×tropical     | S × T  |                             | p-subsequential; key-value semiring
string×Viterbi      | S × V  |                             | p-subsequential; key-value semiring
unification×Viterbi | U × V  |                             | p-subsequential; key-value semiring
tropical×tropical   | T × T  |                             | ranked semiring product

idempotent by letting y = x. Moreover, if the sub-semirings have the path property, K will have it too. Idempotent semirings with the path property are suitable to support the best path operation [4]. Fig. 1 shows a WFST over the T × T-semiring which can be used to bracket occurrences of a search pattern α (here b|ab|ba|aba) (cf. [6] for details). The tropical semiring T adds weights along a path. The WFST in Fig. 1 adds 1 for every opening bracket to the second component of the weight tuple, while symbols which are part of the search pattern are scored with −1 in the weight tuple's first component.

Fig. 1. Bracketing transducer T[] over the T × T-semiring


Example 1 (Input string mapping 1). T[] maps the input cabac to the following output strings and their tuple weights:

c[aba]c → ⟨−3, 1⟩
c[ab]ac → ⟨−2, 1⟩

By composing a trivially weighted (that is, 1-weighted) WFSA representing the input with T[] and applying a best path operation afterwards, the first analysis, representing a longest match, can be selected. The definition of the tuple semiring ensures that the number of bracketed symbols is maximised in the first place; among the analyses being equal in this respect, the one with the least number of brackets is preferred:

Example 2 (Input string mapping 2). T[] maps the input cabbc to the following output strings and their tuple weights:

c[abb]c → ⟨−3, 1⟩
c[ab][b]c → ⟨−3, 2⟩
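The ranked ⊕-operation of Def. 5 that drives this longest-match preference can be sketched in a few lines of Python (an illustration only, not code from FSM/fsm2), instantiated with two tropical components:

```python
# Sketch of the ranked tuple semiring of Def. 5, instantiated with two
# tropical components (oplus = min, otimes = +). Illustration only.

def trop_plus(a, b):    # tropical oplus is min
    return min(a, b)

def trop_times(a, b):   # tropical otimes is arithmetic +
    return a + b

def ranked_times(x, y):
    # item 3 of Def. 5: componentwise otimes
    return tuple(trop_times(a, b) for a, b in zip(x, y))

def ranked_plus(x, y):
    # item 4 of Def. 5: the first component where the tuples differ
    # decides which whole tuple is chosen (lexicographic order)
    for a, b in zip(x, y):
        if a != b:
            return x if trop_plus(a, b) == a else y
    return x   # identical tuples: the idempotent case

# two analyses of 'cabac': more bracketed symbols (-3 < -2) wins
ranked_plus((-3, 1), (-2, 1))   # (-3, 1)
# equal first components: the analysis with fewer brackets wins
ranked_plus((-3, 1), (-3, 2))   # (-3, 1)
```

The first component thus acts as the primary criterion and the second only breaks ties, which is exactly the behaviour exploited by the bracketing transducer.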

Note that counting symbols is only the simplest application of the T × T-semiring. The part of the WFST which represents the pattern can be arbitrarily complex, by taking pattern contexts into account and by assigning different weights to a given pattern in different contexts. Semirings in fsm2 are changed with the semiring command; semiring tsr x tsr, for example, switches the system to the T × T-semiring. The fsm2 framework allows (flat) weight tuples with up to 10 component semirings.

3.2

Algebraic Operations

The fsm2-interpreter supports the full set of algebraic operations defined on WFSAs and WFSTs [7]. Table 2 gives an overview of the fsm2-commands associated with these operations.

Table 2. Commands for algebraic operations in fsm2

Command      | Meaning                                   | Applies to
union        | Union                                     | WFSA, WFST
concat       | Concatenation                             | WFSA, WFST
star         | Star closure                              | WFSA, WFST
plus         | Plus closure                              | WFSA, WFST
complement   | Complementation                           | unweighted FSA
intersect    | Intersection                              | WFSA
compose      | Composition                               | WFST
crossproduct | Cross product of two WFSAs                | WFSA
difference   | Difference of a WFSA and a FSA            | WFSA
project      | 1st, 2nd projection                       | WFST
reverse      | Reversal                                  | WFSA, WFST
substitute   | Substitution of WFSAs into a WFSA         | WFSA
bestpath     | Return an automaton representing the best path wrt the natural semiring order | WFSA, WFST


3.3

Equivalence Transformations

Equivalence transformations are operations which change/optimise the topology of a weighted automaton without changing its weighted language or relation. Table 3 summarises the equivalence transformations supported by fsm2.

Table 3. Commands for equivalence operations in fsm2

Command         | Meaning                                                                   | Applies to
rmepsilon       | Removes ε/ε:ε-transitions                                                 | WFSA, WFST
determinize     | Weighted acceptor/transducer determinisation, depending on the type of FSM | WFSA, WFST
minimize        | Minimises WFSAs                                                           | WFSA
synchronize     | Tries to synchronise input and output symbols                             | WFST
epsnormalize    | Tries to move output-ε-transitions towards the final states to reduce nondeterministic computation | WFST
optimize        | For WFSAs, ε-removal, determinisation and minimisation; for WFSTs, ε-removal and encoded determinisation/minimisation | WFSA, WFST
push weights    | Pushes weights towards the initial or final states                        | WFSA, WFST
collect-weights | Combines transitions with the same source & destination state and same labels by applying ⊕ to the transition weights | WFSA, WFST
connect         | Removes inaccessible states and transitions with 0-weight                 | WFSA, WFST
topsort         | Renumbers the states in such a way that all transitions go from lower to higher state numbers | Acyclic WFSA, WFST
sort            | Sorts after input/output label or weight                                  | WFSA, WFST

The FSM-library supports WFSAs with multiple outputs for semirings not having the path property, for example, the string and unification semirings or tuple semirings which have one of these semirings as a component. A special case of that type of WFSA are p-subsequential WFSAs, that is, WFSAs with a deterministic transition function and an extended final weight function ρ, which is defined as follows:

    ρ : F → 2^K, ∃p : 1 ≤ |ρ(q)| ≤ p, ∀q ∈ F.    (2)

That means that a final state q can emit up to p different semiring weights, for example strings in the case of a string semiring. The advantage of p-subsequential WFSAs is that they can handle a limited form of ambiguity often encountered in natural language processing without resorting to non-deterministic automata. Fig. 2 shows a 3-subsequential and minimised WFSA over the unification semiring for the German indefinite articles³.

³ Multiple outputs at final states are indicated by dotted lines.
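The behaviour described by Eq. (2) can be sketched as a toy lookup procedure (hypothetical code, not the FSM library): the transition function is deterministic, but a final state may carry a set of up to p outputs. The state numbering and output names below are invented:

```python
# Toy p-subsequential lookup: deterministic transitions plus a set of
# final outputs per final state (cf. Eq. 2). All names are invented.
delta = {(0, "e"): 1, (1, "i"): 2, (2, "n"): 3}   # spells "ein"
final_outputs = {3: {"fs-nom-masc", "fs-acc-neut", "fs-nom-neut"}}  # p = 3

def lookup(word, start=0):
    q = start
    for c in word:
        if (q, c) not in delta:
            return set()          # no transition: the word is rejected
        q = delta[(q, c)]
    return final_outputs.get(q, set())

lookup("ein")   # a single deterministic run yields up to p = 3 outputs
```

The point of the construction is that the ambiguity lives entirely in the final weight sets, so determinisation and minimisation still apply to the transition graph.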

Fig. 2. 3-subsequential U-WFSA for the German indefinite articles

Determinisation and minimisation of p-subsequential WFSAs (cf. [8]) is already fully supported in fsm2 ; ε-removal is currently restricted to WFSAs without ε-loops.


The string semiring S is often used to encode unweighted transducers, especially for the purpose of determinising them⁴. The output symbol of a transition of the unweighted transducer is simply represented as a string weight of an S-WFSA. After optimisation of that WFSA, it is converted back to an unweighted transducer⁵. Usually, ε-removal in WFSAs is based on the computation of an ε-distance (see also [10]):

    Δε(p, q) = ⊕_{π ∈ Π({p}, ε, {q})} w[π]    (3)

That is, the ε-distance Δε(p, q) between two states p and q is the ⊕-sum of the weights of all paths labeled only with ε leading from p to q. In semirings like S and U not having the path property, the ε-removal algorithm must not be based on computing ε-distances along Eq. (3) in the "conversion scenario" described above, since computing the ε-distance may result in a loss of information. For that purpose, fsm2 incorporates a special ε-removal algorithm for these semirings which is applicable as long as there are no ε-loops in the WFSA under consideration. The reification of string-to-string transducers as S-WFSAs can be extended to the weighted case, for which we define the notion of a key-value semiring.

Definition 6 (Key-value semiring). Let K = ⟨WK, ⊕K, ⊗K, 0K, 1K⟩ be a "p-subsequential" semiring (e.g. the string or unification semiring). Let N = ⟨WN, ⊕N, ⊗N, 0N, 1N⟩ be a semiring. Define the key-value semiring K × N = ⟨WK × WN, ⊕K×N, ⊗K×N, ⟨0K, 0N⟩, ⟨1K, 1N⟩⟩ as follows:

1. ⟨x1, y1⟩ ⊕K×N ⟨x2, y2⟩ = ⟨x1 ⊕K x2, y1 ⊕N y2⟩
2. ⟨x1, y1⟩ ⊗K×N ⟨x2, y2⟩ = ⟨x1 ⊗K x2, y1 ⊗N y2⟩

Up to now, Def. 6 simply defines the product of two semirings, the first (the key) being p-subsequential. To preserve the key and combine the values in the final output function ρ in WFSAs over a key-value semiring, a special union operation ∪⊕ is necessary in the p-subsequential case.

Definition 7 (∪⊕). Let K × N be a key-value semiring. The final weight combining function ∪⊕ : (K × N) × 2^(K×N) → 2^(K×N) is defined in the following way:

    ⟨x, y⟩ ∪⊕ S = (S − {⟨x′, y′⟩}) ∪ {⟨x, y ⊕ y′⟩}   if ∃⟨x′, y′⟩ ∈ S with x = x′
    ⟨x, y⟩ ∪⊕ S = S ∪ {⟨x, y⟩}                       otherwise    (4)

Let C⊕(S) denote the set of weights resulting from combining the elements of a weight set S according to Eq. (4).

⁴ fsm2 also incorporates a true determinisation algorithm for weighted transducers.
⁵ This is the current strategy in OpenFST ([9], personal communication with Johan Schalkwyk, July 2009).


The following example demonstrates the two cases of Def. 7 for the U × V-semiring:

Example 3 (∪⊕ in the U × V-semiring).

⟨[f:a, g:b], 0.5⟩ ∪⊕ { ⟨[f:a, g:b], 0.2⟩ } = { ⟨[f:a, g:b], 0.5⟩ }
⟨[f:a, g:b], 0.5⟩ ∪⊕ { ⟨[f:a, h:c], 0.2⟩ } = { ⟨[f:a, g:b], 0.5⟩, ⟨[f:a, h:c], 0.2⟩ }

In the first case, the keys are identical feature structures, so the Viterbi values are ⊕-combined (max(0.5, 0.2) = 0.5); in the second case, the keys differ, so the pair is added as a new element.

We are now ready to define the language of a WFSA over a key-value-semiring:

Definition 8 (String weight in a WFSA over a key-value-semiring). Let A be a WFSA over a key-value-semiring K × N. The weight (set) assigned by A to some input string x is defined as follows:

    ⟦x⟧_A = ∪_{f ∈ F, π ∈ Π({q0}, x, {f}), v ∈ C⊕(ρ(f))} {λ ⊗ ω[π] ⊗ v}
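As a concrete illustration of Def. 7 (hypothetical code, not part of fsm2), ∪⊕ can be sketched with plain strings standing in for the keys and the tropical semiring (⊕ = min) for the values:

```python
# Sketch of the final-weight combining function of Def. 7 / Eq. (4),
# with strings as keys and tropical (min) values. Illustration only.

def trop_plus(a, b):   # value-semiring oplus; here tropical: min
    return min(a, b)

def union_oplus(pair, S):
    """Insert a key-value pair into the weight set S: if the key is
    already present, oplus-combine the two values (first case of
    Eq. 4); otherwise add the pair as a new element (second case)."""
    x, y = pair
    for (x2, y2) in S:
        if x == x2:
            return (S - {(x2, y2)}) | {(x, trop_plus(y, y2))}
    return S | {(x, y)}

S = {("lemma1", 0.2)}
S = union_oplus(("lemma1", 0.5), S)   # same key: values combine to 0.2
S = union_oplus(("lemma2", 0.3), S)   # new key: the pair is added
# S == {("lemma1", 0.2), ("lemma2", 0.3)}
```

Keeping the keys distinct while ⊕-combining only the values is precisely what allows a p-subsequential WFSA to carry several outputs per final state without losing any of them.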

For example, a tropical-weighted string-to-string transducer could be represented as a WFSA over the S × T-semiring with respect to Def. 8.

3.4

Symbolic Formalism

Besides the automata operations described in the previous subsections, fsm2 incorporates a number of different compilers for translating high-level descriptions into weighted automata. As already described in the introduction, all symbolic representations are based on a symbol signature.

Symbol signatures. Internally, all weighted automata work with 2- or 4-byte integers as input/output alphabets. To allow symbolic computation, a symbol signature puts symbols and their internal integer representations into a one-to-one mapping. The format of fsm2's signature files is largely borrowed from LexTools [2]. Currently, there are two basic types of definitions in a symbol signature:

1. Type-subtype definitions
2. Category definitions

Fig. 3 shows an example symbol signature. Subtype definitions can be nested and define an acyclic inheritance hierarchy. Category definitions combine a category name (for example, NOUN) with a specification of which features are appropriate for that category. All symbols defined in the symbol signature, whether category, subtype or supertype, may be used in symbolic expressions (see the next subsections). Additionally, a number of special symbols, like the default symbol, the failure symbol, and the beginning-of-string and end-of-string symbols, are added


Letter            a b c d e f g h i j k l m n o p q r s t u v w x y z ä ö ü ß
Letter            A B C D E F G H I J K L M N O P Q R S T U V W X Y Z Ä Ö Ü
Case              acc dat gen nom
Gender            fem masc neut
Number            sg pl
Person            1 2 3
Person_or_Number  Person Number

Category: NOUN Gender Number Case
Category: VERB Person Number
Category: ARTDEF Gender Number Case

Fig. 3. Example symbol signature for German

automatically. Symbol signatures are loaded in fsm2 with the load symspec command; fsm2 supports signature files in both ASCII and UTF-8 format. To avoid the repeated parsing of big signature files – for example those used in language modeling tasks, where every word found in a given corpus counts as a separate symbol – fsm2 provides a way to precompile a symbol signature and to save and load it in that optimised form.

Weighted regular expressions. At the core of all compilers implemented in fsm2 is an efficient weighted regular expression compiler (WREC) with a syntax similar to that of LexTools [2]. Table 5 in the Appendix shows all the regular expression operators implemented in WREC. Fig. 4 shows the automaton corresponding to the mapping of the German noun Mütter (mothers) to its lemma, while Fig. 5 shows a weighted regular expression over the probabilistic semiring associating the German noun Mann (engl. man) with its three possible feature specifications. Notice the possibility of (abbreviatory) negation in Fig. 4.

Fig. 4. Trivially weighted transducer for Mütter : Mutter [NOUN Gender=fem Number=pl Case=!dat]

Fig. 5. Probabilistic acceptor for (Mann ([NOUN Gender=masc Number=sg Case=nom] | [NOUN Gender=masc Number=sg Case=acc] | [NOUN Gender=masc Number=sg Case=dat]))/d/m

Fig. 6. Lexicon for German article forms (wrt. the unification semiring)

Lexicons. Lexicons are simply treated as implicit disjunctions of regular expressions. Fig. 6 shows a lexicon for the German indefinite articles over the unification semiring⁶. Lexicons are processed with the load lexicon command.

Replacement rules. fsm2 supports a large set of replacement rules, partly taken from XFST. They are summarised in Table 4.

Table 4. Replacement operators

Operator                      | Meaning
alpha -> beta                 | Obligatory replacement of an instance of alpha by an instance of beta
alpha -> beta / gamma _ delta | Obligatory replacement of an instance of alpha by an instance of beta if alpha is preceded by an instance of gamma and followed by an instance of delta
alpha -> left ... right       | Obligatory bracketing of alpha with left and right
alpha @-> beta                | Longest match replacement of alpha by beta
alpha @-> left ... right      | Longest match bracketing of alpha with left and right
supertype1 => supertype2      | Obligatory replacement of each direct subtype of supertype1 with the corresponding direct subtype of supertype2

Most replacement operators also exist in an optional version. Replacement rules may also be put into a separate file which can be loaded with the load contextrules command.

Grammars. fsm2 supports a restricted form of weighted context-free grammar, as described in [11]⁷. Furthermore, for true context-free grammars which do not

⁶ The corresponding WFSA was already shown in Fig. 2.
⁷ Basically, a finite-state automaton cannot handle center embedding.

[NP]   --> [NOUN]
[NP]   --> [DET] [NOUN]
[NP]   --> [DET] [AP] [NOUN]
[NP]   --> [AP] [NOUN]
[NP]   --> [DET] [NOUN] [PP]
[NP]   --> [DET] [AP] [NOUN] [PP]
[PP]   --> [PREP] [NP]
[AP]   --> [ADJ] [AP] | [ADJ]
[DET]  --> [the] | a
[NOUN] --> [dog] | [cat] | [tree]
[ADJ]  --> [old] | [young] | [black]
[PREP] --> [of] | [under]

Fig. 7. Weighted probabilistic grammar

Fig. 8. Compiled WFSA for the grammar in Fig. 7

have an equivalent WFSA, an efficient method for approximating these kinds of grammars is available (see [12]). Fig. 7 shows a weighted grammar and Fig. 8 the corresponding WFSA over the probabilistic semiring. Grammars are loaded with the load grammar command. Grammar rules can be spread over different files by using the built-in #include command.

3.5

Input/Output

fsm2 makes use of several file formats to store weighted automata:

1. A text file format downward compatible with the format used by AT&T's FSM library [13].
2. An XML format.


3. Binary formats for different internal data structures representing finite-state machines, like adjacency lists, transition tables, adjacency matrices etc.

Finite-state machines stored in one of the binary formats can be loaded and saved very quickly. Furthermore, the fsm2-command draw produces GraphViz dot files⁸.

3.6

Scripts

All commands typed interactively into the fsm2-interpreter shell can also be put into a script file. Fig. 9 shows a script file for compiling an English lexicon into a minimised WFSA over the tropical semiring. Each lexicon entry is associated with a unique integer as a weight. This WFSA represents a perfect hashing function.

## Example script for creating a WFSA over the tropical semiring
## representing a perfect hash function
semiring tropical
load symspec sigma
## Load lexicon and create hash function
load lexicon words.lex
## Remove epsilons, determinise and minimise
optimize
## Save result as XML
print fsm > word_hash.xml
## Lookup
lookup every man loves a woman

Fig. 9. fsm2 -script for creating a WFSA from a lexicon of English words

Executing the script in Fig. 9 produces the output shown in Fig. 10.

4

Special Features

This section describes some of the non-standard features of the scripting language. 4.1

Macros

fsm2 admits parametrised macros, that is, subprograms with automata or strings as parameters. Fig. 11 shows a macro compiling a replacement rule UPPER -> LOWER / LC _ RC into a transducer using the method proposed in [14]. The script may be called with the command call left_to_right_replace(ab,x,ab,a) to obtain the WFST for the replacement rule ab -> x / ab _ a.

⁸ See www.graphviz.org. GraphViz can be used to create drawings for the WFSMs in several output formats.


FSM interactive interpreter v1.0.0 [Dec 8 2009] (tropical semiring)
Type ’help’ for help on commands
Semiring changed to ’tropical’
Symbol specification sigma.sym (118 user symbols, 11 supertypes, 0 categories) loaded in 0s
Lexicon "words.lex" (79767 lines) compiled in 1.55s
[weighted acceptor, 696282 states, 79766 final states, 696281 transitions]
FSM optimized in 0.907s
[weighted acceptor, 35798 states, 5895 final states, 76206 transitions]
FSM written to file "word_hash.xml"
every man loves a woman
Script "word_hash.fsm2" sourced in 2.719s

Fig. 10. Output of the script in Fig. 9

# left_to_right_replace(UPPER,LOWER,LC,RC)
macro left_to_right_replace(UPPER,LOWER,LC,RC)
call InsertBrackets()
sort olabel
call ConstrainBrackets()
compose
sort olabel
call LeftContext(%LC%)
compose
sort olabel
call Replace(%UPPER%,%LOWER%)
compose
sort olabel
call RightContext(%RC%)
compose
sort olabel
call RemoveBrackets()
compose
optimize ilabel
endmacro

Fig. 11. Macro for compiling a replacement rule UPPER -> LOWER / LC _ RC

4.2 Language Modeling

fsm2 provides commands for simple language modeling tasks like N -gram counting [15] and joint or conditional N -gram probabilisation [16]. Fig. 12 shows a bigram counter for alphabet {a, b, c, d}, and Fig. 13 a script for counting and probabilising bigrams. The trivially weighted probabilistic acceptor (henceforth: PFSA) representing the input string is composed with the (highly nondeterministic) bigram counter. On the language level, the ⊕-operation inherent in the definition of the projection operation (see [7]) performs the actual bigram counting. On the automaton level, the ⊕-operation present in all equivalence transformations achieves the counting effect by ⊕-combining paths labeled with the same N -gram.
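The counting-and-probabilisation effect described above can be checked with a short plain-Python sketch (an illustration of the arithmetic only, not of the automaton-level ⊕-combination): joint probabilities normalise by the total bigram count, conditional probabilities by the count of each history.

```python
from collections import Counter

text = "abcabbaaabbcccbcbbcaacbbbacbcabcba"  # the input string from Fig. 13
bigrams = Counter(text[i:i + 2] for i in range(len(text) - 1))
total = sum(bigrams.values())

# Joint probabilities: P(xy) = c(xy) / N, summing to 1 over ALL bigrams.
joint = {bg: c / total for bg, c in bigrams.items()}

# Conditional probabilities: P(y | x) = c(xy) / c(x·), summing to 1
# only per history x, mirroring the remark about Fig. 14 below.
hist = Counter()
for bg, c in bigrams.items():
    hist[bg[0]] += c
cond = {bg: c / hist[bg[0]] for bg, c in bigrams.items()}
```

The joint distribution sums to 1 globally; the conditional one sums to 1 for each history symbol.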


Fig. 12. Bigram counter

semiring probabilistic
load symspec sigma
regex "abcabbaaabbcccbcbbcaacbbbacbcabcba"
ngram counter 2
draw bigram_counter ""
compose
project 2
optimize
# Store the PFSA representing the bigram counts in a variable
define bigrams_fsa
probabilize 2 joint
draw bigrams_joint ""
push stack bigrams_fsa
probabilize 2 conditional
draw bigrams_cond ""

Fig. 13. Script for counting and probabilising bigrams

Fig. 14. Bigram joint and conditional probability distributions

Fig. 14 shows the optimised PFSAs representing the joint and conditional probability distributions over the bigrams found in the given input text. Note that a PFSA representing a joint distribution is always probabilistic, that is, the weights of the transitions leaving a given state sum to 1. A PFSA representing a conditional distribution, on the other hand, is probabilistic only with respect to the states at the (N − 1)-level.


Since N-gram counting transducers such as the one shown in Fig. 12 may become huge for large alphabets, these kinds of transducers are implemented in fsm2 as virtual constant automata. This means that they exist only virtually, and their transition function is computed on demand (refer to [16] for details).
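The on-demand idea can be sketched in a few lines of Python (an assumed simplification, not fsm2's internal representation): the transition function of an N-gram counter is a pure function of the current state, so no transitions need to be stored, regardless of alphabet size.

```python
class VirtualNgramCounter:
    """Sketch of a 'virtual constant' automaton: transitions are never
    stored; they are computed from the state on demand. A state is the
    tuple of the last N-1 symbols read."""

    def __init__(self, alphabet, n=2):
        self.alphabet, self.n = tuple(alphabet), n

    def step(self, state, sym):
        # target state: shift the history window by one symbol
        return (state + (sym,))[-(self.n - 1):]

    def transitions(self, state):
        # generated lazily; nothing is precomputed or cached
        for sym in self.alphabet:
            yield sym, self.step(state, sym)

fsm = VirtualNgramCounter("abcd", n=2)
```

After reading "ab" from the start state `()`, the automaton is in state `('b',)`, and its four outgoing transitions are produced only when requested.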

5 Further Directions

Future versions of fsm2 will include:

– More commands supporting language modeling, for example the creation of back-off [17] and interpolated language models [18], probabilistic taggers, etc.
– Commands for creating automata for pattern matching, for example suffix and failure transition automata [19].
– A fast incremental lexicon compiler creating already minimised automata [20].
– Efficient partial determinisation algorithms for non-determinisable transducers (for example, those typically encountered in computational morphology and robust finite-state-based parsing).

The fsm2 source code is available at www.fsmlib.org.

References

1. Beesley, K.R., Karttunen, L.: Finite State Morphology. CSLI, Stanford (2003)
2. Roark, B., Sproat, R.: Computational Approaches to Syntax and Morphology. Oxford University Press, Oxford (2007)
3. Kuich, W., Salomaa, A.: Semirings, Automata, Languages. EATCS Monographs on Theoretical Computer Science, vol. 5. Springer, Heidelberg (1986)
4. Mohri, M.: Semiring Frameworks and Algorithms for Shortest-Distance Problems. Journal of Automata, Languages and Combinatorics 7(3), 321–350 (2002)
5. Hanneforth, T.: Using ranked semirings for representing morphology automata. In: Mahlow, C., Piotrowski, M. (eds.) State of the Art in Computational Morphology, pp. 1–9. Springer, Heidelberg (2009)
6. Hanneforth, T.: Longest-match pattern matching with weighted finite state automata. In: Yli-Jyrä, A., Karttunen, L., Karhumäki, J. (eds.) FSMNLP 2005. LNCS (LNAI), vol. 4002, pp. 78–85. Springer, Heidelberg (2006)
7. Mohri, M.: Weighted automata algorithms. In: Droste, M., Kuich, W., Vogler, H. (eds.) Handbook of Weighted Automata. Springer, Heidelberg (2009)
8. Mohri, M.: Minimization Algorithms for Sequential Transducers. Theoretical Computer Science 234, 177–201 (2000)
9. Allauzen, C., Riley, M., Schalkwyk, J., Skut, W., Mohri, M.: OpenFst: A general and efficient weighted finite-state transducer library. In: Holub, J., Žďárek, J. (eds.) CIAA 2007. LNCS, vol. 4783, pp. 11–23. Springer, Heidelberg (2007)
10. Mohri, M.: Generic epsilon-removal algorithm for weighted automata. In: Yu, S., Păun, A. (eds.) CIAA 2000. LNCS, vol. 2088, pp. 230–242. Springer, Heidelberg (2001)
11. Mohri, M., Pereira, F.C.N.: Dynamic compilation of weighted context-free grammars. In: Proceedings of ACL 1998, pp. 891–897 (1998)


12. Mohri, M., Nederhof, M.J.: Regular approximation of context-free grammars through transformation. In: Junqua, J.C., van Noord, G. (eds.) Robustness in Language and Speech Technology, pp. 153–163. Kluwer Academic Publishers, Dordrecht (2001)
13. Mohri, M., Pereira, F.C.N., Riley, M.: The design principles of a weighted finite-state transducer library. Theoretical Computer Science 231, 17–32 (2000)
14. Karttunen, L.: The replace operator. In: 33rd Annual Meeting of the Association for Computational Linguistics, pp. 16–23 (1995)
15. Allauzen, C., Mohri, M., Roark, B.: Generalized algorithms for constructing statistical language models. In: Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pp. 40–47 (2003)
16. Hanneforth, T., Würzner, K.M.: Statistical language models within the algebra of weighted rational languages. Acta Cybernetica 19, 313–356 (2009)
17. Katz, S.M.: Estimation of probabilities from sparse data for the language model component of a speech recognizer. IEEE Transactions on Acoustics, Speech and Signal Processing 35(3), 400–401 (1987)
18. Jelinek, F.: Statistical Methods for Speech Recognition. Language, Speech and Communication. MIT Press, Cambridge (1997)
19. Aho, A.V., Corasick, M.J.: Efficient string matching: An aid to bibliographic search. Communications of the Association for Computing Machinery 18(6), 333–340 (1975)
20. Daciuk, J., Watson, B.W., Mihov, S., Watson, R.E.: Incremental construction of minimal acyclic finite-state automata. Computational Linguistics 26(1), 3–16 (2000)


Appendix

Table 5. Built-in regular expression operators

Operator     Type     Meaning
! RE         Prefix   Complement
$ RE         Prefix   Contains operator
RE *         Postfix  Star closure
RE +         Postfix  Plus closure
RE ?         Postfix  Optionality
RE /1        Postfix  First projection
RE /2        Postfix  Second projection
RE /r        Postfix  Reversal
RE /b        Postfix  Best path
RE /pw       Postfix  Push weights towards initial state
RE /pl       Postfix  Push labels towards initial state
RE /e        Postfix  Remove epsilon transitions
RE /d        Postfix  Determinize
RE /m        Postfix  Minimize
RE RE        Infix    Operatorless concatenation
RE | RE      Infix    Disjunction
RE & RE      Infix    Intersection
RE - RE      Infix    Difference
RE : RE      Infix    Cross product
RE @ RE      Infix    Composition
RE /i TYPE   Infix    Ignore
RE /iff RE   Infix    Iff-suffix-then-prefix operator

Selected Operations and Applications of n-Tape Weighted Finite-State Machines

André Kempe

Cadège Technologies
32 rue Brancion – 75015 Paris – France
[email protected]
http://a.kempe.free.fr

Abstract. A weighted finite-state machine with n tapes (n-WFSM) defines a rational relation on n strings. The paper recalls important operations on these relations, and an algorithm for their auto-intersection. Through a series of practical applications, it investigates the augmented descriptive power of n-WFSMs, w.r.t. classical 1- and 2-WFSMs (weighted acceptors and transducers). Some of the presented applications are not feasible with the latter.

1 Introduction

A weighted finite-state machine with n tapes (n-WFSM) [33,7,14,10,12] defines a rational relation on n strings. It is a generalization of weighted acceptors (one tape) and transducers (two tapes). This paper investigates the potential of n-ary rational relations (resp. n-WFSMs), compared to languages and binary relations (resp. acceptors and transducers), in practical tasks. All described operations and applications have been implemented with Xerox’s WFSC tool [17].

The paper is organized as follows: Section 2 recalls some basic definitions about n-ary weighted rational relations and n-WFSMs. Section 3 summarizes some central operations on these relations and machines, such as join and auto-intersection. Unfortunately, due to Post’s Correspondence Problem, there cannot exist a fully general auto-intersection algorithm. Section 4 recalls a restricted algorithm for a class of n-WFSMs. Section 5 demonstrates the augmented descriptive power of n-WFSMs through a series of practical applications, namely the morphological analysis of Semitic languages (5.1), the preservation of intermediate results in transducer cascades (5.2), the induction of morphological rules from corpora (5.3), the alignment of lexicon entries (5.4), the automatic extraction of acronyms and their meaning from corpora (5.5), and the search for cognates in a bilingual lexicon (5.6).

Sections 2–4 are based on published results [18,19,20,4], obtained at Xerox Research Centre Europe (XRCE), Meylan, France, through joint work between Jean-Marc Champarnaud (Rouen Univ.), Jason Eisner (Johns Hopkins Univ.), Franck Guingne and Florent Nicart (XRCE and Rouen Univ.), and the author.

A. Yli-Jyrä et al. (Eds.): FSMNLP 2009, LNAI 6062, pp. 31–46, 2010.
© Springer-Verlag Berlin Heidelberg 2010


2 Definitions

We recall some definitions about n-ary weighted relations and their machines, following the usual definitions for multi-tape automata [7,6], with semiring weights added just as for acceptors and transducers [24,27]. For more details see [18].

A weighted n-ary relation is a function from (Σ∗)ⁿ to K, for a given finite alphabet Σ and a given weight semiring K = ⟨K, ⊕, ⊗, 0̄, 1̄⟩. Such a relation assigns a weight to any n-tuple of strings. A weight of 0̄ can be interpreted as meaning that the tuple is not in the relation. We are especially interested in rational (or regular) n-ary relations, i.e. relations that can be encoded by n-tape weighted finite-state machines, which we now define.

We adopt the convention that variable names referring to n-tuples of strings include a superscript (n). Thus we write s(n) rather than s for a tuple of strings ⟨s1, . . . , sn⟩. We also use this convention for the names of objects that “contain” n-tuples of strings, such as n-tape machines and their transitions and paths.

An n-tape weighted finite-state machine (n-WFSM) A(n) is defined by a six-tuple A(n) = ⟨Σ, Q, K, E(n), λ, ρ⟩, with Σ being a finite alphabet, Q a finite set of states, K = ⟨K, ⊕, ⊗, 0̄, 1̄⟩ the semiring of weights, E(n) ⊆ (Q × (Σ∗)ⁿ × K × Q) a finite set of weighted n-tape transitions, λ : Q → K a function that assigns initial weights to states, and ρ : Q → K a function that assigns final weights to states. Any transition e(n) ∈ E(n) has the form e(n) = ⟨y, ℓ(n), w, t⟩. We refer to these four components as the transition’s source state y(e(n)) ∈ Q, its label ℓ(e(n)) ∈ (Σ∗)ⁿ, its weight w(e(n)) ∈ K, and its target state t(e(n)) ∈ Q. We refer by E(q) to the set of outgoing transitions of a state q ∈ Q (with E(q) ⊆ E(n)).

A path γ(n) of length k ≥ 0 is a sequence of transitions e1(n) e2(n) · · · ek(n) such that t(ei(n)) = y(ei+1(n)) for all i ∈ [1, k−1]. The label of a path is the element-wise concatenation of the labels of its transitions. The weight of a path γ(n) is

    w(γ(n)) =def λ(y(e1(n))) ⊗ ( ⊗j∈[1,k] w(ej(n)) ) ⊗ ρ(t(ek(n)))        (1)

The path is said to be successful, and to accept its label, if w(γ(n)) ≠ 0̄.
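Eq. (1) instantiated for the tropical semiring can be sketched in a few lines (an illustration under the stated definitions, with ⊗ = + and 0̄ = +∞):

```python
# Tropical semiring: ⊕ = min, ⊗ = +, 0̄ = +inf, 1̄ = 0. Eq. (1):
# w(γ) = λ(start) ⊗ w(e1) ⊗ ... ⊗ w(ek) ⊗ ρ(end).
INF = float("inf")  # 0̄ of the tropical semiring

def path_weight(lam, transition_weights, rho):
    w = lam                      # λ of the initial state
    for x in transition_weights:
        w += x                   # ⊗ is + in the tropical semiring
    return w + rho               # ρ of the final state

def successful(w):
    # a path is successful iff its weight differs from 0̄
    return w != INF
```

A path through a non-final state (ρ = 0̄ = +∞) thus gets weight +∞ and is not successful.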

3 Operations

We now recall some central operations on n-ary weighted relations and n-WFSMs [21]. The auto-intersection operation was introduced with the aim of simplifying the computation of the join operation. The notation is inspired by relational databases. For mathematical details of simple operations see [18].

3.1 Simple Operations

Any n-ary weighted rational relation can be constructed by combining the basic rational operations of union, concatenation and closure. Rational operations can be implemented by simple constructions on the corresponding non-deterministic n-tape WFSMs [34]. These n-tape constructions and their semiring-weighted versions are exactly the same as for acceptors and transducers, since they are indifferent to the n-tuple transition labels.

The projection operator πj1,...jm, with j1, . . . jm ∈ [1, n], maps an n-ary relation to an m-ary one by retaining in each tuple the components specified by the indices j1, . . . jm and placing them in the specified order. Indices may occur in any order, possibly with repeats. Thus the tapes can be permuted or duplicated: π2,1 inverts a 2-ary relation. The complementary projection operator π̄{j1,...jm} removes the tapes j1, . . . jm and preserves the order of the other tapes.

3.2 Join Operation

The n-WFSM join operator differs from database join in that database columns are named, whereas our tapes are numbered. Since tapes must explicitly be selected by number, join is neither associative nor commutative. For any distinct i1, . . . ir ∈ [1, n] and any distinct j1, . . . jr ∈ [1, m], we define a join operator ⋈{i1=j1,...ir=jr}. It combines an n-ary and an m-ary relation into an (n + m − r)-ary relation defined as follows:¹

    ( R1(n) ⋈{i1=j1,...ir=jr} R2(m) )(⟨u1, . . . un, s1, . . . sm−r⟩) =def R1(u(n)) ⊗ R2(v(m))        (2)

v(m) being the unique tuple s.t. π̄{j1,...jr}(v(m)) = s(m−r) and (∀k ∈ [1, r]) vjk = uik.

Important special cases of join are the crossproduct R1(n) × R2(m) = R1(n) ⋈∅ R2(m) (the join with no constraints), intersection R1(n) ∩ R2(n) = R1(n) ⋈{1=1,...n=n} R2(n), and transducer composition R1(2) ◦ R2(2) = π̄{2}(R1(2) ⋈{2=1} R2(2)).

Unfortunately, rational relations are not closed under arbitrary joins [18]. Since the join operation is very useful in practical applications (Sec. 5), it is helpful to have even a partial algorithm: hence our motivation for studying auto-intersection.
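On finite relations, eq. (2) can be sketched directly. The following plain-Python illustration (an assumed dict encoding, tropical weights) reproduces the worked example from footnote 1:

```python
INF = float("inf")

def join(r1, r2, constraints):
    """Finite weighted relations as {tuple-of-strings: weight} dicts,
    tropical weights (⊗ = +, ⊕ = min). `constraints` is a dict {i: j}:
    tape i of r1 must equal tape j of r2 (1-based, as in the paper)."""
    out = {}
    for u, wu in r1.items():
        for v, wv in r2.items():
            if all(u[i - 1] == v[j - 1] for i, j in constraints.items()):
                kept = tuple(x for j, x in enumerate(v, 1)
                             if j not in constraints.values())
                t = u + kept
                out[t] = min(out.get(t, INF), wu + wv)
    return out

# Footnote 1's example: ⟨abc, def, ε⟩ ⋈{2=1,3=3} ⟨def, ghi, ε, jkl⟩
r1 = {("abc", "def", ""): 1.0}
r2 = {("def", "ghi", "", "jkl"): 2.0}
joined = join(r1, r2, {2: 1, 3: 3})
```

The result is the 5-tuple ⟨abc, def, ε, ghi, jkl⟩ with the ⊗-product (here: sum) of the two weights.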

3.3 Auto-intersection

For any distinct i1, j1, . . . ir, jr ∈ [1, n], we define an auto-intersection operator σ{i1=j1,i2=j2,...ir=jr}. It maps a relation R(n) to a subset of that relation, preserving tuples s(n) whose elements are equal in pairs as specified, but removing other tuples from the support of the relation.² The formal definition is:

    ( σ{i1=j1,...ir=jr}(R(n)) )(⟨s1, . . . sn⟩) =def  R(n)(⟨s1, . . . sn⟩)  if (∀k ∈ [1, r]) sik = sjk
                                                     0̄                   otherwise        (3)

¹ For example, the tuples ⟨abc, def, ε⟩ and ⟨def, ghi, ε, jkl⟩ combine in the join ⋈{2=1,3=3} and yield the tuple ⟨abc, def, ε, ghi, jkl⟩, with a weight equal to the product of their weights.
² The requirement that the 2r indices be distinct mirrors the similar requirement on join and is needed in (5). But it can be evaded by duplicating tapes: the illegal operation σ{1=2,2=3}(R) can be computed as π̄{3}(σ{1=2,3=4}(π1,2,2,3(R))).
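On a finite relation, eq. (3) amounts to a filter; a minimal sketch with the same dict encoding as above:

```python
def auto_intersect(rel, pairs):
    """σ{i=j} on a finite weighted relation: keep a tuple iff its
    components are equal pairwise as specified (1-based tape indices);
    other tuples get weight 0̄, i.e. are simply dropped here."""
    return {t: w for t, w in rel.items()
            if all(t[i - 1] == t[j - 1] for i, j in pairs)}

rel = {("ab", "ab", "xy"): 1.0, ("ab", "ba", "yz"): 2.0}
kept = auto_intersect(rel, [(1, 2)])
```

Only the tuple whose tapes 1 and 2 carry the same string survives. The hard part, treated below, is doing this on (possibly infinite) rational relations.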


It is easy to check that auto-intersecting a relation is different from joining the relation with its own projections [18]. Actually, join and auto-intersection are related by the following equalities:

    R1(n) ⋈{i1=j1,...ir=jr} R2(m) = π̄{n+j1,...n+jr}( σ{i1=n+j1,...ir=n+jr}( R1(n) × R2(m) ) )        (4)

    σ{i1=j1,...ir=jr}(R(n)) = R(n) ⋈{i1=1,j1=2,...ir=2r−1,jr=2r} ( π1,1(Σ∗) × · · · × π1,1(Σ∗) )   [r times]        (5)

Thus, for any class of difficult join instances whose results are non-rational or have undecidable properties [18], there is a corresponding class of difficult auto-intersection instances, and vice versa. Conversely, a partial solution to one problem would yield a partial solution to the other.

An auto-intersection on a single pair of tapes is said to be a single-pair one. An auto-intersection on multiple pairs of tapes can be defined in terms of multiple single-pair auto-intersections:

    σ{i1=j1,...ir=jr}(R(n)) =def σ{ir=jr}( · · · σ{i1=j1}(R(n)) · · · )        (6)

4 Compilation of Auto-intersection

We now briefly recall a single-pair auto-intersection algorithm and the class of bounded-delay auto-intersections that this algorithm can handle. For a detailed exposition see [19].

4.1 Post’s Correspondence Problem

Unfortunately, auto-intersection (and hence join) can be reduced to Post’s Correspondence Problem (PCP) [31]. Any PCP instance can be represented as an unweighted 2-FSM, and the set of all solutions to the instance equals the auto-intersection of the 2-FSM [18]. Since it is generally undecidable whether an arbitrary PCP instance has any solution, it is also undecidable whether the result of an arbitrary auto-intersection is empty. In practice, this means that no partial auto-intersection algorithm can be “complete” in the sense that it always returns a correct n-FSM if the result is rational, and always terminates with an error code otherwise. Such an algorithm would make PCP generally decidable, since a returned n-FSM can always be tested for emptiness, and an error code would indicate non-rationality and hence non-emptiness.

4.2 A Class of Rational Auto-intersections

Although there cannot exist a fully general algorithm, the auto-intersection A(n) = σ{i=j}(A1(n)) can be compiled for a class of triples ⟨A1(n), i, j⟩ whose definition is based on the notion of delay [8,26]. The delay δi,j(s(n)) is the difference in length between the strings si and sj of the tuple s(n): δi,j(s(n)) = |si| − |sj| (i, j ∈ [1, n]). We call the delay bounded if its absolute value does not exceed some limit. The delay of a path γ(n) results from its labels on tapes i and j: δi,j(γ(n)) = |(ℓ(γ(n)))i| − |(ℓ(γ(n)))j|. A path has bounded delay if all its prefixes have bounded delay,³ and an n-WFSM has bounded delay if all its successful paths have bounded delay.

As previously reported [19], if an n-WFSM A1(n) does not contain a path traversing both a cycle with positive and a cycle with negative delay w.r.t. tapes i and j,⁴ then the delay of all paths of its auto-intersection A(n) = σ{i=j}(A1(n)) is bounded by some δi,j^max. This bound can be calculated from delays measured on specific paths of A1(n).

4.3 An Auto-intersection Algorithm

Our algorithm for the above-mentioned class of rational auto-intersections proceeds in three steps [19,20]:

1. Test whether the triple ⟨A1(n), i, j⟩ fulfills the above conditions. If not, the algorithm exits with an error code.
2. Calculate the bound δi,j^max for the delay of the auto-intersection A(n) = σ{i=j}(A1(n)).
3. Construct the auto-intersection within the bound δi,j^max.

Figure 1 illustrates step 3 of the algorithm: State 0, the initial state of A1(3), is copied as initial state 10 to A(3). Its annotation, (0, ⟨ε, ε⟩), indicates that it is a copy of state 0 and has leftover strings ⟨ε, ε⟩. Then, all outgoing transitions of state 0 and their target states are copied to A(3), as states 11 and 13. A transition is copied with its original label and weight. The annotation of state 11 indicates that it is a copy of state 0 and has leftover strings ⟨a, ε⟩. These leftover strings result from concatenating the leftover strings of state 10, ⟨ε, ε⟩, with the relevant components, ⟨a, ε⟩, of the transition label a:ε:x. For each newly created state q ∈ QA, we access the corresponding state q1 ∈ QA1, and copy q1’s outgoing transitions with their target states to A(3), until all states of A(3) have been processed.

³ Any finite path has bounded delay (since its label is of finite length). An infinite path (traversing cycles) may have bounded or unbounded delay. For example, the delay of a path labeled with (⟨ab, ε⟩⟨ε, xz⟩)^h is bounded by 2 for any h, whereas that of a path labeled with ⟨ab, ε⟩^h ⟨ε, xz⟩^h is unbounded for h → ∞.
⁴ Note that the n-WFSM may have cycles of both types, but not on the same path.


Fig. 1. (a) A 3-WFSM A1(3) and (b) its auto-intersection A(3) = σ{1=2}(A1(3))

State 12 is not created because the delay of its leftover strings ⟨aa, ε⟩ exceeds the pre-calculated bound of δ1,2^max = 1. The longest common prefix of the two leftover strings of a state is removed. Hence state 14 has leftover strings ⟨ε, ε⟩ instead of ⟨a, ε⟩·⟨ε, a⟩ = ⟨a, a⟩. A final state is copied with its original weight if it has leftover strings ⟨ε, ε⟩, and with weight 0̄ otherwise. Therefore, state 14 is final and state 13 is not. The construction is proven to be correct and to terminate [19,20]. It can be performed simultaneously on multiple pairs of tapes.
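The copying scheme just described can be sketched in plain Python for the finite-label case. This is an assumed simplification of the published algorithm (step 3 only, ignoring weights); tape indices i, j are 0-based here, and the transition structure of Fig. 1 is partly reconstructed:

```python
from collections import deque

def auto_intersect_states(transitions, initial, finals, i, j, delay_max):
    """States of the result are pairs (original state, leftover strings
    for tapes i and j); the longest common prefix of the leftovers is
    stripped, and states whose delay exceeds delay_max are pruned
    (like state 12 in Fig. 1). Returns (created states, accepting ones)."""
    def extend(left, lab):
        a, b = left[0] + lab[i], left[1] + lab[j]
        k = 0
        while k < min(len(a), len(b)) and a[k] == b[k]:
            k += 1                          # strip longest common prefix
        return a[k:], b[k:]

    start = (initial, ("", ""))
    seen, queue, accepting = {start}, deque([start]), set()
    while queue:
        q, left = queue.popleft()
        if q in finals and left == ("", ""):
            accepting.add((q, left))        # final with leftovers ⟨ε, ε⟩
        for lab, target in transitions.get(q, []):
            new_left = extend(left, lab)
            if abs(len(new_left[0]) - len(new_left[1])) > delay_max:
                continue                    # delay bound exceeded: prune
            state = (target, new_left)
            if state not in seen:
                seen.add(state)
                queue.append(state)
    return seen, accepting

# A 3-WFSM in the spirit of Fig. 1: loop a:ε:x at state 0, transition
# ε:a:y to final state 1 carrying the same loop; tapes 1 and 2 joined.
fig1 = {0: [(("a", "", "x"), 0), (("", "a", "y"), 1)],
        1: [(("", "a", "y"), 1)]}
states, accepting = auto_intersect_states(fig1, 0, {1}, 0, 1, delay_max=1)
```

For this machine the construction creates four states, matching states 10, 11, 13 and 14 of Fig. 1, and prunes the analogue of state 12.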

5 Applications

This section focuses on demonstrating the augmented descriptive power of n-WFSMs w.r.t. 1- and 2-WFSMs (acceptors and transducers), and on exposing the practical importance of the join operation. It also aims at illustrating how to use n-WFSMs, defined through regular expressions, in practice. Indeed, some of the applications are not feasible with 1- and 2-WFSMs. The section does not focus on the presented applications per se.

5.1 Morphological Analysis of Semitic Languages

n-WFSMs have been used in the morphological analysis of Semitic languages [14,22,23, e.g.]. Table 1 by Kiraz [22] shows the “synchronization” of the quadruple s(4) = ⟨aa, ktb, waCVCVC, wakatab⟩ in a 4-WFSM representing an Arabic morphological lexicon. Its first tape encodes a word’s vowels, its second the consonants (representing the root), its third the affixes and the templatic pattern (defining how to combine consonants and vowels), and its fourth the word’s surface form. Any of the tapes can be used for input or output. For example, for a given root and vowel sequence, we can obtain all existing surface forms and templates. For a given root and template, we can obtain all existing vowel sequences and surface forms, etc.

5.2 Intermediate Results in Transduction Cascades

Transduction cascades have been extensively used in language and speech processing [1,29,25, e.g.].


Table 1. Multi-tape-based morphological analysis of Arabic; table adapted from Kiraz [22]

vocalism                 a   a
root                 k   t   b
pattern and affixes  wa C V C V C
surface form         wa k a t a b

In a classical weighted transduction cascade (Figure 2), consisting of transducers T1(2) . . . Tr(2), a weighted input language L0(1), consisting of one or more words, is composed with the first transducer, T1(2), on its input tape. The output projection of this composition is the first intermediate result, L1(1). It is further composed with the second transducer, T2(2), which leads to the second intermediate result, L2(1), etc. Generally, Li(1) = π2(Li−1(1) ◦ Ti(2)) (i ∈ [1, r]). The output projection of the last transducer is the final result, Lr(1). At any point in the cascade, previous intermediate results cannot be accessed. This holds also if the cascade is composed into a single transducer: T(2) = T1(2) ◦ · · · ◦ Tr(2). None of the “incorporated” sub-relations of T(2) can refer to a sub-relation other than its immediate predecessor.

In a multi-tape transduction cascade, consisting of n-WFSMs A1(n1) . . . Ar(nr), any intermediate results can be preserved and used by subsequent transductions. Figure 3 shows an example where two previous results are preserved at each point, i.e., each intermediate result, Li(2), has two tapes. The projection of the output tape of the last n-WFSM is the final result, Lr(1):

    L1(2) = L0(1) ⋈{1=1} A1(2)        (7)
    Li(2) = π2,3( Li−1(2) ⋈{1=1,2=2} Ai(3) )   (i ∈ [2, r − 1])        (8)
    Lr(1) = π3( Lr−1(2) ⋈{1=1,2=2} Ar(3) )        (9)

This augmented descriptive power is also available if the whole cascade is joined into a single 2-WFSM, A(2), although A(2) has (in this example) only two tapes, for input and output, respectively. Figure 4 sketches the iterative construction of A(2). (Any Bi(3) is the join of A1(2) to Ai(3)):

    B2(3) = A1(2) ⋈{1=1,2=2} A2(3)        (10)
    Bi(3) = π1,3,4( Bi−1(3) ⋈{2=1,3=2} Ai(3) )   (i ∈ [3, r])        (11)
    A(2) = π1,3( Br(3) )        (12)

Each (except the first) of the “incorporated” multi-tape sub-relations in A(2) will still refer to its two predecessors.

Fig. 2. Classical 2-WFSM transduction cascade

Fig. 3. n-WFSM transduction cascade

Fig. 4. n-WFSM transduction cascade joined into a single 2-WFSM, A(2), maintaining n-tape descriptive power

5.3 Induction of Morphological Rules

Induction of morphemes and morphological rules from corpora, both supervised and unsupervised, is a subfield of NLP on its own [3,9,5, e.g.]. We do not propose a new method for inducing rules, but rather demonstrate how known steps can be conveniently performed in the framework of n-ary relations. Learning morphological rules from a raw corpus can include, among others: (1) generating the least costly rule for a given word pair, that rewrites one word to the other, (2) identifying the set of pairs over all corpus words where a given rule applies, and (3) rewriting a given word by means of one or several rules.

Construction of a rule generator. For any word pair, such as ⟨parler, parlons⟩ (French, [to] speak, [we] speak), the generator shall provide a rule, such as “.er:ons”, suitable for rewriting the first to the second word at minimal cost. In a rule, a dot shall mean that one or more letters remain unmodified, and an x:y-part that substring x is replaced by substring y. We begin with a 4-WFSM that defines rewrite operations:

    A1(4) = ( ⟨?, ?, ., K⟩{1=2}/0 ∪ ⟨?, ε, ?, D⟩{1=3}/0 ∪ ⟨ε, ?, ?, I⟩{2=3}/0 ∪ ⟨ε, ε, :, S⟩/0 )∗        (13)

where ? can be instantiated by any symbol, ε is the empty string, {i=j} is a constraint requiring the ?’s on tapes i and j to be instantiated by the same symbol [28],⁵ and 0 is a weight over the tropical semiring.

Fig. 5. Initial form A1(4) of the rule generator

word 1                 s  w  u  m
word 2                 s  w        i  m
preliminary rule       .  .  u  m  :  i  m
preliminary op. codes  K  K  D  D  S  I  I
final rule             .     u  m  :  i  m
final operation codes  K  k  D  d  S  I  i
weights                1  0  4  2  0  4  2

Fig. 6. Mapping from the word pair ⟨swum, swim⟩ to various sequences

Figure 5 shows the graph of A1(4) and Figure 6 (rows 1–4) the purpose of its tapes: tapes 1 and 2 accept any word pair, tape 3 generates a preliminary form of the rule, and tape 4 generates a sequence of preliminary operation codes. The following four cases can occur when A1(4) reads a word pair (cf. Eq. 13):

1. ⟨?, ?, ., K⟩{1=2}: two identical letters are accepted, meaning a letter is kept from word 1 to word 2, which is represented by a “.” in the rule and K (keep) in the operation codes,

⁵ Deviating from [28], we denote symbol constraints similarly to join and auto-intersection constraints.


2. ⟨?, ε, ?, D⟩{1=3}: a letter is deleted from word 1 to 2, expressed by this letter in the rule and D (delete) in the operation codes,
3. ⟨ε, ?, ?, I⟩{2=3}: a letter is inserted from word 1 to 2, expressed by this letter in the rule and I (insert) in the operation codes,
4. ⟨ε, ε, :, S⟩: no letter is matched in either word, a “:” is inserted in the rule, and an S (separator) in the operation codes.

Next, we compile C1(1), which constrains the order of operation codes. For example, D must be followed by S, I must be preceded by S, I cannot be followed by D, etc. The constraints are enforced through join (Fig. 6 row 4): A2(4) = A1(4) ⋈{4=1} C1(1). Then, we create B1(2), which maps temporary rules to their final form by replacing a sequence of dots (longest match) by a single dot. We join B1(2) with the previous result (Fig. 6 rows 3, 5): A3(5) = A2(4) ⋈{3=1} B1(2).

Next, we compile B2(2), which creates more fine-grained operation codes. In a sequence of equal capital letters, it replaces each but the first one with its small form. For example, DDD becomes Ddd. B2(2) is joined with the previous result (Fig. 6 rows 4, 6): A4(6) = A3(5) ⋈{4=1} B2(2). C1(1), B1(2), and B2(2) can be compiled as unweighted automata with a tool such as Xfst [13,2] and then be enhanced with neutral weights.

Finally, we assign weights to the fine-grained operation codes by joining B3(1) = (⟨K, 1⟩ ∪ ⟨k, 0⟩ ∪ ⟨D, 4⟩ ∪ ⟨d, 2⟩ ∪ ⟨I, 4⟩ ∪ ⟨i, 2⟩ ∪ ⟨S, 0⟩)∗ with the previous result (Fig. 6 rows 6, 7): A5(6) = A4(6) ⋈{6=1} B3(1). We keep only the tapes of the word pair and of the final rule in the generator (Fig. 6 rows 1, 2, 5). All other tapes are of no further use:

    G(3) = π1,2,5( A5(6) )        (14)

The rule generator G(3) maps any word pair to a finite number of rewrite rules with different weights, expressing the cost of edit operations. The optimal rule (with minimal weight) can be found through n-tape best-path search [16].

Using rewrite rules. We suppose that the rules generated from random word pairs undergo some statistical selection process that aims at retaining only meaningful rules. To facilitate the following operations, a rule’s representation can be changed from a string, such as s(1) = “.er:ons”, to a 2-WFSM r(2) encoding the same relation. This is done by joining the rule with the generator: r(2) = π1,2( G(3) ⋈{3=1} s(1) ). An r(2) resulting from “.er:ons” accepts (on tape 1) only words ending in “er” and changes (on tape 2) their suffix to “ons”. Similarly, a 2-WFSM R(2) that encodes all selected rules can be generated by joining the set of all rules (represented as strings) S(1) with the generator: R(2) = π1,2( G(3) ⋈{3=1} S(1) ).

To find all pairs P(2) of words from a corpus where a particular rule applies, we compile the automaton W(1) of all corpus words, and compose it on both


tapes of r(2): P(2) = W(1) ◦ r(2) ◦ W(1). Similarly, identifying all word pairs P′(2) over the whole corpus where any of the rules applies (i.e., the set of “valid” pairs) can be obtained through: P′(2) = W(1) ◦ R(2) ◦ W(1). Rewriting a word w(1) with a single rule r(2) is done by w2(1) = π2(w1(1) ◦ r(2)) and w1(1) = π1(r(2) ◦ w2(1)). Similarly, rewriting a word w(1) with all selected rules is done by W2(1) = π2(w1(1) ◦ R(2)) and W1(1) = π1(R(2) ◦ w2(1)).
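The effect of applying a single rule like “.er:ons” to a word can be sketched without any automaton machinery. The reading of the rule syntax below is assumed from the description above, and the function plays the role of r(2) in one direction only (a hypothetical helper, not fsm2/WFSC code); only the dot-prefix rule shape is handled:

```python
import re

def rule_to_fn(rule):
    """'.er:ons' means: one or more unmodified leading letters, then
    replace the suffix 'er' by 'ons'. Returns a one-way rewriting
    function; words the rule does not accept map to None."""
    assert rule.startswith(".")
    upper, lower = rule[1:].split(":")
    pattern = re.compile("^(.+)" + re.escape(upper) + "$")

    def apply(word):
        m = pattern.match(word)
        return m.group(1) + lower if m else None

    return apply

r = rule_to_fn(".er:ons")
```

Here `r("parler")` yields "parlons", while a word not ending in "er" is rejected, mirroring what r(2) accepts on tape 1.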

5.4 String Alignment for Lexicon Construction

Suppose we want to create a non-weighted lexicon transducer D(2) from a list of word pairs s(2) of the form ⟨inflected form, lemma⟩, e.g., ⟨swum, swim⟩, such that each path of the transducer is labeled with one of the pairs. For the sake of compaction, we restrict the labelling of transitions to symbol pairs of the form ⟨σ, σ⟩, ⟨σ, ε⟩, or ⟨ε, σ⟩ (∀σ ∈ Σ), while keeping paths as short as possible. A symbol pair restricted that way can be encoded by log2(3|Σ|) bits, whereas an unrestricted pair over (Σ ∪ {ε}) × (Σ ∪ {ε}) requires log2(|Σ ∪ {ε}|²) bits. For example, in the case of English words over an alphabet of 52 letters (small and capital accent-free Latin), a restricted symbol pair requires only log2(3 · 52) ≈ 7.3 bits versus log2((52 + 1)²) ≈ 11.5 bits for an unrestricted one. Part of this gain will be undone by the lengthening of paths, requiring more transitions. Subsequent determinization (considering symbol pairs as atomic labels) and standard minimization should, however, lead to more compact automata, because a deterministic state can have at most 3|Σ| outgoing transitions with restricted labels, but up to |Σ ∪ {ε}|² with unrestricted ones.

In this schema, ⟨swum, swim⟩ should, e.g., be encoded either by the sequence ⟨s, s⟩⟨w, w⟩⟨u, ε⟩⟨ε, i⟩⟨m, m⟩ or by ⟨s, s⟩⟨w, w⟩⟨ε, i⟩⟨u, ε⟩⟨m, m⟩, rather than by the ill-formed ⟨s, s⟩⟨w, w⟩⟨u, i⟩⟨m, m⟩, or the sub-optimal ⟨s, ε⟩⟨w, ε⟩⟨u, ε⟩⟨m, ε⟩⟨ε, s⟩⟨ε, w⟩⟨ε, i⟩⟨ε, m⟩.

We start the construction of the word aligner by creating a 5-WFSM over the real tropical semiring [11]:

A1

=



?, ?, ?, ?, K{1=2=3=4} , 0 ∪ ε, ?, @, ?, I{2=4} , 1 ∪ ?, ε, ?, @, D{1=3} , 1

∗

(15)

where @ is a special symbol representing ε in an alignment, {1=2=3=4} a constraint requiring the ?'s on tapes 1 to 4 to be instantiated by the same symbol [28], and 0 and 1 are weights. Figure 7 shows the graph of A1(5), and Figure 8 (rows 1–5) the purpose of its tapes: input word pairs s(2) = ⟨s1, s2⟩ will be matched on tapes 1 and 2, and aligned output word pairs generated from tapes 3 and 4. A symbol pair ⟨?,?⟩ read on tapes 1 and 2 is identically mapped to ⟨?,?⟩ on tapes 3 and 4, an ⟨ε,?⟩ is mapped to ⟨@,?⟩, and a ⟨?,ε⟩ to ⟨?,@⟩. A1(5) will introduce @'s in s1 (resp. in s2) at positions where D(2) shall have ⟨ε,σ⟩- (resp. ⟨σ,ε⟩-) transitions.6 Tape 5 generates a sequence of operation codes: K (keep), D (delete), I (insert).

6 Later, we simply replace in D(2) all @ by ε.

42

A. Kempe

Fig. 7. Initial form A1(5) of a word pair aligner

input word 1      s  w  u     m
input word 2      s  w     i  m
output word 1     s  w  u  @  m
output word 2     s  w  @  i  m
operation codes   K  K  D  I  K
weights           0  0  1  1  0

Fig. 8. Alignment of the word pair ⟨swum, swim⟩
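The behaviour shown in Figure 8 can be imitated by a small brute-force enumerator; this plain-Python sketch stands in for A1(5) (function name and recursive strategy are illustrative, only the K/D/I weights 0/1/1 come from (15)):

```python
# Enumerate all K/D/I alignments of two words, with '@' standing for epsilon.
# K (keep) costs 0; D (delete, symbol only on tape 1) and I (insert, symbol
# only on tape 2) each cost 1, as in the tropical-semiring weights of (15).

def alignments(s1, s2):
    """Yield (out1, out2, ops, weight) for all K/D/I alignments of s1, s2."""
    if not s1 and not s2:
        yield ("", "", "", 0)
        return
    if s1 and s2 and s1[0] == s2[0]:            # K: same symbol on both tapes
        for o1, o2, ops, w in alignments(s1[1:], s2[1:]):
            yield (s1[0] + o1, s2[0] + o2, "K" + ops, w)
    if s1:                                       # D: symbol only on tape 1
        for o1, o2, ops, w in alignments(s1[1:], s2):
            yield (s1[0] + o1, "@" + o2, "D" + ops, w + 1)
    if s2:                                       # I: symbol only on tape 2
        for o1, o2, ops, w in alignments(s1, s2[1:]):
            yield ("@" + o1, s2[0] + o2, "I" + ops, w + 1)

for a in alignments("swum", "swim"):
    if a[3] == 2:    # the cheapest alignments, e.g. ('swu@m', 'sw@im', 'KKDIK', 2)
        print(a)
```

For ⟨swum, swim⟩ the two minimal-weight alignments are exactly KKDIK and KKIDK, matching the example in the text.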

For example, A1(5) will map ⟨swum, swim⟩, among others, to ⟨swu@m, sw@im⟩ with KKDIK and to ⟨sw@um, swi@m⟩ with KKIDK. To remove redundant (duplicated) alignments, we prohibit an insertion from being immediately followed by a deletion, via the constraint C(1) = (K ∪ I ∪ D)* − (?* I D ?*). The constraint is imposed through join, and the operations tape is removed:

Aligner(4) = π̄{5}( A1(5) ⋈{5=1} C(1) )   (16)

The Aligner(4) will map ⟨swum, swim⟩, among others, still to ⟨swu@m, sw@im⟩ but not to ⟨sw@um, swi@m⟩. The best alignment (with minimal weight) can be found through n-tape best-path search [16].
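The effect of the constraint C(1) and of best-path selection can be sketched in plain Python over alignment tuples like those above; `satisfies_c1` and `best_alignments` are illustrative names, with list filtering standing in for join and n-tape best-path search:

```python
# Sketch of C(1) = (K|I|D)* - ?* I D ?* and of minimal-weight selection,
# applied to alignment tuples (out1, out2, ops, weight).

def satisfies_c1(ops):
    """Forbid an insertion immediately followed by a deletion."""
    return "ID" not in ops

def best_alignments(cands):
    """Keep constraint-satisfying alignments of minimal weight; drop the ops tape."""
    valid = [c for c in cands if satisfies_c1(c[2])]
    w_min = min(c[3] for c in valid)
    return [(o1, o2) for (o1, o2, ops, w) in valid if w == w_min]

cands = [("swu@m", "sw@im", "KKDIK", 2),   # kept: D before I is allowed
         ("sw@um", "swi@m", "KKIDK", 2)]   # removed: contains "ID"
print(best_alignments(cands))  # [('swu@m', 'sw@im')]
```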

5.5 Acronym and Meaning Extraction

The automatic extraction of acronyms and their meaning from corpora is an important sub-task of text mining and has received much attention (e.g., [37,32,35]). It can be seen as a special case of string alignment between a text chunk and an acronym. For example, the chunk "they have many hidden Markov models" can be aligned with the acronym "HMMs" in different ways, depending on which words of the chunk account for the acronym letters. Alternative alignments have different weights, and ideally the one with the best weight should give the correct meaning. An alignment-based approach can be implemented by means of a 3-WFSM that reads a text chunk on tape 1 and an acronym on tape 2, and generates all possible alignments on tape 3, inserting dots to mark letters used in the acronym. For the above example this would give "they have many .hidden .Markov .model.s", among others. The 3-WFSM can be generated from n-ary regular expressions that define the task in as much detail as required. As in the previous examples of the induction of morphological rules (5.3) and word alignment for lexicon construction (5.4), we create additional tapes during construction, labelled with operation codes that will be removed in the end. We may choose very fine-grained codes to express details such as the position of an acronym letter within a word of the meaning (e.g., initial, i-th, or last letter of the word), how many letters from the same word are used in the acronym, whether a word in the meaning is not represented


by any letter in the acronym, and much more. Through these operation codes, we assign different weights (ideally estimated from data) to the different situations. For a detailed description see [15]. The best alignment (with minimal weight), i.e., the most likely meaning of an acronym, is found through n-tape best-path search [16]. The advantage of aligning via an n-WFSM rather than a classical alignment matrix [36,30] is that the n-WFSM can be built from regular expressions that define very subtle criteria, disallowing certain alignments or favoring others based on weights that depend on long-distance context.
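As an illustration only (much cruder than the 3-WFSM of [15]), an acronym aligner that dots word-initial letters and charges unit weight for unused words can be sketched as follows; all names and weights here are invented. Note how this naive weighting prefers an unintended reading, which is exactly why the text argues for fine-grained operation codes with data-estimated weights:

```python
# Toy acronym aligner: only word-initial letters may account for acronym
# letters (weight 0); skipping a word before the acronym is complete costs 1.

def align_acronym(chunk, acronym):
    """Return (weight, marked_chunk) for the best-scoring alignment found."""
    words = chunk.split()
    best = None

    def search(wi, ai, marked, weight):
        nonlocal best
        if ai == len(acronym):                       # acronym fully accounted for
            cand = (weight, " ".join(marked + words[wi:]))
            if best is None or cand < best:
                best = cand
            return
        if wi == len(words):
            return
        w = words[wi]
        if w[0].lower() == acronym[ai].lower():       # word contributes its initial
            search(wi + 1, ai + 1, marked + ["." + w], weight)
        search(wi + 1, ai, marked + [w], weight + 1)  # word contributes nothing

    search(0, 0, [], 0)
    return best

print(align_acronym("they have many hidden Markov models", "HMM"))
# (2, 'they .have .many hidden .Markov models') -- cheaper, but wrong reading
```

Under this crude scheme, the intended reading ".hidden .Markov .models" costs 3 (three skipped words) and loses to a spurious weight-2 alignment; richer codes (word-initial vs. word-internal letters, unrepresented words, etc.) are needed to rank the correct meaning first.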

5.6 Cognate Search

Extracting cognates with equal meaning from an English–German dictionary EG(3) that encodes triples ⟨English word, German word, part of speech⟩ means identifying all paths of EG(3) that have similar strings on tapes 1 and 2. We create a similarity automaton S(2) that describes through weights the degree of similarity between English and German words. This can be expressed either through edit distance (cf. Sec. 5.3, 5.4, and 5.5) or through weighted synchronic grapheme correspondences (e.g., d–t, ght–cht, th–d, th–ss, . . .): S(2) = ( ⟨?,?⟩{1=2}, w0 ∪ ⟨d, t⟩, w1 ∪ ⟨ght, cht⟩, w2 ∪ . . . )*. When recognizing an English–German word pair, S(2) accepts either any two equal symbols in the two words (via ⟨?,?⟩{1=2}) or some English sequence and its German correspondence (e.g., ght and cht) with some weight. The set of cognates EG(3)cog is obtained by joining the dictionary with the similarity automaton: EG(3)cog = EG(3) ⋈{1=1,2=2} S(2). EG(3)cog contains all (and only) the cognates with equal meaning in EG(3), such as ⟨daughter, Tochter, noun⟩, ⟨eight, acht, num⟩, ⟨light, leicht, adj⟩, or ⟨light, Licht, noun⟩. Weights of the triples express the similarity of the words. Note that this result cannot be achieved through ordinary transducer composition. For example, composing S(2) with the English and the German words separately, π1(EG(3)) ◦ S(2) ◦ π2(EG(3)), also yields false cognates such as ⟨become, bekommen⟩ ([to] obtain).
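A toy Python counterpart of the similarity automaton S(2): identical symbols are accepted at weight 0, a small table of grapheme correspondences at positive weight, and all other pairs are rejected. The correspondence table and its weights are illustrative assumptions, not the paper's values:

```python
# Assumed grapheme correspondences with made-up weights (w1, w2, ...).
CORR = {("d", "t"): 1.0, ("ght", "cht"): 1.0, ("th", "d"): 1.5, ("th", "ss"): 1.5}

def similarity(en, de, weight=0.0):
    """Total weight of the best segmentation of (en, de), or None if dissimilar."""
    if not en and not de:
        return weight
    best = None
    # <?,?>{1=2}: two equal symbols, weight w0 = 0
    if en and de and en[0].lower() == de[0].lower():
        best = similarity(en[1:], de[1:], weight)
    # a listed English sequence and its German correspondence, e.g. ght/cht
    for (a, b), w in CORR.items():
        if en.lower().startswith(a) and de.lower().startswith(b):
            cand = similarity(en[len(a):], de[len(b):], weight + w)
            if cand is not None and (best is None or cand < best):
                best = cand
    return best

print(similarity("light", "Licht"))  # 1.0  (l-i match, then ght/cht)
print(similarity("light", "Haus"))   # None (no path accepts the pair)
```

A real S(2) would need a much larger correspondence inventory (and data-estimated weights) before pairs like daughter/Tochter become reachable.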

6 Conclusion

The paper recalled basic definitions about n-ary weighted relations and their n-WFSMs, central operations on these relations and machines, and an algorithm for the important auto-intersection operation. It investigated the potential of n-WFSMs, w.r.t. classical 1- and 2-WFSMs (acceptors and transducers), in practical tasks. Through a series of applications, it exposed their augmented descriptive power and the importance of the join operation. Some of the applications are not feasible with 1- or 2-WFSMs. In the morphological analysis of Semitic languages, n-WFSMs have been used to synchronize the vowels, consonants, and templatic pattern into a surface form. In transduction cascades consisting of n-WFSMs, intermediate results can be


preserved and used by subsequent transductions. n-WFSMs permit not only mapping strings to strings or string m-tuples to k-tuples, but also m-ary to k-ary string relations, such as mapping a non-aligned word pair to its aligned form, or to a rewrite rule suitable for mapping one word to the other. In string alignment tasks, an n-WFSM provides better control over the alignment process than a classical alignment matrix, since it can be compiled from regular expressions defining very subtle criteria, such as long-distance dependencies for weights.

Acknowledgments I wish to thank the anonymous reviewers of my paper for their valuable advice.

References
1. Aït-Mokhtar, S., Chanod, J.-P.: Incremental finite-state parsing. In: Proc. 5th Int. Conf. ANLP, Washington, DC, USA, pp. 72–79 (1997)
2. Beesley, K.R., Karttunen, L.: Finite State Morphology. CSLI Publications, Palo Alto (2003)
3. Brent, M.: An efficient, probabilistically sound algorithm for segmentation and word discovery. Machine Learning 34, 71–106 (1999)
4. Champarnaud, J.-M., Guingne, F., Kempe, A., Nicart, F.: Algorithms for the join and auto-intersection of multi-tape weighted finite-state machines. Int. Journal of Foundations of Computer Science 19(2), 453–476 (2008)
5. Creutz, M., Lagus, K.: Unsupervised models for morpheme segmentation and morphology learning. ACM Transactions on Speech and Language Processing 4(1) (2007)
6. Eilenberg, S.: Automata, Languages, and Machines, vol. A. Academic Press, San Diego (1974)
7. Elgot, C.C., Mezei, J.E.: On relations defined by generalized finite automata. IBM Journal of Research and Development 9(1), 47–68 (1965)
8. Frougny, C., Sakarovitch, J.: Synchronized rational relations of finite and infinite words. Theoretical Computer Science 108(1), 45–82 (1993)
9. Goldsmith, J.: Unsupervised learning of the morphology of a natural language. Computational Linguistics 27, 153–198 (2001)
10. Harju, T., Karhumäki, J.: The equivalence problem of multitape finite automata. Theoretical Computer Science 78(2), 347–355 (1991)
11. Isabelle, P., Kempe, A.: Automatic string alignment for finite-state transducers (2004) (unpublished work)
12. Kaplan, R.M., Kay, M.: Regular models of phonological rule systems. Computational Linguistics 20(3), 331–378 (1994)
13. Karttunen, L., Gaál, T., Kempe, A.: Xerox finite state compiler, Xerox Research Centre Europe, Grenoble, France (1998). Online demo and documentation: http://www.xrce.xerox.com/competencies/content-analysis/fsCompiler/
14. Kay, M.: Nonconcatenative finite-state morphology. In: Proc. 3rd Int. Conf. EACL, Copenhagen, Denmark, pp. 2–10 (1987)
15. Kempe, A.: Acronym-meaning extraction from corpora using multitape weighted finite-state machines. Research report 2006/019, Xerox Research Centre Europe, Meylan, France (2006)
16. Kempe, A.: Viterbi algorithm generalized for n-tape best-path search. In: Proc. 8th Int. Workshop FSMNLP, Pretoria, South Africa (2009)
17. Kempe, A., Baeijs, C., Gaál, T., Guingne, F., Nicart, F.: WFSC – A new weighted finite state compiler. In: Ibarra, O.H., Dang, Z. (eds.) CIAA 2003. LNCS, vol. 2759, pp. 108–119. Springer, Heidelberg (2003)
18. Kempe, A., Champarnaud, J.-M., Eisner, J.: A note on join and auto-intersection of n-ary rational relations. In: Watson, B., Cleophas, L. (eds.) Proc. Eindhoven FASTAR Days, Eindhoven, Netherlands, 2004. TU/e CS TR, vol. 04-40, pp. 64–78 (2004)
19. Kempe, A., Champarnaud, J.-M., Eisner, J., Guingne, F., Nicart, F.: A class of rational n-WFSM auto-intersections. In: Farré, J., Litovsky, I., Schmitz, S. (eds.) CIAA 2005. LNCS, vol. 3845, pp. 266–274. Springer, Heidelberg (2006)
20. Kempe, A., Champarnaud, J.-M., Guingne, F., Nicart, F.: WFSM auto-intersection and join algorithms. In: Yli-Jyrä, A., Karttunen, L., Karhumäki, J. (eds.) FSMNLP 2005. LNCS (LNAI), vol. 4002, pp. 120–131. Springer, Heidelberg (2006)
21. Kempe, A., Guingne, F., Nicart, F.: Algorithms for weighted multi-tape automata. Research report 2004/031, Xerox Research Centre Europe, Meylan, France (2004)
22. Kiraz, G.A.: Linearization of nonlinear lexical representations. In: Coleman, J. (ed.) Proc. 3rd ACL SIG Computational Phonology, Madrid, Spain (1997)
23. Kiraz, G.A.: Multitiered nonlinear morphology using multitape finite automata: a case study on Syriac and Arabic. Computational Linguistics 26(1), 77–105 (2000)
24. Kuich, W., Salomaa, A.: Semirings, Automata, Languages. EATCS Monographs on Theoretical Computer Science, vol. 5. Springer, Heidelberg (1986)
25. Kumar, S., Byrne, W.: A weighted finite state transducer implementation of the alignment template model for statistical machine translation. In: Proc. Int. Conf. HLT-NAACL, Edmonton, Canada, pp. 63–70 (2003)
26. Mohri, M.: Edit-distance of weighted automata. In: Champarnaud, J.-M., Maurel, D. (eds.) CIAA 2002. LNCS, vol. 2608, pp. 1–23. Springer, Heidelberg (2003)
27. Mohri, M., Pereira, F.C.N., Riley, M.: A rational design for a weighted finite-state transducer library. In: Wood, D., Yu, S. (eds.) WIA 1997. LNCS, vol. 1436, pp. 144–158. Springer, Heidelberg (1998)
28. Nicart, F., Champarnaud, J.-M., Csáki, T., Gaál, T., Kempe, A.: Multi-tape automata with symbol classes. In: Ibarra, O.H., Yen, H.-C. (eds.) CIAA 2006. LNCS, vol. 4094, pp. 126–136. Springer, Heidelberg (2006)
29. Pereira, F.C.N., Riley, M.D.: Speech recognition by composition of weighted finite automata. In: Roche, E., Schabes, Y. (eds.) Finite-State Language Processing, pp. 431–453. MIT Press, Cambridge (1997)
30. Pirkola, A., Toivonen, J., Keskustalo, H., Visala, K., Järvelin, K.: Fuzzy translation of cross-lingual spelling variants. In: Proc. 26th Annual Int. ACM SIGIR, Toronto, Canada, pp. 345–352 (2003)
31. Post, E.: A variant of a recursively unsolvable problem. Bulletin of the American Mathematical Society 52, 264–268 (1946)
32. Pustejovsky, J., Castaño, J., Cochran, B., Kotecki, M., Morrell, M., Rumshisky, A.: Linguistic knowledge extraction from Medline: Automatic construction of an acronym database. In: Proc. 10th World Congress on Health and Medical Informatics, Medinfo 2001 (2001)
33. Rabin, M.O., Scott, D.: Finite automata and their decision problems. IBM Journal of Research and Development 3(2), 114–125 (1959)
34. Rosenberg, A.L.: On n-tape finite state acceptors. In: IEEE Symposium on Foundations of Computer Science (FOCS), pp. 76–81 (1964)
35. Schwartz, A., Hearst, M.: A simple algorithm for identifying abbreviation definitions in biomedical texts. In: Proc. Pacific Symposium on Biocomputing, PSB-2003 (2003)
36. Wagner, R.A., Fischer, M.J.: The string-to-string correction problem. Journal of the Association for Computing Machinery 21(1), 168–173 (1974)
37. Yeates, S., Bainbridge, D., Witten, I.H.: Using compression to identify acronyms in text. In: Proc. Data Compression Conf. (DCC-2000), Snowbird, Utah, USA (2000); also published in a longer form as Working Paper 00/01, Department of Computer Science, University of Waikato (January 2000)

OpenFst
Johan Schalkwyk
Google Research, New York, USA

In this invited talk, we describe OpenFst, an open-source library for weighted finite-state transducers (WFSTs). OpenFst consists of a C++ template library with efficient WFST representations and over twenty-five operations for constructing, combining, optimizing, and searching them. At the shell-command level, there are corresponding transducer file representations and programs that operate on them. OpenFst is designed to be both very efficient in time and space and to scale to very large problems. The library has key applications in speech, image, and natural language processing, pattern and string matching, and machine learning. We give an overview of the library, examples of its use, and details of its design that allow customizing the labels, states, and weights, as well as the lazy evaluation of many of its operations. Further information and a download of OpenFst can be obtained from http://www.openfst.org. The accompanying tutorial, "OpenFst in Depth", gives an overview of algorithms, OpenFst code design, and applications.

A. Yli-Jyrä et al. (Eds.): FSMNLP 2009, LNAI 6062, p. 47, 2010. © Springer-Verlag Berlin Heidelberg 2010

Morphological Analysis of Tone Marked Kinyarwanda Text
Jackson Muhirwe
Department of Computer Science, Faculty of Computing and IT, Makerere University, P.O. Box 7062, Kampala
[email protected]

Abstract. Tones are a significant feature of most Bantu languages. However, previous work on morphological analysis of Bantu languages has always left out tones. In this paper we describe an approach to morphological analysis of tone marked Kinyarwanda text. Kinyarwanda is an agglutinating, tonal Bantu language with a complex morphological structure. Results obtained from these experiments were compared to results obtained from the analysis of tone unmarked text. We found that analysis results from tone marked text were less ambiguous than those from a similar analysis of tone unmarked text.

1 Introduction

Tone is a pitch element or register added to a syllable to convey lexical or grammatical information. It plays a critical role in the use of tonal languages. From the work of Leben [1], Goldsmith [2], Yip [3], and many others, tones have been shown to be autonomous segments that require special attention and treatment. Previous computational morphology work on tonal Bantu languages has focused mainly on the official orthography, which in most cases does not mark tones and long vowels. In this paper we describe experiments on the tonal morphological analysis of Kinyarwanda, a tonal Bantu language.

1.1 The Characteristics of Tones

Morphological analysis of tones is a challenge to both theoretical and computational linguists due to their characteristic features, which include mobility, stability, and one-to-one and many-to-one relationships with the other segmental features [3]. Among these characteristics, mobility is the most challenging to deal with. A tone may be associated with a specific segment in a word, and once the segment is deleted the tone is not deleted with it; the tone may be copied to another segment. Unlike other segmental features, tones may stay behind when a segment is deleted or moved, or they may fail to copy when a segment is reduplicated. This characteristic of tones is what is commonly known as the stability of tones. Yip [3] gives good coverage of these and many more details.

1.2 Tone in Computational Morphology

The general challenges of computational morphology include
– the formation of valid words (morphotactics) and
– the alternation rules that affect the surface realisation of words.
For most languages, the alternation rules mentioned above affect only consonants and vowels. Tonal languages pose the extra challenge of dealing with alternation rules pertaining only to tones. In particular, a tonal morphological analyser for the Kinyarwanda language should be able to handle different tonal phenomena, such as
– the lexical tones which are deleted in some grammatical environments and
– the grammatical tones and their different places of realization.
The tones have alternation rules that affect their realization on the surface. Such rules make the description of tone difficult. The challenges specific to computational morphology of tonal Bantu languages include:
– handling of lexical tones which are deleted in some grammatical environments,
– coping with grammatical tones and their different places of realisation,
– coping with floating tones,
– handling vowel lengthening and shortening caused by tones, and
– treatment of anticipated tones in rare environments.

1.3 The Structure of This Paper

In Section 2, an introduction to Kinyarwanda and its use of tones is presented. This is followed by the presentation of the approach used in the implementation of the analyser for Kinyarwanda tone marked text in Section 3. Section 4 discusses the evaluation of the morphological analyser, and finally the conclusion is given in Section 5.

2 Characteristics of the Kinyarwanda Language
2.1 General Facts

Kinyarwanda, the national language of the Republic of Rwanda, is, after Swahili, the second most widely spoken language in the Bantu group [4]. The genocide which took place in 1994 took the lives of more than one million Rwandans, but the number of speakers of the language in the Great Lakes region of East and Central Africa is perhaps still more than 20 million. Currently Kinyarwanda is used in government schools as a medium of instruction at the lower primary level in city schools and at the entire primary level in village schools. In spite of the wide usage of the language, it is still highly under-resourced, characterised by a lack of electronic resources and an insignificant presence on the Internet.


Kinyarwanda belongs to the inter-lacustrine (Great Lakes) Bantu languages. It is closely related to Kirundi, the national language of Burundi, and Giha, a language spoken in Tanzania. The three languages are closely related in grammar and lexicon. Kinyarwanda has the typical characteristics of Bantu languages. Its morphology is agglutinating in nature, characterised by a tone system, a noun classification system and complex verbs. Kinyarwanda has 5 vowels, which are either long or short. It is important to note that only a-, i-, u- appear in the preprefix position and in verbal extensions [4]. Moreover, the similarities with other Bantu languages include that (i) verbs agree with subjects and objects, (ii) the language has prenasalised consonants, and (iii) there is vowel harmony and consonant mutation.

2.2 The Underspecific Orthography

The Kinyarwanda language has an official orthography, but it does not mark tones and long vowels. This creates many ambiguities, making the language hard to read even for native speakers. Example (1) is a case in point; tones and long vowels can be used to disambiguate the sentence, as in (2):

(1) inzuki muzazihakure
    a. 'harvest honey from the bee hive'
    b. 'remove the bees'

(2) a. inzuki muzáazihakúre 'harvest honey from the bee hive'
    b. inzuki muzaazihakuure 'remove the bees'

Morphological analysis of Kinyarwanda tonology faces a specific challenge: the lack of large tone marked corpora. Moreover, a survey of the available literature on Kinyarwanda tones shows that there are different rules and marking conventions (Coupez [5], Bizimana [6], and Kimenyi [7]); all these authors use different rules and conventions. In this paper we have chosen to use the features that are common to the different authors. High tones are marked with H and low tones are not marked.

2.3 Manifestation of Tones in Verbs

Verbal tones are unstable, whereas lexical tones in nouns do not tend to differ much from surface tones. Therefore, we restrict our attention to the description of verbal tone problems rather than nominal tone problems. In the following, we describe how grammatical tones are manifested in verbs in two categories: some tenses keep the lexical tone and other tenses neutralize it. Tenses Keeping the Lexical Tone. Tenses that keep the lexical tone do not affect the contrasting lexical tones of the verb stems; they allow them to keep whichever tone they have. These tenses include: infinitive, imperfective present,


Table 1. Examples of tenses keeping the lexical tone

Tense                     | Tense marker | Aspect marker | Examples: lexical  | Examples: surface  | Gloss
Infinitive                | ku-          | -a            | ku-riHriimb-a      | kuriHriimba        | to sing
                          |              |               | gu-tabaar-a        | gutabaara          | to rescue
                          |              |               | ku-oHogosh-a       | kwoHogosha         | to shave
Imperfective present      | -ra-         | -a            | ba-ra-riHriimb-a   | baraHriHriimba     | they are singing
                          |              |               | ba-ra-tabaar-a     | baratabaara        | they are rescuing
                          |              |               | ba-ra-oHogosh-a    | baroHogosha        | they are shaving
Complementless perfective | -ra-         | -ye           | ba-ra-riHriimb-ye  | baraHriHriimbye    | they have sung
                          |              |               | ba-ra-tabaar-ye    | baratabaaye        | they have rescued
                          |              |               | ba-ra-oHogosh-a    | baroHogosha        | they have shaved
Participial future        | -zaa-        | -a            | baH-zaa-riHriimba  | baHzaHaHriHriimba  | they will sing
                          |              |               | baH-zaa-tabaara    | baHzaatabaara      | they will rescue
                          |              |               | baH-zaa-oHogosha   | baHzoogosha        | they will shave
Relative future           | -zaHa-       | -a            | ba-zaHa-riHriimba  | baHzaHaHriHriimba  | they might sing
                          |              |               | ba-zaHa-tabaara    | bazaHatabaara      | they might rescue
                          |              |               | ba-zaHa-oHogosha   | baHzoHogosha       | they might shave

complementless habitual, complementless perfective present, complementless recent past, complementless narrative/consecutive tense, relative future, participial future, and complementless present irrealis conditional.
Tenses Neutralizing the Lexical Tone. Tenses that neutralize the lexical tone change the lexical tones of verbal stems. They may lower the whole finite verb or the stem, or put a high tone on the second syllable of the stem or in the pre-stem position. There are three categories of such tenses:
1. Tenses that lower the stem: these are -zaa- (future tense), -raka- (hortative), ø (habitual), and the imperative tense.
2. Tenses that put a high tone on the toneless verb stem, viz. -aara (remote past tense).
3. Tenses that put a high tone on the last syllable of the finite verb, viz. -aa- (if-clause of the conditional tense) and -ii- (negative imperative).
The imperative tense is also considered a tense neutralizing the lexical tone, since it lowers the tone of the whole verb:

(3) a. koHr-a > kora
    b. geend-a > genda
    c. kuHund-a > kuunda

3 Implementation of Tone Handling Rules

The approach chosen in the experiments was to include as much tone information in the lexicon as possible, thereby keeping the alternation rules as simple

Table 2. Tenses neutralizing the lexical tone

Tense                               | Tense marker | Aspect marker | Examples: lexical      | Examples: surface  | Gloss
Imperative                          | ø            | -a            | ø-riHriimb-a           | ririimba           | sing
                                    |              |               | ø-tabaar-a             | tabaara            | rescue
                                    |              |               | ø-oHogosh-a            | oogosha            | shave
Affirmative future                  | -zaa-        | -a            | ba-zaa-riHriimba       | bazaaririimba      | they will sing
                                    |              |               | ba-zaa-tabaara         | bazaatabaara       | they will rescue
                                    |              |               | ba-zaa-oHogosha        | bazoogosha         | they will shave
Negative recent past                | -a-          | -ye           | nti-ba-a-riHriimb-ye   | ntibaaririimbye    | they did not sing
                                    |              |               | nti-ba-a-tabaar-ye     | ntibaatabaarye     | they did not rescue
                                    |              |               | nti-ba-a-oHogosh-ye    | ntiboogoshe        | they did not shave
Hortative tense                     | -raka-       | -a            | ba-raka-riHriimb-a     | barakaririimba     | they ought sing
                                    |              |               | ba-raka-tabaar-a       | barakatabaara      | they ought rescue
                                    |              |               | ba-raka-oHogosh-a      | barakoogosha       | they ought shave
Subjunctive near future             | -ra-         | -e            | ba-ra-riHriimb-e       | baraririimbe       | they should sing
                                    |              |               | ba-ra-tabaar-e         | baratabaare        | they should rescue
                                    |              |               | ba-ra-oHogosh-e        | baroogoshe         | they should shave
Negative subjunctive distant future | -zaa-        | -e            | nti-baH-zaa-riHriimb-e | ntiHbaHzaaririimbe | they should never sing
                                    |              |               | nti-baH-zaa-tabaar-e   | ntiHbaHzaatabaare  | they should never rescue
                                    |              |               | nti-baH-zaa-oHogosh-e  | ntiHbaHzoogoshe    | they should never shave

as possible. The lexical tones of nouns and other word categories (except verbs) were marked in the lexicon with H. The challenge was how to handle verbal grammatical tones. Verbal lexical tones were temporarily assigned the letter X. In addition to the lexical tones, place holders for potential surface tones were added in the lexicon. Tenses neutralising the lexical tone were identified in the lexicon using Q. Two sets of rules were written to deal with the morphological analysis of tones. The first set of rules deals with verbal grammatical tones; in this case, rules that handle the lexical tone keeping tenses and the lexical tone neutralizing tenses in verbs were written. Tenses neutralizing the lexical tone are treated in a similar way after identifying the tenses that keep the lexical tone. Rules were ordered so that lexical neutralization is considered first. Example (4) illustrates these rules, focusing on the case of lexical neutralization where the lexical tone of the whole verb stem is lowered. The second set of rules consists of alternation rules dealing with vowel and consonant changes between the lexical forms and surface forms.

(4) define Neutral X -> 0 || [$Q] _ ;
    define Keep X -> H ;
    define DropQ Q -> 0 ;
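For illustration only, the effect of the ordered rules can be mimicked with plain string rewrites in Python; this stands in for the xfst rule compiler, the deletion context is approximated by checking whether the form contains Q at all, and the hyphenated forms with their Q placement follow the morpheme notation of the tables, not the actual lexicon:

```python
# X marks a lexical verb tone, Q a neutralizing tense, H a surface high tone.
# The rule order (Neutral, then Keep, then DropQ) is modelled by ordered rewrites.

def surface_tones(lexical):
    # Neutral: delete X in a form that contains a neutralizing tense marker Q
    # (crude stand-in for the xfst context "|| [$Q] _")
    if "Q" in lexical:
        lexical = lexical.replace("X", "")
    # Keep: any remaining lexical tone X surfaces as a high tone H
    lexical = lexical.replace("X", "H")
    # DropQ: the tense diacritic Q itself never surfaces
    return lexical.replace("Q", "")

print(surface_tones("ba-zaaQ-riXriimba"))  # neutralizing tense: 'ba-zaa-ririimba'
print(surface_tones("ku-riXriimba"))       # tone-keeping tense: 'ku-riHriimba'
```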


The results below were obtained with the Xerox Finite State tools [8]:

(5) apply up> barateHeka
    ba[CL2-SUB-PF]ra[Imp-Pr]+VROOTteXeka[IMP-AM]
    apply up> guteHeka
    ku[Pr-Inf]+VROOTteXeka[IMP-AM]
    apply up> bateHeka
    ba[CL2-SUB-PF][HAB]+VROOTteXeka[IMP-AM]
    apply up> oongera
    a[CL1-SUB-PF]+VROOToongera[IMP-AM]
    apply up> bazaatabaara
    ba[CL2-SUB-PF]zaa[PART-FUT]+VROOTtabaara[IMP-AM]

The description of the other rules that are required, concerning vowels and consonants, is beyond the scope of this paper.

4 Evaluation
4.1 Test Corpus

The resources for evaluating the Kinyarwanda morphological analyser on tone marked text differ from those required for evaluating the analyser on orthographic text. Orthographic text is abundantly available in print and electronic media, whereas tone marked text is only available in Kinyarwanda linguistics books, as mere examples to illustrate concepts advocated by the authors of those publications. In order to evaluate the performance of the analyser objectively, a tone marked corpus was created manually. This was done by adding tone marks and long vowels to words initially written according to the standard Kinyarwanda orthography. This resulted in a corpus of 224 tone marked verbs; nouns were not included in the test set. The focus of the evaluation was on the analysis of verbs. The reason for choosing verbs is that the analysis of nominal tones is less complex than that of verbal tones: Kinyarwanda noun morphology, whether tone marked or not, is not as complex as verb morphology, and the lexical tones of verbs undergo many changes compared to their nominal counterparts.

4.2 Metrics

Due to the nature of the corpus, token types were used without any frequency information. The evaluation was based on the notions of precision and recall, defined as follows:

precision = correct responses / (correct responses + false responses) × 100%   (1)

recall = correct responses / (correct responses + failed correct responses) × 100%   (2)

In equations (1) and (2) above, correct responses refers to the number of words in the test corpus which were correctly analysed, and false responses refers to the number of words in the test corpus which were analysed but for which the analysis was wrong; for example, a verb may be wrongly analysed as a noun. failed correct


responses refers to words in the test corpus which should have been analysed but for some reason were not. During evaluation, the failed correct responses were investigated. Evaluating the performance of the analyser on the types of the tone marked text, the system recorded 95.45% recall and 100% precision.
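Equations (1) and (2) translate directly into code; the counts below are hypothetical, chosen only to reproduce the reported 100% precision and 95.45% recall:

```python
# Precision and recall as defined in equations (1) and (2), in percent.

def precision(correct, false_responses):
    return correct / (correct + false_responses) * 100.0

def recall(correct, failed_correct):
    return correct / (correct + failed_correct) * 100.0

# Hypothetical counts: 210 correct analyses, 0 false analyses,
# 10 verb types not analysed at all.
print(round(precision(210, 0), 2))   # 100.0
print(round(recall(210, 10), 2))     # 95.45
```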

4.3 Comparison with the Orthographic Analyser

In the evaluation of the morphological analyser for tone marked text, we tested the hypothesis that this analyser returns substantially less ambiguous results than the morphological analyser for orthographic text. Technically, the test involved two different analysers containing the same lexicon. In addition, the tone marks and long vowels were stripped from the words in the tone marked corpus. The new test corpus was used as input in testing the orthographic morphological analyser. The official orthography analyser was found to occasionally give multiple outputs for a single input. On examining these results, it was discovered that they were due to ambiguities found in the text. The word gutaka, written according to the official orthography, can mean either to scream or to decorate. This word was analysed using the official orthography analyser and returned the results shown in (6).

(6) apply up> gutaka
    ku[CL15-SUB-PF]+VROOTtaka[IMP-AM]
    ku[CL15-SUB-PF]+VROOTtaXaka[IMP-AM]
    apply up> bararirimba
    ba[CL2-SUB-PF]ra[Imp-Pr]+VROOTriXriimba[IMP-AM]
    ...ra[Imp-Pr]ri[CL5-OM]ri[CL5-OM]n[1P-SG-OM]+VROOTbaXa[IMP-AM]
    ...raXa[NOT-YET]+VROOTriXriimba[IMP-AM]
    ...raXa[NOT-YET]ri[CL5-OM]ri[CL5-OM]n[1P-SG-OM]+VROOTbaXa[IMP-AM]

The same words, when enriched with tone marks and long vowels and analysed with the morphological analyser for tone marked text, return:

(7) apply up> gutaka
    ku[CL15-SUB-PF]+VROOTtaka[IMP-AM]
    apply up> baraHriHriimba
    ba[CL2-SUB-PF]ra[Imp-Pr]H+VROOTriXriimba[IMP-AM]

A closer look at baraHriHriimba, a verb form with grammatical tones, reveals the contrast between (6) and (7): analysis with the tone morphological analyser results in a single output (7), whereas analysing the same word without tone marks and long vowels gives five different valid analyses (6). These results confirm the hypothesis that the tone morphological analyser returns less ambiguous results than the official orthography analyser.

5 Conclusion

This paper has discussed solutions for the morphological analysis of Kinyarwanda tone marked text. The results presented here show that it is possible to carry out morphological analysis of tone marked text. The results obtained from the experiments also confirmed the hypothesis that analysis of tone marked text produces less ambiguous results than analysis of tone unmarked text. Nevertheless, there is still room for improving the performance. Future work will focus on annotating more Kinyarwanda text with tone marks and long vowels and adding it to the tone morphological analyser lexicon.
Acknowledgments. The author would like to sincerely acknowledge the support and expert advice provided by Anssi Yli-Jyrä of the University of Helsinki for the second version of this paper. I also acknowledge all the anonymous reviewers who provided valuable feedback and comments on the first version of this paper.

References

1. Leben, W.: Suprasegmental Phonology. PhD thesis, MIT, Cambridge, Massachusetts (1973)
2. Goldsmith, J.: Autosegmental Phonology. PhD thesis, MIT, Cambridge, Massachusetts (1976)
3. Yip, M.: Tone. Cambridge University Press, Cambridge (2002)
4. Kimenyi, A.: Kinyarwanda morphology. In: An International Handbook for Inflection and Word Formation. Walter de Gruyter, New York (2004)
5. Coupez, A.: Abrégé de Grammaire Rwanda, vols. I, II. Institut National de Recherche Scientifique, Butare (1980)
6. Bizimana, S.: Imitéerere y'Ikinyarwanda. Pelloti Set Press, Kigali (1998)
7. Kimenyi, A.: A Tonal Grammar of Kinyarwanda: An Autosegmental and Metrical Analysis. The Edwin Mellen Press Ltd. (2002)
8. Beesley, K.R., Karttunen, L.: Finite State Morphology. CSLI Studies in Computational Linguistics, California (2003)

Minimizing Weighted Tree Grammars Using Simulation

Andreas Maletti

Universitat Rovira i Virgili, Departament de Filologies Romàniques
Avinguda de Catalunya 35, 43002 Tarragona, Spain
[email protected]

Abstract. Weighted tree grammars (for short: WTG) are an extension of weighted context-free grammars that generate trees instead of strings. They can be used in natural language parsing to directly generate the parse tree of a sentence or to encode the set of all parse trees of a sentence. Two types of simulations for WTG over idempotent, commutative semirings are introduced. They generalize the existing notions of simulation and bisimulation for WTG. Both simulations can be used to reduce the size of a WTG while preserving its semantics, and they are thus an important tool for toolkits implementing WTG. Since the new notions are more general than the existing ones, they yield the best reduction rates achievable by all minimization procedures that rely on simulation or bisimulation. However, the existing notions might allow faster minimization.

1 Introduction

Grammar reduction and minimization are well-studied subjects [1]. Here we consider weighted tree grammars (Wtg), which are widely used in applications such as model checking [2] and in several areas of natural language processing [3]. Such Wtg are an extension of weighted (probabilistic) context-free grammars that generate trees instead of strings. They can, for example, be used to generate parse trees (with a weight) directly. Several toolkits implement Wtg [4,5,6]. Let us review the existing results on minimization of Wtg. There exists a direct correspondence between Wtg and weighted tree automata [7,8,9]. Deterministic bottom-up (unweighted) tree automata can be minimized efficiently using the algorithms inspired by Hopcroft [10,11]. This work has been extended to deterministic bottom-up weighted tree automata over semifields (i.e., commutative semirings with multiplicative inverses) in [12]. However, minimizing general (unweighted) tree grammars is Pspace-complete [13] and cannot be approximated well [14,15] unless P = Pspace. These negative results extend to Wtg over idempotent, commutative semirings. Consequently, alternative (efficient) methods to reduce the size of tree grammars are explored in [11,16,17,18]. In contrast, general Wtg over fields (i.e., commutative semirings with multiplicative and additive inverses) can efficiently be minimized to a unique (up to 

This work was financially supported by the Ministerio de Educación y Ciencia (MEC) grants JDCI-2007-760 and MTM-2007-63422.

A. Yli-Jyrä et al. (Eds.): FSMNLP 2009, LNAI 6062, pp. 56–68, 2010. © Springer-Verlag Berlin Heidelberg 2010


a change of basis) minimal Wtg [19,20]. Finally, efficient reductions of Wtg with the help of bisimulation relations are considered in [21]. Here we extend the simulation approach for tree grammars of [17] to Wtg over idempotent, commutative semirings, and in the process, overcome the major problems of [18]. Let us explain Wtg in more detail. A Wtg is a tree grammar, in which each production is assigned a weight of a semiring. Instead of just generating a certain set of trees, a Wtg assigns a weight to each tree. In [17] two types of simulation relations, called downward and upward simulations, are investigated for tree grammars. These notions were generalized to Wtg in [18]. This generalization required semirings with a partial order, so idempotent (i.e., a + a = a for all a) semirings equipped with their natural order were considered. Idempotent semirings are used, for example, when extracting the n-best derivations [22] from a parser (even if the parser was not trained over an idempotent semiring) and Wtg over certain idempotent semirings (like the tropical semiring) directly represent all “best” derivations. We generalize backward and forward simulation, which are defined in [18]. Intuitively, these notions correspond to backward and forward bisimulation of [21], but the notions of [18] did not properly generalize them. In addition, backward simulation generalizes downward simulation of [17] and our forward simulation generalizes upward simulation with respect to the identity as downward simulation [17]. We choose not to generalize upward simulations with respect to arbitrary downward simulations since we believe that two completely separate notions are easier to handle and understand. Our new notions now generalize all existing simulation and bisimulation notions for Wtg over commutative, idempotent semirings, and in addition, enjoy better properties than the corresponding notions of [18]. 
While a reduction with respect to a simulation of [18] might yield a Wtg that can be reduced further with the same type of simulation, this cannot occur with our simulations. A backward simulation is a quasi-order (i.e., a reflexive, transitive relation) on the nonterminals of the Wtg such that larger nonterminals dominate the smaller ones; i.e., if a nonterminal allows us to generate a tree with weight a, then any larger nonterminal must be able to generate the same tree with a weight that is larger than a. This relation even holds at the transition level for the notions of [18], but it is lost at the transition level in our generalization. Two nonterminals that can simulate each other are considered equivalent, and we can reduce the Wtg with this equivalence relation. The obtained Wtg is equivalent to the original Wtg, so our reduction procedure can be applied before and after lossy reduction techniques such as pruning. We also generalize forward simulation and show similar properties. Our minimization algorithms compute the greatest backward and forward simulations, which yield the best reduction. In addition, once a Wtg has been minimized with one type of simulation, it cannot be reduced any further using the same type. However, alternating backward and forward simulation can yield even smaller Wtg. In addition, pruning and other methods might also enable further nonterminals to become equivalent, so we might obtain even further reductions.


[Figure: on the left, the parse tree S2(NP1(He), VP2(V1(lost), NP2(DET1(the), N1(key)))); on the right, the tree used in Example 1, S2(NP3(the, strange, fish), VP2(ate, NP2(a, carrot))).]

Fig. 1. Example tree (left) and tree used in Example 1 (right)

2 Trees and Weighted Tree Grammars

A quasi-order ⪯ on S is a reflexive, transitive relation on S. An up-set A ⊆ S (with respect to ⪯) is such that for every s′ ⪰ s with s ∈ A also s′ ∈ A. The smallest up-set containing A ⊆ S is denoted by ↑A. If the quasi-order is not obvious from the context, then we write ↑⪯(A). Moreover, if A = {a}, then we write ↑a [and ↑⪯(a)].

For simplicity, we consider trees over ranked alphabets; i.e., each symbol we use has a fixed rank. Given an alphabet Σ, we write Σk for the set of symbols of Σ that have rank k. The rank of a symbol determines how many children a node marked with that symbol has in a tree. Consequently, our trees are formed by putting a symbol of rank k above k subtrees. Formally, the set TΣ of trees over Σ is the smallest set such that σ(t1, …, tk) ∈ TΣ for every σ ∈ Σk and t1, …, tk ∈ TΣ.

A commutative semiring is an algebraic structure (A, +, ·) such that (A, +) and (A, ·) are commutative monoids and · distributes over finite sums. It is idempotent if a + a = a for every a ∈ A. In an idempotent, commutative semiring (A, +, ·) the natural order ⪯ on A is defined by a ⪯ b if a + b = b. In the following, let (A, +, ·) be an idempotent, commutative semiring. For example, the tropical semiring (ℝ ∪ {∞}, min, +) is such a semiring, in which ⪯ = ≥.

To present our approach in a general setting, we recall weighted tree grammars (Wtg) [7,23,24,25]¹. Weighted context-free grammars can be modelled as such grammars. Essentially, a Wtg defines a weighted hypergraph [22], and formally, a Wtg (in normal form) is a structure (N, Σ, P, I) such that
– N is a finite set of nonterminals,
– Σ is a ranked alphabet of terminals,
– P is a finite set of productions of the form S →^{a} σ(S1, …, Sk) where σ ∈ Σk, a ∈ A is a weight, and S, S1, …, Sk ∈ N are nonterminals, and
– I : N → A is an initial weight assignment.

Intuitively, a production S →^{a} σ(S1, …, Sk) yields that a tree σ(t1, …, tk) can be generated by S provided that the subtrees t1, …, tk can be generated

¹ Note that most of the cited references investigate weighted tree automata, which are an equivalent formalism.
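As an aside (an illustration, not part of the paper), the natural order of an idempotent semiring can be checked mechanically. The sketch below, whose helper names are assumptions, instantiates it for the tropical semiring, where a ⪯ b holds exactly when a >= b:

```python
# Natural order: a ⪯ b  iff  a + b = b, instantiated for the tropical
# semiring (R ∪ {∞}, min, +); helper names are assumptions of this sketch.
INF = float("inf")

def splus(a, b):        # semiring addition of the tropical semiring
    return min(a, b)

def natural_le(a, b):   # a ⪯ b  iff  a + b = b
    return splus(a, b) == b

assert natural_le(5.0, 3.0)       # min(5, 3) = 3, so 5 ⪯ 3, i.e. ⪯ is >=
assert not natural_le(3.0, 5.0)
assert natural_le(INF, 0.0)       # ∞ (the semiring zero) is the ⪯-least element
```

The last assertion illustrates why the semiring zero is the least element of the natural order, which the later proofs rely on (a ⪯ a + b for all a, b).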


by S1, …, Sk, respectively. The production incurs the weight a. In general, we assume that no two productions differ only in the weight. Left-most derivations are defined as usual. The weight of a derivation is the product (using ·) of the weights of the productions involved (counting multiple occurrences of the same production). The weight wt(t, S) of a terminal tree t ∈ TΣ and a start nonterminal S is obtained by adding the weights of all derivations of t from S.² Finally, the weight wt(t) is wt(t) = Σ_{S ∈ N} I(S) · wt(t, S).

Let us illustrate this on a small example of [5]. Note that the weights are just made up and do not reflect usage probabilities.

Example 1. Let the semiring be the arctic semiring (ℝ ∪ {−∞}, max, +). Consider the Wtg (N, Σ, P, I) such that
– N = {s, np, dt, jj, nn, vp, v},
– Σ contains {S2, NP2, NP3, VP2} and the English words "the", "a", "funny", "blue", "strange", "fish", "carrot", "ate", and "created",
– P contains the productions:

s →^{.3} S2(np, vp)          dt →^{.5} a       jj →^{.2} blue       nn →^{.8} carrot
np →^{.5} NP2(dt, nn)        dt →^{1} the      jj →^{.4} funny      nn →^{.6} fish
np →^{.3} NP3(dt, jj, nn)    v →^{.5} ate      jj →^{1} strange     v →^{.2} created
vp →^{.7} VP2(v, np)

– I(s) = 1 and I(S) = −∞ for all remaining S ∈ N.

Now let us show a derivation. Consider the tree t of Fig. 1 (right), which represents a parse tree of the English sentence "The strange fish ate a carrot." We can derive it as follows (at the end of each line we display the accumulated weight):

s ⇒ S2(np, vp) ⇒ S2(NP3(dt, jj, nn), vp) ⇒ S2(NP3(the, jj, nn), vp)      (1.6)
  ⇒ S2(NP3(the, strange, nn), vp)                                        (2.6)
  ⇒* S2(NP3(the, strange, fish), VP2(ate, NP2(a, carrot)))               (6.2)

Since this is the only derivation that yields t (see Fig. 1), we have wt(t, s) = 6.2. If there were several derivations (starting with s), then we would take the maximum weight among all such derivations. Since I(S) = −∞ for all S besides s, we can conclude wt(t) = 6.2.
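To make the semantics tangible, here is a small Python sketch. The tuple encoding, the function names, and the exact weights are assumptions of this sketch, chosen to be consistent with the accumulated derivation weights (1.6, 2.6, 6.2) shown above:

```python
# Arctic semiring (R ∪ {-∞}, max, +): semiring addition is max over
# derivations, semiring multiplication is + along a derivation.
NEG_INF = float("-inf")

# nonterminal -> list of (weight, terminal symbol, child nonterminals)
P = {
    "s":  [(0.3, "S2", ("np", "vp"))],
    "np": [(0.5, "NP2", ("dt", "nn")), (0.3, "NP3", ("dt", "jj", "nn"))],
    "vp": [(0.7, "VP2", ("v", "np"))],
    "dt": [(0.5, "a", ()), (1.0, "the", ())],
    "jj": [(0.2, "blue", ()), (0.4, "funny", ()), (1.0, "strange", ())],
    "nn": [(0.8, "carrot", ()), (0.6, "fish", ())],
    "v":  [(0.5, "ate", ()), (0.2, "created", ())],
}

def wt(tree, S):
    """wt(σ(t1,…,tk), S): max over productions S → σ(S1,…,Sk) with
    weight a of a + sum of wt(ti, Si)."""
    sym, children = tree
    best = NEG_INF
    for a, psym, kids in P.get(S, ()):
        if psym == sym and len(kids) == len(children):
            total = a
            for sub, Si in zip(children, kids):
                total += wt(sub, Si)   # semiring multiplication is +
            best = max(best, total)    # semiring addition is max
    return best

t = ("S2",
     (("NP3", (("the", ()), ("strange", ()), ("fish", ()))),
      ("VP2", (("ate", ()), ("NP2", (("a", ()), ("carrot", ())))))))
assert abs(wt(t, "s") - 6.2) < 1e-9   # the weight of the single derivation
```

Because t has only one derivation from s, the max is over a singleton and the result is exactly the accumulated weight 6.2.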

3 Backward Simulation

Backward simulation was investigated for unweighted tree automata in [17] and generalized to our setting in [18]. Here we develop a general notion that generalizes the mentioned notions and the backward bisimulations of [21], which were

² In proofs we sometimes use the equivalent initial-algebra semantics [9], which is given by wt(σ(t1, …, tk), S) = Σ_{S →^{a} σ(S1,…,Sk) ∈ P} a · Π_{i=1}^{k} wt(ti, Si) for every S ∈ N, σ ∈ Σk, and t1, …, tk ∈ TΣ.


not generalized by the notions of [18]. This latter fact led to the strange situation that a backward bisimulation was not at the same time also a backward simulation. Our notions repair this, and in addition, our minimization procedure is guaranteed to reduce more than all the minimization procedures developed for the mentioned simulations and bisimulations.

From now on, let G = (N, Σ, P, I) be a Wtg, and for every σ ∈ Σk and T0, …, Tk ⊆ N, let

pwtσ(T0, …, Tk) = Σ_{S0 →^{a} σ(S1,…,Sk) ∈ P; S0 ∈ T0, …, Sk ∈ Tk} a .

Intuitively, pwtσ(T0, …, Tk) is the weight of all productions generating the symbol σ and using the states of T0, …, Tk.

Definition 2 (cf. [18, Definition 1]). A quasi-order ⪯ on N is a backward simulation if for every S ⪯ T, symbol σ ∈ Σk, and T1, …, Tk ∈ N

pwtσ({S}, ↑T1, …, ↑Tk) ⪯ pwtσ({T}, ↑T1, …, ↑Tk) .

Note that the notion of a backward bisimulation is obtained by requiring that ⪯ is an equivalence relation in the previous definition (in that case ↑T is the equivalence class of T). For the following discussions and results, let ⪯ be a backward simulation³. Unfortunately, our definition does not easily offer an intuitive explanation, thus let us continue to explore some central properties of such simulations. First, there exists a greatest (with respect to ⊆) backward simulation. This can be proved by showing that for any two backward simulations also the reflexive, transitive closure of their union is a backward simulation. The main property of nonterminals S ⪯ T is that S generates trees t with a weight that is smaller than the weight with which the same tree t is generated by T. This immediately yields that nonterminals S and T that simulate each other (i.e., S ⪯ T ⪯ S) generate t with the same weight.

Lemma 3 (see [18, Lemma 3]). We have wt(t, S) ⪯ wt(t, T) for every t ∈ TΣ and S ⪯ T.

Proof. Since this property is essential for our approach, let us present full proof details. First, we remark that the semiring operations + and · are monotone with respect to the natural order ⪯. Moreover, a ⪯ a + b for every a, b ∈ A. Second, suppose that t = σ(t1, …, tk) for some σ ∈ Σk and t1, …, tk ∈ TΣ. We prove the statement by induction as follows (recall that the first line is the definition of the initial-algebra semantics):

³ Note that the name "backward simulation" makes more sense if we consider weighted bottom-up tree automata.


wt(t, S) = Σ_{S →^{a} σ(S1,…,Sk) ∈ P} a · Π_{i=1}^{k} wt(ti, Si)
         = Σ_{S1,…,Sk ∈ N} pwtσ({S}, ↑S1, …, ↑Sk) · Π_{i=1}^{k} wt(ti, Si)
    ⪯†   Σ_{S1,…,Sk ∈ N} pwtσ({T}, ↑S1, …, ↑Sk) · Π_{i=1}^{k} wt(ti, Si)
         = Σ_{S1,…,Sk ∈ N} Σ_{T1 ∈ ↑S1, …, Tk ∈ ↑Sk; T →^{a} σ(T1,…,Tk) ∈ P} a · Π_{i=1}^{k} wt(ti, Si)
    ⪯‡   Σ_{S1,…,Sk ∈ N} Σ_{T1 ∈ ↑S1, …, Tk ∈ ↑Sk; T →^{a} σ(T1,…,Tk) ∈ P} a · Π_{i=1}^{k} wt(ti, Ti)
         = wt(t, T) ,

where we use S ⪯ T at † and k times the induction hypothesis at ‡, which is applicable because Si ⪯ Ti for every 1 ≤ i ≤ k. ∎

Clearly, nonterminals that simulate each other are superfluous. We can use this to reduce the number of nonterminals and the number of productions of the Wtg. Let us remark that {(S, T) | S ⪯ T ⪯ S} is an equivalence relation on N, which we denote by ≃. The equivalence class of S ∈ N is denoted by [S].

Definition 4 (see [18, Definition 6]). The collapsed Wtg, denoted by G/≃, is (N′, Σ, P′, I′) where
– N′ = {[S] | S ∈ N},
– P′ contains, for every σ ∈ Σk and S, S1, …, Sk ∈ N, the production

[S] →^{pwtσ({S}, ↑S1, …, ↑Sk)} σ([S1], …, [Sk]) ,

– I′([S]) = Σ_{T ∈ [S]} I(T) for every S ∈ N.
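A mechanical reading of the collapse construction can be sketched for the tropical semiring; the grammar encoding, the relation encoding, and all names below are assumptions of this sketch, not the paper's notation:

```python
# Collapse a Wtg w.r.t. a backward simulation R (a set of nonterminal
# pairs) over the tropical semiring (min, +). Illustrative sketch only.
INF = float("inf")

def collapse(N, P, I, R):
    # equivalence classes [S] = {T | S ⪯ T and T ⪯ S}
    cls = {S: frozenset(T for T in N if (S, T) in R and (T, S) in R)
           for S in N}
    up = {S: {T for T in N if (S, T) in R} for S in N}   # up-sets ↑S
    newP, newI = {}, {}
    for lhs, a, sym, kids in P:
        # weight of the collapsed production is pwt_σ({lhs}, ↑kid1, …)
        w = min(b for l2, b, s2, k2 in P
                if l2 == lhs and s2 == sym and len(k2) == len(kids)
                and all(c in up[k] for c, k in zip(k2, kids)))
        key = (cls[lhs], sym, tuple(cls[k] for k in kids))
        newP[key] = min(newP.get(key, INF), w)
    for S in N:   # I'([S]) is the (min-)sum of I over the class
        newI[cls[S]] = min(newI.get(cls[S], INF), I[S])
    return set(cls.values()), newP, newI

# toy grammar with two simulation-equivalence classes
N = {"pro", "nmb", "lit", "n", "np", "lit-np"}
P = [("n", 1, "NP1", ("pro",)), ("n", 2, "NP1", ("nmb",)),
     ("np", 2, "NP1", ("pro",)), ("np", 1, "NP1", ("nmb",)),
     ("lit-np", 1, "NP1", ("lit",)), ("pro", 1, "one", ()),
     ("lit", 1, "one", ()), ("nmb", 1, "one", ())]
I = {S: 1 for S in N}
A, B = {"pro", "nmb", "lit"}, {"n", "np", "lit-np"}
R = {(S, T) for S in A for T in A} | {(S, T) for S in B for T in B}
classes, newP, newI = collapse(N, P, I, R)
assert len(classes) == 2 and set(newP.values()) == {1}
```

Note that the collapsed weight is computed from a concrete representative lhs, which is safe precisely because the definition's weight is independent of the chosen representatives.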

An easy check shows that the weight of the production in the second item in Definition 4 is independent of the chosen representatives S, S1, …, Sk. In addition, the collapsed Wtg G/≃ never has more nonterminals than G.

Theorem 5 (see [18, Theorem 7]). The Wtg G and G/≃ are equivalent.

Proof. This is the most important theorem of this section, so let us present some detail. Let (G/≃) = (N′, Σ, P′, I′), and we write wt′ for the weight computed with respect to G/≃. In a manner similar to the proof of Lemma 3 we first prove that wt′(t, [S]) = wt(t, S) for every t ∈ TΣ and S ∈ N. Let t = σ(t1, …, tk) for some σ ∈ Σk and t1, …, tk ∈ TΣ. Then


[Figure: the Wtg of Example 6 (left) with nonterminals pro, nmb, lit, n, np, lit-np and the collapsed Wtg (right) with nonterminals [lit] and [n]; transitions are labelled with symbol/weight pairs such as one/1 and NP1/2.]

Fig. 2. The Wtg (left) and the collapsed Wtg (right) of Example 6

wt′(t, [S]) = Σ_{[S] →^{a} σ([S1],…,[Sk]) ∈ P′} a · Π_{i=1}^{k} wt′(ti, [Si])
      =†     Σ_{S1,…,Sk ∈ N} pwtσ({S}, ↑S1, …, ↑Sk) · Π_{i=1}^{k} wt(ti, Si)
            = Σ_{S1,…,Sk ∈ N} Σ_{T1 ∈ ↑S1, …, Tk ∈ ↑Sk; S →^{a} σ(T1,…,Tk) ∈ P} a · Π_{i=1}^{k} wt(ti, Si)
            = Σ_{S →^{a} σ(S1,…,Sk) ∈ P} a · Π_{i=1}^{k} wt(ti, Si)
            = wt(t, S) ,

where we used the induction hypothesis at †. With this auxiliary result, we immediately obtain

wt′(t) = Σ_{T ∈ N′} I′(T) · wt′(t, T) = Σ_{S ∈ N} I(S) · wt(t, S) = wt(t) . ∎

In contrast to the backward simulation of [18], G/≃ cannot be reduced any further with the help of backward simulation, if ⪯ is our greatest backward simulation. Let us illustrate the definitions on a very simplistic example.

Example 6. Consider the tropical semiring (ℝ ∪ {∞}, min, +), and let G be such that N = {pro, nmb, n, np, lit, lit-np} and Σ = {one, NP1}, and P contains the following productions:

n →^{1} NP1(pro)      np →^{2} NP1(pro)      lit-np →^{1} NP1(lit)      lit →^{1} one
n →^{2} NP1(nmb)      np →^{1} NP1(nmb)      pro →^{1} one              nmb →^{1} one .

Finally, I(S) = 1 for every S ∈ N.


Algorithm 1. Minimization algorithm using backward simulation

R0 ← N × N
i ← 0
repeat
  j ← i
  for all σ ∈ Σk and T1, …, Tk ∈ N, let T1′ = ↑Ri(T1), …, Tk′ = ↑Ri(Tk) and do
    Ri+1 ← {(S, T) ∈ Ri | pwtσ({S}, T1′, …, Tk′) ⪯ pwtσ({T}, T1′, …, Tk′)}
    i ← i + 1
until Ri = Rj
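The refinement loop can be sketched directly in Python for the tropical semiring, where the natural order a ⪯ b is a >= b. This is an unoptimized illustration with an assumed grammar encoding, not the paper's implementation:

```python
# Direct sketch of the backward-simulation refinement over the tropical
# semiring (min, +). Productions: (lhs, weight, symbol, child tuple).
from itertools import product

INF = float("inf")

def up(R, S, N):
    # ↑_R(S) = {T | (S, T) ∈ R}
    return {T for T in N if (S, T) in R}

def pwt(P, sym, T0, Ts):
    # pwt_σ(T0, T1, …, Tk): (min-)sum of weights of matching σ-productions
    acc = INF
    for lhs, a, psym, kids in P:
        if psym == sym and lhs in T0 and len(kids) == len(Ts) \
                and all(c in T for c, T in zip(kids, Ts)):
            acc = min(acc, a)
    return acc

def greatest_backward_simulation(N, P):
    syms = {(psym, len(kids)) for _, _, psym, kids in P}
    R = set(product(N, repeat=2))            # R0 = N × N
    while True:
        Rn = set(R)
        for sym, k in syms:
            for Ts in product(N, repeat=k):
                ups = [up(R, T, N) for T in Ts]     # up-sets w.r.t. Ri
                Rn = {(S, T) for (S, T) in Rn
                      if pwt(P, sym, {S}, ups) >= pwt(P, sym, {T}, ups)}
        if Rn == R:                          # until Ri = Rj
            return R
        R = Rn

# toy grammar with two simulation-equivalence classes
N = {"pro", "nmb", "lit", "n", "np", "lit-np"}
P = [("n", 1, "NP1", ("pro",)), ("n", 2, "NP1", ("nmb",)),
     ("np", 2, "NP1", ("pro",)), ("np", 1, "NP1", ("nmb",)),
     ("lit-np", 1, "NP1", ("lit",)), ("pro", 1, "one", ()),
     ("lit", 1, "one", ()), ("nmb", 1, "one", ())]
R = greatest_backward_simulation(N, P)
assert ("pro", "nmb") in R and ("nmb", "lit") in R and ("lit", "pro") in R
assert ("n", "np") in R and ("np", "lit-np") in R and ("lit-np", "n") in R
assert ("pro", "n") not in R and ("n", "pro") not in R
```

Each pass enumerates all up-set tuples from the current relation and discards pairs violating the pwt comparison, mirroring the O(|N|^{2+r} |P|) bound of the naive implementation.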

Let ⪯ be the greatest backward simulation. Then pro ⪯ nmb ⪯ lit ⪯ pro and n ⪯ np ⪯ lit-np ⪯ n. The collapsed Wtg is (N′, Σ, P′, I′) with N′ = {[lit], [n]}, I′(T) = 1 for every T ∈ N′, and P′ contains the two productions [lit] → one and [n] → NP1([lit]), both with weight 1. The two Wtg are displayed in Fig. 2.

Finally, let us develop an algorithm that computes the greatest backward simulation, which we denote by ⪯ for the rest of this section. Our algorithm, which is displayed in Algorithm 1, proceeds in a similar fashion as the algorithm for the greatest simulation of a labeled transition system [17] and the algorithm for the coarsest backward bisimulation of [21]. Let r be the maximal rank of a symbol in Σ. Our algorithm is conceptually simple, but its run-time complexity is O(|N|^{2+r} |P|), which is very high compared to the algorithms of [21,17], which run in time O(|N| r |P|). A more efficient implementation, which utilizes the ideas of [21,17], remains a topic for further research. We compute ⪯ directly by refining the trivial quasi-order N × N iteratively. The main property that allows the refinement step is outlined in the next lemma.

Lemma 7. Let σ ∈ Σk and 𝒯1, …, 𝒯k ⊆ N be up-sets. Then for every S ⪯ T

pwtσ({S}, 𝒯1, …, 𝒯k) ⪯ pwtσ({T}, 𝒯1, …, 𝒯k) .

Proof. The proof can easily be obtained from the definitions and is omitted. ∎

Thus, in our algorithm we need to select up-sets (with respect to ⪯) and can then discard some pairs (S, T) of nonterminals such that T cannot simulate S. A simple implementation of Algorithm 1 runs in time O(|N|^{2+r} |P|), if we select the up-sets in order of their cardinality and reuse the already computed sums.

Theorem 8 (cf. [18, Theorem 9]). Algorithm 1 can be implemented to run in time O(|N|^{2+r} |P|) and returns Ri = ⪯.

Proof. The time bound is easy to obtain. For the correctness, we prove the following two statements for every i (encountered during execution): (i) ⪯ ⊆ Ri and (ii) ↑Ri(S) is an up-set (with respect to ⪯) for every S ∈ N. Let us proceed by induction on i. Both statements are true for i = 0 because R0 = N × N and ↑R0(S) = N for every S ∈ N. Now, suppose that S ⪯ T. Then (S, T) ∈ Ri by the induction hypothesis. Moreover, by Lemma 7, (S, T) ∈ Ri+1. Thus ⪯ ⊆ Ri+1.


Now, let S, T, U ∈ N be such that S ∈ ↑Ri+1(T) and S ⪯ U. Since ⪯ ⊆ Ri+1, we also have (S, U) ∈ Ri+1 and thus U ∈ ↑Ri+1(T), which proves that ↑Ri+1(T) is an up-set (with respect to ⪯). Clearly, at termination, Ri is a backward simulation (see Definition 2). Since ⪯ ⊆ Ri and ⪯ is the greatest backward simulation for G, we can conclude that Ri = ⪯. ∎

4 Forward Simulation

The previous section established a new method to reduce Wtg. However, once we reduce with the help of the greatest backward simulation, we cannot reduce the Wtg any further with the help of backward simulation. Next, we introduce an alternative procedure, which can, in principle, reduce such Wtg further. In fact, the two minimization procedures can be alternated for maximal reduction. The new procedure uses a forward version of the simulation of Sect. 3. Our forward simulation is a generalization of the forward bisimulation of [21] and the forward simulation of [18], which in turn is the weighted analogue of the composed simulations of [17]. For a definition of 'pwt' see the paragraph before Definition 2. To simplify the following discussion, we will generally omit the set braces for singleton sets.

Definition 9 (cf. [18, Definition 10]). A quasi-order ⪯ on N is a forward simulation if for every S ⪯ T:
– I(S) ⪯ I(T) and
– for every symbol σ ∈ Σk, all nonterminals S′, S1, …, Sk ∈ N, and 1 ≤ i ≤ k

pwtσ(↑S′, S1, …, S, …, Sk) ⪯ pwtσ(↑S′, S1, …, T, …, Sk)

where S and T occur at the (i + 1)-th position.

Our definition of forward simulation is more general than the existing notions of [21,17,18]. However, we do not consider general upward simulations [17] with respect to arbitrary downward simulations here since we believe that two independent simulations (our forward simulation does not depend on a backward simulation) are easier to understand and analyze. Moreover, we can always first use backward-simulation minimization and then forward-simulation minimization to achieve roughly the same as with an upward-simulation minimization of [17]. As already remarked in the previous section, the more general the notion of simulation, the better the reduction rate (potentially at the expense of the run-time of the reduction algorithm). Another similarity to the backward case is that our definition of forward simulation is hard to illustrate.
Thus, let us proceed with the principal properties of forward simulations. As in the backward case, there exists a greatest forward simulation. This follows from the fact that the reflexive, transitive closure of the union of any two forward simulations is again a forward simulation. From now on, let ⪯ be a forward simulation.


The main property of similar states is slightly more complicated this time. We need an additional notion. A context is a tree of TΣ∪{□}, where □ is a new nullary symbol, such that the symbol □ occurs exactly once. The set of all contexts is denoted by CΣ. The tree c[t] is obtained by replacing the symbol □ in the context c ∈ CΣ by the tree t ∈ TΣ.

Lemma 10 (see [18, Lemma 12]). Let c ∈ CΣ, S′ ∈ N, and S ⪯ T. Then

Σ_{T′ ∈ ↑S′} wt(c[S], T′) ⪯ Σ_{T′ ∈ ↑S′} wt(c[T], T′) .

Proof. This property is not essential for our goals, so we omit the proof. ∎

Again, we want to use the simulation to reduce the size of the Wtg. The definition of the collapsed Wtg is slightly different this time. As before, let ≃ be the equivalence relation {(S, T) | S ⪯ T ⪯ S}.

Definition 11 (see [18, Definition 13]). The collapsed Wtg, which is denoted by G/≃, is (N′, Σ, P′, I′) where
– N′ = {[S] | S ∈ N},
– P′ contains, for every σ ∈ Σk and S, S1, …, Sk ∈ N, the production

[S] →^{pwtσ(↑S, S1, …, Sk)} σ([S1], …, [Sk]) ,

– I′([S]) = I(S) for every S ∈ N.

It is again simple to show that the second and third item in Definition 11 are well-defined (i.e., independent of the chosen representative). Next, we prove the correctness of our construction.

Theorem 12 (see [18, Theorem 15]). The Wtg G and G/≃ are equivalent.

Proof. Let (G/≃) = (N′, Σ, P′, I′). As before, we use wt′ for weights computed using G/≃. As a first step, we prove that wt′(t, [S]) = Σ_{T ∈ ↑S} wt(t, T) for every t ∈ TΣ and S ∈ N. Suppose that t = σ(t1, …, tk) for some σ ∈ Σk and t1, …, tk ∈ TΣ. Then we compute as follows, where the equality marked † is explained below.

wt′(t, [S]) = Σ_{S1,…,Sk ∈ N} pwtσ(↑S, S1, …, Sk) · Π_{i=1}^{k} wt′(ti, [Si])
            = Σ_{S1,…,Sk ∈ N} pwtσ(↑S, S1, …, Sk) · Σ_{T1 ∈ ↑S1, …, Tk ∈ ↑Sk} Π_{i=1}^{k} wt(ti, Ti)
      =†     Σ_{T1,…,Tk ∈ N} pwtσ(↑S, T1, …, Tk) · Π_{i=1}^{k} wt(ti, Ti)
            = Σ_{T ∈ ↑S} wt(t, T) .

Let us take a closer look at the equality marked †. We can show this equality by showing both directions. Let us consider ⪯ first. Clearly, it is sufficient to show


that for each summand of the left-hand side there exists a larger summand on the right-hand side. For this we consider a summand

pwtσ(↑S, S1, …, Sk) · Π_{i=1}^{k} wt(ti, Ti)

of the left-hand side of † for some nonterminals S1, …, Sk, T1, …, Tk ∈ N such that Ti ∈ ↑Si for every 1 ≤ i ≤ k. By Definition 9 and Si ⪯ Ti we have pwtσ(↑S, S1, …, Sk) ⪯ pwtσ(↑S, T1, …, Tk). Consequently,

pwtσ(↑S, S1, …, Sk) · Π_{i=1}^{k} wt(ti, Ti) ⪯ pwtσ(↑S, T1, …, Tk) · Π_{i=1}^{k} wt(ti, Ti)

and the latter is a summand on the right-hand side of †. For the converse, let us consider a summand pwtσ(↑S, T1, …, Tk) · Π_{i=1}^{k} wt(ti, Ti) on the right-hand side where T1, …, Tk ∈ N. Then this is clearly also a summand of the left-hand side of † (by setting Si = Ti). This completes the proof of our auxiliary statement. For the statement of the theorem, we compute as follows:

wt′(t) = Σ_{S ∈ N} I′([S]) · wt′(t, [S]) = Σ_{S ∈ N} I(S) · Σ_{T ∈ ↑S} wt(t, T) = Σ_{S ∈ N} I(S) · wt(t, S) = wt(t)

because I(S) ⪯ I(T) if S ⪯ T. This proves our theorem. ∎

As in the backward case, the Wtg obtained by reducing with respect to the greatest forward simulation cannot be reduced any further with the help of forward simulation. Let us look at an example for illustration.

Example 13. Consider the original Wtg of Example 6. Let ⪯ be the greatest forward simulation. Then pro ⪯ nmb ⪯ lit ⪯ pro and n ⪯ np ⪯ lit-np ⪯ n. The reduced Wtg coincides with the one of Example 6. Note that this is not a general property; we rather chose a Wtg with this property to save space.

Now let us develop an algorithm (see Algorithm 2) for the greatest forward simulation, which we denote by ⪯ for the rest of the section. It will run in time O(|N|³ r |P|), which is high compared to the run-time O(|N| r |P|) of the preceding algorithms [21,17]. The initial weight of a nonterminal T that simulates S must be larger than that of S. Thus, we start with this restricted quasi-order in Algorithm 2. Again, we use a simple property that allows us to refine iteratively.

Lemma 14. Let σ ∈ Σk, S1, …, Sk ∈ N, 1 ≤ i ≤ k, and 𝒯 ⊆ N be an up-set. Then

pwtσ(𝒯, S1, …, S, …, Sk) ⪯ pwtσ(𝒯, S1, …, T, …, Sk)

for every S ⪯ T, where S and T occur at the (i + 1)-th position.

Proof. The proof can be obtained easily from Definition 9. ∎


Algorithm 2. Minimization algorithm using forward simulation

R0 ← {(S, T) ∈ N × N | I(S) ⪯ I(T)}
i ← 0
repeat
  j ← i
  for all σ ∈ Σk, n ∈ {1, …, k}, and S′, S1, …, Sk ∈ N, let 𝒯′ = ↑Ri(S′) and do
    Ri+1 ← {(S, T) ∈ Ri | pwtσ(𝒯′, …, S, …) ⪯ pwtσ(𝒯′, …, T, …)}
    i ← i + 1
until Ri = Rj
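The forward refinement can be sketched in the same style as the backward one, again over the tropical semiring (a ⪯ b is a >= b). This self-contained, unoptimized illustration uses an assumed grammar encoding:

```python
# Sketch of the forward-simulation refinement over the tropical
# semiring (min, +). Productions: (lhs, weight, symbol, child tuple).
from itertools import product

INF = float("inf")

def up(R, S, N):
    return {T for T in N if (S, T) in R}

def pwt(P, sym, T0, Ts):
    acc = INF
    for lhs, a, psym, kids in P:
        if psym == sym and lhs in T0 and len(kids) == len(Ts) \
                and all(c in T for c, T in zip(kids, Ts)):
            acc = min(acc, a)
    return acc

def greatest_forward_simulation(N, P, I):
    syms = {(psym, len(kids)) for _, _, psym, kids in P if kids}
    R = {(S, T) for S in N for T in N if I[S] >= I[T]}   # I(S) ⪯ I(T)
    while True:
        Rn = set(R)
        for sym, k in syms:
            for Sp in N:
                ctx = up(R, Sp, N)                       # ↑_Ri(S′)
                for pos in range(k):
                    for rest in product(N, repeat=k - 1):
                        def slots(X):
                            return [{c} for c in rest[:pos]] + [{X}] \
                                 + [{c} for c in rest[pos:]]
                        Rn = {(S, T) for (S, T) in Rn
                              if pwt(P, sym, ctx, slots(S))
                                 >= pwt(P, sym, ctx, slots(T))}
        if Rn == R:
            return R
        R = Rn

N = {"pro", "nmb", "lit", "n", "np", "lit-np"}
P = [("n", 1, "NP1", ("pro",)), ("n", 2, "NP1", ("nmb",)),
     ("np", 2, "NP1", ("pro",)), ("np", 1, "NP1", ("nmb",)),
     ("lit-np", 1, "NP1", ("lit",)), ("pro", 1, "one", ()),
     ("lit", 1, "one", ()), ("nmb", 1, "one", ())]
I = {S: 1 for S in N}
R = greatest_forward_simulation(N, P, I)
assert ("pro", "nmb") in R and ("nmb", "pro") in R and ("lit", "pro") in R
assert ("n", "np") in R and ("np", "n") in R and ("lit-np", "n") in R
assert ("pro", "n") not in R
```

Here the pair (S, T) is tested in each child position of each symbol, with the parent constrained to the up-set of some S′, matching the quantification in the algorithm above; the resulting mutual pairs give the same two equivalence classes as the backward case on this toy grammar.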

Theorem 15 (see [18, Theorem 19]). Algorithm 2 can be implemented to run in time O(|N|³ r |P|) and returns Ri = ⪯.

Proof. Again, the given time bound is easy to obtain. We prove the following two statements for every relevant i: (i) ⪯ ⊆ Ri and (ii) ↑Ri(S′) is an up-set (with respect to ⪯) for every S′ ∈ N. By the first condition of Definition 9 we have ⪯ ⊆ R0. Moreover, if ⪯ ⊆ Ri, then ↑Ri(S′) is an up-set with respect to ⪯ for every S′ ∈ N (see the proof of Theorem 8). Now, suppose that S ⪯ T. Then (S, T) ∈ Ri by the induction hypothesis. Moreover, by Lemma 14 we also have (S, T) ∈ Ri+1 because ↑Ri(S′) is an up-set (with respect to ⪯). Thus ⪯ ⊆ Ri+1. Clearly, Ri is a forward simulation (see Definition 9) at termination. Since ⪯ ⊆ Ri, we can conclude that Ri = ⪯. ∎

5 Conclusion

We introduced the most general simulation relations for weighted tree automata, which generalize all the existing notions of [21,17,26]. Such simulations enjoy the theoretical properties we expect, but the computation of the greatest backward and forward simulation is significantly more expensive than the corresponding computations for the notions of [21,17,26]. Earlier work in [21,17,26] reports reductions of 7–76.1% (with an average around 50%) with less general forms of (bi)simulation, so our work will result in at least as much reduction. Future implementation work will be undertaken to verify how much our approach improves upon those results.

References

1. Dale, R., Moisl, H., Somers, H.L. (eds.): Handbook of Natural Language Processing. CRC Press, Boca Raton (2000)
2. Abdulla, P.A., Jonsson, B., Mahata, P., d'Orso, J.: Regular tree model checking. In: Brinksma, E., Larsen, K.G. (eds.) CAV 2002. LNCS, vol. 2404, pp. 555–568. Springer, Heidelberg (2002)
3. Knight, K., Graehl, J.: An overview of probabilistic tree transducers for natural language processing. In: Gelbukh, A. (ed.) CICLing 2005. LNCS, vol. 3406, pp. 1–24. Springer, Heidelberg (2005)
4. Klarlund, N., Møller, A.: MONA Version 1.4 User Manual. BRICS, Department of Computer Science, University of Aarhus (2001)
5. May, J., Knight, K.: Tiburon: A weighted tree automata toolkit. In: Ibarra, O.H., Yen, H.-C. (eds.) CIAA 2006. LNCS, vol. 4094, pp. 102–113. Springer, Heidelberg (2006)
6. Cleophas, L.: Forest FIRE and FIRE wood: Tools for tree automata and tree algorithms. In: Proc. FSMNLP, pp. 191–198 (2008)
7. Berstel, J., Reutenauer, C.: Recognizable formal power series on trees. Theoret. Comput. Sci. 18(2), 115–148 (1982)
8. Ésik, Z., Kuich, W.: Formal tree series. J. Autom. Lang. Combin. 8(2), 219–285 (2003)
9. Borchardt, B.: The Theory of Recognizable Tree Series. PhD thesis, Technische Universität Dresden (2005)
10. Hopcroft, J.E.: An n log n algorithm for minimizing states in a finite automaton. In: Theory of Machines and Computations, pp. 189–196. Academic Press, London (1971)
11. Högberg, J., Maletti, A., May, J.: Backward and forward bisimulation minimisation of tree automata. In: Holub, J., Žďárek, J. (eds.) CIAA 2007. LNCS, vol. 4783, pp. 109–121. Springer, Heidelberg (2007)
12. Maletti, A.: Minimizing deterministic weighted tree automata. Inform. and Comput. 207(11), 1284–1299 (2009)
13. Meyer, A.R., Stockmeyer, L.J.: The equivalence problem for regular expressions with squaring requires exponential space. In: Proc. FOCS, pp. 125–129. IEEE Computer Society, Los Alamitos (1972)
14. Gramlich, G., Schnitger, G.: Minimizing nfa's and regular expressions. J. Comput. System Sci. 73(6), 908–923 (2007)
15. Gruber, H., Holzer, M.: Inapproximability of nondeterministic state and transition complexity assuming P ≠ NP. In: Harju, T., Karhumäki, J., Lepistö, A. (eds.) DLT 2007. LNCS, vol. 4588, pp. 205–216. Springer, Heidelberg (2007)
16. Abdulla, P.A., Högberg, J., Kaati, L.: Bisimulation minimization of tree automata. Int. J. Found. Comput. Sci. 18(4), 699–713 (2007)
17. Abdulla, P.A., Bouajjani, A., Holík, L., Kaati, L., Vojnar, T.: Computing simulations over tree automata. In: Ramakrishnan, C.R., Rehof, J. (eds.) TACAS 2008. LNCS, vol. 4963, pp. 93–108. Springer, Heidelberg (2008)
18. Maletti, A.: A backward and a forward simulation for weighted tree automata. In: Bozapalidis, S., Rahonis, G. (eds.) CAI 2009. LNCS, vol. 5725, pp. 288–304. Springer, Heidelberg (2009)
19. Bozapalidis, S., Louscou-Bozapalidou, O.: The rank of a formal tree power series. Theoret. Comput. Sci. 27(1-2), 211–215 (1983)
20. Bozapalidis, S.: Effective construction of the syntactic algebra of a recognizable series on trees. Acta Inform. 28(4), 351–363 (1991)
21. Högberg, J., Maletti, A., May, J.: Bisimulation minimisation for weighted tree automata. In: Harju, T., Karhumäki, J., Lepistö, A. (eds.) DLT 2007. LNCS, vol. 4588, pp. 229–241. Springer, Heidelberg (2007)
22. Huang, L., Chiang, D.: Better k-best parsing. In: Proc. IWPT, pp. 53–64 (2005)
23. Alexandrakis, A., Bozapalidis, S.: Weighted grammars and Kleene's theorem. Information Processing Letters 24(1), 1–4 (1987)
24. Bozapalidis, S.: Equational elements in additive algebras. Theory Comput. Systems 32(1), 1–33 (1999)
25. Borchardt, B., Vogler, H.: Determinization of finite state weighted tree automata. J. Autom. Lang. Combin. 8(3), 417–463 (2003)
26. Abdulla, P.A., Holík, L., Kaati, L., Vojnar, T.: A uniform (bi-)simulation-based framework for reducing tree automata. In: Proc. MEMICS, pp. 3–11 (2008)

Compositions of Top-Down Tree Transducers with ε-Rules

Andreas Maletti and Heiko Vogler

Universitat Rovira i Virgili, Departament de Filologies Romàniques, Avinguda de Catalunya 35, 43002 Tarragona, Spain
[email protected]
Technische Universität Dresden, Faculty of Computer Science, 01062 Dresden, Germany
[email protected]

Abstract. Top-down tree transducers with ε-rules (εtdtts) are a restricted version of extended top-down tree transducers. They are implemented in the framework Tiburon and fulfill some criteria desirable in a machine translation model. However, they compute a class of transformations that is not closed under composition (not even for linear and nondeleting εtdtts). A composition construction that composes two εtdtts M and N is presented, and it is shown that the construction is correct, whenever (i) N is linear, (ii) M is total or N is nondeleting, and (iii) M has at most one output symbol in each rule.

1 Introduction

Many aspects of machine translation (MT) of natural languages can be formalized by employing weighted finite-state (string) transducers [1,2]. Successful implementations based on this word- or phrase-based approach are, for example, the AT&T FSM toolkit [3], Xerox's finite-state calculus [4], the RWTH toolkit [5], Carmel [6], and OpenFst [7]. However, the phrase-based approach is not expressive enough, for example, to easily handle the rotation needed in the translation of the English structure NP-V-NP (subject-verb-noun phrase) to the Arabic structure V-NP-NP. A finite-state transducer can only implement this rotation by storing the subject, which might be very long, in its finite memory. Syntax-based (or tree-based) formalisms can remedy this shortcoming. Examples of such formalisms are the top-down tree transducer [8,9], the extended top-down tree transducer [10,11,12], the synchronous tree substitution grammar [13], the synchronous tree-adjoining grammar [14], the multi bottom-up tree transducer [15,16,17,18], and the extended multi bottom-up tree transducer [19]. Some of these models are formally compared in [20,21,19], and an overview of their usability in syntax-based MT is presented in [22,23]. For example, the toolkit Tiburon [24] implements weighted extended top-down tree transducers with some standard operations.

This author was financially supported by the Ministerio de Educación y Ciencia (MEC) grants JDCI-2007-760 and MTM-2007-63422.

A. Yli-Jyrä et al. (Eds.): FSMNLP 2009, LNAI 6062, pp. 69–80, 2010. © Springer-Verlag Berlin Heidelberg 2010


In this paper, we consider top-down tree transducers with ε-rules (εtdtts), which are a (syntactically) restricted version of extended top-down tree transducers [25,21] and as such implemented in Tiburon [24]. In fact, εtdtts properly generalize finite-state transducers, and thus existing models for the phrase-based approach can be straightforwardly translated into εtdtts. Moreover, ε-rules sometimes allow the designer to express himself more compactly. The addition of ε-rules is also a step towards symmetry of the model; extended top-down tree transducers have full symmetry in the linear and nondeleting case. It is often beneficial in the development process to train "small" task-specific transducers [26]. Then we would like to compose those "small" transducers to obtain a single transducer, to which further operations can be applied. The success of the finite-state transducer toolkits (like Carmel [6] and OpenFst [7]) is to a large extent due to this compositional approach, which allows us to avoid cascades of transducers. Here, we study εtdtts in order to better understand compositions of extended top-down tree transducers. In fact, several phenomena (ε-rules and non-shallow left-hand sides) contribute to the problems faced in such compositions. To investigate the effect of ε-rules, we prove a composition result inspired by Baker [27,28]. Namely, the composition of two εtdtts M and N can be computed by one εtdtt if (i) N is linear, (ii) M is total or N is nondeleting, and (iii) M has at most one output symbol in each rule (cf. Theorem 17). Compared to Baker's original result [27] for nondeterministic top-down tree transducers, we have the additional condition that each rule of M contains at most one output symbol. Our result generalizes the composition closure for transformations computed by finite-state transducers, for which Condition (iii) can always be achieved [29, Cor. III.6.2].
However, this is not true for linear and nondeleting εtdtts, and we investigate procedures that reduce the number of output symbols per rule.

2 Notation

Let X = {x1 , x2 , . . .} be a fixed set of variables and Xk = {xi | 1 ≤ i ≤ k} for every k ≥ 0. The composition of the relations τ1 ⊆ A × B and τ2 ⊆ B × C is denoted by τ1 ; τ2 . This notation is extended to classes of relations in the obvious way. Ranked alphabets are defined as usual. We use Σk to denote the set of all symbols of rank k in the ranked alphabet Σ. To indicate that σ ∈ Σ has rank k we write σ (k) . Two ranked alphabets Σ and Δ are called compatible if every symbol of Σ ∩ Δ is assigned the same rank in Σ as in Δ. Then, Σ ∪ Δ is again a ranked alphabet. The set of Σ-trees indexed by a set V is denoted by TΣ (V ). We denote by CΣ (V ) ⊆ TΣ ({x1 } ∪ V ) the set of all contexts over Σ indexed by V , which are Σ-trees indexed by {x1 } ∪ V such that the variable x1 occurs exactly once. We abbreviate TΣ (∅) and CΣ (∅) by TΣ and CΣ , respectively. Given L ⊆ TΣ (V ) we denote the set {σ(t1 , . . . , tk ) | σ ∈ Σk , t1 , . . . , tk ∈ L} by Σ(L). Now let t ∈ TΣ (V ). We denote the set of all variables x ∈ X that occur in t by var(t). Finally, for any finite set Y ⊆ X we call a mapping θ : Y → TΣ (V ) a substitution. Such a substitution θ applied to t yields the tree tθ that is


obtained from t by replacing each occurrence of every x ∈ Y by θ(x). If Y = {x1 } and t ∈ CΣ (V ), then we also write t[θ(x1 )] instead of tθ.
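As an illustration (not part of the original paper), the substitution and context operations just defined can be sketched in Python; the tuple encoding of trees and all names below are our own.

```python
def subst(t, theta):
    """Apply a substitution θ: replace each variable leaf x by θ(x)."""
    if isinstance(t, str):                       # a variable leaf such as 'x1'
        return theta.get(t, t)
    label, children = t                          # an inner node σ(t1, ..., tk)
    return (label, [subst(c, theta) for c in children])

# t = σ(x1, γ(x2)) with θ = {x1 ↦ α, x2 ↦ α} yields σ(α, γ(α))
t = ('σ', ['x1', ('γ', ['x2'])])
theta = {'x1': ('α', []), 'x2': ('α', [])}
print(subst(t, theta))
```

For a context C ∈ CΣ, the notation C[u] then corresponds to `subst(C, {'x1': u})`, since a context contains exactly one occurrence of x1.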

3 Top-Down Tree Transducers with ε-Rules

In this section, we recall top-down tree transducers [9,8,30,31]. We slightly change their definition to allow rules that do not consume an input symbol. Such rules are called (input) ε-rules.

Definition 1. A top-down tree transducer with ε-rules (for short: εtdtt) is a system (Q, Σ, Δ, I, R) where:
– Q is a finite set of states (disjoint from Σ and Δ); each state is considered to be of rank 1,
– Σ and Δ are ranked alphabets of input and output symbols, respectively,
– I ⊆ Q is a set of initial states, and
– R is a finite set of rules of the form (i) q(σ(x1, . . . , xk)) → r with q ∈ Q, σ ∈ Σk, and r ∈ TΔ(Q(Xk)), or (ii) q(x1) → r with q ∈ Q and r ∈ TΔ(Q(X1)).

Rules of the form (i) are input-consuming, whereas rules of the form (ii) are ε-rules. We denote the set of input-consuming rules and the set of ε-rules of R by RΣ and Rε, respectively. An εtdtt without ε-rules is called a top-down tree transducer (tdtt). For simplicity, we generally assume that input and output ranked alphabets are compatible (i.e., each encountered symbol has only one rank). For the rest of this section, let M = (Q, Σ, Δ, I, R) be an εtdtt. A rule l → r ∈ R is called linear (respectively, nondeleting) if every variable x ∈ var(l) occurs at most once (respectively, at least once) in r. The εtdtt M is linear (respectively, nondeleting) if all of its rules are so.

Example 2. Let N = ({p}, Σ, Σ, {p}, R) be the εtdtt with Σ = {σ(2), γ(1), α(0)} and the following rules:

p(x1) → γ(p(x1))

p(γ(x1 )) → γ(p(x1 ))

p(σ(x1 , x2 )) → σ(p(x1 ), p(x2 ))

p(α) → α .

Clearly, N is linear and nondeleting. Intuitively, the ε-rule allows the transducer to insert (at any place in the output tree) any number of γ-symbols. The remaining rules just output the input symbol and process the subtrees recursively. The semantics of the εtdtt M is given by rewriting (cf. [21, Definition 2]). Since in the composition of two εtdtts M and N (cf. Definition 13) the εtdtt N will process right-hand sides of rules of M, and such right-hand sides involve symbols of the form q(xi) that are not in the input alphabet of N, we define the rewrite relation also for trees containing symbols not present in Q ∪ Σ ∪ Δ. So let Σ′ and Δ′ be two compatible ranked alphabets such that Σ ⊆ Σ′ and


Δ ⊆ Δ′. Moreover, let l → r ∈ R be a rule, C ∈ CΔ′(Q(TΣ′)) be a context, and θ : var(l) → TΣ′ be a substitution. Then we say that C[lθ] rewrites to C[rθ] using ρ = l → r, denoted by C[lθ] ⇒ρM C[rθ]. For every ζ, ξ ∈ TΔ′(Q(TΣ′)) we write ζ ⇒M ξ if there exists ρ ∈ R such that ζ ⇒ρM ξ. The tree transformation computed by M, denoted by τM, is the relation

τM = {(t, u) ∈ TΣ × TΔ | ∃q ∈ I : q(t) ⇒∗M u}

where ⇒∗M denotes the reflexive, transitive closure of ⇒M. Two εtdtts are equivalent if their computed tree transformations coincide.

Example 3. Consider the εtdtt N of Example 2. To illustrate the use of symbols that are not in Q ∪ Σ, let v be such a new symbol. Then p(γ(v)) ⇒N γ(p(γ(v))) ⇒N γ(γ(p(v))). In a similar way, we have p(γ(α)) ⇒∗N γ(γ(p(α))) ⇒N γ(γ(α)), and consequently, (γ(α), γ(γ(α))) ∈ τN.

We say that M is total if for every q ∈ Q and t ∈ TΣ there exists u ∈ TΔ such that q(t) ⇒∗M u. This property is clearly decidable, which can be seen as follows: we first modify M such that I = {q} and call the resulting εtdtt Mq. It is known [25] that the domain of τMq is a regular tree language, and thus we can decide whether it is TΣ [30,31]. The domain of τMq is TΣ for all states q ∈ Q if and only if M is total. The εtdtt of Example 2 is total, but let us illustrate the notion of totality on another example.

Example 4. We consider the ranked alphabet Σ = {σ(2), γ(1), α(0)} and the εtdtt M = ({q, q1}, Σ, Σ, {q}, R) with the rules

q(x1) → γ(q1(x1))
q(γ(x1)) → γ(q(x1))
q1(σ(x1, x2)) → σ(q(x1), σ(α, q(x2)))
q(α) → α .

Clearly, there is no tree u ∈ TΣ such that q1(α) ⇒∗M u. Thus M is not total. Finally, the class of tree transformations computed by εtdtts is denoted by εTOP. We use 'l' and 'n' to restrict to the transformations computed by linear and nondeleting εtdtts, respectively. For example, ln-εTOP denotes the class of transformations computed by linear, nondeleting εtdtts. We conclude this section by showing that ln-εTOP is not closed under composition. This shows that linear, nondeleting εtdtts are strictly less expressive than the linear, nondeleting recognizable tree transducers of [32], because the latter compute a class of transformations that is closed under composition [32, Theorem 2.4]. Note that the latter have regular extended right-hand sides.

Theorem 5. ln-εTOP ; ln-εTOP ⊈ l-εTOP.

Proof. Consider the two transformations τ1 = {(α, σ(α, α))}

and

τ2 = {(σ(α, α), σ(γ m (α), γ n (α))) | m, n ≥ 0}

where γm(α) abbreviates γ(γ(· · · γ(α) · · · )) containing the symbol γ exactly m times. It is easy to show that τ1 and τ2 are in ln-εTOP. However, the composition τ1 ; τ2 clearly cannot be computed by any linear εtdtt. □
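To make the rewriting semantics of Example 2 concrete, here is a small Python sketch (our own illustration, not code from the paper) that enumerates the trees derivable from p(t). Since the ε-rule p(x1) → γ(p(x1)) admits infinitely many derivations, we bound the number of ε-steps applied at each position; trees are encoded as nested tuples.

```python
def outputs(t, eps):
    """All trees u with p(t) ⇒* u in the εtdtt N of Example 2,
    allowing at most `eps` applications of the ε-rule per position."""
    res = set()
    if eps > 0:                                  # ε-rule: p(x1) → γ(p(x1))
        res |= {('γ', u) for u in outputs(t, eps - 1)}
    if t[0] == 'γ':                              # p(γ(x1)) → γ(p(x1))
        res |= {('γ', u) for u in outputs(t[1], eps)}
    elif t[0] == 'σ':                            # p(σ(x1,x2)) → σ(p(x1),p(x2))
        res |= {('σ', u1, u2)
                for u1 in outputs(t[1], eps)
                for u2 in outputs(t[2], eps)}
    elif t == ('α',):                            # p(α) → α
        res.add(('α',))
    return res

# (γ(α), γ(γ(α))) ∈ τ_N: one extra γ is inserted by the ε-rule
print(('γ', ('γ', ('α',))) in outputs(('γ', ('α',)), 2))   # True
```

Every tree in `outputs(t, k)` is genuinely in the image of t under τN, so such a bounded search suffices for membership tests like the one in Example 3.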

4 Separating Output Symbols

Next we define two normal forms of εtdtts. One will be the normal form we already mentioned, in which each rule contains at most one output symbol. For our composition results in the next section, we will require the first transducer to be in this normal form. This normal form can always be achieved, but in general linearity is not preserved (see Theorem 11). The second normal form is less restricted, viz. we only demand that no (nontrivial) context can be separated from any right-hand side of a rule. This normal form can always be achieved (see Theorem 8) while preserving linearity or nondeletion. The underlying algorithm is valuable because on some input transducers it achieves 1-symbol normal form.

[Fig. 1. The two forms of right-hand sides that are forbidden in a maximally output-separated εtdtt]

Definition 6 (cf. [19, Definition 4]). Let M = (Q, Σ, Δ, I, R) be an εtdtt.
– It is in 1-symbol normal form if r ∈ Q(X) ∪ Δ(Q(X)) for every l → r ∈ R.
– It is maximally output-separated if for every l → r ∈ R: (i) r ≠ C[r′] for every context C ∈ CΔ \ {x1} and r′ ∈ Δ(TΔ(Q(X))), and (ii) r ≠ D[C[q(x)]] for every context D ∈ CΔ(Q(X)) \ {x1} potentially containing states, context C ∈ CΔ \ {x1}, state q ∈ Q, and variable x ∈ X.

By definition, every εtdtt in 1-symbol normal form is maximally output-separated. Let us explain the normal forms in more detail. The right-hand side of a rule of an εtdtt in 1-symbol normal form is either a state followed by a subtree variable (like q(x1)) or a single output symbol at the root with states as children (like δ(p(x1), q(x2)) or α). The weaker property of maximal output-separation only demands that no nontrivial context C without states can be split from the right-hand side r of a rule.
More precisely, Condition (i) states that C cannot be taken off from r such that at least one output symbol remains, and Condition (ii) states that C[q(x)] may not form a proper subtree of r (see Figure 1).

Example 7. Let M = ({q}, Σ, Σ, {q}, R) be the linear, nondeleting, and total εtdtt with Σ = {σ(2), γ(1), α(0)} and the following rules:

q(σ(x1, x2)) → γ(σ(q(x1), σ(α, q(x2))))  (ρ1)
q(γ(x1)) → γ(q(x1))  (ρ2)
q(α) → α .  (ρ3)


This εtdtt is not maximally output-separated because the right-hand side of ρ1 can be decomposed into the context γ(x1) and the subtree σ(q(x1), σ(α, q(x2))), which violates Condition (i) of Definition 6. Further, the subtree σ(α, q(x2)) can be decomposed into the context σ(α, x1) and the subtree q(x2), which violates Condition (ii) of Definition 6. Note that the rules ρ2 and ρ3 satisfy Conditions (i) and (ii) because they have only one output symbol. It was formally verified in [33] that for every linear and nondeleting εtdtt there exists an equivalent linear and nondeleting εtdtt that is maximally output-separated. Here we extend this result and show that, in general, every εtdtt can be transformed into an equivalent maximally output-separated εtdtt.

Theorem 8. For every εtdtt M we can effectively construct an equivalent maximally output-separated εtdtt N. Moreover, if M is linear (respectively, nondeleting), then so is N.

Proof. Let M = (Q, Σ, Δ, I, R) be an εtdtt. We will present an iterative procedure that transforms M into an equivalent maximally output-separated εtdtt. Suppose that M is not maximally output-separated; otherwise the procedure terminates. Thus, there exist a rule l → r ∈ R and (i) a context C ∈ CΔ \ {x1} and r′ ∈ Δ(TΔ(Q(X))) such that r = C[r′], or (ii) a context D ∈ CΔ(Q(X)) \ {x1} potentially containing states, a context C ∈ CΔ \ {x1}, a state q ∈ Q, and a variable x ∈ X such that r = D[C[q(x)]]. Let p ∉ Q be a new state and l = p′(l′) for some p′ ∈ Q and l′ ∈ TΣ(X). We construct the εtdtt M′ = (Q ∪ {p}, Σ, Δ, I, R′) with R′ = (R \ {l → r}) ∪ R″ where:
– in case (i), R″ contains the two rules p′(x1) → C[p(x1)] and p(l′) → r′;
– otherwise, R″ contains the two rules l → D[p(x)] and p(x1) → C[q(x1)].

Note that M′ is linear (respectively, nondeleting) if M is. Moreover, we observe that each new right-hand side (i.e., C[p(x1)], r′, D[p(x)], and C[q(x1)]) contains strictly fewer output symbols than r.
Hence, this procedure can only be iterated finitely often and eventually must yield a maximally output-separated εtdtt N. The proof that M and M′ are equivalent (i.e., τM = τM′) is straightforward and omitted here. □

Example 9. Recall the linear, nondeleting, and total εtdtt M of Example 7, which is not maximally output-separated. After applying the first item of the construction to rule ρ1, we obtain the εtdtt shown in Example 4. Since this is still not maximally output-separated, we apply the second item of the construction to the rule q1(σ(x1, x2)) → σ(q(x1), σ(α, q(x2))) and obtain the εtdtt M′ = ({q, q1, q2}, Σ, Σ, {q}, R′) with rules:

q(x1) → γ(q1(x1))
q(γ(x1)) → γ(q(x1))
q(α) → α
q1(σ(x1, x2)) → σ(q(x1), q2(x2))
q2(x1) → σ(α, q(x1))


The linear and nondeleting εtdtt M′ is not in 1-symbol normal form but maximally output-separated. Note that M′ is not total. Whereas the normalization of an εtdtt to one that is maximally output-separated preserves linearity and nondeletion, there exist εtdtts (in fact, even linear, nondeleting tdtts) that admit no equivalent linear, nondeleting εtdtt in 1-symbol normal form.

Example 10. Let M = ({q}, Σ, Δ, {q}, R) be the linear, nondeleting, and total tdtt such that Σ = {σ(2), α(0)}, Δ = {δ(3), α(0)}, and R contains the two rules

q(σ(x1, x2)) → δ(q(x1), q(x2), α)
q(α) → α .

Clearly, M is maximally output-separated, but more interestingly, there exists no equivalent linear or nondeleting εtdtt N in 1-symbol normal form. To illustrate this, let N be an equivalent εtdtt in 1-symbol normal form. Obviously, it must have a rule ρ = l → r that generates the δ in the output. This rule ρ must contain three variables in r, which proves that N is not linear because l is either p(x1), p(σ(x1, x2)), or p(α) for some state p. Now suppose that there are states q1, q2, q3 and z1, z2, z3 ∈ X2 such that r = δ(q1(z1), q2(z2), q3(z3)). Then q3(t) ⇒∗N α for every t ∈ TΣ. Taking t = σ(α, α), it is obvious that this cannot be achieved by a nondeleting εtdtt.

The previous example shows that linearity and nondeletion must be sacrificed to obtain 1-symbol normal form. Indeed we will show in the next theorem that for every εtdtt there exists an equivalent εtdtt in 1-symbol normal form. Naturally, the obtained εtdtt is, in general, nonlinear and deleting (and typically also non-total). Unfortunately, this limits the application in practice because some important operations cannot be applied to nonlinear εtdtts (e.g., computation of the range, image of a regular tree language, etc.).¹

Theorem 11 (cf. [19, Theorem 5]). For every εtdtt we can effectively construct an equivalent εtdtt in 1-symbol normal form.

Proof. We present a procedure that iteratively yields the desired εtdtt. Let M = (Q, Σ, Δ, I, R) be an εtdtt that is not in 1-symbol normal form. Thus there exists a rule l → r ∈ R such that r contains at least two output symbols (i.e., r ∉ Q(X) ∪ Δ(Q(X))). Consequently, let l = p(l′) for some p ∈ Q and l′ ∈ TΣ(X), and let r = δ(r1, . . . , rk) for some k ≥ 1, δ ∈ Δk, and r1, . . . , rk ∈ TΔ(Q(X)). Let q1, . . . , qk be k new distinct states. We construct the εtdtt M′ = (Q′, Σ, Δ, I, R′) where Q′ = Q ∪ {q1, . . . , qk}, R′ = (R \ {l → r}) ∪ {ρ, ρ1, . . . , ρk}, and

ρ = p(x1) → δ(q1(x1), . . . , qk(x1))
ρ1 = q1(l′) → r1, . . . , ρk = qk(l′) → rk .

¹ We thus suggest the following normalization procedure: first transform into maximally output-separated normal form and only then apply the transformation into 1-symbol normal form, if still necessary.


Clearly, the right-hand sides of the new rules all contain fewer output symbols than r. Thus, the iteration of the above procedure must eventually terminate with an εtdtt N in 1-symbol normal form. The proof that M and M′ are equivalent (i.e., τM = τM′) is similar to the corresponding proof of Theorem 8. □

Example 12. Recall the linear and nondeleting εtdtt M′ from Example 9. The interesting rule is q2(x1) → σ(α, q(x1)). According to the construction in the proof of Theorem 11, we replace this rule by the rules q2(x1) → σ(q3(x1), q4(x1)), q3(x1) → α, and q4(x1) → q(x1). Thus we obtain the εtdtt M″ = ({q, q1, q2, q3, q4}, Σ, Σ, {q}, R″) with rules:

q(x1) → γ(q1(x1))
q(γ(x1)) → γ(q(x1))
q(α) → α
q1(σ(x1, x2)) → σ(q(x1), q2(x2))
q2(x1) → σ(q3(x1), q4(x1))
q3(x1) → α
q4(x1) → q(x1)

Note that M″ is in 1-symbol normal form, but nonlinear, deleting, and non-total.
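The single splitting step used in the proof of Theorem 11 is easy to mechanize. The following Python sketch (our own rule encoding; all names are illustrative and not from the paper) replaces a rule q(l′) → δ(r1, …, rk) by a head rule and k tail rules, reproducing the replacement of q2(x1) → σ(α, q(x1)) from Example 12. A right-hand side is either a state call ('call', q, x) or an output symbol with a list of children.

```python
def count_out(r):
    """Number of output symbols in a right-hand side."""
    if r[0] == 'call':
        return 0
    _, children = r
    return 1 + sum(count_out(c) for c in children)

def split(rule, fresh_states):
    """One step of the Theorem 11 construction: a rule is (state, pattern, rhs)."""
    state, pattern, rhs = rule
    sym, children = rhs                          # rhs = δ(r1, ..., rk), k ≥ 1
    qs = fresh_states[:len(children)]            # k new distinct states
    head = (state, 'x1', (sym, [('call', q, 'x1') for q in qs]))
    tails = [(q, pattern, r) for q, r in zip(qs, children)]
    return [head] + tails

# Example 12: q2(x1) → σ(α, q(x1)) is replaced by three rules
rule = ('q2', 'x1', ('σ', [('α', []), ('call', 'q', 'x1')]))
for r in split(rule, ['q3', 'q4']):
    print(r)
```

Iterating `split` on any rule whose right-hand side has more than one output symbol terminates, since each new right-hand side contains strictly fewer output symbols — exactly the argument of the proof above.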

5 Composition Construction

This section is devoted to the compositions of tree transformations computed by εtdtts. We start by recalling the classical composition result for tdtts [28,27]. Let M and N be tdtts. If (i) M is deterministic or N is linear, and (ii) M is total or N is nondeleting, then τM ; τN can be computed by a tdtt. Here we focus on εtdtts, so let M and N be εtdtts. Since deterministic transducers have only few applications in natural language processing, we modify Condition (i) to: (i) N is linear. We will prove that if Conditions (i) and (ii) are fulfilled and, additionally, (iii) M is in 1-symbol normal form, then there exists an εtdtt, denoted by M ; N, that computes τM ; τN. Now we define the composition M ; N of M and N. Our principal approach coincides with the one of [27,19]. In fact, if M and N are tdtts, then M ; N is equal to the composition of M and N as defined in [27, p. 195]. To simplify the construction of M ; N, we make the following conventions valid. Let M = (Q, Σ, Γ, IM, RM) and N = (P, Γ, Δ, IN, RN) be εtdtts such that all involved ranked alphabets are compatible. Moreover, we do not distinguish between the trees p(q(t)) and (p, q)(t) where p ∈ P and q ∈ Q. Recall that we have defined the rewrite relation of an εtdtt on trees that may contain symbols not present in that transducer.

Definition 13 (cf. [19, Definition 9]). Let M be in 1-symbol normal form. The composition of M and N is the εtdtt

M ; N = (P × Q, Σ, Δ, IN × IM, R′M ∪ R′N ∪ R′M;N)

where:

R′M = {p(l) → p(r) | p ∈ P, l → r ∈ RM, r ∈ Q(X)}
R′N = {l[q(x1)] → r[q(x1)] | q ∈ Q, l → r ∈ RεN}
R′M;N = {p(l) → r′ | p ∈ P, l → r ∈ RM, ∃ρ ∈ RΓN : p(r) ⇒ρN r′} .

Let us discuss the three sets R′M, R′N, and R′M;N in detail. The set R′M contains variants of all rules of RM that do not contain any output symbol. For each state p ∈ P, such a variant is obtained by annotating the two states (in the left- and right-hand side) by p (i.e., replacing q by (p, q)). The set R′N contains variants of the ε-rules of RN; this time annotated with a state q ∈ Q. This is illustrated in Example 14. Finally, the set R′M;N contains rules that are obtained by processing the right-hand side of a rule of RM that contains an output symbol by an input-consuming rule of RN. Since M is in 1-symbol normal form, each rule of RM has at most one output symbol and the rule of RN will consume it. The construction trivially preserves linearity and nondeletion. Let us illustrate the construction by composing our running example with the transducer of Example 2.
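The construction of the ε-rule variants — copying each ε-rule of N once per state q of M and replacing every state p′ by the product state (p′, q) — is purely mechanical. The following Python sketch (our own tuple encoding, not code from the paper) shows it for the ε-rule of Example 2 and two states of M:

```python
def annotate(t, q):
    """Replace every state call p(x) in t by the product state (p, q)(x)."""
    if t[0] == 'call':
        _, p, x = t
        return ('call', (p, q), x)
    label, children = t
    return (label, [annotate(c, q) for c in children])

def eps_variants(eps_rule, states_of_M):
    """Variants of an ε-rule p(x1) → rhs of N, one per state q of M."""
    p, rhs = eps_rule
    return [(('call', (p, q), 'x1'), annotate(rhs, q)) for q in states_of_M]

# ε-rule p(x1) → γ(p(x1)) of Example 2, annotated with two states of M
for lhs, rhs in eps_variants(('p', ('γ', [('call', 'p', 'x1')])), ['q', 'q1']):
    print(lhs, '→', rhs)
```

Running this over all states of M yields exactly the family of rules (p, q′)(x1) → γ((p, q′)(x1)) described in the worked example below in the text.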

Example 14. We consider the εtdtt M″ = (Q, Σ, Σ, {q}, R″) of Example 12 (with Q = {q, q1, q2, q3, q4}) and the εtdtt N = ({p}, Σ, Σ, {p}, R) of Example 2. Note that M″ is in 1-symbol normal form and N is linear and nondeleting. Here we only discuss some of the rules of M″ ; N that are needed to process the input t = σ(γ(α), α). Since p(x1) → γ(p(x1)) is in RεN, we have that

{(p, q′)(x1) → γ((p, q′)(x1)) | q′ ∈ Q}

is a subset of R′N. Moreover, by p(q(x1)) ⇒M″ p(γ(q1(x1))) ⇒N γ(p(q1(x1))) we obtain the rule (p, q)(x1) → γ((p, q1)(x1)) in R′M″;N. Finally, we have

p(q1(σ(x1, x2))) ⇒M″ p(σ(q(x1), q2(x2))) ⇒N σ(p(q(x1)), p(q2(x2)))

and thus the rule (p, q1)(σ(x1, x2)) → σ((p, q)(x1), (p, q2)(x2)) in R′M″;N. Altogether, we obtain this potential derivation, where we write ⇒ for ⇒M″;N:

(p, q)(t) ⇒ γ((p, q)(t)) ⇒ γ(γ((p, q1)(t))) ⇒ γ(γ(σ((p, q)(γ(α)), (p, q2)(α))))

Continuing also with rules we did not explicitly show, we obtain

⇒ γ(γ(σ(γ((p, q)(α)), (p, q2)(α)))) ⇒ γ(γ(σ(γ(α), (p, q2)(α))))
⇒ γ(γ(σ(γ(α), γ((p, q2)(α))))) ⇒ γ(γ(σ(γ(α), γ(α)))) .

Next, we will consider the correctness of our composition construction of Definition 13. To this end, we prove that τM ; τN = τM;N provided that (i) N is linear, (ii) M is total or N is nondeleting, and (iii) M is in 1-symbol normal form. Henceforth we assume the notation of Definition 13.² We start with the easy direction and show that every derivation that first uses exclusively derivation steps of M and then only steps of N can be simulated by a derivation of M ; N.

² The following lemmas exhibit an interesting symmetry to the bottom-up case described in [19]. Roughly speaking, the preconditions of Lemmas 15 and 16 are exchanged in the bottom-up case.


Lemma 15 (cf. [19, Lemma 12]). Let ζ ∈ P(Q(TΣ)), ξ ∈ P(TΓ), and u ∈ TΔ be such that ζ ⇒∗M ξ ⇒∗N u. If M is in 1-symbol normal form, then ζ ⇒∗M;N u. In particular, τM ; τN ⊆ τM;N.

Proof. The proof can be done by induction on n in the derivation ζ ⇒nM ξ ⇒∗N u, where ⇒nM is the n-fold composition of ⇒M. In the induction step, we have to consider two situations, namely whether the first applied rule (of RM) creates an output symbol or not. If it does not, then we can immediately use the induction hypothesis to conclude ζ ⇒∗M;N u. Otherwise, we need to identify the rules (potentially many because N might copy with the help of ε-rules) in ξ ⇒∗N u that process the created output symbol. We then apply those rules immediately and use the induction hypothesis. □

Lemma 16 (cf. [19, Lemma 11]). Let ζ ∈ P(Q(TΣ)) and u ∈ TΔ be such that ζ ⇒∗M;N u. If (i) N is linear and (ii) M is total or N is nondeleting, then ζ (⇒∗M ; ⇒∗N) u. In particular, τM;N ⊆ τM ; τN.

Proof. We prove the statement by induction on n in the derivation ζ ⇒nM;N u. Clearly, there are three types (R′M, R′N, and R′M;N) of rules to distinguish. The case R′M is trivial. In the other two cases, we defer the derivation step using a rule of RN. This can only be done if (i) N is linear (because otherwise N might copy redexes of M) and (ii) M is total or N is nondeleting (because otherwise M can get stuck on a redex that originally was deleted by N). □

Theorem 17. Let M and N be εtdtts. If (i) N is linear, (ii) M is total or N is nondeleting, and (iii) M is in 1-symbol normal form, then M ; N computes τM ; τN.

Proof. The theorem follows directly from Lemmas 15 and 16. □

The reader might wonder why we keep requirement (iii), although it can always be achieved using Theorem 11. First, totality is not preserved by Theorem 11, and second, if we consider linear εtdtts M and N, then M ; N is also linear. The latter fact can, for example, be used to show that ln-εTOP is closed under left-composition with relabelings [30,31], which are computed by linear, nondeleting top-down tree transducers that are always in 1-symbol normal form. In addition, the composition closure of finite-state transductions follows from the linear variant because finite-state transducers can be brought into 1-symbol normal form using the procedure for maximal output-separation.³ We note that Theorem 17 generalizes the main theorem of [33], which states that a linear and nondeleting εtdtt in 1-symbol normal form can be composed with a linear and nondeleting εtdtt.

Corollary 18. ln-εTOP ; · · · ; ln-εTOP ⊂ εTOP.

Proof. Inclusion is immediate by Theorems 11 and 17. Strictness follows from the fact that not all transformations of εTOP preserve regularity whereas all transformations of ln-εTOP do [30,25]. □

³ In principle, we could also use the procedure of Theorem 11, but we would need to prove that, in this special case, linearity and nondeletion are actually preserved.

6 Conclusions and Open Problems
In this paper we have considered εtdtts, which straightforwardly generalize finite-state (string) transducers, and thus, technology developed for the latter model can be embedded into the former one. We have proved that, for two εtdtts M and N, the composition of τM and τN can be computed by one εtdtt provided that (i) N is linear, (ii) M is total or N is nondeleting, and (iii) M is in 1-symbol normal form. This generalizes Baker's composition result for nondeterministic top-down tree transducers. Moreover, our investigation of compositions of εtdtts might give some insight into how to solve the open problem stated in [34,35]: find a tree transducer model that is expressive, modular, inclusive, and trainable. Another open problem (stated by one of the referees) is the following: given a tree transducer type (such as linear, nondeleting εtdtts) whose class of transformations is not closed under composition (such as ln-εTOP) but ideally has the other three properties (of the list mentioned above), develop an algorithm that accepts two transducers of that type and determines whether their composition can be captured by the same type of transducer (and, if it can, returns such a transducer).

Acknowledgments. The authors gratefully acknowledge the support of the research group around Kevin Knight (ISI/USC). In addition, the authors would like to thank Carmen Heger, who researched compositions of linear and nondeleting εtdtts in [33]. Also, we thank the referees for helpful suggestions, which improved the readability of the paper.

References

1. Hopcroft, J.E., Ullman, J.D.: Introduction to Automata Theory, Languages, and Computation. Addison-Wesley, Reading (1979)
2. Mohri, M.: Weighted automata algorithms. In: Droste, M., Kuich, W., Vogler, H. (eds.) Handbook of Weighted Automata, pp. 209–252. Springer, Heidelberg (2009)
3. Mohri, M., Pereira, F.C.N., Riley, M.: The design principles of a weighted finite-state transducer library. Theoret. Comput. Sci. 231(1), 17–32 (2000)
4. Kaplan, R.M., Kay, M.: Regular models of phonological rule systems. Computational Linguistics 20(3), 331–378 (1994)
5. Kanthak, S., Ney, H.: FSA: an efficient and flexible C++ toolkit for finite state automata using on-demand computation. In: Proc. ACL, pp. 510–517 (2004)
6. Graehl, J.: Carmel: finite-state toolkit. ISI/USC (1997), http://www.isi.edu/licensed-sw/carmel/
7. Allauzen, C., Riley, M., Schalkwyk, J., Skut, W., Mohri, M.: OpenFst — a general and efficient weighted finite-state transducer library. In: Holub, J., Žďárek, J. (eds.) CIAA 2007. LNCS, vol. 4783, pp. 11–23. Springer, Heidelberg (2007)
8. Rounds, W.C.: Mappings and grammars on trees. Math. Syst. Theory 4(3), 257–287 (1970)
9. Thatcher, J.W.: Generalized² sequential machine maps. J. Comput. Syst. Sci. 4(4), 339–367 (1970)
10. Arnold, A., Dauchet, M.: Transductions inversibles de forêts. Thèse 3ème cycle M. Dauchet, Université de Lille (1975)
11. Arnold, A., Dauchet, M.: Bi-transductions de forêts. In: Proc. ICALP, pp. 74–86. Cambridge University Press, Cambridge (1976)


12. Graehl, J., Knight, K., May, J.: Training tree transducers. Computational Linguistics 34(3), 391–427 (2008)
13. Schabes, Y.: Mathematical and computational aspects of lexicalized grammars. PhD thesis, University of Pennsylvania (1990)
14. Shieber, S.M., Schabes, Y.: Synchronous tree-adjoining grammars. In: Proc. ACL, pp. 253–258 (1990)
15. Lilin, E.: Une généralisation des transducteurs d'états finis d'arbres: les S-transducteurs. Thèse 3ème cycle, Université de Lille (1978)
16. Lilin, E.: Propriétés de clôture d'une extension de transducteurs d'arbres déterministes. In: Astesiano, E., Böhm, C. (eds.) CAAP 1981. LNCS, vol. 112, pp. 280–289. Springer, Heidelberg (1981)
17. Fülöp, Z., Kühnemann, A., Vogler, H.: A bottom-up characterization of deterministic top-down tree transducers with regular look-ahead. Inf. Process. Lett. 91(2), 57–67 (2004)
18. Fülöp, Z., Kühnemann, A., Vogler, H.: Linear deterministic multi bottom-up tree transducers. Theoret. Comput. Sci. 347(1-2), 276–287 (2005)
19. Engelfriet, J., Lilin, E., Maletti, A.: Extended multi bottom-up tree transducers. In: Ito, M., Toyama, M. (eds.) DLT 2008. LNCS, vol. 5257, pp. 289–300. Springer, Heidelberg (2008)
20. Shieber, S.M.: Synchronous grammars as tree transducers. In: Proc. TAG+7, pp. 88–95 (2004)
21. Maletti, A.: Compositions of extended top-down tree transducers. Inf. Comput. 206(9-10), 1187–1196 (2008)
22. Knight, K., Graehl, J.: An overview of probabilistic tree transducers for natural language processing. In: Gelbukh, A. (ed.) CICLing 2005. LNCS, vol. 3406, pp. 1–24. Springer, Heidelberg (2005)
23. Knight, K., May, J.: Applications of weighted automata in natural language processing. In: Droste, M., Kuich, W., Vogler, H. (eds.) Handbook of Weighted Automata, pp. 555–580. Springer, Heidelberg (2009)
24. May, J., Knight, K.: Tiburon: a weighted tree automata toolkit. In: Ibarra, O.H., Yen, H.-C. (eds.) CIAA 2006. LNCS, vol. 4094, pp. 102–113. Springer, Heidelberg (2006)
25. Maletti, A., Graehl, J., Hopkins, M., Knight, K.: The power of extended top-down tree transducers. SIAM J. Comput. 39(2), 410–430 (2009)
26. Yamada, K., Knight, K.: A decoder for syntax-based statistical MT. In: Proc. ACL, pp. 303–310 (2002)
27. Baker, B.S.: Composition of top-down and bottom-up tree transformations. Inform. Control 41(2), 186–213 (1979)
28. Engelfriet, J.: Bottom-up and top-down tree transformations — a comparison. Math. Systems Theory 9(3), 198–231 (1975)
29. Berstel, J.: Transductions and Context-Free Languages. Teubner, Stuttgart (1979)
30. Gécseg, F., Steinby, M.: Tree Automata. Akadémiai Kiadó, Budapest (1984)
31. Gécseg, F., Steinby, M.: Tree languages. In: Handbook of Formal Languages, vol. 3, pp. 1–68. Springer, Heidelberg (1997)
32. Kuich, W.: Full abstract families of tree series I. In: Jewels Are Forever, pp. 145–156. Springer, Heidelberg (1999)
33. Heger, C.: Composition of linear and nondeleting top-down tree transducers with ε-rules. Master's thesis, TU Dresden (2008)
34. Knight, K.: Capturing practical natural language transformations. Machine Translation 21(2), 121–133 (2007)
35. Knight, K.: Requirements on an MT system. Personal communication (2008)

Reducing Nondeterministic Finite Automata with SAT Solvers

Jaco Geldenhuys, Brink van der Merwe, and Lynette van Zijl
Department of Computer Science, Stellenbosch University, Private Bag X1, 7602 Matieland, South Africa
{jaco,abvdm,lynette}@cs.sun.ac.za

Abstract. We consider the problem of reducing the number of states of nondeterministic finite automata, and show how to encode the reduction as a Boolean satisfiability problem. This approach improves on previous work by reducing a more general class of automata. Experimental results show that it produces a minimal automaton in almost all cases and that the running time compares favourably to the Kameda-Weiner algorithm.

1 Introduction

The use of nondeterministic finite automata (NFAs) in natural language processing is well known [15]. NFAs provide an intuitive model for many natural language problems and, in addition, have potentially fewer states than deterministic finite automata (DFAs). However, in large, real-world applications, NFAs can be prohibitively large, and unfortunately NFA state minimization is PSPACE-complete [8], and NP-complete even for severely restricted automata [11].

This paper considers NFA state reduction from a new viewpoint. Our goal is to solve the problem for practical settings within a limited time. We encode the NFA reduction problem as a satisfiability problem, and use a SAT solver to determine a solution that can be turned back into a reduced, equivalent NFA.

Furthermore, we are interested in generalized nondeterministic automata. A ◦-NFA M = (S, Σ, Δ, ŝ, F, ◦) is a tuple where S is a finite set of states, Σ is a finite alphabet, ŝ ∈ S is the initial state, F ⊆ 2^S is a set of final state sets, Δ ⊆ S × Σ × S is the set of transitions, and ◦ is a binary associative and commutative set operation. Define d : S × Σ → 2^S by d(s, a) = {s′ | (s, a, s′) ∈ Δ}, define e : 2^S × Σ → 2^S by e(X, a) = ◦_{s∈X} d(s, a), and define f : 2^S × Σ* → 2^S by f(X, ε) = X and f(X, wa) = e(f(X, w), a) for w ∈ Σ* and a ∈ Σ. A ◦-NFA M accepts a word w if and only if f({ŝ}, w) ∈ F. In this framework, a traditional NFA is a ∪-NFA. The work presented here is the first known reduction solution for this class of automata.

In the rest of the paper, we consider related work in Sect. 2, describe the details of the reduction algorithm in Sect. 3, and follow with an example in Sect. 4. We consider the degree of reduction in Sect. 5, present experimental results in Sect. 6, and conclude in Sect. 7.

A. Yli-Jyrä et al. (Eds.): FSMNLP 2009, LNAI 6062, pp. 81–92, 2010. © Springer-Verlag Berlin Heidelberg 2010
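The ◦-NFA acceptance defined above is straightforward to prototype. The sketch below is our own illustration (the function name, automaton, and data are ours, not from the paper); it implements d, e, and f with a pluggable set operation ◦, so that set.union yields a traditional NFA and set.symmetric_difference a ⊕-NFA.

```python
from functools import reduce

def make_acceptor(delta, start, finals, op):
    """◦-NFA acceptance: delta maps (state, symbol) -> set of states,
    finals is a set of frozensets (the accepting state *sets*, F ⊆ 2^S),
    op is the binary set operation ◦ (e.g. set.union)."""
    def d(s, a):
        return delta.get((s, a), set())
    def e(X, a):
        sets = [d(s, a) for s in X]
        return reduce(op, sets) if sets else set()
    def accepts(word):
        X = {start}                      # f({ŝ}, ε) = {ŝ}
        for a in word:
            X = e(X, a)                  # f(X, wa) = e(f(X, w), a)
        return frozenset(X) in finals
    return accepts

# Illustrative two-state automaton over {a}: 1 -a-> {1, 2}, 2 -a-> {2}
delta = {(1, 'a'): {1, 2}, (2, 'a'): {2}}
union_nfa = make_acceptor(delta, 1, {frozenset({1, 2})}, set.union)
xor_nfa = make_acceptor(delta, 1, {frozenset({1})},
                        set.symmetric_difference)
```

The two acceptors disagree already on the word "a": under union the reachable set is {1, 2}, while under symmetric difference the sets {1, 2} and {2} cancel on the second step, so "aa" reaches {1}.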

2 Related Work

We briefly mention previous approaches to NFA minimization/reduction, some of which restrict the NFAs under consideration to, for example, acyclic NFAs or unary NFAs, while others consider reduction instead of minimization, as we do.

In the case of the minimization of general NFAs (without any restrictions), there has been some interest since the early 1970s. Indermark [7], Kameda and Weiner [9], Carrez [2], Kim [10], Melnikov [13], and Polák [17] all consider the problem, with Kameda and Weiner's work the classic solution. Almost all of this work is based on a so-called "universal automaton" of which the minimal NFA is a subautomaton. Polák describes some of the relationships between proposed universal automata from a more algebraic perspective.

For large practical applications, it can help to sacrifice accuracy for speed. Algorithms to find reduced but not minimal NFAs within a reasonable time have been investigated, for instance by Matz and Potthoff [12], Champarnaud and Coulon [3], and Ilie and Yu [5,6]. Another approach is to target specific subsets of NFAs that occur in practical applications. For example, the minimization problem for acyclic automata (or, equivalently, finite languages) was considered by Daciuk et al. [4], Mihov [14], and Amilhastre, Janssen, and Vilarem [1].

The minimization of unary ⊕-NFAs (symmetric difference NFAs) is considered in [19]. Classical solutions to minimization are shown not to apply, as these techniques depend on finding the dual of a given NFA, which is often impossible for a ⊕-NFA, since its states do not have well-defined predecessors. Hence, the efficient minimization of ◦-NFAs in the general case has not yet been solved, which makes the technique presented here particularly interesting.

3 Reduction Algorithm

The new reduction algorithm is shown in Fig. 1. For a given input NFA M, it starts by calculating the minimal equivalent DFA M+. This determines the smallest size n for any NFA that could possibly be equivalent to M. The algorithm iterates over the sizes n, n + 1, n + 2, ... until it either finds a small equivalent NFA or reaches the size of M (or of M+ if it is smaller). The exact ◦-NFA nature of M (whether it is a ∪-, ∩-, ⊕-NFA, or something else) is used in Step 1 to choose the appropriate determinization, and by changing some details in the construction of P in Step 5, the user can control the ◦-NFA nature of the output NFA. The rest of this section describes Step 5 in detail.

Suppose that DFA M+ has m = |S+| states and a = |Σ| alphabet symbols. The final states and transitions of M+ can be represented in tabular form:

    Final  State  α1   α2   ···  αa
    f1     1      z11  z12  ···  z1a
    f2     2      z21  z22  ···  z2a
    ⋮      ⋮      ⋮    ⋮    ⋱    ⋮
    fm     m      zm1  zm2  ···  zma

Algorithm. SATReduce
Input: NFA M = (S, Σ, Δ, ŝ, F)
 1. determinize & minimize M −→ DFA M+ = (S+, Σ, Δ+, ŝ+, F+)
 2. n := ⌈log2 |S+|⌉
 3. while n < min(|S|, |S+|) do
 4.     construct symbolic NFA M? = (S?, Σ, Δ?, ŝ?, F?) with n = |S?|
 5.     construct SAT problem P to describe M+ = determinize(M?)
 6.     apply a SAT solver to P
 7.     if P is satisfiable then extract and return M?
 8.     n := n + 1
 9. endwhile
10. return M (if n = |S|) or M+ (if n = |S+|)

Fig. 1. The reduction algorithm
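For intuition, the loop in Fig. 1 can be mimicked without a SAT solver by brute-force enumeration of candidate ∪-NFAs, testing equivalence only on words up to a fixed length. This is purely illustrative, our own code with our own names, exponentially slower than the SAT encoding, and only feasible for toy sizes.

```python
from itertools import product

def equivalent_upto(acc1, acc2, sigma, k):
    """Bounded language-equivalence check on all words of length <= k."""
    words = [''.join(w) for l in range(k + 1)
             for w in product(sigma, repeat=l)]
    return all(acc1(w) == acc2(w) for w in words)

def nfa_acceptor(sigma, delta, start, finals):
    def accepts(word):
        X = {start}
        for a in word:
            X = {q for s in X for q in delta.get((s, a), ())}
        return bool(X & finals)
    return accepts

def brute_reduce(target_accepts, sigma, max_n, k=6):
    """Stand-in for Steps 3-8 of SATReduce: instead of a SAT solver,
    enumerate every n-state ∪-NFA and test bounded equivalence."""
    for n in range(1, max_n + 1):
        states = range(n)
        cells = [(s, a) for s in states for a in sigma]
        for choice in product(*[range(2 ** n)] * len(cells)):
            delta = {cell: {q for q in states if bits >> q & 1}
                     for cell, bits in zip(cells, choice)}
            for fbits in range(2 ** n):
                finals = {q for q in states if fbits >> q & 1}
                cand = nfa_acceptor(sigma, delta, 0, finals)
                if equivalent_upto(target_accepts, cand, sigma, k):
                    return n, delta, finals
    return None
```

For the language of all words over {a}, for example, the search stops at a single-state automaton with a self-loop.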

Each of the variables fi is Boolean (either 0 or 1), and each of the variables zjk is a state number in the range 0...m. When zjk = 0 it means that there is no αk-labelled transition leaving state j; otherwise, (j, αk, zjk) ∈ Δ+. The exact values of these variables are known, since M is given as input and Step 1 calculates a unique M+.

The algorithm now constructs an NFA M? with n states that is assumed, for the moment, to be equivalent to the input M. If the assumption is false, the SAT problem P will be unsatisfiable. No actual computation is involved in this step.

    Final  Initial  State  α1   α2   ···  αa
    g1     e1       1      X11  X12  ···  X1a       (1)
    g2     e2       2      X21  X22  ···  X2a
    ⋮      ⋮        ⋮      ⋮    ⋮    ⋱    ⋮
    gn     en       n      Xn1  Xn2  ···  Xna

As before, the gi and the ei are Boolean variables, and Xjk can be viewed either as a subset of {1...n} or, equivalently, as a bitset Xjk = xjk1 xjk2 ... xjkn, where each of the xjki is once again a Boolean variable. Note that the only "guessing" is that the value of n is large enough. The values of fi and zjk (the knowns) will be used to determine the values of gi, ei, and Xjk (the unknowns). The next step is at the heart of the process. The algorithm implicitly determinizes M? by the subset construction:

    Final  State  α1   α2   ···  αa
    f1     T1     U11  U12  ···  U1a       (2)
    f2     T2     U21  U22  ···  U2a
    ⋮      ⋮      ⋮    ⋮    ⋱    ⋮
    fm     Tm     Um1  Um2  ···  Uma


(The algorithm does not build this automaton explicitly; it is merely a mental tool for understanding its operation.) The assumption that the subset construction results in a DFA isomorphic to M+ means that M? must have precisely m states, and that the fi values are the same as those of M+. Each Ti is a subset of {1...n}, or equivalently a bitset Ti = ti1 ti2 ... tin, and Ti corresponds to state i of M+; this kind of renumbering of the subsets is usual after determinization. During the subset construction the values of Ti and Xjk are used to calculate the value of Uik, but Uik is also constrained by the value of zik. In particular, if zik = p and p ≠ 0, then Uik = Tp. This relationship is the starting point of the conversion to a SAT problem.

3.1 Initial State Constraints

Assume, without loss of generality, that ŝ+ = 1. State 1 corresponds to T1, and can be related to the initial states of M? by (t11 = e1) ∧ (t12 = e2) ∧ ... ∧ (t1n = en). This is, however, the only role of the ei variables, and the constraint is trivially satisfied by any solution to the other constraints. We can therefore safely omit it from the SAT problem.

3.2 Final State Constraints

If fi = 1 (that is, state i of M+ is final), then Ti must contain at least one final state of the unknown NFA M?. In this case, (ti1 ∧ g1) ∨ (ti2 ∧ g2) ∨ ... ∨ (tin ∧ gn). If fi = 0, then ¬((ti1 ∧ g1) ∨ (ti2 ∧ g2) ∨ ... ∨ (tin ∧ gn)), and more generally,

    fi ⇔ (ti1 ∧ g1) ∨ (ti2 ∧ g2) ∨ ... ∨ (tin ∧ gn)    for i = 1, 2, ..., m.

Since the values of the fi are known, it is more convenient not to use the last equivalence, but to write the m constraints separately.

3.3 Transition Constraints

Let zjk = p and assume for now that p ≠ 0. This means that the result of the subset construction contains the entries

    State   αk
    Tj      Ujk = Tp

or, equivalently,

    State              αk
    tj1 tj2 ... tjn    tp1 tp2 ... tpn

Assume further that the algorithm is looking for a ∪-NFA. During the subset construction, state i of the NFA is included in Tp if and only if at least one of the states in Tj has a transition to state i. This constraint can be formulated as

    (i ∈ Tp) ⇔ (1 ∈ Tj ∧ i ∈ X1k) ∨ (2 ∈ Tj ∧ i ∈ X2k) ∨ ... ∨ (n ∈ Tj ∧ i ∈ Xnk)

for i = 1, 2, ..., n. Written in terms of the Boolean variables, it becomes

    tpi ⇔ (tj1 ∧ x1ki) ∨ (tj2 ∧ x2ki) ∨ ... ∨ (tjn ∧ xnki).
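The biconditional above is a symbolic version of one subset-construction step. The helper below is our own illustration (names and data are ours): it computes bit i of the successor bitset as op over j of (tj ∧ xjki), where choosing or gives the ∪-NFA constraint and xor the ⊕-NFA variant discussed later.

```python
from functools import reduce
import operator

def step(Tj, X, op):
    """One symbolic subset-construction step for a fixed symbol αk.
    Tj: bitset of the source subset (list of bools).
    X[j][i]: the bit x_jki, i.e. whether state j has an αk-transition
    to state i. Returns the successor bitset Tp."""
    n = len(Tj)
    return [reduce(op, ((Tj[j] and X[j][i]) for j in range(n)))
            for i in range(n)]

# Illustrative 3-state instance: states 1 and 3 are in Tj.
Tj = [True, False, True]
X = [[True, True, False],
     [True, True, True],
     [True, False, False]]
```

With these data the union and symmetric-difference successors differ in bit 1: state 1 is reached from both members of Tj, so under ⊕ the two contributions cancel.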


So each transition zjk = p (with p ≠ 0) generates the following n constraints:

    tp1 ⇔ (tj1 ∧ x1k1) ∨ (tj2 ∧ x2k1) ∨ ... ∨ (tjn ∧ xnk1)
    tp2 ⇔ (tj1 ∧ x1k2) ∨ (tj2 ∧ x2k2) ∨ ... ∨ (tjn ∧ xnk2)    (3)
    ⋮
    tpn ⇔ (tj1 ∧ x1kn) ∨ (tj2 ∧ x2kn) ∨ ... ∨ (tjn ∧ xnkn)

The constraints above apply to non-zero zjk transitions, but what happens when zjk = 0? In this case, Ujk = ∅. The corresponding constraints are

    ¬((tj1 ∧ x1k1) ∨ (tj2 ∧ x2k1) ∨ ... ∨ (tjn ∧ xnk1))
    ¬((tj1 ∧ x1k2) ∨ (tj2 ∧ x2k2) ∨ ... ∨ (tjn ∧ xnk2))    (4)
    ⋮
    ¬((tj1 ∧ x1kn) ∨ (tj2 ∧ x2kn) ∨ ... ∨ (tjn ∧ xnkn))

It is the use of disjunction in the constraints in (3) and (4) that determines the ◦-NFA nature of the output. If the algorithm is looking for a ∩-NFA, all of the constraints are identical except that the disjunctions are replaced by conjunctions; for a ⊕-NFA, they are replaced by exclusive-or operators.

3.4 Additional Constraints

There are two more potential sources of constraints to consider. Firstly, can Ti = ∅ for some i? The answer is "yes" if M+ contains a sink state s that is not final and for which (s, α, s) ∈ Δ+ for all α ∈ Σ. The presence of such a state depends on whether Δ+ is required to be total or not. This does not affect the algorithm, and the final state constraints will exclude solutions where Ti = ∅ for all i. Secondly, is it possible that Ti = Tj for i ≠ j? Again the answer is "yes". This indicates that the value of n is too large. In fact, an (n − 1)-state NFA can be derived directly from the output of the SAT solver by eliminating the duplicated state. However, since the algorithm in Fig. 1 starts with the smallest possible value for n and proceeds in steps of 1 state, this situation will not arise.

3.5 The Size of the SAT Problem

To accommodate SAT solvers, the constraints discussed in the previous sections must be converted to conjunctive normal form (CNF). As the second column of Table 1 shows, the number of clauses (i.e., conjuncts) grows exponentially as n grows. Fortunately, this can be mitigated by introducing auxiliary variables. For example, a constraint such as a ⇔ (b1 ∧ c1) ∨ (b2 ∧ c2) ∨ ... ∨ (bl ∧ cl) can be rewritten by adding a new variable d and the constraint d ⇔ (b2 ∧ c2) ∨ ... ∨ (bl ∧ cl), and multiplying out the remaining terms of the original:

    (¬a ∨ b1 ∨ d) ∧ (¬a ∨ c1 ∨ d) ∧ (a ∨ ¬b1 ∨ ¬c1) ∧ (a ∨ ¬d)
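The rewrite just described can be applied recursively until every disjunct has been pulled out. The sketch below is our own reconstruction of that step, not the authors' implementation; literals are DIMACS-style integers (negative means negated), and `fresh` supplies unused variable numbers for the auxiliary d's.

```python
def to_cnf(a, pairs, fresh):
    """Rewrite a <-> (b1 & c1) | ... | (bl & cl) into CNF clauses,
    introducing one auxiliary variable per recursion step."""
    b, c = pairs[0]
    if len(pairs) == 1:
        # Base case a <-> (b & c):
        # (-a | b) & (-a | c) & (a | -b | -c)
        return [[-a, b], [-a, c], [a, -b, -c]]
    d = next(fresh)
    # a <-> (b & c) | d, as in the worked-out example above.
    head = [[-a, b, d], [-a, c, d], [a, -b, -c], [a, -d]]
    return head + to_cnf(d, pairs[1:], fresh)

# a = var 1, (b1, c1) = (2, 3), (b2, c2) = (4, 5); aux vars from 6.
clauses = to_cnf(1, [(2, 3), (4, 5)], iter(range(6, 10)))
```

The result is equisatisfiable with the original constraint: for every assignment of a, b1, c1, b2, c2 there is a value of the auxiliary variable making all clauses true exactly when the biconditional holds.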

Table 1. Upper bounds for the number of variables and clauses

                           Without new variables         With new variables
(gi, tjk, xjki) vars.      (n, nm, n²a)                  (n, nm, n²a)

Final states
  New vars.                0                             (n − 1)F
  Clauses                  2^n F + 2nG                   (4n − 3)F + 2nG
  Horn                     2nG                           (2n − 1)F + 2nG

Transitions (∪-NFAs)
  New vars.                0                             n(n − 1)R
  Clauses                  n(2^n + n)R − nL + n²Z        n(4n − 1)R − 2nL + n²Z
  Horn                     n²R − nL + n²Z                n(2n + 1)R + n²Z

Transitions (⊕-NFAs)
  New vars.                0                             n(n − 1)R + n(n − 1)Z
  Clauses                  2n·3^n R + n(3^n − 1)Z/2      n(8n − 5)R − 3nL + n(8n − 10)Z
  Horn                     unknown                       n(n + 2)R + nL + n(n − 4)Z

The full details of such transformations are simple but tedious. The values in Table 1 are upper bounds because the exact structure of M+ may lead to equivalence among some of the constraints. In the table, m is the size of the DFA M+ , and n is the size of the “guessed” NFA M? . The size of the alphabet is a. The number of final states is F , and G is the number of non-final states, so that F + G = m. The number of empty (zero) transitions of M+ is Z, while R is the number of non-empty transitions. Z + R = ma. Amongst the R non-empty transitions there are L self-loops.

4 Example

Consider the DFA M+ = (S+, Σ, Δ+, ŝ+, F+), where S+ = {1, 2, 3, 4, 5}, Σ = {a, b}, ŝ+ = 1, and Δ+ and F+ are shown in the table on the left in Fig. 2. The figure also shows the automaton in graphical form. The calculation starts with the assumption that M? contains 3 states:

    Final  Initial  State  a            b
    g1     e1       1      a11 a12 a13  b11 b12 b13
    g2     e2       2      a21 a22 a23  b21 b22 b23
    g3     e3       3      a31 a32 a33  b31 b32 b33

For brevity, we use ajk and bjk instead of the xj1k and xj2k used in (1). Let M! = determinize(M?). The aim is to construct a SAT problem that expresses the fact that M! = M+. The number of states of M! and its final states are taken directly from M+, as are the values of the transitions (that is, the values of the Ujk):

    Final  State  a  b
    0      1      2  3
    1      2      4  5
    0      3      4  3
    1      4      4  3
    0      5      5  3

Fig. 2. Tabular and graphical representations of the automaton M+ (the graphical form is not reproduced here)

    Final  State         a            b
    0      t11 t12 t13   t21 t22 t23  t31 t32 t33
    1      t21 t22 t23   t41 t42 t43  t51 t52 t53
    0      t31 t32 t33   t41 t42 t43  t31 t32 t33
    1      t41 t42 t43   t41 t42 t43  t31 t32 t33
    0      t51 t52 t53   t51 t52 t53  t31 t32 t33

The task is to describe the relation between the values of gj, ajk, bjk, and tjk, given the available information about M+. As far as the final states are concerned, state T2 is final (because f2 = 1) and is marked as such by the subset construction if and only if it contains at least one final state of M?. Either 1 ∈ T2 and g1 = 1, or 2 ∈ T2 and g2 = 1, or 3 ∈ T2 and g3 = 1, or some combination of these statements is true. In other words, (t21 ∧ g1) ∨ (t22 ∧ g2) ∨ (t23 ∧ g3). The same is true for state T4: (t41 ∧ g1) ∨ (t42 ∧ g2) ∨ (t43 ∧ g3), and similar but negated constraints can be derived from the non-final states T1, T3, and T5:

    ¬((t11 ∧ g1) ∨ (t12 ∧ g2) ∨ (t13 ∧ g3))
    ¬((t31 ∧ g1) ∨ (t32 ∧ g2) ∨ (t33 ∧ g3))
    ¬((t51 ∧ g1) ∨ (t52 ∧ g2) ∨ (t53 ∧ g3))

Constraints for each of the transitions of M! must now be generated. Consider

    State              a
    X = t11 t12 t13    Y = t21 t22 t23

The usual formula for the value on the right is Y = ∪_{i∈X} Ai, where Ai = ai1 ai2 ai3. In other words,

– state 1 ∈ Y iff (1 ∈ X ∧ 1 ∈ A1), or (2 ∈ X ∧ 1 ∈ A2), or (3 ∈ X ∧ 1 ∈ A3);
– state 2 ∈ Y iff (1 ∈ X ∧ 2 ∈ A1), or (2 ∈ X ∧ 2 ∈ A2), or (3 ∈ X ∧ 2 ∈ A3);
– state 3 ∈ Y iff (1 ∈ X ∧ 3 ∈ A1), or (2 ∈ X ∧ 3 ∈ A2), or (3 ∈ X ∧ 3 ∈ A3).


This can be expressed in propositional form as

    t21 ⇔ (t11 ∧ a11) ∨ (t12 ∧ a21) ∨ (t13 ∧ a31)
    t22 ⇔ (t11 ∧ a12) ∨ (t12 ∧ a22) ∨ (t13 ∧ a32)
    t23 ⇔ (t11 ∧ a13) ∨ (t12 ∧ a23) ∨ (t13 ∧ a33)

The other nine transitions yield 27 similar constraints. These constraints can now be given to a SAT solver to determine whether any values of gi, tij, aij, and bij satisfy the equations. Space does not allow us to explore the results of this step, but we believe that the intention is clear.

5 Reduction v. Minimization

Our reduction algorithm assumes that for a given minimal DFA M+ there exists an equivalent NFA M? for which S(M?) ≈ M+ (where ≈ denotes isomorphism, and S denotes the subset construction). This assumption holds for M+ itself, which qualifies as a candidate for M?, but not always for the minimal NFA equivalent to M+, which explains why the algorithm reduces instead of minimizing. But how often does the subset construction produce a minimal DFA?

To answer this question we investigated randomly generated ∪-NFAs. (Small automata with fewer than 2 states were filtered out.) Each NFA was determinized and its size α was recorded, and then minimized and its new minimal size β was recorded. Table 2 displays the results of the "gap" α − β for different numbers of states and alphabet symbols. Each cell contains three values: the mean gap size, the standard error of the mean (SEM), and the number of random samples (in 1000s). For example, for 10-state 4-symbol NFAs the 99% confidence interval for the mean size difference between the determinized and minimized DFA is 5.09 ± 2.58 × 0.014 = 5.054...5.126 states (based on 230,000 samples). Figure 3 shows a typical distribution.

Interestingly, for a given alphabet size the gap appears to grow smaller as the number of states increases, although there does seem to be a small initial hump that shifts towards a larger number of states as the alphabet size increases. This is shown more clearly in Fig. 4. Table 3 contains the mean gap sizes (only) for ⊕-NFAs. Surprisingly, in this case the subset construction almost always produces a minimal DFA. The SEM is vanishingly small for all of the samples, and the mean gap is < 1 when |Σ| > 3.
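The α − β measurement is easy to reproduce in a few lines. The sketch below is our own code, not the authors' experimental setup: it determinizes a ∪-NFA by the subset construction and computes the minimal DFA size by Moore-style partition refinement; the gap is the difference of the two sizes.

```python
def determinize(sigma, delta, finals, start=0):
    """Subset construction for a ∪-NFA; delta: (state, symbol) -> set.
    Returns (DFA states, total transition map, DFA final states)."""
    s0 = frozenset({start})
    trans, seen, todo = {}, {s0}, [s0]
    while todo:
        X = todo.pop()
        for a in sigma:
            Y = frozenset(q for s in X for q in delta.get((s, a), ()))
            trans[(X, a)] = Y
            if Y not in seen:
                seen.add(Y)
                todo.append(Y)
    return seen, trans, {X for X in seen if X & finals}

def minimal_size(states, sigma, trans, finals):
    """Moore-style partition refinement; returns the minimal DFA size."""
    part = {s: int(s in finals) for s in states}
    while True:
        # Signature: own class plus the classes of all successors.
        sig = {s: (part[s],) + tuple(part[trans[(s, a)]] for a in sigma)
               for s in states}
        ids, new = {}, {}
        for s in states:
            new[s] = ids.setdefault(sig[s], len(ids))
        if len(ids) == len(set(part.values())):  # no class was split
            return len(ids)
        part = new
```

For instance, the 2-state NFA with transitions 0 -a-> {0, 1}, 1 -a-> {0} and final state 0 accepts every word over {a}; its determinization has 2 states but the minimal DFA has 1, giving a gap of 1.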

6 Experimental Results

We implemented both the Kameda-Weiner algorithm and our new reduction algorithm in C. The latter uses the zChaff SAT solver [20]. Preliminary time consumption results are shown in Table 4. The numbers in the table are the mean running times (in milliseconds) for the respective algorithms to minimize/reduce 1000 random NFAs. While the SEM is small for a large number of cases, the sample is relatively small, and further experiments are clearly needed, also to explain the anomalies in some of the results.


Table 2. The mean gap size, SEM, and sample size (1000s) for random ∪-NFAs

         n=5    n=6    n=7    n=8    n=9    n=10   n=11   n=12   n=13   n=14   n=15
|Σ|=2    2.68   2.80   2.56   2.26   2.00   1.78   1.61   1.47   1.37   1.26   1.23
         0.004  0.004  0.004  0.003  0.003  0.005  0.008  0.009  0.012  0.011  0.011
         650    740    670    770    630    210    70     20     10     10     10
|Σ|=3    4.48   5.04   4.73   4.23   3.77   3.38   3.05   2.80   2.61   2.45   2.31
         0.007  0.007  0.007  0.005  0.006  0.009  0.014  0.022  0.021  0.020  0.019
         610    830    740    980    640    230    80     30     10     10     10
|Σ|=4    5.84   7.18   6.97   6.30   5.63   5.09   4.61   4.26   3.99   3.64   3.47
         0.009  0.010  0.010  0.008  0.009  0.014  0.021  0.032  0.031  0.029  0.027
         630    790    840    940    660    230    80     30     10     10     10
|Σ|=5    6.78   9.13   9.15   8.41   7.57   6.86   6.23   5.74   5.27   4.96   4.70
         0.011  0.011  0.011  0.011  0.012  0.018  0.028  0.043  0.078  0.076  0.075
         580    950    1080   920    650    230    80     30     10     10     10
|Σ|=6    7.31   10.87  11.28  10.50  9.53   8.64   7.88   7.24   6.75   6.38   6.00
         0.011  0.014  0.015  0.015  0.016  0.024  0.036  0.054  0.085  0.083  0.080
         680    860    900    840    610    220    80     30     10     10     10
|Σ|=7    7.53   12.35  13.28  12.57  11.47  10.43  9.58   8.88   8.22   7.66   7.24
         0.012  0.019  0.019  0.015  0.019  0.028  0.044  0.066  0.114  0.089  0.087
         590    620    780    1190   610    230    80     30     10     10     10
|Σ|=8    7.50   13.67  15.24  14.58  13.39  12.27  11.24  10.38  9.56   9.15   8.45
         0.012  0.018  0.018  0.015  0.022  0.034  0.052  0.078  0.121  0.118  0.115
         650    850    1110   1580   610    220    80     30     10     10     10
|Σ|=9    7.28   14.80  17.10  16.57  15.34  14.01  12.87  11.89  11.10  10.35  9.86
         0.012  0.021  0.021  0.021  0.026  0.039  0.097  0.089  0.145  0.125  0.122
         640    800    1120   1080   580    220    30     30     10     10     10
|Σ|=10   6.95   15.72  18.85  18.54  17.21  15.79  14.51  13.48  12.44  12.02  11.00
         0.012  0.024  0.028  0.022  0.031  0.045  0.067  0.101  0.167  0.149  0.144
         640    690    760    1230   530    210    80     30     10     10     10

Table 3. The mean gap size for random ⊕-NFAs

        n=5    n=6    n=7    n=8    n=9    n=10   n=11   n=12   n=13   n=14   n=15
|Σ|=2   0.50   0.54   0.56   0.57   0.57   0.61   0.66   0.32   0.55   1.64   1.64
|Σ|=3   0.05   0.02   0.01   0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00

Nonetheless, we are optimistic about these results, given that the SAT-based reduction has to contend with the overhead of the SAT solver. Moreover, the new algorithm produced minimal NFAs in almost all cases. For n = 5 and |Σ| = 4 it produced 14 (out of 1000) NFAs that were one state larger than the minimal. No larger gap was ever produced.


[Figure: histogram; x-axis "Gap size" (0–25), y-axis "Number of automata" (0–70000).]

Fig. 3. Histogram of the number of ∪-NFAs per gap size for automata with n = 5 (states) and |Σ| = 4 (alphabet symbols)

[Figure: line plot; x-axis "n = Number of states" (4–16), y-axis "Mean gap size" (0–14), one curve per alphabet size |Σ| = 2, ..., 7.]

Fig. 4. Mean gap size as a function of the number of states for different alphabet sizes


Table 4. Time consumption for the Kameda-Weiner (KW) and the SAT-based reduction (SR) algorithm based on 1000 samples (in milliseconds)

        |Σ| = 2           |Σ| = 3           |Σ| = 4           |Σ| = 5
 n      KW      SR        KW      SR        KW       SR       KW       SR
 5      0.143   0.621     16.391  1.870     269.686  3.784    −        7.380
 6      0.116   0.483     0.492   1.824     93.133   10.128   759.206  15.043
 7      0.144   0.360     0.180   3.329     0.303    4.623    19.925   13.066
 8      0.236   0.264     0.232   0.721     0.296    0.887    0.332    1.173
 9      0.352   0.244     0.404   0.284     0.452    0.432    0.476    0.452
10      0.616   0.252     0.745   0.344     0.765    0.492    0.729    0.460
11      1.124   0.424     1.305   0.444     1.385    0.553    1.225    0.677
12      2.338   0.733     2.038   0.681     2.378    0.797    2.507    0.881
13      4.505   1.233     3.792   1.293     4.620    1.380    4.713    1.249

7 Conclusion

Based on our results, we conclude that our method has significant potential for use in practical applications. Indeed, our reduction is so successful that it yields a minimal NFA in most cases. Our method is general, in that it can easily handle ◦-NFAs. It is not clear at this point how much work is needed to convert other reduction algorithms to operate on ◦-NFAs, apart from one investigation conducted by Müller [16].

SAT solvers are widely used and highly optimized, and the SAT problem is actively studied. Recent work on distributed and parallel SAT solvers (see the survey by Singer [18]) opens up the possibility of also improving the running time of our algorithm in this way. Depending on the SAT solver, it may be possible to specify additional constraints on the reduced NFA. For example, we may be interested in a reduced NFA with the fewest transitions, or the fewest final states.

Future work can explore many directions, particularly when it comes to the SAT solver used; this includes issues like: (1) determining which SAT solvers work best; (2) determining whether a special SAT solver is required; and (3) investigating an incremental SAT solver that can reuse work from previous iterations.

References

1. Amilhastre, J., Janssen, P., Vilarem, M.C.: FA minimization heuristics for a class of finite languages. In: Boldt, O., Jürgensen, H. (eds.) WIA 1999. LNCS, vol. 2214, pp. 1–12. Springer, Heidelberg (1999)
2. Carrez, C.: On the minimalization of nondeterministic automata. Technical report, Laboratoire de Calcul de la Faculté des Sciences de l'Université de Lille (1970)
3. Champarnaud, J.-M., Coulon, F.: NFA reduction algorithms by means of regular inequalities. Theoretical Computer Science 327(3), 241–253 (2004)
4. Daciuk, J., Mihov, S., Watson, B.W., Watson, R.E.: Incremental construction of minimal acyclic finite-state automata. Comp. Linguistics 26(1), 3–16 (2000)
5. Ilie, L., Navarro, G., Yu, S.: On NFA reductions. In: Karhumäki, J., Maurer, H., Păun, G., Rozenberg, G. (eds.) Theory Is Forever. LNCS, vol. 3113, pp. 112–124. Springer, Heidelberg (2004)
6. Ilie, L., Yu, S.: Algorithms for computing small NFAs. In: Diks, K., Rytter, W. (eds.) MFCS 2002. LNCS, vol. 2420, pp. 328–340. Springer, Heidelberg (2002)
7. Indermark, K.: Zur Zustandsminimisierung nichtdeterministischer erkennender Automaten. GMD Seminarberichte Bd. 33, Gesellschaft für Mathematik und Datenverarbeitung (1970)
8. Jiang, T., Ravikumar, B.: Minimal NFA problems are hard. In: Leach Albert, J., Monien, B., Rodríguez-Artalejo, M. (eds.) ICALP 1991. LNCS, vol. 510, pp. 629–640. Springer, Heidelberg (1991)
9. Kameda, T., Weiner, P.: On the state minimization of nondeterministic finite automata. IEEE Transactions on Computers C-19, 617–627 (1970)
10. Kim, J.H.: State minimization of nondeterministic machines. Technical Report RC 4896, IBM Thomas J. Watson Research Center (1974)
11. Malcher, A.: Minimizing finite automata is computationally hard. Theoretical Computer Science 327(3), 375–390 (2004)
12. Matz, O., Miller, A., Potthoff, A., Thomas, W., Valkema, E.: Report on the program AMoRE. Technical Report 9507, Christian-Albrechts-Universität, Kiel (October 1995)
13. Melnikov, B.F.: A new algorithm of the state-minimization for the nondeterministic finite automata. Korean Jnl. of Comp. and Appl. Mathematics 6(2), 277–290 (1999)
14. Mihov, S.: Direct building of minimal automaton for given list. Annuaire de l'Université de Sofia "St. Kl. Ohridski" (1998)
15. Mohri, M.: Finite-state transducers in language and speech processing. Comp. Linguistics 23(2), 269–311 (1997)
16. Müller, G.: Minimization of symmetric difference finite automata. Master's thesis, Stellenbosch University (April 2006)
17. Polák, L.: Minimalizations of NFA using the universal automaton. International Jnl. of Foundations of Computer Science 16(5), 999–1010 (2005)
18. Singer, D.: Parallel resolution of the satisfiability problem: A survey. In: Talbi, E.-G. (ed.) Parallel Combinatorial Optimization, ch. 5, pp. 123–148. John Wiley and Sons, Chichester (2006)
19. van Zijl, L., Daciuk, J., Müller, G.: Minimization of unary symmetric difference NFAs. South African Computer Jnl. 34, 69–75 (2005)
20. http://www.princeton.edu/~chaff/zchaff.html

Joining Composition and Trimming of Finite-State Transducers

Johannes Bubenzer and Kay-Michael Würzner
University of Potsdam
{bubenzer,wuerzner}@uni-potsdam.de

Abstract. The composition of two (weighted) finite-state transducers is usually carried out in two steps: In the first step, all accessible states of the result are constructed regardless of their co-accessibility. Non-co-accessible states are removed afterwards in the second step. This approach can lead to huge intermediate automata with only a fraction of their states being useful in the end. We present a novel composition algorithm which avoids the construction of non-useful states by using a single depth-first traversal while having the same asymptotic complexity as the existing approaches.

1 Introduction

The composition of weighted finite-state transducers (WFSTs) is an important operation in constructing natural language processing systems based on finite-state devices [1,2]. It is used to combine different levels of representation by relating outputs and inputs of the operands. The running time and the memory requirements of the composition are crucial factors in the construction process of such systems.

Standard composition algorithms [3] create transitions for matching output and input labels of the outgoing transitions of two states (one from each of the involved WFSTs). The creation of non-accessible states is avoided by simulating a traversal of the resulting transducer starting at its initial state. A major drawback of such approaches is the possible presence of non-useful states (with corresponding incoming and outgoing transitions) in the result of the composition, because it is not guaranteed that all states lie on a path to a final state, i.e. not all states are co-accessible. This may lead to huge intermediate transducers which need to be trimmed afterwards. In certain adverse cases, these transducers may not even fit into main memory. Trimming (i.e. the removal of non-useful states) is usually done by a depth-first traversal of the WFST, which can detect non-accessible as well as non-co-accessible states [4].

In this paper, we present an approach which avoids non-co-accessible states in the result of the composition of two WFSTs by merging a depth-first composition with trimming. Our algorithm is related to the detection of the strongly connected components (SCCs) of graphs [5], since it decides whether a state q is useful after all states of the SCC containing q have been processed.

This work has been funded by the Deutsche Forschungsgemeinschaft (Grant KL 955/12-1).

A. Yli-Jyr¨ a et al. (Eds.): FSMNLP 2009, LNAI 6062, pp. 93–104, 2010. c Springer-Verlag Berlin Heidelberg 2010 


The remainder of this article is structured as follows: In Sect. 2, we introduce the necessary concepts and definitions and fix the notation; Sect. 3 presents our composition algorithm, first for the acyclic (3.1) and then for the general (3.2) case; finally, in Sect. 4, we present an experiment comparing our algorithm to the standard formulation.

2 Preliminaries

We assume that the weights of the automata under consideration are elements of a semiring [6].

Definition 1 (Semiring). A structure K = ⟨K, ⊕, ⊗, 0, 1⟩ is a semiring if

1. ⟨K, ⊕, 0⟩ is a commutative monoid with 0 as the identity element for ⊕,
2. ⟨K, ⊗, 1⟩ is a monoid with 1 as the identity element for ⊗,
3. ⊗ distributes over ⊕, and
4. 0 is an annihilator for ⊗: ∀a ∈ K, a ⊗ 0 = 0 ⊗ a = 0.

If the ⊗ operation is commutative, that is, ∀a, b ∈ K, a ⊗ b = b ⊗ a, K is called commutative. Idempotent semirings have the property ∀a ∈ K, a ⊕ a = a. A semiring is called complete (e.g. [7]) if it is possible to define sums for all families (aᵢ | i ∈ I) of elements in K, where I is an arbitrary index set, such that the following conditions are satisfied:

1. Σ_{i∈∅} aᵢ = 0,  Σ_{i∈{j}} aᵢ = aⱼ,  Σ_{i∈{j,k}} aᵢ = aⱼ ⊕ aₖ for j ≠ k,
2. Σ_{j∈J} (Σ_{i∈Iⱼ} aᵢ) = Σ_{i∈I} aᵢ, if ⋃_{j∈J} Iⱼ = I and Iⱼ ∩ Iⱼ′ = ∅ for j ≠ j′,
3. Σ_{i∈I} (c ⊗ aᵢ) = c ⊗ (Σ_{i∈I} aᵢ),  Σ_{i∈I} (aᵢ ⊗ c) = (Σ_{i∈I} aᵢ) ⊗ c.

Examples of semirings are the boolean semiring B = ⟨{0, 1}, ∨, ∧, 0, 1⟩, the real semiring R = ⟨ℝ, +, ·, 0, 1⟩, and the tropical semiring T = ⟨ℝ₊ ∪ {∞}, min, +, ∞, 0⟩. Given Definition 1, we now define a WFST over a semiring.

Definition 2 (WFST). A weighted finite-state transducer T = ⟨Σ, Γ, Q, q₀, F, E, ρ⟩ over a semiring K is a 7-tuple with

1. Σ, the finite input alphabet,
2. Γ, the finite output alphabet,
3. Q, the finite set of states,
4. q₀ ∈ Q, the start state,
5. F ⊆ Q, the set of final states,
6. E ⊆ Q × Q × (Σ ∪ {ε}) × (Γ ∪ {ε}) × K, the set of transitions, and
7. ρ : F → K, the final weight function mapping final states to elements in K.

A weighted finite-state acceptor (WFSA) can be regarded as a WFST with the same alphabet for inputs and outputs which performs an identity mapping. Thus, the input and the output label of each transition are equal. We consider unweighted finite-state automata as being implicitly weighted over the boolean semiring B.
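To make Definition 1 concrete, the tropical semiring T can be sketched in a few lines of Python. This is an illustration only; the class and method names (`Tropical`, `plus`, `times`) are ours, not from the paper:

```python
import math

class Tropical:
    """The tropical semiring T = <R+ ∪ {∞}, min, +, ∞, 0> from Definition 1."""
    ZERO = math.inf   # identity for ⊕ (min) and annihilator for ⊗ (+)
    ONE = 0.0         # identity for ⊗

    @staticmethod
    def plus(a, b):   # ⊕
        return min(a, b)

    @staticmethod
    def times(a, b):  # ⊗
        return a + b

# The semiring laws of Definition 1, checked on sample values:
a, b, c = 3.0, 1.5, 7.0
assert Tropical.times(a, Tropical.ZERO) == Tropical.ZERO          # annihilator
assert Tropical.plus(a, Tropical.ZERO) == a                       # identity for ⊕
assert Tropical.times(a, Tropical.plus(b, c)) == \
       Tropical.plus(Tropical.times(a, b), Tropical.times(a, c))  # distributivity
assert Tropical.plus(a, a) == a                                   # idempotence
```

Since min is idempotent, T is an idempotent semiring in the sense defined above.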

Joining Composition and Trimming of Finite-State Transducers


For every transition e ∈ E, we denote by p[e] its source state, by n[e] its destination state, by i[e] its input label, by o[e] its output label and by w[e] its weight. A path π is a sequence e₁ … eₙ of n transitions such that p[eᵢ₊₁] = n[eᵢ]. If p[e₁] = q₀ and n[eₙ] ∈ F, π is called successful. The concatenation of the transitions' input labels i[e₁] … i[eₙ] (output labels o[e₁] … o[eₙ], resp.) constitutes the path's input (output, resp.) label. The weight w of π is w[e₁] ⊗ … ⊗ w[eₙ]. π⁻¹ denotes the last transition of a path. Given a state q ∈ Q, an input string x ∈ Σ*, an output string y ∈ Γ* and a set of states P ⊆ Q, Π(q, x, y, P) denotes the collection of all paths π(q, x, y, p) where p ∈ P.

Definition 3 (Weight in a Transducer). The weight associated by T with a pair of strings x ∈ Σ* and y ∈ Γ* is defined as

    T(x, y) = Σ_{π ∈ Π(q₀, x, y, F)} w[π] ⊗ ρ(n[π⁻¹])

A state is reachable from another state if there exists a path between them. Note that by definition every state q ∈ Q is reachable from itself via ε [8]. Definition 4 formalizes the notion that a state is reachable from the start state q₀.

Definition 4 (Accessible State). Given a WFST T, a state qₖ ∈ Q is called accessible if there exists a path e₁ … eₙ with n ≥ 1 such that p[e₁] = q₀ and n[eₙ] = qₖ.

If a final state can be reached from a state q, it is called co-accessible, formalized by Definition 5.

Definition 5 (Co-accessible State). Given a WFST T, a state qₖ ∈ Q is called co-accessible if there exists a path e₁ … eₙ with n ≥ 1 such that p[e₁] = qₖ and n[eₙ] ∈ F.

If a state is both accessible and co-accessible, it is called useful. If a WFST contains only useful states, it is called trimmed or connected.

Definition 6 (Strongly Connected Component). A strongly connected component in a WFST T is a maximal set of states C ⊆ Q such that for every pair of states v, w ∈ C there exist in T a path π₁ = e₁ … eₘ with m ≥ 1, p[e₁] = v and n[eₘ] = w, and another path π₂ = f₁ … fₙ with n ≥ 1, p[f₁] = w and n[fₙ] = v.

The decomposition of a WFST T into its SCCs is the acyclic component graph T_SCC of T [9]. In an acyclic WFST, each state forms its own SCC.

2.1 Composition

Composition is a binary operation on two WFSTs T1 and T2 over a semiring K which share the same output and input alphabet respectively. It has been shown


that K has to be commutative [10] and complete [4] for the composition to be well defined in the general case. Informally, the composition operation matches transitions from T₁ to transitions from T₂ if the corresponding output and input labels coincide. Formally:

Definition 7 (Composition of WFSTs). Given two WFSTs T₁ = ⟨Σ, Γ, Q₁, q₀₁, F₁, E₁, ρ₁⟩ and T₂ = ⟨Γ, Δ, Q₂, q₀₂, F₂, E₂, ρ₂⟩ weighted over a commutative and complete semiring K, the composition of T₁ and T₂, denoted by T₁ ∘ T₂, is for all x ∈ Σ* and y ∈ Δ* defined by

    (T₁ ∘ T₂)(x, y) = Σ_{z ∈ Γ*} T₁(x, z) ⊗ T₂(z, y)

The set of transitions E of the resulting transducer T can be described as follows (q₁, q₂ ∈ Q₁; p₁, p₂ ∈ Q₂; a ∈ Σ ∪ {ε}; b ∈ Γ ∪ {ε}; c ∈ Δ ∪ {ε}; w, v ∈ K) for WFSTs over idempotent semirings:

    E = { ⟨(q₁, p₁), (q₂, p₂), a, c, w ⊗ v⟩ | (q₁, q₂, a, b, w) ∈ E₁, (p₁, p₂, b, c, v) ∈ E₂ }
      ∪ { ⟨(q₁, p₁), (q₂, p₁), a, ε, w⟩ | (q₁, q₂, a, ε, w) ∈ E₁ }
      ∪ { ⟨(q₁, p₁), (q₁, p₂), ε, c, v⟩ | (p₁, p₂, ε, c, v) ∈ E₂ }        (1)

In the case of non-idempotent semirings, a special ε-filter must be applied as described in [4]. Note that the intersection of WFSAs is a special case of the composition [10].

Implementations of the composition operation are – at least to our knowledge – usually carried out in two steps [11,12,3,4]. Starting with (q₀₁, q₀₂), each transition leaving a state in T₁ is checked for a matching counterpart in T₂ before moving on to the next state. This ensures the absence of non-accessible states in the result. However, it is not guaranteed that all states are co-accessible, since new states are added to the state set of the resulting transducer as soon as they are accessible. A posterior trimming step is executed to warrant the connectivity of the result of the composition.

In some cases, the intermediate transducer is remarkably larger than the trimmed one. Consider for example context-dependent rewrite rules, which can be compiled into finite-state transducers [13]. Figure 1(b) shows the WFST corresponding to the rule ε → ab/ b. The non-trimmed result of the composition of (a) and (b) has six useful and six non-useful states, whereas the trimmed transducer preserves only the six useful states. This disparity often increases with larger involved

Fig. 1. The non-trimmed result (c) of the composition of a WFST (a) and another WFST (b) which corresponds to the rewrite rule ε → ab/ b
operands. In certain cases, the intermediate result may blow up beyond the size of the main memory. This problem is often dealt with by dynamic [14], on-the-fly [15] or lazy [3] composition implementations. However, such approaches suffer from the drawback that the resulting transducers may not benefit from off-line optimizations such as minimization [16], which leads to less efficient lookups. The complexity of the composition is within O((|Q₁| + |E₁|)(|Q₂| + |E₂|)) if all transitions leaving any state in T₁ match all transitions in any state in T₂.
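For illustration, the three cases of Equation (1) can be enumerated naively in Python. This is a sketch under our own encoding (transitions as 5-tuples, ε represented as `None`, the tropical ⊗ realized as `+`); none of these names come from the paper:

```python
EPS = None  # our encoding of ε

def product_transitions(E1, E2, Q1, Q2):
    """Enumerate the composed transition set per Equation (1) for WFSTs over
    an idempotent semiring (tropical: ⊗ is +). All reachable pairings are
    created, i.e. also non-useful ones -- the blow-up trimming must repair."""
    E = set()
    # case 1: matching output/input labels (b may be ε on both sides)
    for (q1, q2, a, b, w) in E1:
        for (p1, p2, b2, c, v) in E2:
            if b == b2:
                E.add(((q1, p1), (q2, p2), a, c, w + v))
    # case 2: T1 moves on an ε-output alone, T2 stays in place
    for (q1, q2, a, b, w) in E1:
        if b is EPS:
            for p1 in Q2:
                E.add(((q1, p1), (q2, p1), a, EPS, w))
    # case 3: T2 moves on an ε-input alone, T1 stays in place
    for (p1, p2, b, c, v) in E2:
        if b is EPS:
            for q1 in Q1:
                E.add(((q1, p1), (q1, p2), EPS, c, v))
    return E

E1 = {(0, 1, "a", "b", 1.0)}
E2 = {(0, 1, "b", "c", 2.0)}
print(product_transitions(E1, E2, {0, 1}, {0, 1}))
# contains ((0, 0), (1, 1), "a", "c", 3.0)
```

When ε occurs, the three cases overlap and produce redundant paths; this is harmless only because ⊕ is idempotent here. For non-idempotent semirings the ε-filter of [4] is required, exactly as the text notes.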

3 Depth-First Composition

In existing approaches, two traversals are used to obtain a connected WFST as the result of the composition of two WFSTs: a breadth- or depth-first traversal for computing the accessible states and their transitions, and a depth-first traversal for removing the non-co-accessible states and the transitions reaching and leaving them [4].¹ The main idea of our novel composition algorithm is to replace those two traversals with a single depth-first traversal, leaving only useful states and transitions in the resulting transducer.

According to Definition 5, a state q is co-accessible if it is on a path leading to a final state. Thus, all paths starting at q have to be explored before q's co-accessibility can be finally judged. This suggests a recursive algorithm which computes the composition for all states reachable from q before finishing the computation of q itself. However, this is not possible in general, since two states may belong to a cycle.

¹ Note that the posterior trimming has a linear complexity (O(|Q| + |E|)) and thus does not add to the asymptotic complexity of the composition.


3.1 Acyclic Case

For all acyclic WFSTs, there exists a linear ordering of the states in Q which corresponds to the finishing times of a depth-first search [17] and is often called the (reverse) topological sort [9]. Given two WFSTs T₁ and T₂ weighted over a commutative and complete semiring K such that at least one of them is acyclic², it is possible to compute their composition T₃ using this ordering on the resulting states by the simple recursive algorithm given below.

Algorithm 1. Depth-first composition of acyclic WFSTs

Input: T₁ = ⟨Σ, Γ, Q₁, q₀₁, F₁, E₁, ρ₁⟩ and T₂ = ⟨Γ, Δ, Q₂, q₀₂, F₂, E₂, ρ₂⟩
Output: T₃ = ⟨Σ, Δ, Q, (q₀₁, q₀₂), F, E, ρ⟩

 1  begin compute_composition((q₁, q₂))
 2    if (q₁, q₂) ∈ F₁ × F₂ then
 3      connected[(q₁, q₂)] ← True
 4      Q ← Q ∪ {(q₁, q₂)}
 5      F ← F ∪ {(q₁, q₂)}
 6      ρ((q₁, q₂)) ← ρ₁(q₁) ⊗ ρ₂(q₂)
 7    else connected[(q₁, q₂)] ← Unknown
 8    foreach e ∈ matching_transitions((q₁, q₂)) do
 9      if n[e] is not yet visited then compute_composition(n[e])
10      if connected[n[e]] = True then
11        connected[(q₁, q₂)] ← True
12        E ← E ∪ {e}
13        Q ← Q ∪ {(q₁, q₂)}
14  end
15  begin composition
16    compute_composition((q₀₁, q₀₂))
17    return T₃
18  end

The computation of the composition explores states and transitions starting at the initial state (q₀₁, q₀₂) (line 16) to ensure the accessibility of all states. The function compute_composition in Algorithm 1 loops over the matching transitions of the current pair of states (q₁, q₂) (line 8) (from Q₁ × Q₂), computed according to Equation (1) by the function matching_transitions. compute_composition is called again for the not yet computed destination states of those transitions, before the computation of (q₁, q₂) is finished (lines 13 and 14). Whenever the computation of one of the destination states (q₁ₙ, q₂ₙ) is finished, it is checked whether (q₁ₙ, q₂ₙ) is useful, and (q₁, q₂) is marked useful if this is the case (lines 10–12). The attribute connected ∈ {True, Unknown} is used to store information about the usefulness of a state. All reachable final states are co-accessible by Definition 5 (and thus useful) and are marked as such (lines 3–6).

² Note that the algorithm is also applicable if the result of the composition of two cyclic WFSTs is acyclic.
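A runnable sketch of the acyclic depth-first composition in Python (our own encoding, not the authors' C++ implementation): a transducer is a dict mapping a state to its outgoing `(in, out, weight, next)` transitions, `finals` maps final states to final weights, the tropical ⊗ is `+`, and ε-transitions are omitted for brevity:

```python
def compose_acyclic(t1, finals1, t2, finals2, start1=0, start2=0):
    """Depth-first composition of two acyclic transducers that keeps only
    useful (accessible and co-accessible) states, as in Algorithm 1."""
    E, F, connected = [], {}, {}

    def matching(q1, q2):
        # the label matching of Equation (1), first case only (no ε)
        for (a, b, w, n1) in t1.get(q1, []):
            for (b2, c, v, n2) in t2.get(q2, []):
                if b == b2:
                    yield ((q1, q2), (n1, n2), a, c, w + v)

    def comp(q1, q2):
        useful = False
        if q1 in finals1 and q2 in finals2:   # reachable final pair: useful
            useful = True
            F[(q1, q2)] = finals1[q1] + finals2[q2]
        for (src, dst, a, c, w) in matching(q1, q2):
            if dst not in connected:          # not yet visited; acyclic, so
                comp(*dst)                    # dst cannot be an ancestor
            if connected[dst]:                # destination is useful:
                useful = True                 # keep the edge, mark src useful
                E.append((src, dst, a, c, w))
        connected[(q1, q2)] = useful
        return E, F

    return comp(start1, start2)

# a:b then a dead branch b:d into a non-final sink; only the b:c path survives
t1 = {0: [("a", "b", 1.0, 1)]};                      finals1 = {1: 0.0}
t2 = {0: [("b", "c", 2.0, 1), ("b", "d", 5.0, 2)]};  finals2 = {1: 0.0}
E, F = compose_acyclic(t1, finals1, t2, finals2)
print(E)  # [((0, 0), (1, 1), 'a', 'c', 3.0)] -- the dead state (1, 2) never enters E
```

Note that the non-useful destination (1, 2) is visited but neither it nor its incoming transition is ever added to the result, which is exactly the point of the algorithm.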

Fig. 2. Intermediate result of Algorithm 2 at line 19 of compute_composition((0,0)); grey indicates an unknown connected attribute

3.2 General Case

In cyclic WFSTs, it is possible, while processing a state q, to encounter another state p which has been visited before but is not yet finally processed. In that case, the decision on q's co-accessibility cannot be made immediately. We consider the SCCs of the resulting WFST T₃ in such cases. Recall from Definition 6 that all states which are reachable from each other form an SCC. The component graph of T₃ is an acyclic directed graph and thus admits a linear ordering of its components. Consider an SCC C_q and a state q ∈ C_q: what we basically do in cyclic cases is to delay the co-accessibility decision for q until all states of C_q have been processed. If one of the states in C_q is co-accessible (that is, one of the SCCs following C_q in T_SCC, or C_q itself, contains a final state), all states in C_q are co-accessible.

For the computation of the SCCs, we use the simplification of Tarjan's classical construction [5] presented in [18]. This recursive algorithm can also be used for trimming with only slight modifications. We integrate it into the recursive depth-first composition as shown in Algorithm 2. The inheritance of the (now three-valued) attribute connected ∈ {True, False, Unknown} works similarly to Algorithm 1. The only difference is that it may remain unknown for some state (p₁, p₂) whether it is co-accessible or not, even after all its transitions have been processed. Such a case is illustrated in Fig. 2.

The main difference between Algorithms 1 and 2 is that the latter maintains a stack S of states to identify the SCCs in T₃. Each state gets a unique index (with respect to the reverse topological sort). Each state also has a root attribute (initially its index), which is used to find the root or first state [9] of its SCC. If a transition from a state (q₁, q₂) to an already visited but not yet finished state (qₙ₁, qₙ₂) is encountered (i.e. a cycle is detected, line 15), (q₁, q₂) inherits (qₙ₁, qₙ₂)'s root numbering if it is less than its own (line 16). Via the recursion, root[(qₙ₁, qₙ₂)] may be inherited by all states prior in the cycle. Whenever the condition in line 18 holds, a root state r of an SCC C has been found. All states above r on the stack S (built up in line 9) belong to C. The stack is successively reduced, and all members of C and their transitions are removed from T₃ if r is not useful.

3.3 Complexity

The worst-case complexity of Algorithms 1 and 2 is still quadratic in space and time. Neither algorithm needs posterior trimming. However, the detection


Algorithm 2. Depth-first composition of non-acyclic WFSTs

Input: T₁ = ⟨Σ, Γ, Q₁, q₀₁, F₁, E₁, ρ₁⟩ and T₂ = ⟨Γ, Δ, Q₂, q₀₂, F₂, E₂, ρ₂⟩
Output: T₃ = ⟨Σ, Δ, Q, (q₀₁, q₀₂), F, E, ρ⟩

 1  begin compute_composition((q₁, q₂))
 2    Q ← Q ∪ {(q₁, q₂)}
 3    root[(q₁, q₂)] ← index[(q₁, q₂)] ← |Q|
 4    if (q₁, q₂) ∈ F₁ × F₂ then
 5      connected[(q₁, q₂)] ← True
 6      F ← F ∪ {(q₁, q₂)}
 7      ρ((q₁, q₂)) ← ρ₁(q₁) ⊗ ρ₂(q₂)
 8    else connected[(q₁, q₂)] ← Unknown
 9    push(S, (q₁, q₂))
10    foreach e ∈ matching_transitions((q₁, q₂)) do
11      if n[e] is not yet visited then compute_composition(n[e])
12      if connected[n[e]] = True then
13        connected[(q₁, q₂)] ← True
14        E ← E ∪ {e}
15      else if connected[n[e]] = Unknown then
16        root[(q₁, q₂)] ← min(root[n[e]], root[(q₁, q₂)])
17        E ← E ∪ {e}
18    if root[(q₁, q₂)] = index[(q₁, q₂)] then
19      repeat
20        (p₁, p₂) ← top(S)
21        if connected[(q₁, q₂)] = True then
22          connected[(p₁, p₂)] ← True
23        else
24          connected[(p₁, p₂)] ← False
25          Q ← Q \ {(p₁, p₂)}
26          E ← E \ E[(p₁, p₂)]
27        pop(S)
28      until (q₁, q₂) = (p₁, p₂)
29  end
30  begin composition
31    compute_composition((q₀₁, q₀₂))
32    return T₃
    end

and processing of the SCCs in Algorithm 2 adds O(|Q| + |E|) to the worst-case complexity of the composition; exactly the same complexity arises for the second step in standard approaches. The attributes connected, root and index are three arrays³ of length |Q|.

The advantage of our composition algorithm lies in the fact that states and their outgoing transitions are deleted as soon as it is decidable whether they are co-accessible, that is, when the first state of the SCC they are a member of is finished. In cases where all states are useful, or where all states belong to a single SCC, no improvement can be achieved.

³ Note that the mapping from pairs of states from the operands into single states of the result, typically used in real implementations [11,12], could take over the function of index.
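The SCC bookkeeping of Algorithm 2 can be isolated and sketched on a plain directed graph. This is our simplification for illustration: instead of composing two transducers, we merely compute the useful states of a given graph in a single DFS, using `index`/`root` and a stack exactly as the algorithm does, with the three-valued `connected` encoded as `True`/`None`/`False`:

```python
import sys

def useful_states(edges, start, finals):
    """Single-DFS detection of co-accessible states via Tarjan-style SCC
    roots (cf. Algorithm 2 and [18]). Returns the set of useful states."""
    sys.setrecursionlimit(100000)
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
    index, root, connected, stack = {}, {}, {}, []

    def dfs(q):
        index[q] = root[q] = len(index)
        connected[q] = True if q in finals else None   # None = Unknown
        stack.append(q)
        for n in adj.get(q, ()):
            if n not in index:
                dfs(n)
            if connected[n] is True:                   # useful successor
                connected[q] = True
            elif connected[n] is None:                 # cycle: inherit root
                root[q] = min(root[q], root[n])
        if root[q] == index[q]:                        # q is its SCC's root:
            while True:                                # resolve the whole SCC
                p = stack.pop()
                connected[p] = connected[q] is True
                if p == q:
                    break

    dfs(start)
    return {q for q in index if connected[q]}

# 0 -> 1 -> 2 -> 0 is an SCC that reaches the final state 3; 4 -> 4 is dead
edges = [(0, 1), (1, 2), (2, 0), (1, 3), (1, 4), (4, 4)]
print(useful_states(edges, 0, {3}))  # {0, 1, 2, 3}
```

As soon as an SCC root finishes, the whole component's fate is known: here the self-loop at state 4 forms an SCC that never reaches a final state and is discarded immediately, while the cycle {0, 1, 2} is kept because one of its members reaches the final state 3.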

4 Experiment

To examine the runtime behaviour of our composition algorithm, we implemented it in C++ and compared it to the composition algorithm given in [4] (slightly modified to handle ε-transitions), with posterior trimming. The features compared are the maximum numbers of states and transitions created by the different algorithms during their execution. We implemented a simple representation of WFSTs using linked lists [9] in order to ensure a fair comparison, independent of software-specific optimizations.⁴

Fig. 3. Transducer filtering non-overlapping 4-grams

⁴ Due to the length restrictions of this work, we set aside comparisons of execution times and memory requirements; such comparisons are, however, the subject of future work.

Fig. 4. The mean of the maximum number of states and transitions per word length, plotted for the standard composition algorithm (circles) and Algorithm 2 (pluses) during the composition


We use a WFSA A₄ representing 4-grams and filter from its closure those sequences which do not overlap in their trigram suffix and prefix, respectively. This is done for Σ = Γ = {a, b} by the transducer T_C shown in Fig. 3. Compositions with T_C can create many non-co-accessible states due to the overlapping: the suffix of a 4-gram is unconditionally processed and may create non-co-accessible states if no matching prefix is found. In the experiment, we successively increase the size of the alphabet Σ and obtain all 4-grams consisting of the characters in Σ from a corpus of approximately 8,000 newspaper articles. The 4-grams are composed with the corresponding filter transducer, and the maximum numbers of states and transitions used in the composition are monitored. Figure 4 summarizes the results. Our algorithm clearly uses fewer states and transitions than the classical approach.

5 Conclusion

We have presented a novel algorithm for the composition of two WFSTs which avoids non-useful states in its result. In contrast to existing approaches, only a single depth-first traversal of the WFST under construction is necessary.

Acknowledgements. We would like to thank the anonymous reviewers and Thomas Hanneforth for their helpful remarks on an earlier version of this paper.

References

1. Mohri, M.: Finite-State Transducers in Language and Speech Processing. Computational Linguistics 23(2), 269–311 (1997)
2. Pereira, F.C., Riley, M.D.: Speech Recognition by Composition of Weighted Finite Automata. In: Roche, E., Schabes, Y. (eds.) Finite-State Language Processing. Language, Speech, and Communication, vol. 12, pp. 433–453. The MIT Press, Cambridge (1997)
3. Mohri, M., Pereira, F.C., Riley, M.D.: Speech Recognition with Weighted Finite-State Transducers. In: Rabiner, L., Juang, F. (eds.) Handbook on Speech Processing and Speech Communication, Part E: Speech Recognition, pp. 1–31. Springer, Heidelberg (2007)
4. Mohri, M.: Weighted Automata Algorithms. In: Droste, M., Kuich, W., Vogler, H. (eds.) Handbook of Weighted Automata. EATCS Monographs in Theoretical Computer Science, pp. 213–254. Springer, Heidelberg (2009)
5. Tarjan, R.E.: Depth-First Search and Linear Graph Algorithms. SIAM Journal on Computing 1(2), 146–160 (1972)
6. Kuich, W., Salomaa, A.: Semirings, Automata, Languages. EATCS Monographs on Theoretical Computer Science, vol. 5. Springer, Heidelberg (1986)
7. Ésik, Z., Kuich, W.: Equational Axioms for a Theory of Automata. In: Martín-Vide, C., Mitrana, V., Păun, G. (eds.) Formal Languages and Applications. Studies in Fuzziness and Soft Computing, vol. 148, pp. 183–196. Springer, Heidelberg (2004)


8. Hopcroft, J.E., Ullman, J.D.: Introduction to Automata Theory, Languages and Computation. Addison-Wesley Series in Computer Science. Addison-Wesley Publishing Company, Reading (1979)
9. Cormen, T.H., Leiserson, C.E., Rivest, R.L., Stein, C.: Introduction to Algorithms, 2nd edn. The MIT Press, Cambridge (2001)
10. Mohri, M.: Weighted Finite-State Transducer Algorithms: An Overview. In: Martín-Vide, C., Mitrana, V., Păun, G. (eds.) Formal Languages and Applications. Studies in Fuzziness and Soft Computing, vol. 148, pp. 551–564. Springer, Heidelberg (2004)
11. Hanneforth, T.: FSM – C++ Library for Manipulating (Weighted) Finite Automata (2004), http://www.ling.uni-potsdam.de/tom/fsm/
12. Allauzen, C., Riley, M., Schalkwyk, J., Skut, W., Mohri, M.: OpenFst: A General and Efficient Weighted Finite-State Transducer Library. In: Holub, J., Žďárek, J. (eds.) CIAA 2007. LNCS, vol. 4783, pp. 11–23. Springer, Heidelberg (2007)
13. Kaplan, R.M., Kay, M.: Regular Models of Phonological Rule Systems. Computational Linguistics 20(3), 331–378 (1994)
14. Cheng, O., Dines, J., Doss, M.M.: A Generalized Dynamic Composition Algorithm of Weighted Finite State Transducers for Large Vocabulary Speech Recognition. In: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2007), vol. 4, pp. 345–348. IEEE, Los Alamitos (2007)
15. Hori, T., Hori, C., Minami, Y.: Fast On-The-Fly Composition for Weighted Finite-State Transducers in 1.8 Million-Word Vocabulary Continuous Speech Recognition. In: INTERSPEECH 2004, pp. 289–292 (2004)
16. Mohri, M.: Minimization Algorithms for Sequential Transducers. Theoretical Computer Science 234, 177–201 (2000)
17. Tarjan, R.E.: Finding Dominators in Directed Graphs. SIAM Journal on Computing 3(1), 62–89 (1974)
18. Nuutila, E., Soisalon-Soininen, E.: On Finding the Strongly Connected Components in a Directed Graph. Information Processing Letters 49(1), 9–14 (1994)

Porting Basque Morphological Grammars to foma, an Open-Source Tool

Iñaki Alegria¹, Izaskun Etxeberria¹, Mans Hulden², and Montserrat Maritxalar¹

¹ IXA Group, University of the Basque Country
[email protected]
² Department of Linguistics, University of Arizona
[email protected]

Abstract. Basque is a morphologically rich language, of which several finite-state morphological descriptions have been constructed, primarily using the Xerox/PARC finite-state tools. In this paper we describe the process of porting a previous description of Basque morphology to foma, an open-source finite-state toolkit compatible with the Xerox tools, provide a comparison of the two tools, and contrast the development of a two-level grammar with parallel alternation rules and a sequential grammar developed by composing individual replacement rules.

1 Introduction

In this paper we describe some aspects of the design of a Basque morphological processing system built with finite-state techniques. Since we have recently ported a Basque morphological grammar from the Xerox formalism to the open-source foma toolkit [1], we shall focus on aspects and experiences highlighted by this process. In light of this migration to an open-source toolkit, which forced us to convert an older two-level description of the morphology to an equivalent sequential rule-based one, we shall also contrast some practical design aspects of two-level grammars [2,3] with so-called sequential replacement rule grammars [4,5].

2 Basque Morphology

Basque is an agglutinative language with a rich morphology. Earlier descriptions of Basque morphological analysis with finite-state techniques include [6,7]. A later implementation using the Xerox/PARC compilers is described in [8]. From the point of view of designing a complete description of Basque morphology, the most prominent features of the language include:

– Basque morphology is very rich. The determiner, the number and the declension case morphemes are appended to the last element of the noun phrase and always occur in this order.

A. Yli-Jyrä et al. (Eds.): FSMNLP 2009, LNAI 6062, pp. 105–113, 2010. © Springer-Verlag Berlin Heidelberg 2010


– Basque nouns belong to a single declension; the 15 case markers are invariant.
– Functions that prepositions normally fulfill are realized by case suffixes inside word-forms. Basque offers the possibility of generating a large number of inflected word-forms. From a single noun entry, a minimum of 135 inflected forms can be generated. While 77 of these are simple combinations of number, determiners, and case markings (and not capable of further inflection), the rest (58) include one of the two possible genitive markers (possessive and locative) and can create new declension cases. With this in mind, Basque can be called an agglutinative language.
– Basque has ergative case, which marks the subjects of transitive verbs. Linguistic theories and nomenclature about this phenomenon vary: some use the terminology 'ergative language,' others 'non-accusative.'
– The verb provides all the grammatical and agreement information about the subject, the two possible objects, as well as tense- and aspect-related information, etc.

3 Earlier Work and Current Migration to Open Source

The earlier two-level descriptions referred to above have been used in different applications, for example in a spell checker and corrector named Xuxen,¹ which is widely used among Basque speakers. In addition to the standard morphological description, additional finite-state transducers that extend the basic grammar and are usable for other tasks have been built [8]. These are:

– A standard analyzer that includes normalized morphemes and standard phonological rules.
– An enhanced analyzer for the analysis and normalization of linguistic variants (modeling dialectal usage and competence errors). This is a critical tool for languages like Basque, in which standardization is recent and dialectal lexical and phonological variants are common and widespread.
– A 'guesser,' or analyzer for words which have no lemma listing in the lexicon. For this, the standard transducer is simplified and the lexical entries in the open categories (nouns, proper names, adjectives, verbs, etc.) are removed. These entries, which constitute the vast majority of the lexicon, are substituted by a general automaton that accepts any combination of characters in the orthography, subject to some phonological constraints.

Migration to open-source technology of the various finite-state-based grammars for Basque is still an ongoing process. Much of what has been accomplished so far has been done in a semi-automatic way; some of this work is described in, e.g., [9].

For open-source spell checking and correction applications, we have used hunspell [10], since hunspell is directly supported in later versions of OpenOffice and Mozilla/Firefox. However, hunspell is limited in its descriptive power. It is, for instance, not possible to express phonological alternations independently of the lexicon, which means that the conversion from the original transducer-based descriptions is not at all straightforward. For this reason, we have also experimented with the internal support foma provides for spell checking and spelling-correction applications based on finite automata, and plan to incorporate this in the array of finite-state-based applications for Basque already available.

¹ http://www.xuxen.com

4 Porting Basque Grammars to Foma

The original source description of Basque morphology [7] was compiled with the Xerox toolkit [5], using a number of the formalisms that it supports. The lexicon specification language, lexc, was used for modeling the lexicon and constraining the morphotactics, and the two-level grammar language, twolc, was used for constructing a transducer that models the phonological and orthographical alternations in Basque.

As foma provides no method for compiling two-level rules, the first step in the migration consisted of translating the rules formerly compiled with twolc into the form of replacement rules supported by foma—largely identical to the rules supported by xfst. The original description of the Basque morphology was built prior to the introduction of publicly available tools to manipulate and compile sequentially composed replacement rules, and thus followed the two-level formalism. We were also aware that there has been a recent shift in preference toward sequential replacement rules rather than two-level rules in the design of morphological parsers [5]. This was part of the motivation for our decision to explore the replacement rule paradigm when reimplementing our grammars with open-source tools.

Porting grammars from one formalism to another is an interesting problem for which there are few resources to be found in the literature. This is especially true in the case of sequentially composed replacement rules and the two-level formalism. Grammars written in either of the two are of course compilable into finite-state transducers and are therefore equivalent in a sense, which in turn should motivate a comparison between the two from the point of view of the grammar developer. The most prominent aspects of such a comparison include differences in grammar size and grammar complexity, ease and clarity of rules and rule interactions, as well as ease of debugging rules.
In Table 1 we illustrate a simple case of debugging two-level grammars as opposed to debugging sequential rule grammars. A large part of the work involved in developing a two-level grammar consists of avoiding rule conflicts. Here, we have two rules in (a) that constrain the realization of k: the first rule dictates that k should elide before consonants, while the second rule states that it shall become voiced following a consonant. The first rule is exemplified by underlying–surface pairs such as horiektan:horietan, while the second handles alternations such as eginko:egingo. Unfortunately, as stated, the two rules conflict, as there is no unequivocal statement about what a k should correspond to in case it occurs with a consonant on both sides. Additional information needs to be provided for the proper compilation of the grammar, often in the form of restating the two rules with more complex and restricted contextual parts. By contrast, the ordered rules in (b), where the elision rule is assumed to apply before the voicing rule, give the correct forms without any need for additional specification about what occurs in cases where the contexts overlap. In many cases this kind of approach allows one to design an ordered rewrite rule grammar in a more isolated way, such that each rule targets one phonological generalization only—such as elision or devoicing—without mixing information from two arguably separate phenomena in both rules.

Table 1. Simple rule conflict cases which often resolve automatically when implemented as ordered rules

(a) k:0 _ Cons        (b) k -> 0 || _ Cons
    k:g Cons _            k -> g || Cons _
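The effect of the rule ordering in (b) can be simulated with two ordinary regex substitutions in Python. This is a toy illustration using our own ad-hoc consonant class, not foma itself; only the ordering argument is being demonstrated:

```python
import re

CONS = "bcdfghjklmnpqrstvwxz"  # ad-hoc consonant class for the example

def apply_ordered(word):
    """Apply the ordered rules of Table 1(b): first k-elision before a
    consonant, then k-voicing after a consonant."""
    word = re.sub(f"k(?=[{CONS}])", "", word)    # k -> 0 || _ Cons
    word = re.sub(f"(?<=[{CONS}])k", "g", word)  # k -> g || Cons _
    return word

print(apply_ordered("horiektan"))  # horietan  (elision fires)
print(apply_ordered("eginko"))     # egingo    (voicing fires)
```

In the overlapping case of a k flanked by consonants on both sides, the elision rule simply applies first by virtue of the ordering, so no extra specification is needed; that is exactly the conflict the parallel rules in (a) must resolve explicitly.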

In Table 2 we provide an example of the differences between the two descriptions. The example rules illustrate a number of phenomena. In Basque, the prefix ber can precede verbal roots. Before a vowel, the final character (r) of this prefix is doubled, but before a consonant the prefix ber changes into bir. In addition to this, if the first character of the root is h, this h disappears. If the prefix is expressed as beR (indicating a hard r), the chain beR+egin (a lexical expression) generates berregin, while beR+gai+tu generates birgaitu, and beR+has+i changes into berrasi.

In our experience, the sequential rules provided conceptual simplicity and were easier to debug than two-level rules. This was largely because one can avoid complex interactions in rules and their contexts of application, which in general one cannot when designing a two-level grammar. However, the order of the rules needs to be designed carefully, which is also a nontrivial problem.
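The sequential rules for the beR prefix can likewise be simulated as a cascade of regex substitutions applied in order. Again, this is our own toy re-implementation for illustration, not the actual foma grammar; `V` and `C` are ad-hoc vowel/consonant classes:

```python
import re

V, C = "aeiou", "bcdfghjklmnprstxz"

def ber_prefix(form):
    """Cascade for the beR prefix: h-deletion feeds r-insertion, and the
    rules must apply in exactly this order."""
    form = re.sub(rf"^(beR\+)h(?=[{V}])", r"\1", form)  # h -> 0 || # b e R + _ V
    form = re.sub(rf"R(?=\+[{V}])", "Rr", form)         # 0 -> r || R _ + V
    form = re.sub(rf"^be(?=R\+[{C}])", "bi", form)      # e -> i || # b _ R + C
    form = form.replace("R", "r")                       # R -> r
    return form.replace("+", "")                        # + -> 0

for w in ("beR+egin", "beR+gai+tu", "beR+has+i"):
    print(w, "->", ber_prefix(w))
# beR+egin -> berregin, beR+gai+tu -> birgaitu, beR+has+i -> berrasi
```

Note the feeding order: for beR+has+i the h must be deleted first so that the r-insertion rule subsequently sees a vowel after the morpheme boundary and can produce berrasi.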

4.1 Conversion Procedure

In converting the rules, we decided to follow a simple strategy. First, rules with limited contextual requirements (single-symbol simple contexts) were transformed into sequential rules. Also, rules that were very specific in their conditioning environments (such as those interacting with only one particular prefix) were converted. Following this step, rules making use of information not available at the surface level—abstract morphophonemes, diacritic symbols, or complex morphological information—were resolved. These rules fell naturally into the early parts of

Porting Basque Morphological Grammars to foma, an Open-Source Tool


Table 2. Simplified comparison between parallel and sequential rules in Basque morphology

    # parallel rules (a)
    R:r   _ +:0
    h:0   # b e R:r 0:r +:0 _ Vowel
    0:r   R:r _ +:0 (h:0) Vowel
    e:i   # b _ R +:0 Cons

    # sequential rules (b)
    h -> 0 || # b e R + _ Vowel
    0 -> r || R _ + Vowel
    e -> i || # b _ R + Cons
    R -> r
    + -> 0

the chain of compositions, since diacritics and abstract symbols could be deleted and would no longer be available for subsequent rules. In a way, the strategy was to move step by step from the lexical level of abstract diacritic symbols and morphophonemes toward the actual surface words. The most complex rules—those with multiple contexts and interactions with other rules—were placed at the end. These include rules such as e-epenthesis, r-epenthesis, and enforcing a final a in the lemmas. Within this last block of rules, the internal ordering of the rules was also much more important and required more care in the design process. During each of the above conversion steps, examples from the previously used two-level system were used for testing. After converting the two-level description to context-dependent replacement rules, porting the description to foma was straightforward because the lexc description is fully compatible. Two experiments were carried out: the first, morphology-oriented, with the whole lexicon containing the full morphological description of all the morphemes (including category, case, tense, person, etc.), and the second, spelling-oriented, with only the lexical description of the morphemes (surface level and indispensable marks for the rules). In both descriptions the same phonological rules were used. After this, more thorough testing and debugging was performed. We applied the two morphological analyzers (the original two-level one and the new one) to a corpus of 80,000 different word forms to find discrepancies by running the Unix tool diff on the results. In the end, we managed to reduce the discrepancies to only one analysis of one word, which was then found to be a mistake in the lexicon grammar. The lexicon had omitted a hard R for place names (all final R's in place names are hard in Basque).
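The diff-based regression check described above can be mimicked for any pair of analyzers. A small Python sketch, where the analyzer callables are stand-ins for the real systems rather than actual foma or Xerox interfaces:

```python
def discrepancies(analyze_old, analyze_new, wordlist):
    """Compare two analyzers over a word list, reporting per-word
    differences in their analysis sets.  Each analyzer is any callable
    mapping a word form to an iterable of analysis strings."""
    diffs = {}
    for word in wordlist:
        old, new = set(analyze_old(word)), set(analyze_new(word))
        if old != new:
            # record what only the old system produced and what only
            # the new system produced
            diffs[word] = (old - new, new - old)
    return diffs
```

Run over a large form list, an empty result means the ported grammar is output-equivalent to the original on that corpus.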


I. Alegria et al.

Some auxiliary applications, such as a spelling corrector module, had already been developed in the xfst replacement rule paradigm after the original two-level morphological grammar, and thus no conversion was required for constructing these transducers in foma, since foma compiled the original xfst rules to transducers identical to those produced by the Xerox tools.

4.2 Compatibility and Efficiency

In porting the Xerox-based grammatical descriptions to foma, we noted very few discrepancies in compatibility. Foma has no separate program for extensive debugging of lexicon specifications in the lexc format, but it is able to import lexc descriptions through the main interface. Our earlier lexicons were thus imported without changes, and compiled to transducers identical to those of Xerox's lexc: 2,321,693 states and 2,466,071 arcs for our more detailed lexicon, and 63,998 states and 136,857 arcs for the less detailed one. Compiling the complete system, which requires compiling the lexicon (89,663 entries), compiling 77 replacement rules, and composing all of these separate transducers, took 28.96s under lexc and xfst,2 and 15.39s with foma,3 on a 2.93GHz Pentium 4 computer with 2Gb memory running Linux. Foma, however, required far more temporary memory for compilation, with a peak usage of 788.2 Mb occurring while compiling the larger lexicon, while lexc used a maximum of 161.2 Mb for the same task.

5 Auxiliary Applications: Spelling Correction

One of the approaches to capturing certain kinds of spelling errors in Basque has been to identify predictable and frequently occurring types of misspellings and suggest corrections for these. Because of the fairly recent standardization of the orthography, and because of the amount of dialectal variation in Basque, cognitive errors of this type are not uncommon, and it is important for a spelling corrector to identify these errors accurately. One of the components of the Basque morphological system is a transducer that encodes a set of 24 hand-coded string perturbation rules reflecting errors commonly found in Basque writing, such as confusing a z and a c, an x and a z, and so on. This transducer is composed with an automaton encoding the possible surface forms of the morphology, providing a markup of possible misspellings, which is then composed with a filter that accepts only those misspellings that match a word in the lexicon (see Fig. 1). This error correction mechanism was originally modeled with xfst replacement rules, and so was directly portable to foma, and usable as such.
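The generate-and-filter mechanism just described can be sketched without transducers: nondeterministically apply each hand-coded confusion to the input, then keep only the candidates accepted by the lexicon. The confusion pairs and lexicon below are toy data, not the actual 24-rule grammar:

```python
def correct(word, confusions, lexicon):
    """Generate-and-filter spelling correction sketch.  `confusions`
    is a list of (wrong, right) substring pairs; each is applied at
    every position of every candidate accumulated so far, and the
    lexicon (here just a set) acts as the final filter."""
    candidates = {word}
    for wrong, right in confusions:
        for cand in list(candidates):
            i = cand.find(wrong)
            while i != -1:
                candidates.add(cand[:i] + right + cand[i + len(wrong):])
                i = cand.find(wrong, i + 1)
    return sorted(c for c in candidates if c in lexicon)
```

For instance, with a confusion deleting a spurious h, the misspelling zihurra yields the lexical word ziurra, mirroring the markup-then-filter composition of Fig. 1.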

5.1 Internal Support for Spelling Correction in Foma

A recent addition to the foma toolkit and API is an algorithm for quickly finding approximate matches given an input word and an automaton. The default metric of

2. xfst-2.10.29 (cfsm 2.12.29), and lexc-3.7.9 (2.12.29).
3. Version 0.9.6alpha.


[Fig. 1 shows the input word zihurra passed through a correction markup FST, yielding the candidates zihurra, zi/OH/urra, s/SZ/ihurra, x/XZ/ihurra, x/XZ/i/OH/urra, c/CZ/ihurra and c/CZ/i/OH/urra, which a morph filter FST reduces to the corrected word ziurra.]

Fig. 1. An illustration of the functioning of the part of the spelling corrector that recognizes typical orthographical errors. A correction markup filter nondeterministically changes the input into a number of possible candidate errors, after which the morphological filter retains only those that are actual words.

distance is the Levenshtein distance, i.e. minimum edit distance where character substitutions, deletions, and insertions all have a cost of 1 unit. However, foma also provides the possibility of defining separate costs for different types of string perturbation operations. This feature is convenient in that, given a morphological analyzer, one automatically has a spelling corrector available. In our experiments with foma, we have simply extracted the range (lower side in the terminology of [5]) of our morphological analyzer transducer, producing a cyclic finite-state automaton which encodes all the words that have a legitimate morphological analysis in the original system, directly producing a spelling corrector application.
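For reference, the default metric can be computed with the textbook dynamic program; this is a generic sketch of Levenshtein distance, not foma's implementation:

```python
def levenshtein(a, b):
    """Minimum edit distance where substitutions, deletions and
    insertions each cost 1 unit, row-by-row dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]
```

The misspelling zihurra is thus one deletion away from the lexical word ziurra, matching the cost foma reports in Fig. 2.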

foma[0]: regex MORPHO.l;
80512 states, 346119 arcs, Cyclic.
foma[1]: apply med
Using confusion matrix [Euskara]
apply med> zihurra
ziurra     Cost[f]: 1
zimurra    Cost[f]: 2
zigurra    Cost[f]: 2
zuhurra    Cost[f]: 2
bihurra    Cost[f]: 2

Fig. 2. Applying the minimum edit distance finder of foma produces spelling correction suggestions like those of our hand-coded rules seen in Fig. 1, given a similar specification in the form of a confusion matrix.


We have also done some preliminary experiments in modeling the cognitive errors described above, by specifying an additional weight confusion matrix for foma's minimum edit distance algorithm, giving each of the 24 string perturbation operations in our earlier separate finite-state correction grammar a low cost (1 unit), and all other operations a cost of 2 units, yielding a spelling corrector application very similar to the earlier hand-built one, although much easier to construct. An example of the interactive spelling correction is given in Fig. 2. In the future we hope to automatically derive the weights for different edit operations for increased accuracy, and to integrate the spell checker and spelling corrector into other applications, such as OpenOffice and Mozilla/Firefox, using foma's C language API.
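The weighting scheme just described (listed confusions at cost 1, everything else at cost 2) can be sketched as a small dynamic program over per-operation costs; the confusion table passed in is illustrative, not the actual 24-operation grammar:

```python
def weighted_med(a, b, confusion, default=2):
    """Edit distance with a confusion matrix: a substitution pair
    listed in `confusion` costs 1 unit, any other substitution,
    deletion or insertion costs `default` units."""
    INF = float("inf")
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0
    for i in range(n + 1):
        for j in range(m + 1):
            if i < n:                       # delete a[i]
                d[i + 1][j] = min(d[i + 1][j], d[i][j] + default)
            if j < m:                       # insert b[j]
                d[i][j + 1] = min(d[i][j + 1], d[i][j] + default)
            if i < n and j < m:             # match or substitute
                if a[i] == b[j]:
                    cost = 0
                elif (a[i], b[j]) in confusion:
                    cost = 1
                else:
                    cost = default
                d[i + 1][j + 1] = min(d[i + 1][j + 1], d[i][j] + cost)
    return d[n][m]
```

A z-for-c confusion thus outranks arbitrary substitutions, so cognitively likely corrections surface with lower total cost.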

6 Conclusion

We have described a segment of an ongoing process to migrate Basque natural language processing tools to open-source technology—that of porting a wide-coverage morphological description of the language to compile with the foma toolkit. This entailed rewriting a formerly two-level grammar into sequential replacement rules. We also hope to address the porting of other applications—such as a spelling corrector—with foma.

Acknowledgements. The first author has been partially funded by the Spanish Ministry of Education and Science (OpenMT: Open Source Machine Translation using hybrid methods, TIN2006-15307-C0301). We wish to thank the anonymous reviewers for helpful comments and suggestions.

References
1. Hulden, M.: Foma: a finite-state compiler and library. In: EACL 2009 Proceedings, pp. 29–32 (2009)
2. Koskenniemi, K.: Two-level morphology: A general computational model for word-form recognition and production. Publication 11, University of Helsinki, Department of General Linguistics, Helsinki (1983)
3. Karttunen, L., Koskenniemi, K., Kaplan, R.M.: A compiler for two-level phonological rules. In: Dalrymple, M., Kaplan, R., Karttunen, L., Koskenniemi, K., Shaio, S., Wescoat, M. (eds.) Tools for Morphological Analysis. CSLI, Palo Alto (1987)
4. Kaplan, R.M., Kay, M.: Regular models of phonological rule systems. Computational Linguistics 20(3), 331–378 (1994)
5. Beesley, K., Karttunen, L.: Finite-State Morphology. CSLI, Stanford (2003)
6. Aldezabal, I., Alegria, I., Artola, X., Díaz de Ilarraza, A., Ezeiza, N., Gojenola, K., Urkia, M.: EUSLEM: Un lematizador/etiquetador de textos en Euskara. In: Actas del X Congreso de la SEPLN, Córdoba (1994)


7. Alegria, I., Artola, X., Sarasola, K., Urkia, M.: Automatic morphological analysis of Basque. Literary & Linguistic Computing 11(4), 193–203 (1996)
8. Alegria, I., Aranzabe, M., Ezeiza, A., Ezeiza, N., Urizar, R.: Using finite state technology in natural language processing of Basque. In: Watson, B.W., Wood, D. (eds.) CIAA 2001. LNCS, vol. 2494, pp. 1–11. Springer, Heidelberg (2002)
9. Alegria, I., Ceberio, K., Ezeiza, N., Soroa, A., Hernandez, G.: Spelling correction: from two-level morphology to open source. In: Calzolari, N., Choukri, K., Maegaard, B., Mariani, J., Odijk, J., Piperidis, S., Tapias, D. (eds.) Proceedings of the Sixth International Language Resources and Evaluation (LREC 2008), Marrakech, Morocco. European Language Resources Association (ELRA) (May 2008)
10. Németh, L., Trón, V., Halácsy, P., Kornai, A., Rung, A., Szakadát, I.: Leveraging the open source ispell codebase for minority language analysis. In: Proceedings of SALTMIL (2004)

Describing Georgian Morphology with a Finite-State System

Oleg Kapanadze

Tbilisi State University, Tbilisi, Georgia
oka@caucasus.net

Abstract. In the paper, application of the Finite State Tools to one of the Southern Caucasian languages, Georgian, is discussed. In Georgian, as in many non-Indo-European agglutinative languages, concatenative morphotactics is impressively productive due to its rich morphology. The presented Georgian Language Morphological Transducer is capable to parse all theoretically possible forms for the lemmata of Georgian nouns, pronouns, adjectives, adverbs, numerals, functional words, and for most of the lemmata from 72 verb sets.

1 Introduction

The academic grammars and dictionaries for the Georgian language abound [1–3]; though, this does not mean that there exists support for computational applications involving this language, since these resources are not available in a form that makes them applicable for computational processing.
The main task of the outlined endeavour was to prepare and conduct a feasible derivation of a lingware - the grammar and lexis - for text analysis and generation in the Georgian language, based on the corresponding knowledge about other languages in the Finite-State automata paradigm. Finite-State techniques have been very popular and successful in computational morphology and other lower-level applications in natural language engineering. The basic claim of the finite-state approach is that a morphological analyzer for a natural language can be implemented as a data structure called a Finite-State Transducer. The FSTs are bidirectional, principled, fast, and (usually) compact. Defined by the linguists using declarative formalisms, and created using algorithms and compilers that reside within a pre-written finite-state implementation, finite-state systems constitute admirable examples of the separation of language-specific rules and a language-independent engine. In applications where finite state methods are appropriate, they are extremely attractive, offering mathematical elegance that translates directly into computational flexibility and performance. An early finite state system, Two-Level Morphology, was developed by K. Koskenniemi (1983). It gave linguists a way to do finite state morphology before there was a library of finite state algorithms and before compilers for alternation rules were developed. Without a composition algorithm, rules could not be cascaded, but were instead organized into a single

A. Yli-Jyrä et al. (Eds.): FSMNLP 2009, LNAI 6062, pp. 114–122, 2010.
© Springer-Verlag Berlin Heidelberg 2010


level, applying in parallel between the two “levels” of the model: the lexical level and the surface level. Many linguists have tried to use two-level morphology but had to give it up, often claiming that two-level morphology does not work for certain types of natural languages [4]. At present, the choice of finite state implementations and toolkits is quite broad and includes PC-KIMMO, the AT&T FSM Library, AT&T Lextools, the FSA Utils 6 package, the Xerox Finite-State Calculus (xfst), etc.
We used the latter toolkit (xfst) as an implementation environment for development of a Georgian language transducer. This product has been successfully applied to English, French, Spanish, Portuguese, Italian, Dutch, German, Finnish, Hungarian, Turkish, Danish, Swedish, Norwegian, Czech, Polish, Russian, Japanese. Research systems include Arabic, Malay, Korean, Basque, Irish and Aymara.

2 Analyzing the Morphology of Georgian

Georgian grammar is not the main focus of this discussion. However, to give the reader an insight into what sort of grammar for the Georgian language should be implemented using the Finite State approach, some information about the grammatical categories of Georgian nouns and verbs is necessary.

2.1 The Structure of Nouns, Adjectives and Pronouns

Nouns. The noun wordform's structure in Georgian is as follows:

    STEM + PLURAL + CASE + EMPH.VOC. + POSTFIX + EMPH.VOC.
           (eb)     7 options  (a)     9 options   (a)
           or archaic (n/t)

The structural units introduced in italics are optional. There are two variants of the PLURAL marker: eb for the modern Georgian, and the archaic n/t variations in different cases of declension. There are seven CASE markers, of which most have two allomorphs (and 9 POSTFIXes):

    Nominative:    -i, 0
    Ergative:      -ma, -m
    Dative:        -s
    Genitive:      -is, -s
    Instrumental:  -iT, -T
    Ablative:      -ad, -d
    Vocative:      -o, 0

Standard academic grammars of Georgian list up to 21 classes of noun stems, including: stems ending with consonants, stems ending with vowels, stems ending with the vowel -i, truncated stems, reduced stems, etc. The rest of the classes adopt an irregular declension type. In the version of a noun transducer presented


here, the most frequent ones are stems ending with a consonant and stems ending with the vowel a. They include 5,642 and 8,437 nouns respectively.
The Georgian noun analysis and generation module based on the FST tools uses flag diacritics. The number of citation forms of the Georgian noun lexicon exceeds 20.000 lemmata, and the Morphological Transducer is capable to parse all theoretically possible noun forms. Typical examples of output of a FST parse are given below:

(1) kacebisTvis “for the men”
    stem={kac}, number=pl{eb}, case=genitive.
    postfix_benefactive_for{Tvis}, cat=N

Adjectives. In the Georgian language the adjectives have the same morphological structure as the nouns. The only difference is the absence of plural forms for adjectives, except the cases when they are “substantivized” and can be represented in a NP-phrase as the noun substitute, e.g.

(2) mwaneebi “(lit.) the Greens [party]”

(3) wiTlebi “(lit.) the Reds [army]”

The lexicon of adjectives in the Georgian FST transducer consists of ca. 17.500 lemmata. The numerals are declined like adjectives. The output of the parsing procedure delivers the same feature values as for the nouns, except the feature cat, which is marked as cat=ADJ.
Pronouns. The pronouns, as in all languages in general, are relatively few; though, their morphological structure is rather complex, and the output POS as a tag may have 11 different options depending upon the pronoun type. As for adverbs and functional words, they are uninflected.

2.2 The Verb System of Georgian

The Georgian verbal patterns are considerably more complex than those of nouns. As the background for the description, we draw on the widely accepted grammatical tradition, according to which five classes of verbs are distinguished in Georgian:
– Transitive verbs (C1), sometimes known as active verbs. While most of the verbs in this class are transitive, as suggested by the class name, a few are intransitive verbs which inflect like transitives. Class 1 verbs generally have a subject and a direct object.
– Intransitive verbs (C2). Intransitive verbs only take a subject, not a direct object (though a few govern an indirect dative object). Most verbs in this class have a subject that does not perform or control the action of the verb. The passive voice of Class 1 transitive verbs belongs in this class too.


– Medial verbs (C3), sometimes known as the active-medial verbs. They differ from Class 1 verbs in that most denote intransitive activities, and so never take a direct object, but unlike Class 2 verbs, medial verbs mark their subject using the ergative case.
– Inversion verbs (C4), sometimes known as indirect verbs. These verbs mark the subject with the dative case and the direct object with the nominative, a pattern known as inversion. Most Class 4 verbs denote feelings, emotions, sensations, and states of being that endure for periods of time.
– Stative verbs. These stative intransitives sometimes are called ‘passives of state.’ Some of the Georgian language researchers do not consider them as a class itself, since their morphological structure is very similar to that of indirect verbs.

Screeves. The verb structure of Georgian is complicated, especially when compared to that of most Indo-European languages. In English, for example, the verb system features tense, person and number. This is also generally true for Georgian, but not exactly similar. Rather than using the terms “tense”, “aspect”, “mood”, etc. separately, the Georgian verb grammar is built according to a syntactico-morphological principle around a construct called Series that is described using the concept of screeve, from Georgian mts'k'rivi “row” [1, 2]. There are 3 Series established according to the syntactic features of Subject or Subject/Object relations reflected in a verb form. A screeve is a set of cells in the verbal paradigm marked with a constant tense, aspect and mode. A screeve contains a set of cells, one cell for each Subject/Object Person/Number combination. The number of cells in a screeve depends on a valency of the specific verb form: If it is a monovalent (intransitive) verb, then a screeve is a set of six cells in the verbal paradigm, one cell for each subject person/number combination (subject1/subject2/subject3, singular/plural). If the verb is bi- or trivalent (transitive or bitransitive), then the screeve is a set of 22 cells reflecting Person and Subject/Object relations marked in a verb with the attached syntactic frames (or argument structure such as Subject + Object) introduced below:

(4) Bivalent/Trivalent Combinations
    Subj1Sg  +Obj2Sg +Obj2Pl                  +Obj3
    Subj1Pl  +Obj2Sg +Obj2Pl                  +Obj3
    Subj2Sg  +Obj1Sg +Obj1Pl                  +Obj3
    Subj2Pl  +Obj1Sg +Obj1Pl                  +Obj3
    Subj3Sg  +Obj1Sg +Obj1Pl +Obj2Sg +Obj2Pl  +Obj3
    Subj3Pl  +Obj1Sg +Obj1Pl +Obj2Sg +Obj2Pl  +Obj3

There are 11 screeves spread across 3 Series: 6 screeves in the first Series, 2 screeves in the second and 3 screeves in the third. The first Series is subdivided into two Subseries – Present and Future. Each Subseries comprises 3 screeves.


Each verb screeve is formed by adding a number of prefixes and suffixes to the verb root. Certain affix categories are limited to certain screeves. In a given screeve, not all possible markers are obligatory. An active transitive verb root in all 3 Series can produce 242 finite verb forms, some of which are ambiguous in the respect of Person and Subject-Object relations, that is, a specific verb form can receive more than one syntactic frame. Besides, the majority of the active transitive verb roots can be converted and inflected as intransitive bivalent or monovalent verbs. Theoretically, a single Georgian verb root is capable to produce more than 1000 different verb forms.
Ranks. The syntagmatic structure of an inflected Georgian verb can be visualized as a linear sequence of positions, or “slots”, before and after the root position, which is referred to as slot R. The simplified model of the Georgian verb structure has a total of nine slots. Each slot can be filled by a zero or one of the affixes from the set. The ranks are named by Latin upper cases and have an order as in (5)

(5) A + B + C + R + D + E + F + G + H.

Each rank consists of morphological elements which are in complementary distribution in a verb form sequence, and their combinations generate an inflected finite verb form. The ranks in the table are composed of the following verb components:
– Rank A – Preverbs. These can add either directionality or an arbitrary meaning to the verb. Preverbs appear in the future, past and perfective screeves.
– Rank B – Prefixal nominal markers. They indicate which person performs the action (agent) or for which person the action is done.
– Rank C – Pre-radical vowels. They have a number of functions, but in some cases no apparent function can be assigned to the pre-radical vowel.
– Rank R – Verb roots. The verb roots may be analyzed as a two-dimensional matrix with 10 horizontal (morphological) and 15 diagonal (syntactic-grammatical) features. From a potential 150 verb root sets only 72 are attested in Georgian, including all irregular verbs. The rest of the matrix cells are empty. Some sets in the matrix cells contain just a few verbal roots.
– Rank D – Thematic Suffix or Present/Future Stem formants. Thematic suffixes are present in the present and future screeves for Class 1-3 verbs, but are absent in the past and mostly absent in the perfective screeves.
– Rank E – Causative. The Georgian causativity is expressed morphologically. The causative marker obligatorily co-occurs with the version marker in rank C.
– Rank F – Imperfective / Root augment. A marker or a root augment is characteristic of the imperfect, conditional, present subjunctive and future subjunctive.
– Rank G.


1. The screeve markers. These come before the plural marker slot. They are seldom sufficient in themselves to identify the screeve unambiguously. The screeve markers are usually omitted before the third person pronominal marker.
2. Suffixal nominal markers. The transitive verbs use the suffixal nominal marker for the third person singular in present and future screeves. Intransitive verbs, the past and perfective screeves of the transitive and medial verbs, and indirect verbs employ sets of vowels.
3. Auxiliary verbs. They are only used in the present indicative and perfective screeves of indirect verbs, and in the perfective screeve of intransitive verbs when the subject is first or second person(s). They are forms of the verb “to be” that actually are transformed into verb suffixes.
– Rank H – Plural Marker. Depending on which set of nominal markers is employed, the appropriate plural suffix is added. It can refer to either subject or object.

In the linear rank order the elements of rank R have the highest priority in the sense of generative constraints. Derived from the verb class, they license most of the elements from the other ranks. Especially, this concerns the elements of ranks C and B. Rank R in combination with the suffixes from rank C determines the syntactic valency (Subject and Object arguments) of the Georgian finite verb form. A typical result of the Georgian verb analyses contains a morphological structure of a finite verb and its syntactic valency, e.g.

(6) vyidiT “we sell it/them”
    a. Subj1/v + yid + theme/i + T = atsmko/Subj1Pl + Obj3Sg
    b. Subj1/v + yid + theme/i + T = atsmko/Subj1Pl + Obj3Pl,

where Subj1/v indicates the 1st person subject represented as v-, a verbal root yid, a thematic marker theme represented as -i, and a plural marker -T. This pattern equals to a screeve “atsmko” (present Indicative) with a subject of the 1st person plural indicated by SubjPl1 and an object of the 3rd person that is either singular or plural. Another example is a verb analysis with a possible pattern for a Class 2 prefixal passive verb

(7) damexatebodes “if it will be painted for me”
    a. Prv/da + Obj1Sg/m + Ps/e + xat + eb + ode + s = kavshirebiti-1/Subj3Sg + Obj1Sg.

3 Finite-State Implementation

3.1 Overview

Resources. The Georgian FST morphological transducer development started in 2004 as a supplement issue to the MA course in Computational Linguistics


which I conducted at the Tbilisi State University, Georgia. The lexical input to the developed transducer has been taken from the Georgian language explanatory dictionary. Several groups of students contributed to the lexicographic part of the project. I have constructed the main frame of program modules, which had been extended and tested by the students in the practical sessions of the mentioned MA course.
There are 7 lexicons that correspond to the 7 modules (or transducers). This is a reflection of a traditional part-of-speech partition adopted in the Georgian grammar. But the number of classes varies from sublexicon to sublexicon, e.g. for pronouns there are 11 classes, whereas for adverbs just 6.

3.2 Approach

Concatenative Morphology. For morphology construction we used concatenation procedures, since the Georgian language is an agglutinative language with certain elements of the inflectional phenomenon, which means that affixes each express a single meaning and they usually do not merge with each other or affect each other phonologically. To our knowledge, the languages with a similar structure and FSM morphology are Turkish, Hungarian and Finnish. But we never used analogy with the above-mentioned languages in our project.
The approach that we utilized in constructing a Georgian FST transducer draws on the concatenative classes of formants in a word template. In the process of the transducer implementation we applied the flag diacritics for handling constraints that are feature-based rather than phonological. So, we did not use phonological/rewrite rules to derive allomorphs for noun declension; instead, for the case marker allomorphs' constraint we utilized flag diacritics. They considerably simplify lexc descriptions, keep the networks smaller and keep the transducer at a ca. 3 Megabytes size.
Flag Diacritics. We utilized the well-known mechanism of flag diacritics [4]. This mechanism is an ideal means for featuring the Georgian morphology in the FST context. And, in general, it is very effective for its grammar modeling, since flag diacritics capture elegantly in the Xerox Calculus framework the same things that linguists had outlined by other means. There are 178 flag diacritics, most of them giving multiple values. They can handle alternation constraints that are feature-based: In noun analysis, they allow to constrain the case marker's allomorphs. In verb analysis they constrain different verb classes, voice, conjugation types, etc., especially when there are separated dependencies. A prominent feature of Georgian morphology is long distance dependencies, in the sense that the elements of rank B, and particularly of rank C, license other affixes after rank R (verb root). Using sets of flag diacritics we were able not only to constrain long distance elements in morphotactics but also to “model” many morphosyntactic phenomena specific to the Georgian language. In addition, by way of combining flag diacritics for different values of rank C elements (pre-radical vowel) with the verb class feature, the parse output determines syntactic


subcategorization frames for each Georgian finite verb form. Application of the chains of flag diacritics that constrain a verb root appeared the most effective way to block illegal paths resulting from the homonymous roots, which we named “the verb roots' families”. This phenomenon occurs when a single verb root may appear within different verb classes that are inflected in many ways and potentially may cause overgeneration and overrecognition.

3.3 Evaluation

Since we have to use an encoded Georgian script, hitherto we were not able to test the transducer on a freely available Georgian UNICODE text. Therefore, for each noun, pronoun, adjective, verb, etc. class, test files were compiled manually. They comprise all possible inflected forms that can be produced by a Georgian language conventional grammar in the process of noun declension and verb conjugation.
The recall of nouns is high, and the transducer can handle without exception all nouns from the 21 classes. For the 72 classes of verb roots with ca. 5000 lemmata, an analysis recall is ca. 80%. The verb parse output suffers from ambiguity, which is a consequence of morphological homonymy, a phenomenon observed in the Georgian language. For example, a single finite verb construction corresponds to the forms from different screeves with the different Subject-Object combinations and syntactic frame. This ambiguity can not be resolved on the morphological level; rather, it is a task of the shallow parsing module.
The reverse procedure (generation of the surface verb form from the lexical representation) still needs refinement. In the meantime, we are testing the transducer with texts from repositories that accumulate freely available corpora in the UTF8 format, which needs to be converted into a Romanized ASCII encoding. In the current version of the lexical transducer, that describes most of the words from the modern Georgian language explanatory dictionary, there are 49089 states and 105207 transitions (arcs). Recompilation of the lexicon takes 5-7 minutes on an average PC (Intel – 2.00 GHz, RAM 1 GB, 32 bit operating system).

4 Future Plans

4.1 Supporting the Non-Roman Script

By now the transducer reads the Romanized script, which is a mixture of 25 lower and 8 upper cases of Latin script used in general as a standard for the Georgian script encoding. Based on Dr. K. Beesley's recommendations last year, I have started working on development of an editor version which will be capable to handle both the Georgian and the Latin scripts simultaneously in the program code writing process. This would allow the transducer to work with the UNICODE standard Georgian Sylfaen fonts and analyze a freely available test corpus.


4.2 Shallow Parsing

The result of the parsing procedure - a tagged and lemmatized output of the source Georgian plain text - will be fed to the shallow parsing/syntactic chunking engine. The issue will be driven by the verb parse output supplemented by the verb forms' consequent syntactic frames. The attached Subject + Object argument structure will be transformed into Subject, Direct and/or Indirect Object case marking patterns. In the meantime a corresponding module is under development which will provide an input for the shallow parsing procedures.

4.3 Integration to Other Resources

The presented FST transducer as a Morphological Parser is a part of the linguistic tools for the Georgian language. We plan to merge it with the other reusable Georgian language resources to contribute from the multilingual perspective to the Language Engineering issue of this low-density language.

References
1. Aronson, H.I.: Georgian: A Reading Grammar. Corrected edition. Slavica Publishers, Columbus, Ohio (1990)
2. Tschenkéli, K.: Einführung in die georgische Sprache. Volume 1–2. Amirani Verlag, Zürich (1958)
3. Melikishvili, D., Humphries, J.D., Kupunia, M.: The Georgian Verb: A Morphosyntactic Analysis. Dunwoody Press, Springfield, VA (2008)
4. Beesley, K.R., Karttunen, L.: Finite State Morphology. CSLI Publications, Stanford, California (2003)

Finite State Morphology of the Nguni Language Cluster: Modelling and Implementation Issues

Laurette Pretorius¹ and Sonja Bosch²

¹ School of Computing, University of South Africa, Pretoria
  [email protected]
² Department of African Languages, University of South Africa, Pretoria
  [email protected]

Abstract. The paper provides an overview of a project on computational morphological analysers for the Nguni cluster of languages, namely Zulu, Xhosa, Swati and Ndebele. These languages are agglutinative and lesser-resourced. The project adopted a finite-state approach, which is well-suited to modelling both regular morphophonological phenomena and linguistic idiosyncrasies. The paper includes a brief overview of the morphology of this cluster of languages, then focuses on how the various morphophonological phenomena of Zulu are modelled and implemented using the Xerox finite-state toolkit. The bootstrapping of the Zulu morphological analyser prototype, ZulMorph, to obtain analyser prototypes for Xhosa, Swati and Ndebele is outlined, and experimental results are given.

1 Introduction

The work reported on in this paper concerns the development of morphological analysers for the Nguni cluster of languages. This cluster belongs to the South-eastern Bantu language zone, and comprises four of the official languages of South Africa, namely Zulu, Xhosa, Swati and (Southern) Ndebele. The purpose is to provide an overview of the development of a finite-state morphological analyser for Zulu, and the subsequent bootstrapping of this analyser to provide analysers for related under-resourced languages (Xhosa, Swati and Ndebele). The work is based on the finite-state morphology approach as described by Beesley and Karttunen [1], which allows modelling of linguistic rules as well as idiosyncratic behaviour.

The structure of the paper is as follows: An overview of the morphology of the Nguni languages is given, followed by a brief discussion of typical linguistic constraints that need to be captured by a morphological analyser for Zulu. The third section focuses on specific features of the Xerox finite-state toolkit that we found useful for modelling and implementing linguistic phenomena of the Nguni languages. A discussion on the bootstrapping of the Zulu analyser for the other languages is given in Sect. 4, followed by a conclusion.

A. Yli-Jyrä et al. (Eds.): FSMNLP 2009, LNAI 6062, pp. 123–130, 2010.
© Springer-Verlag Berlin Heidelberg 2010

2 Morphology of the Nguni Cluster of Languages

2.1 Overview

The morphological structure of the Nguni languages, as of all the Bantu languages, is based on two principles, namely the nominal classification system and the concordial agreement system. According to the nominal classification system, nouns are formally marked by class prefixes. These noun class prefixes have, for ease of analysis, been assigned numbers by Bantu language scholars. Table 1 gives examples of Meinhof’s [2, p. 48] numbering system of some of the noun class prefixes as represented in all four Nguni languages.

Table 1. Noun class prefixes in the four Nguni languages

Class  Zulu              Xhosa            Swati            Ndebele           Gloss
1      umu-ntu           um-ntu           um-ntfu          um-ntu            ‘person’
2      aba-ntu           aba-ntu          ba-ntfu          aba-ntu           ‘persons’
5      i(li)-tshe        ili-tye          li-tje           ili-tjhe          ‘stone’
6      ama-tshe          ama-tye          ema-tje          ama-tjhe          ‘stones’
9      i(m)-mpungushe    i(m)-mpungutye   im-p(h)ungushe   i-p(h)ungutjha    ‘jackal’
10     izi(m)-mpungushe  ii(m)-mpungutye  tim-p(h)ungushe  iim-p(h)ungutjha  ‘jackals’

Noun prefixes generally indicate number, with the uneven class numbers designating singular and the corresponding even class numbers designating plural. The similarity of noun class prefixes across language boundaries within the Nguni language cluster should be noted in Table 1. Noun prefixes play an important role in the morphological structure of Nguni languages in that they link the noun to other words in the sentence. This linking is manifested by a system of concordial agreement, which governs grammatical correlation in verbs, adjectives, possessives, pronouns and so forth, as illustrated by the following Zulu example:

(1) Abantu abaningi bangazithengisa izimoto zabo.
    people who are many they may sell them cars of them
    ‘Many people may sell their cars.’

The mainly agglutinating Nguni languages make extensive use of prefixes as well as suffixes in the formation of words. The root is the constant core element from which words or word forms are constructed, while the rest is inflection and derivation. Orthographically, a conjunctive writing system is adhered to. Each linguistic word consists of a number of bound parts or morphemes that can never occur independently as separate words. The two types of morphemes that are generally recognised are roots (the lexical core) and affixes (grammatical meaning or function).

The morphological structure of the verb in Nguni languages is considerably more complex than that of the noun. A number of slots, both preceding and following the verb root, may contain numerous morphemes that function as derivations, inflection for tense-aspect and markers of nominal arguments. Examples are cross-reference of the subject and object by means of class- (or person/number-) specific object markers, locative affixes, morphemes distinguishing verb forms in clause-final and non-final position, negation, tense markers, etc. As in the case of the nominal classification system, the complexities of the verb are comparable across language boundaries, specifically Nguni language boundaries.

2.2 Typical Constraints

On the one hand, typical linguistic constraints or rules that should be captured by a morphological analyser for Zulu include the following:

– Accurate representation of the nominal classification system as illustrated in Table 1;
– Separated or long-distance dependencies (e.g. affixes that cannot co-occur in the same word, i.e. incompatible morphemes such as the present tense morpheme -ya- with other tense morphemes, or with negative morphemes);
– Roots showing irregular morphotactic behaviour (e.g. a certain group of verb roots is restricted to a specific suffix in the formation of imperatives);
– Circumfixes, that is, co-ordinated pairs consisting of a prefix and a suffix (e.g. a negative prefix which requires a negative suffix);
– Other constraints that are feature-based rather than phonological (e.g. the constraint on copula prefixes to combine with certain noun classes);
– Forward-looking feature requirements, such as the presence of one morpheme requiring the presence of another morpheme (e.g. an object concord requires a verb root).

On the other hand, constructions which are not strictly rule-based cause overgeneration in Zulu computational morphology. By not strictly rule-based we mean that there is no (set of) word formation rule(s) that accurately models the constructions in real language. Although such constructions may be well-formed according to the rules, they are in fact invalid strings or illicit structures in terms of the real language. A broad coverage morphological analyser should also cover such idiosyncratic morphological behaviour as comprehensively as possible. Two examples of idiosyncratic behaviour are:

– The formation of locatives derived from nouns: in noun classes 1, 1a, 2 and 2a the locative is formed by prefixing a locative prefix ku- (i.e. rule-based); in noun classes 3 to 10 the locative prefix e- is usually followed by a locative suffix -ini (rule), but for certain noun roots (the exceptional cases) the locative is formed by prefixation of e- only (i.e. no general rule applies). This behaviour is idiosyncratic, and attested forms and their roots are obtained from dictionaries and corpora;
– Extensions of verb roots by means of suffixes: the basic meaning of a verb root in Zulu may be modified by suffixing so-called extensions to the verb root, functioning as inflectional morphemes. However, not all roots may take all extensions arbitrarily, due to semantic restrictions.

3 Approach for Zulu

Following the Xerox tradition, the morphotactics (morpheme sequencing) are modelled in the high-level declarative language lexc (lexicon compiler), as cascades of so-called (morpheme) continuation classes. lexc is well-suited to creating finite-state lexicons for natural language consisting of typically thousands of word roots and affixes. The morphophonological alternations (sound and orthographic changes) that occur in Zulu morphology are modelled as regular expressions using xfst, the compiler for the Xerox regular expression calculus. The finite-state transducers that are obtained by compiling the lexc and xfst scripts are finally composed to obtain the transducer that represents the morphological analyser.

Conceptually, the upper language of the lexc transducer is the analysis language in which the morphological analyses are rendered. The lower language of the xfst transducer represents the surface forms/words in the real natural language. The composed transducer then has the analysis language as upper language and the surface forms as lower language.

3.1 Modelling and Implementation

Flag Diacritics. A particularly useful device offered by the Xerox finite-state toolkit as an extension is the so-called flag diacritics. Flag diacritics provide a means of feature-setting and feature-unification that keeps transducers small, enforces desirable results such as linguistic constraints, and simplifies grammars. In particular, they are used to block illegal paths at run time by the analysis and generation routines. In lexc and xfst they are treated as multi-character symbols spelt according to the two templates

(2) a. @operator.feature@, and
    b. @operator.feature.value@.

Typical operators that we use are Unification Test, Positive (Re)Setting, Negative (Re)Setting, Require Test, and Disallow Test. For a detailed discussion, see [1]. In the Zulu morphological analyser under discussion, we made extensive use of the Xerox flag diacritics for the modelling of the morphotactics of linguistic constraints. Different modelling requirements were met by employing a number of different and appropriate operators. In the following subsections the implementation of a selection of constraints, as indicated in 2.2, is explained and illustrated by means of Zulu examples. For a more detailed discussion, see [3] and [4].

Accurate Representation of the Nominal Classification System. A simplified lexc fragment:

Multichar_Symbols ... [NPrePre1] [BPre1] ... [DimSuf] [AugSuf] ...
    @U.CL.1-2@ ^U ^MU ... ...

LEXICON NounPrefix
u[NPrePre1]mu[BPre1]@U.CL.1-2@:^U^MU@U.CL.1-2@   NStem;
a[NPrePre2]ba[BPre2]@U.CL.1-2@:^A^BA@U.CL.1-2@   NStem;
u[NPrePre14]bu[BPre14]@U.CL.14@:^U^BU@U.CL.14@   NStem;

LEXICON NStem
fana NClass1-2;
fana NClass14;
ntu  NClass1-2;
ntu  NClass14;

LEXICON NClass1-2
@U.CL.1-2@ #;

LEXICON NClass14
@U.CL.14@ #;

Aspects of note: the cascade of continuation classes to implement morpheme sequencing; the presence of the upper language (the analysis language) and the lower language (the intermediate language) separated by a colon (:); the use of multi-character symbols to extend the alphabets of both the analysis language and the intermediate language, in particular the symbol ^MU, which denotes noun prefix information and is used in the xfst rule (below) to distinguish this morpheme from other occurrences of -mu- in words.

Two xfst operators that are used extensively in modelling morphophonological alternations are the composition of two regular relations, A .o. B, and conditional replacement, A -> B || L _ R, where A, B, L and R are all regular languages, with the latter two representing the left and the right contexts. Also note the order (more specific to more general) in which the various rules in the composed rule below are allowed to fire. (No attempt is made to explain the entire xfst syntax; cf. [1].)

A typical alternation rule (% is used to literalise the ^ symbol and ^BR and ^ER are multi-character symbols):

define rulemu %^MU -> 0 || _ %^BR m
  .o. %^MU -> m || _ [%^BR Syllable Syllable | %^BR Syllable %^ER [Vowel | Syllable]]
  .o. %^MU -> m || _ %^BR Vowel
  .o. %^MU -> m u;
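To make the specific-to-general ordering concrete, the cascade can be mimicked with ordered regular-expression substitutions. This is an illustrative sketch only: the '.' syllable marker, the toy clean-up of ^U, and the sample roots are our own simplifications, not part of ZulMorph, whose Syllable and Vowel are proper regular languages.

```python
import re

def apply_rulemu(form: str) -> str:
    """Simulate the ordered rulemu cascade on intermediate forms
    such as ^U^MU^BRfana^ER (syllables marked with '.')."""
    # Rule 1 (most specific): ^MU is deleted before a root beginning with m.
    form = re.sub(r"\^MU(?=\^BRm)", "", form)
    # Rule 2: ^MU -> m before a polysyllabic root (two or more syllables).
    form = re.sub(r"\^MU(?=\^BR[a-z]+\.[a-z]+)", "m", form)
    # Rule 3: ^MU -> m before a vowel-initial root.
    form = re.sub(r"\^MU(?=\^BR[aeiou])", "m", form)
    # Default (most general): ^MU -> mu elsewhere.
    form = form.replace("^MU", "mu")
    # Clean-up, assumed here: ^U realises as u; markers and '.' are removed.
    for marker in ("^BR", "^ER", "."):
        form = form.replace(marker, "")
    return form.replace("^U", "u")

print(apply_rulemu("^U^MU^BRfa.na^ER"))  # umfana: polysyllabic root takes m-
print(apply_rulemu("^U^MU^BRntu^ER"))    # umuntu: monosyllabic root keeps mu-
```

Running the more specific rules first is what prevents the default ^MU -> mu from firing everywhere, mirroring the .o. composition order in the xfst rule.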

Separated or Long-Distance Dependencies. The long present tense morpheme -ya- is incompatible with negative morphemes. Correct analyses for bayakhumbula ‘they remember’ (positive) and abakhumbuli ‘they do not remember’ (negative) therefore are

(3) ba[SC2]ya[LongPres]khumbul[VRoot]a[VerbTerm] and
    a[NegPre]ba[SC2]khumbul[VRoot]i[VerbTermNeg].

However, the ungrammatical form *a-ba-ya-khumbul-i is not analysed since -ya- is incompatible with the two negative morphemes -a- and -i. For the implementation of this constraint the P-, U- and R-operators are employed with respect to the NEG feature to enforce the appropriate behaviour.

Forward-Looking Feature Requirements. The presence of one morpheme requires the presence of another morpheme; for instance, an object concord must be followed by a verb root. This requirement is significant in the Bantu languages since verb-like constructions may also be formed from other roots or stems, which do not allow object concords. The copula construction may be preceded by a subject concord (e.g. u-ngumuntu ‘he is a person’), and the construction may also be negativised (aka-ngumuntu ‘he is not a person’). Should this constraint on the occurrence of the object concord not be implemented, overgeneration will occur, with object concords appearing in conjunction with subject concords, even in copula constructions. The occurrence of an object concord marks the expectation that a verb root will follow (the P-operator), which is indeed fulfilled by means of an associated R-operator.

Locatives Derived from Nouns. In noun classes 3 to 10 the regular formation of the locative is by prefixing e-, followed by a locative suffix -ini. In exceptional, morphologically/syntactically unpredictable cases only the prefix e- occurs. Examples are entabeni ‘on the mountain’ analysed as (4)

e[LocPre]i[NPrePre9]n[BPre9]ntaba[NStem]ini[LocSuf]

and ekhaya ‘at home’ analysed as (5)

e[LocPre]i[NPrePre5]li[BPre5]khaya[NStem].

Here flag diacritics were used to mark (@P.LocSuf.OFF@) the exceptions in the word root lexicon and to ensure that the customary suffix will not be allowed (@R.LocSuf.OFF@).
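The run-time behaviour of the flags used above can be illustrated with a small path-validity simulator. This is a simplified sketch of the semantics (set, require, disallow, unify) for valued flags only, not the Xerox implementation:

```python
def path_is_valid(symbols):
    """Validate one path of valued flag diacritics @Op.Feature.Value@ under
    simplified semantics: P sets, R requires, D disallows, U unifies.
    (The unvalued @Op.Feature@ template is omitted from this sketch.)"""
    feats = {}
    for sym in symbols:
        if not (sym.startswith("@") and sym.endswith("@") and sym.count(".") == 2):
            continue  # ordinary symbols are invisible to flag matching
        op, feat, val = sym.strip("@").split(".")
        if op == "P":                          # positive (re)setting
            feats[feat] = val
        elif op == "R" and feats.get(feat) != val:
            return False                       # require test failed
        elif op == "D" and feats.get(feat) == val:
            return False                       # disallow test failed
        elif op == "U":                        # unification test
            if feat in feats and feats[feat] != val:
                return False
            feats[feat] = val
    return True

# Noun class flags as in the lexc fragment: prefix and stem class must unify.
print(path_is_valid(["u", "@U.CL.1-2@", "ntu", "@U.CL.1-2@"]))   # True
print(path_is_valid(["u", "@U.CL.1-2@", "fana", "@U.CL.14@"]))   # False
```

Blocking a path at run time rather than compiling the constraint into the network is what keeps the transducer small: the network may contain the illegal path, but the flag check filters it out during analysis and generation.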

4 Bootstrapping for Xhosa, Swati and Ndebele

The bootstrapping process uses the existing Zulu analyser ZulMorph as a first rudimentary morphological analyser for the other closely related languages. By means of systematic and stepwise enhancement, improved analysers for Xhosa, Swati and Ndebele are developed. This requires a careful investigation of the cross-linguistic similarities and dissimilarities and how they are best modelled and implemented. The components of ZulMorph are summarised below. ZulMorph offers the following in terms of the morphotactics:

– Affixes for all parts-of-speech (e.g. subject & object concords, noun class prefixes, verb extensions, etc.)
– Word roots (i.e. nouns (15 800), verbs (7 600), relatives (408), adjectives (48), ideophones (2 735), conjunctions (176))
– Rules for legal combinations and orders of morphemes (e.g. u-ya-ngi-thand-a and not *ya-u-a-thand-ngi)


Regarding morphophonological alternations, ZulMorph contains rules that determine the allomorphs of each morpheme (e.g. ku-lob-w-a > ku-lotsh-w-a, u-mulomo > u-m-lomo).

In word formation the open class accepts the addition of new items by means of processes such as borrowing, coining, compounding and derivation, mainly represented by verb roots and noun stems. The closed class represents affixes that model the fixed morphological structure of words, as well as items such as conjunctions, pronouns, etc. Typically no new items can be added to the closed class.

Since our point of departure is ZulMorph, we focus on Xhosa, Swati and Ndebele affixes that differ from their Zulu counterparts. The bootstrapping process is iterative, and new information regarding dissimilar morphological constructions is incorporated systematically in the morphotactics component. Similarly, rules are adapted in a systematic manner. The process also inherently relies on similarities between the languages, and therefore the challenge is to model the dissimilarities accurately. The carefully conceptualised and appropriately structured (lexc) continuation classes embodying the Zulu morphotactics provide a suitable framework for including all the closed class dissimilarities, as discussed in detail in [5].

The word root (open class) lexicon of ZulMorph was enhanced firstly by the addition of an extensive Xhosa lexicon extracted from a paper dictionary, which includes noun stems (5 600), verb roots (6 066), relatives (26), adjectives (17), ideophones (30), conjunctions (28). Secondly, a Swati lexicon was improvised by applying regular Swati sound changes to the Zulu lexicon. Since no lexicon is available for Ndebele, the identification of Ndebele roots/stems relies on the Zulu, Xhosa and Swati lexicons. The Zulu alternations are assumed to apply to Xhosa, Swati and Ndebele unless otherwise modelled.

Regarding language-specific alternations, special care is taken to ensure that the rules fire only in the desired contexts and order. For example, Xhosa-specific sound changes should not fire between Zulu-specific morphemes, and vice versa. Language-specific behaviour that deviates from the Zulu is marked in lexc by special multicharacter symbols, which in turn are used in implementing the correct firing contexts for the various language-specific sound changes. This applies, for example, to the vowel combination ii, which does not occur in Zulu. While the general rule ii > i holds for Zulu, the vowel combination ii needs to be preserved in Xhosa and Ndebele. Another example occurs in the rules for passive extensions, w or iw, to verb roots. For instance, the Zulu rule

p h -> s h || Cons Vowel _ %^ER [w | i w]

fires, except in the case of Xhosa verb roots, for which the rule

p h -> t s h || Cons Vowel _ %^Xh %^ER [w | i w]

is executed. Notation: % literalises the ^ symbol, ^Xh is the Xhosa marker and ^ER denotes the end of a root.
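The interaction of the two passive rules can be sketched with ordered substitutions. This is illustrative only: the sample root boph- and the approximation of Cons and Vowel by simple character classes are our own assumptions, not the actual grammar:

```python
import re

def apply_passive_rules(form: str) -> str:
    """Apply the Xhosa-specific rule before the general Zulu rule; the ^Xh
    marker keeps the Xhosa context from matching Zulu-marked forms."""
    # Xhosa-specific: p h -> t s h || Cons Vowel _ ^Xh ^ER [w | i w]
    form = re.sub(r"(?<=[a-z][aeiou])ph(?=\^Xh\^ER(?:w|iw))", "tsh", form)
    # General (Zulu): p h -> s h || Cons Vowel _ ^ER [w | i w]
    form = re.sub(r"(?<=[a-z][aeiou])ph(?=\^ER(?:w|iw))", "sh", form)
    # Remove the markers to obtain the surface form.
    return re.sub(r"\^(?:Xh|ER)", "", form)

print(apply_passive_rules("boph^ERwa"))     # boshwa  (unmarked root: Zulu rule)
print(apply_passive_rules("boph^Xh^ERwa"))  # botshwa (Xhosa-marked root)
```

Because the Zulu rule's right context requires ^ER immediately after ph, the intervening ^Xh marker blocks it on Xhosa roots, exactly as described in the text.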


Certain aspects of the Xhosa, Swati and Ndebele grammars need to be modelled independently and then built into the analyser, for instance the formation of the so-called temporal form that does not occur in Zulu. A preliminary evaluation, based on the use of parallel test corpora (The Constitution, s.a.) of approximately 7000 types each for the four languages, yielded the following results in words analysed: Zulu - 5653 (80.68 %); Xhosa - 5250 (71.10 %); Swati - 3971 (58.26 %); Ndebele - 3994 (58.96 %).

5 Conclusion and Future Work

The analyser prototype ZulMorph at present covers most of the morphotactics and morphophonological alternations required for the automated analysis/generation of all the Zulu word categories. Preliminary results obtained by bootstrapping morphological analysers for Xhosa, Swati and Ndebele from ZulMorph are promising. A systematic assessment and validation of the analyses, and also of the linguistic accuracy and coverage of the various analysers, is in progress, while future work also entails systematically scaling up and refining all aspects addressed in the experiment, both with respect to linguistic similarities and differences.

Acknowledgments

This material is based upon work supported by the South African National Research Foundation under grant number 2053403. Any opinion, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Research Foundation.

References

1. Beesley, K., Karttunen, L.: Finite State Morphology. CSLI Publications, Center for the Study of Language and Information, Stanford, California (2003)
2. Meinhof, C.: Introduction to the phonology of the Bantu languages. Dietrich Reimer/Ernst Vohsen, Berlin (1932)
3. Bosch, S., Pretorius, L.: A finite-state approach to linguistic constraints in Zulu morphological analysis. Studia Orientalia 103, 205–227 (2006)
4. Pretorius, L., Bosch, S.: Containing overgeneration in Zulu computational morphology. South African Linguistics and Applied Language Studies 26(2), 209–216 (2008)
5. Bosch, S., Pretorius, L., Fleisch, A.: Experimental bootstrapping of morphological analysers for Nguni languages. Nordic Journal of African Studies 17(2), 66–88 (2008)

A Finite State Approach to Setswana Verb Morphology

Laurette Pretorius1, Biffie Viljoen1, Rigardt Pretorius2, and Ansu Berg2

1 School of Computing, University of South Africa, Pretoria
[email protected], [email protected]
2 School of Languages, North-West University, Potchefstroom
{Rigardt.Pretorius,Ansu.Berg}@nwu.ac.za

Abstract. Setswana is characterised by a disjunctive orthography according to which verbal prefixal morphemes are usually written disjunctively, while suffixal morphemes to the verb root follow a conjunctive writing style. This article specifically focusses on a finite-state approach to Setswana verb morphology and the challenges of the disjunctive orthography used for prefixes.

1 Introduction

Setswana, Sepedi (Northern Sotho) and Sesotho (Southern Sotho) form the Sotho group belonging to the South-Eastern zone of Bantu languages. These languages are characterised by a disjunctive (also referred to as semi-conjunctive) orthography, affecting mainly the word category of verbs [1, pp. 12-28]. In particular, verbal prefixal morphemes are usually written disjunctively, while suffixal morphemes to the verb root follow a conjunctive writing style. Since the morphological analyser is designed to accept linguistic words (as defined, for instance, in [2]) as input, it has to allow for whitespace in verb constructs. This ipso facto relies on a pre-processing tokenisation phase in which the verb constructs (that is, linguistic words or tokens) are identified. For all other word categories orthographic and linguistic words coincide. These word categories, including nouns, adjectives, adverbs, pronouns, ideophones, particles and interjections, have already been included in the current Setswana analyser prototype.

We report on the use of two tokenising transducers and a finite-state morphological analyser to solve the Setswana tokenisation/morphological analysis problem, specifically arising due to the disjunctive writing style followed for Setswana verbs. For a more detailed exposition see [3].

The structure of the paper is as follows: A short overview of Setswana verb morphology is followed by a discussion of the finite-state approach to the computational morphology of the disjunctively written Setswana verb constructs and of the two tokenising transducers. The penultimate section addresses the application of the tokenisation/morphological analysis transducers to a test corpus and discusses the results obtained. The paper is concluded with plans for future work.

A. Yli-Jyrä et al. (Eds.): FSMNLP 2009, LNAI 6062, pp. 131–138, 2010.
© Springer-Verlag Berlin Heidelberg 2010

2 Overview of Setswana Verb Morphology

Setswana verbs may be categorised as follows: basic verbs, copulatives and auxiliary verbs. Furthermore, they may appear in different moods, such as indicative, infinitive, consecutive, habitual, imperative, subjunctive, participial and relative, and tenses such as present, perfectum, future and past tense. With respect to copulatives, a distinction is also made between associative, descriptive and identifying types. Auxiliary verbs, on the other hand, do not usually carry mood and tense information. A complete exposition of Setswana verb morphology falls outside the scope of this article (see [1] for more details). Main aspects of interest are briefly introduced and in some cases illustrated by means of examples.

The most basic form of the verb in Setswana consists of an infinitive prefix + a root + a verbal ending; for example, go bona (‘to see’) consists of the infinitive prefix go, the root bon- and the verbal ending -a. While verbs in Setswana may also include various other prefixes and suffixes, the root always forms the lexical core of a word. The root may be described as “a lexical morpheme [that] can be defined as that part of a word which does not include a grammatical morpheme; cannot occur independently as is the case with words; constitutes the lexical meaning of a word and belongs quantitatively to an open class” [1].

2.1 Affixes of the Setswana Verb

The verbal root can be preceded by several disjunctively written prefixes (cf. [1, pp. 171-183]), viz. subject agreement morphemes, object agreement morphemes, aspectual morphemes, the temporal morpheme and negative morphemes. The reflexive morpheme i- (‘-self’) (which is an object morpheme) and the object agreement morpheme of the first person singular n- are written conjunctively to the root. Various morphemes may be suffixed to the verbal root and follow the conjunctive writing style, viz. the causative suffix, applicative suffix, reciprocal suffix, perfect suffix and passive suffix, followed by the verbal endings. The sequencing and optionality of these morphemes form the basis of the modelling of the Setswana verb constructs and are discussed below.

2.2 Auxiliary Verbs and Copulatives

“Syntactically an auxiliary verb is a verb which must be followed by a complementary predicate, which can be a verb or verbal group or a copulative group or an auxiliary verbal group, because it cannot function in isolation” [1, p. 273]. A more detailed discussion of auxiliary verbs in Setswana may be found in [4]. Copulatives function as introductory members to non-verbal complements. The morphological forms of copula are determined by the copulative relation and the type of modal category in which they occur. These factors give rise to a large variety of morphological forms [1, pp. 275-281].

2.3 Formation of Verbs

The formation of Setswana verbs is governed by a set of linguistic rules, often referred to as morphotactics, according to which the various prefixes and suffixes may be sequenced and combined to form valid verb forms, and by a set of morphophonological alternation rules that model the sound changes that occur at morpheme boundaries. These formation rules form the basis of the finite-state morphological analyser, discussed in subsequent sections. The other core component of this morphological analyser is a comprehensive, ideally complete, set of valid Setswana word roots. By design, incorrectly formed or partial strings will not be recognised as valid Setswana words. The significance of this for tokenisation specifically is that, in principle, the morphological analyser can and should recognise only (valid) tokens.

Morphotactics. In our linear (as opposed to hierarchical) morphological analysis the prefixes and suffixes have a specific sequencing with regard to the verbal root. We illustrate this by means of a number of examples. A detailed exposition of the rules governing the order and valid combinations of the various prefixes and suffixes may be found in [1].

– Object agreement morphemes and the reflexive morpheme always appear directly in front of the verbal root, for example le a di reka (‘he buys it’). No other prefix can be placed between the object agreement morpheme and the verbal root or between the reflexive morpheme and the verbal root.
– The position of the negative morpheme ga is always directly in front of the subject agreement morpheme, for example, ga ke di bone (‘I do not see it/them’).
– The negative morphemes sa and se follow the subject agreement morpheme, for example, (fa) le sa dire (‘(while) he is not working’) and (gore) re se di je (‘(so that) we do not eat it’).
– The aspectual morphemes always follow the subject agreement morpheme, for example, ba sa dira (‘they are still working’).
– The temporal morpheme also follows the subject agreement morpheme, for example, ba tla dira (‘they shall work’).
Examples of long distance dependencies are ke a ba bona (‘I see them’) and ga ke ba bone (‘I do not see them’). Other examples of rules that assist in preventing invalid combinations are as follows:

– The object agreement morpheme is a prefix that can be used simultaneously with the other prefixes in the verb, for example, ba a di bona (‘they see it/them’).
– The aspectual morphemes and the temporal morpheme cannot be used simultaneously, for example, le ka ithuta (‘he can learn’) and le tla ithuta (‘he will learn’).

Since (combinations of) suffixes are written conjunctively, they do not add to the complexity of the disjunctive writing style prevalent in verb tokenisation.

Morphophonological Alternation Rules. Sound changes can occur when morphemes are affixed to the verbal root. Regarding prefixes, the object agreement morpheme of the first person singular ni/n in combination with the root causes a sound change, and this combination is written conjunctively, for example ba ni-bon-a > ba mpona (‘they see me’). In some instances the object agreement morpheme of the third person singular, which is the same as the class 1 prefix, causes sound changes when used with verbal roots beginning with b-. They are then written conjunctively, for example, ba mo-bon-a > ba mmona (‘they see him’). When the subject agreement morpheme ke (the first person singular) and the progressive morpheme ka are used in the same verb, the sound change ke ka > nka may occur, for example, ke ka opela > nka opela (‘I can sing’). Regarding suffixes, sound changes also occur under certain circumstances, but since they are written conjunctively, they do not influence tokenisation.
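The prefixal sound changes just described can be sketched as plain string rewrites. This is a toy illustration under our own simplifications (only the two changes mentioned above, applied to ready-segmented strings), not the xfst rules of the analyser:

```python
def prefix_sound_changes(construct: str) -> str:
    """Apply two of the prefixal alternations described in the text."""
    # ni + b-initial root is written conjunctively with a sound change:
    # ba ni-bon-a > ba mpona
    construct = construct.replace("ni-b", "mp")
    # first person singular ke + progressive ka: ke ka > nka
    construct = construct.replace("ke ka", "nka")
    return construct

print(prefix_sound_changes("ba ni-bon-a"))  # ba mpon-a
print(prefix_sound_changes("ke ka opela"))  # nka opela
```

Note how the first rewrite removes a word boundary: the output is one orthographic word where the input had two, which is precisely why these alternations matter for tokenisation.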

3 The Verb Analyser

The finite-state morphological analyser prototype for Setswana, developed with the Xerox finite state toolkit [5], implements Setswana morpheme sequencing (morphotactics) by means of a lexc script containing cascades of so-called lexicons, each of which represents a specific type of prefix, suffix or root. Sound changes at morpheme boundaries (morphophonological alternation rules) are implemented by means of xfst regular expressions. These lexc and xfst scripts are then compiled and subsequently composed into a single finite state transducer, constituting the morphological analyser ([6] and [7]). While the implementation of the morphotactics and alternation rules is, in principle, complete, the word root lexicons still need to be extended to include all valid Setswana roots.

In the modelling and implementation in lexc a distinction was made between basic verbs, copulatives and auxiliary verbs. As illustration we give the following simplified fragment of the cascade of continuation classes for original or basic verbs.

Multichar_Symbols @P.Verb.ON@ @P.Mood.IND@ @P.TnsOrAsp.PRES@ @P.Pos.ON@
    AgrSubj 1p Sg ...

LEXICON Verb
[email protected]@@P.Mood.CON@:... ConsecMood;
[email protected]@@P.Mood.HAB@:... HabitMood;
[email protected]@@P.Mood.IMP@:... ImperMood;
[email protected]@@P.Mood.IND@:... IndicMood;
...

LEXICON IndicMood
[email protected]@@P.Pos.ON@:... SubjectConcord;
[email protected]@@P.Pos.ON@:... SubjectConcord;
[email protected]@@P.Pos.ON@:... SubjectConcord;
...

LEXICON SubjectConcord
[email protected]@:ke% @D.Mood.IMP@ NegativePrefix2;
[email protected]@:re% @D.Mood.IMP@ NegativePrefix2;
...
[email protected]@@D.Mood.IMP@:o% ... NegativePrefix2;
[email protected]@:ba% @D.Mood.IMP@ NegativePrefix2;
...

LEXICON VerbalEnding
...
[email protected]@@R.TnsOrAsp.PRES@@R.Pos.ON@:a... #;
[email protected]@@R.TnsOrAsp.PERF@@R.Pos.ON@@R.PerfSuf.ON@:e... #;
[email protected]@@R.TnsOrAsp.FUT@@R.Pos.ON@:a... #;
...

Pertinent issues for noting are: – The use of multicharacter symbols (see [5]) to extend the alphabet of the upper (analysis) language with tags; – The use of flag diacritics (see [5]) to keep track of valid morpheme sequences and model long distance dependencies; – The verb morphology is based on the assumption that valid verb structures are disjunctively written, which implies that the blank character forms part of the surface form alphabet. – Number of verb roots currently in the analyser: 517 – Size of the network: 7769 states and 16703 arcs Below we give examples of typical analyses for the verb re tla dula (‘we will sit/stay’). The analyses indicate the part-of-speech (here a verb), relevant feature information such as the mood (here indicative or participial), the tense (here present, future or perfect) and the positive/negative form (here positive), followed by a ’:’ and then the morphological analysis. The tags are chosen to be self-explanatory and the verb root appears in square brackets. (1)

re tla dula ('we will sit/stay')
a. VerbINDmoodFUTPos:AgrSubj-1p-Pl+TmpPre+[dul]+VerbalEnding
b. VerbPARmoodFUTPos:AgrSubj-1p-Pl+TmpPre+[dul]+VerbalEnding

Both moods, indicative and participial, constitute valid analyses. The occurrence of multiple valid morphological analyses is typical and would require (context-dependent) disambiguation at subsequent levels of processing. Other examples of typical analyses are:

(2)

a. ba a kwala ('they write')
   VerbINDmoodPRESPos:AgrSubj-Cl2+AspPre+[kwal]+VerbalEnding
b. o tla reka ('he will buy')
   VerbINDmoodFUTPos:AgrSubj-Cl1+TmpPre+[rek]+VerbalEnding
c. ke dirile ('I have worked')
   VerbINDmoodPERFPos:AgrSubj-1p-Sg+[dir]+Perf+VerbalEnding

In the first analysis ba is the class 2 subject agreement morpheme; a the aspectual prefix; kwal the verb root; and a the verbal ending. Now that we know how the verb analyser is put together, we return to the issue of tokenisation, since unless the analyser receives valid tokens as input it is of little use.
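The flag-diacritic mechanism used in the lexc script above can be approximated in a few lines of Python. This is only a sketch of the standard @P/@R/@D semantics (set, require, disallow) from the Xerox toolkit; the feature names and the path below are illustrative, not the authors' full grammar.

```python
# Minimal simulation of Xerox-style flag diacritics (@P/@R/@D).
# @P.F.V@ sets feature F to V, @R.F.V@ requires F = V,
# @D.F.V@ fails if F is currently V.

def check_flags(flags):
    """Return True iff a sequence of flag diacritics unifies successfully."""
    env = {}
    for op, feat, val in flags:
        if op == "P":                       # positive (re)set
            env[feat] = val
        elif op == "R":                     # require
            if env.get(feat) != val:
                return False
        elif op == "D":                     # disallow
            if env.get(feat) == val:
                return False
    return True

# A path through Verb -> ... -> SubjectConcord -> VerbalEnding:
ok = check_flags([("P", "Mood", "IND"),     # mood prefix sets Mood=IND
                  ("D", "Mood", "IMP"),     # subject concord bans imperative
                  ("R", "Mood", "IND")])    # verbal ending re-checks the mood
bad = check_flags([("P", "Mood", "IMP"),
                   ("D", "Mood", "IMP")])   # imperative blocked on this path
print(ok, bad)   # True False
```

This is how the analyser rules out, for instance, a subject concord inside an imperative construction while still allowing long-distance agreement between prefixes and verbal endings.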

L. Pretorius et al.

4 From Text to Tokens

Words, syntactic groups, clauses, sentences, paragraphs, etc. usually form the basis of the analysis and processing of natural language text. However, texts in electronic form are just sequences of characters: letters of the alphabet, numbers, punctuation, special symbols, whitespace, etc. The identification of word and sentence boundaries is therefore essential for any further processing of an electronic text. Tokenisation, or word segmentation, may be defined as the process of breaking up the sequence of characters in a text at the word boundaries, and may therefore be regarded as a core technology in natural language processing.

Tokenisation for alphabetic segmented languages such as English is considered a relatively simple process, since linguistic words are usually delimited by whitespace and punctuation; the task is effectively handled by means of simple regular expression scripts. While Setswana is also an alphabetic segmented language, we reiterate that its disjunctive orthography causes token-internal whitespace in a number of constructions, of which the verb is the most important and most widely occurring. We illustrate this by means of the following example: in the English sentence I shall buy meat the four tokens (separated by /) are I / shall / buy / meat. However, in the Setswana sentence Ke tla reka nama ('I shall buy meat') the two tokens are Ke tla reka / nama. Tokenisation is therefore language dependent [8], and this aspect requires special attention in Setswana tokenisation (see also [9] and [10]).

We propose a novel combination of the morphological analyser for Setswana, discussed above, and two tokenising transducers for addressing the Setswana tokenisation/morphological analysis problem. A core assumption in our approach is that all and only successfully analysed tokens are valid tokens.
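The contrast between whitespace splitting and analyser-driven tokenisation can be sketched as follows; the valid-token set is a toy stand-in for the morphological analyser, and the greedy longest-match strategy is only one simple way to use it.

```python
# Whitespace splitting, adequate for English, over-segments Setswana verbs.

def naive_tokenise(text):
    return text.split()

def longest_match_tokenise(text, is_valid_token):
    """Greedy longest match over space-separated words: extend a candidate
    with following words as long as a shorter extension is not analysable."""
    words, tokens, i = text.split(), [], 0
    while i < len(words):
        j = len(words)
        while j > i and not is_valid_token(" ".join(words[i:j])):
            j -= 1
        if j == i:                  # no valid analysis at all: emit one word
            j = i + 1
        tokens.append(" ".join(words[i:j]))
        i = j
    return tokens

# Toy stand-in for "successfully analysed by the morphological analyser":
valid = {"ke tla reka", "nama"}.__contains__

print(naive_tokenise("ke tla reka nama"))                  # ['ke', 'tla', 'reka', 'nama']
print(longest_match_tokenise("ke tla reka nama", valid))   # ['ke tla reka', 'nama']
```

The naive split yields four pieces, none of which is the linguistic token ke tla reka; the analyser-guided version recovers the two intended tokens.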

4.1 Two Tokenising Transducers

We combine a comprehensive and reliable morphological analyser for Setswana, which caters for disjunctively written verb constructions, a verb tokenising transducer, and a tokenising transducer that generates systematically shortened candidate tokens. Since the focus is on token-internal whitespace, the Setswana tokeniser prototype makes provision for punctuation and alphabetic text, but not yet for the usual non-alphabetic tokens such as dates, numbers, hyphenation, abbreviations, etc. The first tokenising transducer is based on a grammar for linguistically valid verb constructions, implemented with xfst regular expressions. The second transducer is then applied to longest matches that are not successfully analysed and therefore do not constitute valid tokens. This tokenising transducer systematically shortens an invalid longest string by breaking it up into all possible pairs of shorter candidate tokens. For example, dikgwedi tsa setswana results in dikgwedi and tsa setswana, or dikgwedi tsa and setswana. Since some of these shortened strings still do not constitute valid tokens, the next iteration of shortening produces dikgwedi, tsa and setswana as three separate valid tokens.
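The pair-splitting idea behind the second transducer can be sketched in Python. This recursive formulation stands in for the paper's iterative application of the transducer, and the validity test is again a toy stand-in for the analyser.

```python
# Sketch of the second tokenising transducer: an invalid longest match is
# split into pairs of shorter candidates, and splitting recurses until
# every piece is analysable.

def shorten(candidate, is_valid):
    """Return valid tokens of `candidate`, splitting at word boundaries."""
    if is_valid(candidate):
        return [candidate]
    words = candidate.split()
    for cut in range(len(words) - 1, 0, -1):   # prefer longer left pieces
        left = " ".join(words[:cut])
        right = " ".join(words[cut:])
        left_toks = shorten(left, is_valid)
        right_toks = shorten(right, is_valid)
        if left_toks and right_toks:
            return left_toks + right_toks
    return []                                   # single unanalysable word

valid = {"dikgwedi", "tsa", "setswana"}.__contains__
print(shorten("dikgwedi tsa setswana", valid))  # ['dikgwedi', 'tsa', 'setswana']
```

On the paper's example, the first round of splitting produces dikgwedi / tsa setswana (or dikgwedi tsa / setswana), and the second round yields the three valid tokens.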

5 Application and Results

A test corpus consisting of a variety of carefully selected short texts was compiled, tokenised by hand and validated by a linguist. The hand-tokenised data provide a means of measuring the success of the morphological analysis and the tokenisation procedure. Preprocessing consisted of normalising the text so that it contains no capitalisation or punctuation. All word roots occurring in the text were added to the word root lexicon of the morphological analyser, to prevent omissions in the word root lexicon from influencing the morphological analysis and tokenisation experiment.

The data used in testing the morphological analyser are the 5790 hand-tokens, of which 5439 (93.94%) were successfully analysed. For the purposes of this experiment the unanalysed tokens were not investigated further. An investigation into the causes of these failures, in order to improve the analyser, forms part of future work. For testing the tokenisation procedure we converted the hand-tokens that were analysed successfully back into running text. This data served as a measure for the accuracy of the automated tokenisation procedure.

– Output of tokenising transducer 1: 4555 auto-tokens
– Output of morphological analyser: 3909 auto-tokens analysed (85.82%) and 646 auto-tokens not analysed
– Output of first iteration of tokenising transducer 2: 1246 auto-tokens analysed and 46 auto-tokens not analysed
– Output of second iteration of tokenising transducer 2: 69 auto-tokens analysed and 0 auto-tokens not analysed
– Combined valid auto-tokens: 5224

In terms of types we obtain the tokenisation results in Table 1.

Table 1. Tokenisation results

    Auto-tokens (A)         1705
    Hand-tokens (H)         1614
    A∩H                     1516
    A\H                      189
    H\A                       98
    Precision (P)           0.8891
    Recall (R)              0.9393
    F-score (2PR/(P+R))     0.9135

6

Conclusion and Future Work

The F-score of 0.91 in Table 1 may be considered a promising result, given that it was obtained on the most challenging aspect of Setswana tokenisation. The approach scales well and may form the basis for a full-scale, broad-coverage tokeniser for Setswana. A limiting factor is the as yet incomplete root lexicon of the morphological analyser. We identify two issues that warrant future investigation: the improvement of the morphological analyser by investigating the hand-tokens that were not successfully analysed, and the improvement of the tokenisation procedure by considering errors and omissions in the results.
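The scores reported in Table 1 follow directly from the type counts; a quick check (numbers taken from the table):

```python
# Reproducing Table 1's scores from the raw counts (types).
auto, hand, both = 1705, 1614, 1516   # |A|, |H|, |A ∩ H|

precision = both / auto               # fraction of auto-tokens that are correct
recall = both / hand                  # fraction of hand-tokens recovered
f_score = 2 * precision * recall / (precision + recall)

print(round(precision, 4), round(recall, 4), round(f_score, 4))
# 0.8891 0.9393 0.9135
```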

Acknowledgements

This material is based upon work supported by the South African National Research Foundation under grant number 2053403. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Research Foundation.

References

1. Krüger, C.: Introduction to the Morphology of Setswana. Lincom Europe, Muenchen (2006)
2. Kosch, I.: Topics in Morphology in the African Language Context. Unisa Press, Pretoria (2006)
3. Pretorius, R., Berg, A., Pretorius, L., Viljoen, B.: Setswana tokenisation and computational verb morphology: Facing the challenge of a disjunctive orthography. In: Proceedings of the EACL 2009 Workshop on Language Technologies for African Languages, Athens, Greece (2009)
4. Pretorius, R.: Auxiliary Verbs as a Sub-category of the Verb in Tswana. PhD thesis, Potchefstroom University for Christian Higher Education, Potchefstroom, South Africa (1997)
5. Beesley, K., Karttunen, L.: Finite State Morphology. CSLI Publications, Center for the Study of Language and Information, Stanford (2003)
6. Pretorius, R., Viljoen, B., Pretorius, L.: A finite-state morphological analysis of Setswana nouns. South African Journal of African Languages 25(1), 48–58 (2005)
7. Pretorius, L., Viljoen, B., Pretorius, R., Berg, A.: Towards a computational morphological analysis of Setswana compounds. Literator 29(1), 1–20 (2008)
8. Schiller, A.: Multilingual finite-state noun-phrase extraction. In: Proceedings of the ECAI 1996 Workshop on Extended Finite State Models of Language, Riva del Garda, Italy (1996)
9. Anderson, W., Kotzé, P.: Finite state tokenisation of an orthographical disjunctive agglutinative language: The verbal segment of Northern Sotho. In: Proceedings of the 5th International Conference on Language Resources and Evaluation, Genoa, Italy (2006)
10. Hurskainen, A., Louwrens, L., Poulos, G.: Computational description of verbs in disjoining writing systems. Nordic Journal of African Studies 14(4), 438–451 (2005)

Zulu: An Interactive Learning Competition

David Combe (1), Colin de la Higuera (2), and Jean-Christophe Janodet (1)

(1) Université de Lyon, F-42023 Saint-Étienne, CNRS UMR 5516, Laboratoire Hubert Curien, Université de Saint-Étienne - Jean Monnet
{david.combe,janodet}@univ-st-etienne.fr
(2) Université de Nantes, CNRS, LINA, UMR 6241, F-44000, France
[email protected]

Abstract. Active language learning is an interesting task for which theoretical results are known and several applications exist. In order to better understand what the best strategies may be, a new competition called Zulu (http://labh-curien.univ-st-etienne.fr/zulu/) has been launched: participants are invited to learn deterministic finite automata from membership queries. The goal is to obtain the best classification rate from a fixed number of queries.

1 Introduction, Motivations, History and Background

When learning language models, techniques usually make use of huge corpora that are unavailable for many under-resourced languages. One possible way around this lack of data is to interrogate an expert with a number of chosen queries, in an interactive mode, until a satisfying language model is reached. In this case, an important indicator of success of the learning algorithm is the amount of energy the expert has to spend in order for learning to be successful. A learning paradigm covering this situation is that of Active Learning, a variant of which for grammatical inference was introduced by Dana Angluin [1,2]. The setting is formal and has been studied over the years by a number of researchers, with publications mainly in journals and conferences in machine learning or theoretical computer science.

Typically, the formalism involves the presence of a learner (he) and an Oracle (she). There has to be an agreement on the type of queries one can make. The best-studied queries are:

– Membership queries. The learner presents a string to the Oracle and gets back its label.
– Strong equivalence queries. The learner presents a machine and gets back YES or a counterexample.

This work was partially supported by the IST Programme of the European Community, under the PASCAL 2 Network of Excellence, IST-2006-216886. The work started while the second author was at Saint-Étienne University.

A. Yli-Jyrä et al. (Eds.): FSMNLP 2009, LNAI 6062, pp. 139–146, 2010. © Springer-Verlag Berlin Heidelberg 2010


– Weak equivalence queries. The learner presents a machine and gets back YES or NO.
– Correction queries. The learner presents a string and gets back YES or a close element from the target language ('close' requires that a topology has been defined).

A number of variants exist for the Oracle. She can be probabilistic, have a worst-case policy, return noisy examples, etc. There are several issues to be considered, depending on what the Oracle really represents: a worst-case situation, an average situation or even a helpful situation (the Oracle may then be a teacher).

A typical task on which active learning has been investigated is grammatical inference [3]. In this case, the task is to learn a grammatical representation of a language while querying the Oracle. A number of negative results have been obtained, tending to prove that some classes of languages/grammars cannot be efficiently learned by using queries of one type or another. Deterministic finite automata (DFA) were thoroughly investigated in this setting: as negative results, it was proved that they can be learned neither from just a polynomial number of membership queries [4] nor from just a polynomial number of strong equivalence queries [5]. On the other hand, algorithm L*, designed by Angluin [6], was proved to be able to learn DFA from a polynomial number of membership queries and equivalence queries: this combination is called a minimally adequate teacher. Extensions or alternative presentations of L* can be found in [7,8], and there has been further theoretical work aimed at counting the number of queries really necessary [9,10], at identifying the power of the equivalence queries [11], and at relating the query model to other ones [12,13]. Several open problems related to learning grammars and automata in this setting have been proposed [14].
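The membership-query protocol can be made concrete with a toy Oracle that labels strings against a hidden target DFA and counts the cost to the expert; the target below (strings with an even number of a's) is illustrative, not part of the competition.

```python
# A toy Oracle answering membership queries, in the spirit of Angluin's model.

class Oracle:
    def __init__(self, transitions, start, accepting):
        self.t, self.start, self.acc = transitions, start, accepting
        self.queries = 0                    # energy spent by the expert

    def member(self, word):
        """Membership query: label one string, counting the cost."""
        self.queries += 1
        state = self.start
        for symbol in word:
            state = self.t[state][symbol]
        return state in self.acc

# Hidden target: strings over {a, b} with an even number of a's.
oracle = Oracle({0: {"a": 1, "b": 0}, 1: {"a": 0, "b": 1}}, 0, {0})
print(oracle.member("aba"), oracle.member("ab"), oracle.queries)  # True False 2
```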
Algorithm L* has been adapted to learn probabilistic automata [15,16], multiplicity automata [17], NFA, tree automata, transducers [18], etc. [19,20], with, in each case, subtle variations. An interesting extension has been studied recently, concerning correction queries [21,22,23]: typical corrections are made by returning the string closest in edit distance, or a string built by appending the shortest suffix that makes the queried string belong to the language.

One may object that the setting is purely formal and ill adapted to practical situations; for example, the fact that more and more data is available might mean the end of interactive learning. On the contrary, being able to choose which data needs labelling, during the actual learning, offers many algorithmic advantages. As far as applications go, typical situations in which Oracle learning makes sense are described in [24]. The earliest task addressed by L*-inspired algorithms was that of map building in robotics: the (simplified) map is a graph, and the outputs in each state are what the robot may encounter in that state. A particularity here is that the robot cannot be reset: the learning algorithm has to learn from just one very long string and all its prefixes [25,26].
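The shortest-suffix flavour of correction query mentioned above can be sketched as a breadth-first search over a DFA: run the queried string, then find the shortest path from the resulting state to an accepting state. The DFA below is illustrative.

```python
# Correction query answered with the shortest completing suffix (BFS).
from collections import deque

def correction(dfa, start, accepting, word):
    """Return the shortest suffix s such that word + s is accepted."""
    state = start
    for symbol in word:
        state = dfa[state][symbol]
    frontier = deque([(state, "")])
    seen = {state}
    while frontier:
        q, suffix = frontier.popleft()
        if q in accepting:
            return suffix
        for symbol, nxt in sorted(dfa[q].items()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, suffix + symbol))
    return None   # no completion exists

# Language: strings over {a, b} ending in "ab".
dfa = {0: {"a": 1, "b": 0}, 1: {"a": 1, "b": 2}, 2: {"a": 1, "b": 0}}
print(correction(dfa, 0, {2}, "ba"))   # 'b'
```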


A somewhat related task is that of discovering the rational strategy followed by an adversary. This line was looked into in a number of papers related to agent technologies [27,28]. One task on which grammatical inference is proving to be particularly useful is wrapper induction: the idea is to find in a web page (or several pages of the same type) all arguments of a special sort. In this context, the role of the Oracle is played by the human user [29]. In various tasks linked with checking whether the specifications of a system or a piece of hardware are met, the item to be checked is used as an Oracle. Queries are made in order to build a model of the item, and then the learned model can be checked against the specifications. Obviously, there is an issue with the distribution used to simulate the equivalence query [30,31,32].

What has not been extensively studied is how to optimise the learning task by trying to make easy queries, or queries for which the Oracle's answer is simple. These are some of the strong motivations for fostering research in the direction of developing new interactive learning strategies and algorithms.

2 A Brief Overview of Zulu

Zulu is both a web-based platform simulating an Oracle in a DFA learning task, and a competition. As a web platform, Zulu allows users to generate tasks, to interact with the Oracle in learning sessions, and to record their results. It provides the users with a baseline algorithm written in Java, as well as the elements needed to build from scratch a new learning algorithm capable of interacting with the server. The server (http://labh-curien.univ-st-etienne.fr/zulu) can be accessed by any user/learner, who can open an account. The server acts as an Oracle for membership queries. A player can log in and ask for a target DFA. The server then computes how many queries it needs to learn a reasonable machine ('reasonable' meaning less than 30% classification errors) and invites the player to interact in a learning session in which he can ask up to that number of queries. At the end of the learning process the server gives the learner a set of unlabelled strings (a test set). The labels the learner submits are used to compute his score. As a starting point the player is given the baseline algorithm, a simple variation of L* with some sampling done to simulate equivalence queries, so that he can start from some simple Java code rather than develop from scratch.

The competition itself will be held in the spring of 2010. In this case, the competing algorithms will all have to solve new tasks. The exact design of the competition is still to be decided, with some open questions concerning fairness and the avoidance of collusion still to be solved.

The reason for launching this challenge is that there seems to be a renewed interest in L*, or more generally in active learning of discrete structures. People have started to apply or adapt L* in computational linguistics, but also in web applications and in formal specification checking. Interestingly, in all cases, the crucial resource is not the computation time but the number of queries. We believe that there is a lot of room for improvement and a real need for new ideas. The hope is that between researchers active in the field, others more interested in DFA, and perhaps students who have studied (or are studying) L* in a machine learning course, the competition will be a success.
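The baseline's replacement of L*'s equivalence query by sampling can be sketched as follows: draw random strings, compare hypothesis and Oracle, and return any disagreement as a counterexample. The sample size and length bound here are guesses, not the baseline's actual settings.

```python
# Approximate equivalence query via membership sampling.
import random

def sampled_equivalence(hypothesis, oracle, alphabet, samples=500, max_len=12):
    """Return a counterexample string, or None if all samples agree."""
    rng = random.Random(0)          # fixed seed for reproducibility
    for _ in range(samples):
        w = "".join(rng.choice(alphabet)
                    for _ in range(rng.randint(0, max_len)))
        if hypothesis(w) != oracle(w):
            return w                # counterexample found
    return None                     # hypothesis and Oracle agree on all samples

# Oracle: even-length strings; flawed hypothesis: accept everything.
cex = sampled_equivalence(lambda w: True, lambda w: len(w) % 2 == 0, "ab")
print(cex is not None)   # True: some odd-length counterexample is found
```

The obvious caveat, raised again in the discussion below, is that such sampling only approximates equivalence with respect to the sampling distribution.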

3 Other Related Competitions

There have been past competitions related to learning finite-state machines or grammars.

– Abbadingo (http://abbadingo.cs.nuim.ie/) was organised in 1997: the goal was to learn DFA of sizes ranging from 64 to 512 states from positive and negative data, strings over a two-letter alphabet.
– System Gowachin (http://www.irisa.fr/Gowachin/) was developed in 1998 to generate new automata for classification tasks: the possibility of having a certain level of noise was introduced.
– The GECCO conference organised in 2004 a competition in which the task consisted in learning DFA from noisy samples with varying levels of noise (http://cswww.essex.ac.uk/staff/sml/gecco/NoisyDFA.html).
– The Omphalos competition (http://www.irisa.fr/Omphalos/) involved in 2004 learning context-free grammars, given samples which in certain cases contained both positive and negative strings, and in others just text.
– The Tenjinno competition was organised in 2006 in order to motivate research in learning transducers (http://web.science.mq.edu.au/tenjinno/).

Of course, a number of machine learning competitions have been organised during the past years. A specific effort has been made by the Pascal network (http://pascallin2.ecs.soton.ac.uk/Challenges/).

4 A More Complete Presentation of Zulu

Let us now describe in more detail the way Zulu works.

4.1 Registration Phase

Each participant (learning algorithm) registers and is given an ID; a team may enter several participants. Registration is done online. An email system ensures a reasonable level of security.

Zulu: An Interactive Learning Competition

4.2 Training Phase

A participant can choose between creating a new learning task or selecting one created by another player. He receives a number identifying a learning session, the size of the alphabet and the number of membership queries he is allowed. The participant then uses his learning algorithm (which can be a variation of the baseline algorithm used to measure the difficulty of the task, or anything he chooses that communicates with the server through some predefined routines). The algorithm submits strings (membership queries) interactively and obtains their labels. When ready to guess (at the latest after having made the maximum number of queries), the learner receives from the server a set of 1800 new unseen strings (a test set) and proposes a labelling of these strings to the server; this is just another string of length 1800. He obtains a score, kept on the server, which can help others know how well the opposition is doing.

A learning session is sketched in the following exchange. Note that the numbers here are meaningless: of course, many more queries than 66 are needed for such a task.

    Query: I would like to connect and want a target for algorithm MyLearner
    Answer: OK, I have generated a new target. It has less than 100 states on a 26-letter alphabet. You are allowed 66 queries.
    Query: string red
    Answer: YES
    Query: string green
    Answer: NO
    Query: string t
    Answer: YES
    Query: string so
    Answer: YES
    Query: I think I know.
    Answer: Please label the following 1800 strings.
    Query: [a string of length 1800] 0101000...
    Answer: Algorithm MyLearner did 77% on a 55-state DFA with 66 queries. The baseline did 73%.

4.3 Competition Phase

In order to classify the contestants, a two-dimensional grid is used: one dimension concerns the size (in states) of the automata, the other the size of the alphabet. The exact way of dealing with the classification is still an open issue. If every player is asked to work on the same task, there is a difficulty with collusion: why not share the results of a membership query? On the other hand, there is a real difficulty in comparing players who are not working on the same tasks, as the variance is unknown. The intended procedure is that players will be given a fixed number of targets of various sizes, both in terms of the number of symbols in the alphabet and the number of states, and will be asked to use their algorithm on these tasks. Classifications for each size of automaton and a global classification will be proposed.

5 Discussion

The hope is that Zulu will appeal to researchers in a number of fields, even if there will be room for criticism by all:

– Computational linguists are interested in active learning, but not necessarily of formal languages. Since the goal here is to learn about strategies, we feel that a certain control of the target is necessary. A next step to suit their needs would be to include natural language data sets. It is also the case that regular languages are not considered powerful enough for most applications in linguistics. Even if it is still debated whether context-free grammars are a sufficiently expressive formalism, it makes sense to consider that a context-free grammar learning competition would be more useful to this community.
– Researchers using machine learning in robotics (for map building, for example) might disagree with the fact that the structure of the graph underlying the DFA is not planar, or that the robot can be reset. These are some directions that should be investigated.
– Researchers closer to applications will disagree with the fact that there is no noise nor 'real' probabilities. Our feeling in this case is that too little is known about the 'simple' task for now.
– Researchers in active learning would surely like to be able to use alternative types of queries: this is planned, and we hope to be able to add the possibility of querying the Oracle in different ways (possibly with different costs depending on the 'difficulty' of the query) in the near future.
– The more orthodox theoreticians will argue (correctly) that we are treating the equivalence-query replacement issue in an unsatisfying manner. This is true, and we welcome better suggestions.
– Anyone who has looked into the problem of generating random automata will certainly be able to propose improvements on the procedure described on the website. We look forward to such discussions.
– A number of people would convincingly argue in favour of learning transducers (using translation queries) instead of DFA. This indeed seems to us to be a natural next step.

Acknowledgement

Zulu makes use of many pieces of the Gowachin engine developed by François Coste. We are also grateful for remarks and discussions with the Zulu scientific committee.

References

1. Angluin, D.: Queries and concept learning. Machine Learning Journal 2, 319–342 (1987)
2. Angluin, D.: Queries revisited. Theoretical Computer Science 313(2), 175–194 (2004)


3. de la Higuera, C.: A bibliographical study of grammatical inference. Pattern Recognition 38, 1332–1348 (2005)
4. Angluin, D.: A note on the number of queries needed to identify regular languages. Information and Control 51, 76–87 (1981)
5. Angluin, D.: Negative results for equivalence queries. Machine Learning Journal 5, 121–150 (1990)
6. Angluin, D.: Learning regular sets from queries and counterexamples. Information and Control 39, 337–350 (1987)
7. Balcázar, J.L., Diaz, J., Gavaldà, R., Watanabe, O.: An optimal parallel algorithm for learning DFA. In: Proceedings of the 7th COLT, pp. 208–217. ACM Press, New York (1994)
8. Kearns, M.J., Vazirani, U.: An Introduction to Computational Learning Theory. MIT Press, Cambridge (1994)
9. Balcázar, J.L., Diaz, J., Gavaldà, R., Watanabe, O.: The query complexity of learning DFA. New Generation Computing 12, 337–358 (1994)
10. Bshouty, N.H., Cleve, R., Gavaldà, R., Kannan, S., Tamon, C.: Oracles and queries that are sufficient for exact learning. Journal of Computer and System Sciences 52, 421–433 (1996)
11. Gavaldà, R.: On the power of equivalence queries. In: Proceedings of the 1st European Conference on Computational Learning Theory. The Institute of Mathematics and its Applications Conference Series, new series, vol. 53, pp. 193–203. Oxford University Press, Oxford (1993)
12. Castro, J., Guijarro, D.: PACS, simple-PAC and query learning. Information Processing Letters 73(1–2), 11–16 (2000)
13. de la Higuera, C., Janodet, J.C., Tantini, F.: Learning languages from bounded resources: the case of the DFA and the balls of strings. In: [33], pp. 43–56
14. de la Higuera, C.: Ten open problems in grammatical inference. In: [34], pp. 32–44
15. de la Higuera, C., Oncina, J.: Learning probabilistic finite automata. In: [35], pp. 175–186
16. Guttman, O., Vishwanathan, S.V.N., Williamson, R.C.: Learnability of probabilistic automata via oracles. In: Jain, S., Simon, H.U., Tomita, E. (eds.) ALT 2005. LNCS (LNAI), vol. 3734, pp. 171–182. Springer, Heidelberg (2005)
17. Bergadano, F., Varricchio, S.: Learning behaviors of automata from multiplicity and equivalence queries. SIAM Journal of Computing 25(6), 1268–1280 (1996)
18. Vilar, J.M.: Query learning of subsequential transducers. In: Miclet, L., de la Higuera, C. (eds.) ICGI 1996. LNCS (LNAI), vol. 1147, pp. 72–83. Springer, Heidelberg (1996)
19. Saoudi, A., Yokomori, T.: Learning local and recognizable ω-languages and monadic logic programs. In: Vitányi, P.M.B. (ed.) EuroCOLT 1995. LNCS, vol. 904, pp. 157–169. Springer, Heidelberg (1995)
20. Yokomori, T.: Learning two-tape automata from queries and counterexamples. Mathematical Systems Theory, 259–270 (1996)
21. Becerra-Bonache, L., Bibire, C., Dediu, A.H.: Learning DFA from corrections. In: Fernau, H. (ed.) Proceedings of the Workshop on Theoretical Aspects of Grammar Induction (TAGI), WSI-2005-14, Technical Report, University of Tübingen, pp. 1–11 (2005)
22. Becerra-Bonache, L., de la Higuera, C., Janodet, J.C., Tantini, F.: Learning balls of strings from edit corrections. Journal of Machine Learning Research 9, 1841–1870 (2008)
23. Kinber, E.B.: On learning regular expressions and patterns via membership and correction queries. In: [33], pp. 125–138


24. de la Higuera, C.: Data complexity issues in grammatical inference. In: Basu, M., Ho, T.K. (eds.) Data Complexity in Pattern Recognition, pp. 153–172. Springer, Heidelberg (2006)
25. Dean, T., Basye, K., Kaelbling, L., Kokkevis, E., Maron, O., Angluin, D., Engelson, S.: Inferring finite automata with stochastic output functions and an application to map learning. In: Swartout, W. (ed.) Proceedings of the 10th National Conference on Artificial Intelligence, San Jose, CA, pp. 208–214. MIT Press, Cambridge (1992)
26. Rivest, R.L., Schapire, R.E.: Inference of finite automata using homing sequences. Information and Computation 103, 299–347 (1993)
27. Carmel, D., Markovitch, S.: Model-based learning of interaction strategies in multi-agent systems. Journal of Experimental and Theoretical Artificial Intelligence 10(3), 309–332 (1998)
28. Carmel, D., Markovitch, S.: Exploration strategies for model-based learning in multi-agent systems. Autonomous Agents and Multi-agent Systems 2(2), 141–172 (1999)
29. Carme, J., Gilleron, R., Lemay, A., Niehren, J.: Interactive learning of node selecting tree transducer. Machine Learning Journal 66(1), 33–67 (2007)
30. Bréhélin, L., Gascuel, O., Caraux, G.: Hidden Markov models with patterns to learn boolean vector sequences and application to the built-in self-test for integrated circuits. Pattern Analysis and Machine Intelligence 23(9), 997–1008 (2001)
31. Berg, T., Grinchtein, O., Jonsson, B., Leucker, M., Raffelt, H., Steffen, B.: On the correspondence between conformance testing and regular inference. In: Cerioli, M. (ed.) FASE 2005. LNCS, vol. 3442, pp. 175–189. Springer, Heidelberg (2005)
32. Raffelt, H., Steffen, B.: LearnLib: A library for automata learning and experimentation. In: Baresi, L., Heckel, R. (eds.) FASE 2006. LNCS, vol. 3922, pp. 377–380. Springer, Heidelberg (2006)
33. Clark, A., Coste, F., Miclet, L. (eds.): ICGI 2008. LNCS (LNAI), vol. 5278. Springer, Heidelberg (2008)
34. Sakakibara, Y., Kobayashi, S., Sato, K., Nishino, T., Tomita, E. (eds.): ICGI 2006. LNCS (LNAI), vol. 4201. Springer, Heidelberg (2006)
35. Paliouras, G., Sakakibara, Y. (eds.): ICGI 2004. LNCS (LNAI), vol. 3264. Springer, Heidelberg (2004)

Author Index

Alegria, Iñaki 105
Berg, Ansu 131
Bosch, Sonja 123
Bubenzer, Johannes 93
Combe, David 139
de la Higuera, Colin 1, 139
Etxeberria, Izaskun 105
Geldenhuys, Jaco 81
Hanneforth, Thomas 13
Hulden, Mans 105
Janodet, Jean-Christophe 139
Kapanadze, Oleg 114
Kempe, André 31
Maletti, Andreas 56, 69
Maritxalar, Montserrat 105
Muhirwe, Jackson 48
Oflazer, Kemal 11
Pretorius, Laurette 123, 131
Pretorius, Rigardt 131
Schalkwyk, Johan 47
van der Merwe, Brink 81
van Zijl, Lynette 81
Viljoen, Biffie 131
Vogler, Heiko 69
Würzner, Kay-Michael 93