Lecture Notes in Computer Science Edited by G. Goos, J. Hartmanis, and J. van Leeuwen
2540
Berlin Heidelberg New York Barcelona Hong Kong London Milan Paris Tokyo
William I. Grosky František Plášil (Eds.)
SOFSEM 2002: Theory and Practice of Informatics 29th Conference on Current Trends in Theory and Practice of Informatics Milovy, Czech Republic, November 22-29, 2002 Proceedings
Series Editors
Gerhard Goos, Karlsruhe University, Germany
Juris Hartmanis, Cornell University, NY, USA
Jan van Leeuwen, Utrecht University, The Netherlands

Volume Editors
William I. Grosky
Department of Computer and Information Science, University of Michigan - Dearborn
4901 Evergreen Road, Dearborn, Michigan 48128, USA
E-mail: [email protected]
František Plášil
Department of Software Engineering, School of Computer Science, Charles University
Malostranské nám. 25, 118 00 Prague, Czech Republic
E-mail: [email protected]
Cataloging-in-Publication Data applied for Bibliographic information published by Die Deutsche Bibliothek Die Deutsche Bibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data is available in the Internet at
CR Subject Classification (1998): D.2, F.3, C.2.4, F.1, F.4, I.2, H.4, H.5.1, I.4 ISSN 0302-9743 ISBN 3-540-00145-X Springer-Verlag Berlin Heidelberg New York This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable for prosecution under the German Copyright Law. Springer-Verlag Berlin Heidelberg New York a member of BertelsmannSpringer Science+Business Media GmbH http://www.springer.de © Springer-Verlag Berlin Heidelberg 2002 Printed in Germany Typesetting: Camera-ready by author, data conversion by DA-TeX Gerd Blumenstein Printed on acid-free paper SPIN: 10871592 06/3142 543210
Preface
SOFSEM (SOFtware SEMinar) was held for the 29th time. Having transformed over the years from a local event into a fully international conference, the contemporary SOFSEM is a mix of a winter school and a conference striving for multidisciplinarity in computer science, accompanied by workshops dedicated to a narrow field (this year multimedia and soft computing) and a student forum. This volume constitutes the proceedings of SOFSEM 2002, held in Milovy, Czech Republic, November 22–29, 2002.

This year, 23 papers were submitted from 11 countries. The selection of the 11 best papers accepted by the Program Committee was based on their contribution to the state of the art, technical soundness, clarity of presentation, and relevance of bibliography. The Steering Committee, supported by the Advisory Board, recommended 12 invited talks focused on the following key topic areas: distributed and parallel systems, system design and testing, databases and information systems, and fundamentals.

SOFSEM is the result of considerable effort by a number of people. It is our pleasure to record our thanks to the Advisory Board for its support, to the Steering Committee for its general guidance, and to the Organizing Committee for making SOFSEM 2002 happen. It has been an honor for us to work with the members of the Program Committee and other referees who devoted a lot of effort to reviewing the submitted papers. Also, we would like to record a special credit to Aleš Křenek, the PC secretary, for his tremendous job in keeping track of the hundreds of PC agenda-related issues and for the technical preparation of the proceedings (here substantially helped by Zdeněk Salvet). Moreover, we would like to acknowledge the excellent cooperation we received from Jan Staudek as the Organizing Committee Chair. We are very grateful to the editors of the LNCS series for their continuing trust in SOFSEM, and to Springer-Verlag for publishing the proceedings. In addition, we highly appreciate the financial support of our sponsors, which assisted with the invited speakers and advanced students.

Ultimately, the success of SOFSEM will be determined not only by the speakers and the committee members, the core of the conference, but by all the attendees who contribute to the spirit of the conference. So we are very grateful to all of our professional colleagues who did their best to make SOFSEM a high-quality event. Finally, we would like to express our appreciation for having the opportunity to serve as the Program Committee Co-chairs of SOFSEM 2002.
Bill Grosky and František Plášil
SOFSEM 2002 Committees
Advisory Board
Dines Bjørner (Techn. Univ. of Denmark, Lyngby, Denmark)
Peter van Emde Boas (University of Amsterdam, The Netherlands)
Manfred Broy (Techn. Univ. Munich, Germany)
Michal Chytil (ANIMA Prague, Czech Republic)
Georg Gottlob (Vienna University of Technology, Austria)
Keith G. Jeffery (CLRC RAL, Chilton, Didcot, Oxon, UK)
Maria Zemánková (NSF, Washington DC, USA)
Steering Committee
Branislav Rovan, Chair (Comenius University, Bratislava, Slovakia)
Miroslav Bartošek, Secretary (Masaryk University, Brno, Czech Republic)
Keith G. Jeffery (CLRC RAL, Chilton, Didcot, Oxon, UK)
František Plášil (Charles University, Prague, Czech Republic)
Jan Staudek (Masaryk University, Brno, Czech Republic)
Gerard Tel (Utrecht University, The Netherlands)
Jiří Wiedermann (Academy of Sciences, Prague, Czech Republic)
Program Committee
Bill Grosky, Co-chair (University of Michigan, Dearborn, USA)
František Plášil, Co-chair (Charles University, Prague, Czech Republic)
Aleš Křenek, Secretary (Masaryk University, Brno, Czech Republic)
Radim Bartoš (University of New Hampshire, USA)
Monica Brockmeyer (Wayne State University, USA)
Tiziana Catarci (Universita degli Studi di Roma "La Sapienza", Italy)
Farshad Fotouhi (Wayne State University, USA)
Ophir Frieder (Illinois Institute of Technology, USA)
Hermann Haertig (Technical University Dresden, Germany)
Václav Hlaváč (Czech Technical University, Prague, Czech Republic)
Andrew Johnson (University of Illinois-Chicago, USA)
Guenter Kniesel (University of Bonn, Germany)
Kevin J. Compton (University of Michigan, USA)
Mojmír Křetínský (Masaryk University, Brno, Czech Republic)
Petr Kroha (Technical University Chemnitz, Germany)
Andreas Mauthe (Tecmath and Lancaster University, UK)
Peter Ružička (Komenius University, Bratislava, Slovakia)
Jean-Bernard Stephani (INRIA Alpes, France)
Frank Stomp (Oakland University, USA)
Petr Tuma (Charles University, Prague, Czech Republic)
Rong Zhao (State University of New York at Stony Brook, USA)
Qiang Zhu (University of Michigan, USA)
Additional Referees
D.V. Sreenath
Luděk Matyska
Claude-Joachim Hamann
Mihaela Sabin
Anne-Marie Bosneag
Jianguo Lu
Jan Bouda
Jakub Mácha
Qi Cheng
Radek Pelánek
Organization
SOFSEM 2002 is organized by
Faculty of Mathematics and Physics, Charles University, Prague
Czech Society for Computer Science
Faculty of Informatics, Masaryk University, Brno
Institute of Computer Science, Academy of Sciences, Prague
Institute of Computer Science, Masaryk University, Brno
in co-operation with
Slovak Society for Computer Science
Czech ACM Chapter
Organizing Committee
Jan Staudek, Chair
Miroslav Bartošek
Petr Hanáček
Renata Havelková
Aleš Horák
Dana Komárková
Petr Lidman
Zdeněk Malčík
Tomáš Pitner
Zdeněk Říha
Jaromír Skřivan
Petr Sojka
Tomáš Staudek
Marek Veber
Sponsoring Institutions
ERCIM, the European Research Consortium for Informatics and Mathematics
Hewlett-Packard, s.r.o.
Oracle Czech, s.r.o.
Table of Contents
INVITED TALKS

System Design and Testing Related Theory

Conformance Testing Techniques for Timed Systems .......................... 1
   Hacène Fouchal
Counter-Constrained Finite State Machines: A New Model
for Component Protocols with Resource-Dependencies ....................... 20
   Ralf Reussner

Distributed and Parallel Systems

Equivalence-Checking with Infinite-State Systems: Techniques and Results . 41
   Antonín Kučera and Petr Jančar

Type Theory

On Functions and Types: A Tutorial ....................................... 74
   Fairouz Kamareddine

Multimedia, Databases, and Vision

Database Support for Multisource Multiresolution Scientific Data ......... 94
   Philip J. Rhodes, R. Daniel Bergeron, and Ted M. Sparr
Semantic Annotation and Indexing of News and Sports Videos .............. 115
   Jürgen Assfalg, Marco Bertini, Carlo Colombo, Alberto Del Bimbo,
   and Walter Nunziati
Multimedia Presentations Databases ...................................... 132
   Maria Luisa Sapino
Interactive Indexing and Retrieval of Multimedia Content ................ 135
   Marcel Worring, Andrew Bagdanov, Jan v. Gemert, Jan-Mark Geusebroek,
   Hoang Minh, Guus Schreiber, Cees Snoek, Jeroen Vendrig, Jan Wielemaker,
   and Arnold Smeulders
A Model-Based Approach to Semantic-Based Retrieval
of Visual Information ................................................... 149
   Forouzan Golshani, Youngchoon Park, and Sethuraman Panchanathan
Soft Computing

Bipolarity in Possibilistic Logic and Fuzzy Rules ....................... 168
   Didier Dubois and Henri Prade

PANEL SESSION

Data Management Challenges for GRID Computing ........................... 174
   Keith G. Jeffery

CONTRIBUTED PAPERS

Hungarian Speech Synthesis Using a Phase Exact HNM Approach ............. 181
   Kornél Kovács, András Kocsor, and László Tóth
Modelling Resource Transitions in Constraint-Based Scheduling ........... 186
   Roman Barták
A Specification Framework for Real-Time Scheduling ...................... 195
   K. Subramani
String Transformation for n-Dimensional Image Compression ............... 208
   Martin Kutrib and Jan-Thomas Löwe
Validation and Decomposition of Partially Occluded Images ............... 218
   Costas S. Iliopoulos and Manal Mohamed
Solving Conflicts of Agent Knowledge States in Multiagent Systems ....... 231
   Radoslaw Piotr Katarzyniak and Ngoc Thanh Nguyen
Specification and Verification of Secure Business Transaction Systems ... 240
   Vangalur S. Alagar and Kasi Periyasamy
Agent-Oriented Model of Simulated Evolution ............................. 253
   Marek Kisiel-Dorohinicki
The Reconstruction of Some 3D Convex Polyominoes
from Orthogonal Projections ............................................. 262
   Maciej Gębala
The Complexity of Probabilistic versus Quantum Finite Automata .......... 273
   Gatis Midrijānis
Do We Need Liveness? – Approximation of Liveness Properties
by Safety Properties .................................................... 279
   Ulrich Ultes-Nitsche

Author Index ............................................................ 289
Conformance Testing Techniques for Timed Systems

Hacène Fouchal
Université de Reims Champagne-Ardenne
Moulin de la Housse, BP 1039, 51687 Reims Cedex 2, France
[email protected]
Abstract. In the protocol engineering area, conformance testing is a very important step. It tries to detect errors remaining after the verification step. The addition of time to system modeling makes this step more complicated. This paper aims to give an overview of techniques for conformance testing of timed systems. It first describes some important models used to specify timed systems. Then it shows the main techniques used for testing such systems, focusing on two specific techniques. The first is based on the extraction of a reduced number of test sequences, guided by a property required by the designer (called a test purpose). The second technique is based on the identification of some states of the implementation. The paper also shows how to experiment with those test sequences on a real implementation and how to consider time in such an architecture.
1 Introduction
For several years now, one of the most important challenges has been to provide techniques and tools for designing correct distributed real-time systems. These systems are widely used in everyday life (e-commerce, video-conferencing, . . . ). Obviously, time is one of their main features. It is also one of the most difficult aspects to handle, from modeling to implementation. This paper addresses some techniques devoted to testing timed systems. This study is largely inspired by the protocol engineering area. In that field, two validation techniques are usually used:
– the verification approach, which handles the system specification and tries to prove its correctness (in this case the system is a white box). Usually, user properties are expressed in another formalism, such as temporal logic, and must be verified on the specification, for example by using a model-checker;
– the testing approach, which uses the implementation of the system and tries to detect potential faults without having a priori any information about the structure of the system (in this case the system is a black box). The test generation step produces sequences of inputs (actions) and outputs (reactions) from the specification, and the implementation must be able to respond with the expected outputs when it receives an input sequence.
Various aspects can be checked by testing a system. The following types of testing can be found in the literature: performance testing, robustness testing, and conformance testing. In this paper, we deal only with conformance testing. Conformance testing aims to detect errors in an implementation of a system. In conformance testing, we usually perform two steps. Test sequence generation consists of the derivation of test sequences from the specification. The extraction may be exhaustive or partial. In order to test a system part, a test sequence is generally composed of: a preamble (an action sequence from the initial state), the appropriate test actions for the specific part, and a postamble (an action sequence to go back to the initial state). In many cases, the latter is assumed to be a reset action and should be correctly implemented. Test execution consists of the experimentation of test sequences on the implementation and the observation of its reaction. Then a verdict is expressed: – success: we have not detected any error; – fail: there is an error in the implementation; – inconclusive: no verdict could be given. This paper is structured as follows. Section 2 will detail models dedicated to timed systems. Section 3 presents the related work on testing timed systems. Section 4 will detail our work on conformance testing techniques for timed systems. Many directions have been followed: adaptation of classical untimed techniques, a test purpose based technique with or without identical timing constraints (between the test purpose and the specification), an identification technique for some states, and finally a new technique to describe timed test cases. Section 5 will present the execution of test sequences and the testing architecture adapted to timed systems. Section 6 discusses these techniques and gives some ideas about future work.
2 Timed Systems
A timed system is a reactive system waiting for and sending actions that satisfy timing constraints. It may be a multimedia protocol or a real-time system. Many models have been presented with discrete time [1, 2, 3] or continuous time [4, 5]. The Alur-Dill automata model (proposed in 1991 [6] and confirmed in 1994 [4]) has the best theoretical foundations. In the following, we recall the definitions of the timed input output automaton.
2.1 Timed Automata Model
An automaton has a finite set of states and a finite set of clocks which are real-valued variables. All clocks proceed at the same rate and measure the amount of time that has elapsed since they were started or reset. Each transition of the system might reset some of the clocks, and has an associated enabling condition
which is a constraint on the values of the clocks. A transition can be taken only if the current clock values satisfy its enabling condition. The following definitions are mainly identical to those given in [4], but some of them have been modified in order to widen the field of application of conformance testing.
Definition 1. (Clock constraints and clock guard) A clock constraint over a set C of clocks is an expression x oprel z, where x ∈ C, oprel is a classical relational operator (<, ≤, =, ≥, >), and z is either an integer constant n, or a clock y, or their sum y + n. A clock guard over C is a conjunction of clock constraints over C. It is important to notice at once that all these constraints can be expressed by a relation such as Θ(x1, . . . , x|C|) oprel 0, where Θ is linear.
Definition 2. (Timed Input Output Automata) A timed input output automaton [4] A is defined as a tuple (ΣA, LA, l0A, CA, EA), where:
– ΣA is a finite alphabet, split into two sets: I (input actions), beginning with a "?", and O (output actions), beginning with a "!",
– LA is a finite set of states,
– l0A ∈ LA is the initial state,
– CA is a finite set of clocks,
– EA ⊆ LA × LA × ΣA × 2^CA × Φ(CA) is the set of transitions. An edge (l, l′, a, λ, G) represents a transition from state l to state l′ on input or output symbol a. The subset λ ⊆ CA gives the clocks to be reset with this transition, and G is a clock guard over CA. Φ(CA) is the set of clock guards over CA.
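To make Definition 2 concrete, the following is a minimal Python sketch of a timed input output automaton; the class and field names are choices made for this example (not taken from the paper), and clock-vs-clock constraints of the form x oprel y + n are omitted for brevity.

```python
from dataclasses import dataclass
from typing import FrozenSet, List, Tuple
import operator

OPS = {"<": operator.lt, "<=": operator.le, "=": operator.eq,
       ">=": operator.ge, ">": operator.gt}

@dataclass(frozen=True)
class ClockConstraint:          # x oprel n (integer-bound form only)
    clock: str
    oprel: str
    bound: float

    def holds(self, valuation: dict) -> bool:
        return OPS[self.oprel](valuation[self.clock], self.bound)

@dataclass(frozen=True)
class Transition:               # an edge (l, l', a, lambda, G)
    source: str
    target: str
    label: str                  # "?a" for input actions, "!a" for output actions
    resets: FrozenSet[str]      # lambda: clocks reset by the transition
    guard: Tuple[ClockConstraint, ...]   # G: conjunction of clock constraints

    def enabled(self, location: str, valuation: dict) -> bool:
        return location == self.source and all(c.holds(valuation) for c in self.guard)

@dataclass
class TIOA:                     # (Sigma_A, L_A, l0_A, C_A, E_A)
    alphabet: FrozenSet[str]
    locations: FrozenSet[str]
    initial: str
    clocks: FrozenSet[str]
    edges: List[Transition]
```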
2.2 Region Graph Model
This model has been formally defined in [4]. A Region Graph is an equivalent representation of a timed automaton where a state collects all the moments where the system has the same behavior. Clearly, a region graph state is composed of a timed automaton state (representing the system behavior) and a clock region (it is a polyhedron representing the inequations of the state timing constraints). Finally, we can say that in region graphs the timing constraints are moved to states. The transformation algorithm of timed automata into region graphs is defined in [4]. The theoretical framework about this model is detailed in [4]. Definition 3. (A region graph) A region graph RG is a tuple (ΣRG , SRG , s0RG , RRG , CRG , ERG ) where ΣRG is the set of actions, SRG is the set of states, s0RG is the initial state, RRG is the set of clock regions of RG, CRG is the set of clocks, ERG is the transition relation defined as:
– (s, s′, a) from state s to state s′, labeled with the symbol a. s is a tuple (x, R), where x is a state of the initial timed automaton and R is the clock region during which a can be executed. s′ is a tuple (x′, R′), where x′ is a state of the initial timed automaton and R′ is the reached clock region.
– (s, s′, δ) from state s to state s′, representing the elapse of time needed to reach the clock region R′ from R.
In our present studies we use this model. It is more precise for testing issues: it allows, at any moment, information about the other clocks to be available.
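The clock regions used above can be computed from a clock valuation. The following sketch is an illustrative reconstruction of the standard Alur-Dill region characterisation (integer parts up to the largest constant, which fractional parts are zero, and the ordering of the fractional parts); the dictionary-based interface is an assumption of this example, and exact rational arithmetic would avoid the float-equality pitfalls of the grouping step.

```python
import math
from itertools import groupby

def region_key(valuation, max_const):
    """Canonical key of the clock region containing `valuation`; two valuations
    receive the same key iff they lie in the same region."""
    bounded = {x: v for x, v in valuation.items() if v <= max_const[x]}
    int_parts = tuple(sorted((x, math.floor(v)) for x, v in bounded.items()))
    zero_group = tuple(sorted(x for x, v in bounded.items() if v == math.floor(v)))
    # ordering of the fractional parts of the bounded clocks
    items = sorted(bounded.items(), key=lambda xv: xv[1] - math.floor(xv[1]))
    frac_order = tuple(tuple(sorted(x for x, _ in grp))
                       for _, grp in groupby(items, key=lambda xv: xv[1] - math.floor(xv[1])))
    unbounded = tuple(sorted(x for x in valuation if x not in bounded))
    return (int_parts, zero_group, frac_order, unbounded)

# two valuations in the same region, and one in a different region
assert region_key({"x": 0.3, "y": 1.7}, {"x": 2, "y": 2}) == \
       region_key({"x": 0.4, "y": 1.8}, {"x": 2, "y": 2})
assert region_key({"x": 0.3, "y": 1.7}, {"x": 2, "y": 2}) != \
       region_key({"x": 1.7, "y": 0.3}, {"x": 2, "y": 2})
```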
3 Timed System Testing
There are many works dedicated to the verification of timed automata [7, 8, 9]. Some tools [10, 11] have been developed for this purpose. Other studies have proposed various testing techniques for timed systems. [12] deals with an adaptation of the canonical tester for timed testing, and it has been extended in [13]. In [14], the authors derive test cases from specifications described in the form of a constraint graph. They only consider the minimum and maximum allowable delays between input/output events. [15] presents a specific testing technique which suggests a practical algorithm for test generation. They have used a timed transition system model. The test selection is performed without considering time constraints. [16] gives a particular method for the derivation of the most relevant inputs of the systems. [17] suggests a technique for translating a region graph into a graph where timing constraints are expressed by specific labels using clock zones. [18] suggests a selection technique for timed tests from a restricted class of dense timed automata specifications. It is based on the well-known testing theory proposed by Hennessy in [19]. [20] derives test cases from Timed Input Output Automata extended with data. Automata are transformed into a kind of Input Output Finite State Machine in order to apply classical test generation techniques. The studies closest to ours are the following. [21] gives a general outline and a theoretical framework for timed testing. They proved that exhaustive testing of deterministic timed automata with a dense interpretation is theoretically possible but is still difficult in practice. They suggested performing a kind of discretization of the region graph model (which is an equivalent representation of the timed automata model). Clock regions are only equivalence classes of clock valuations. Their discretization step size takes into account the number of clocks as well as the timing constraints. Then they derive test cases from the generated model. The second study [22] differs from the previous one by using a discretization step size depending only on the number of clocks, which reduces the timing precision of the action execution. The resulting model has to be translated into a kind of Input/Output Finite State Machine, which can be done only under strong and unrealistic assumptions. Finally they extract test cases by using the Wp-method [23]. This area has been rich for many years [23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36]. Most of these methods
use the IOFSM model, where a transition is labeled by a pair of actions (an input and an output). The adaptation of classical methods may cause some problems: it is impossible to express real timing constraints between inputs and outputs.
4 Test Sequence Generation
In this section, we will detail our contributions on testing timed systems. Many directions have been investigated. Each part is detailed and illustrated by examples.
4.1 Label Translation
The first technique is the translation of the region graph into a flattened one where, on any transition, we concatenate the label with the region in which the action will happen [37, 17]. We generate a new finite state machine. We translate the resulting graph into an IOFSM. We do not gather each input action with an output one, since we cannot always guarantee an output action after an input one in the specification. We follow the rules below:
– after each input action we add a virtual output,
– before each output action, we add a virtual input action GetClock which aims to get the actual valuation of the clocks (we can check if we are in the right clock region),
– each elapse action δ is translated into SendClockVal(X)/WakeUp, which expresses that the tester asks the clocks to elapse X time units (computed on the fly) and then they send the WakeUp signal.
Then, we apply the Wp method [23] (characterization of all the specification states). During test execution the virtual output action is ignored. From the derived test cases, we generate executable test cases after translating the time content of the labels into the moments at which the implementation will receive stimuli and will send responses. The hypotheses of this technique are: deterministic systems, completely specified, one initial state, and from each state we cannot have an outgoing transition with an input action and another with an output one.
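A sketch of the translation rules above, applied to one region-graph path. GetClock, SendClockVal and WakeUp are the names given in the text; NULL_OUTPUT stands in for the unnamed virtual output added after each input and is purely an assumption of this example, as is the tuple-based encoding of labels.

```python
def to_iofsm_labels(region_graph_path):
    """region_graph_path: list of (label, region) pairs, where label is '?a', '!a',
    or ('delay', X) for an elapse of X time units; returns IOFSM-style (input, output) pairs."""
    pairs = []
    for label, region in region_graph_path:
        if isinstance(label, tuple) and label[0] == "delay":
            # elapse action: the tester asks for X time units, the clocks answer WakeUp
            pairs.append((f"SendClockVal({label[1]})", "WakeUp"))
        elif label.startswith("?"):
            # input action: concatenate the region and pair it with the virtual output
            pairs.append((f"{label}@{region}", "NULL_OUTPUT"))
        elif label.startswith("!"):
            # output action: precede it with the virtual GetClock input
            pairs.append(("GetClock", f"{label}@{region}"))
    return pairs

# to_iofsm_labels([("?req", "R0"), (("delay", 2), "R1"), ("!ack", "R1")])
# -> [("?req@R0", "NULL_OUTPUT"), ("SendClockVal(2)", "WakeUp"), ("GetClock", "!ack@R1")]
```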
4.2 Test Purpose Based Techniques
Exhaustive generation techniques are very costly. Industry reduces this cost by deriving tests from test purposes suggested by testers. In fact, we then only test some parts of the system. This section will show how to extract executable test cases from properties expressed by users. Their timing constraints may be the same as those of the specification, or different. Two techniques have been developed: one considers the timing constraints of the test purpose and the specification to be similar, and the other considers them to be different.
Test Purpose Constraints Similar to the Specification Ones. Here the specification is first translated into a region graph. The test purpose is also translated into the same model. The test purpose expresses finite properties; it is an acyclic graph. The entire test purpose graph is traversed. When a path (from the initial state) of the test purpose is found in a specification path, the latter is used as a test sequence [38, 39]. The extraction algorithm is described in Algorithm 1:
Algorithm 1. Test purpose based algorithm
    Data: TP (test purpose), S (specification)
    Result: TSS (test sequence set)
    Translation of S into region graph RGS;
    Translation of TP into region graph RGTP;
    PRGTP := possible paths of RGTP;
    for countTP ← 0 to Length(PRGTP) do
        TPcurrent ← PRGTP[countTP];
        TSS ← TSS ∪ search(TPcurrent, S);
    end for
    search(TP, S): returns a set of S paths containing TP.
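The search step of Algorithm 1 can be pictured as follows; this is only a sketch under added assumptions (a depth bound to keep the enumeration finite, and paths compared by their label sequences), not the algorithm of the paper.

```python
def contains(seq, sub):
    """True if `sub` occurs as a contiguous block of `seq`."""
    return any(seq[i:i + len(sub)] == sub for i in range(len(seq) - len(sub) + 1))

def search(tp_labels, spec_edges, initial, max_len):
    """spec_edges: dict mapping a state to a list of (label, next_state) pairs.
    Returns the label sequences of specification paths containing the test purpose."""
    results, stack = [], [(initial, [])]
    while stack:
        state, path = stack.pop()
        if contains(path, tp_labels):
            results.append(path)        # the specification path covers the test purpose
            continue
        if len(path) < max_len:
            for label, nxt in spec_edges.get(state, []):
                stack.append((nxt, path + [label]))
    return results
```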
Example 1. Let S be the specification defined in Fig. 1a and TP the test purpose defined in Fig. 1b. The clock regions of S are illustrated by Fig. 2. The region graph of S is shown in Fig. 3a. The region graph of the test purpose is shown in Fig. 3b. Finally, an extracted test sequence is shown in Fig. 4. We notice that the test sequence is an acyclic graph. In order to guarantee a correct implementation of the action A, we have to submit all the paths starting at the initial state of the region graph of the test purpose. All of them have to lead to the verdict success, except those which end with fail.
– no read-up: If slc(s′) > slc(s), subject s is not to be granted read access for any file on which subject s′ has write access: Let Os′ = {o | o ∈ O ∧ +write ∈ SP(s′, o)}. If +read ∈ SP(s, o), o ∈ Os′, then that right should be deleted from SP(s, o).
– no write-down: If slc(s′) > slc(s), subject s′ is not to be granted write access for any file on which subject s has read access: Let Os = {o | o ∈ O ∧ +read ∈ SP(s, o)}. If +write ∈ SP(s′, o), o ∈ Os, then that right should be deleted from SP(s′, o).
Constraints for other pairs of subject statuses are similarly defined. We omit the details.
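A small sketch of how the two constraints can be enforced over a grant policy; SP is modelled here as a dictionary from (subject, object) pairs to sets of signed rights such as "+read" and "+write", and slc as a dictionary of clearance levels. This representation is an assumption made for the example, not the paper's notation.

```python
def enforce_no_read_up_no_write_down(SP, subjects, objects, slc):
    for s_hi in subjects:
        for s_lo in subjects:
            if slc[s_hi] <= slc[s_lo]:
                continue                      # only pairs with slc(s_hi) > slc(s_lo) matter
            for o in objects:
                hi = SP.setdefault((s_hi, o), set())
                lo = SP.setdefault((s_lo, o), set())
                hi_writes, lo_reads = "+write" in hi, "+read" in lo
                if hi_writes:
                    lo.discard("+read")       # no read-up: delete +read from SP(s_lo, o)
                if lo_reads:
                    hi.discard("+write")      # no write-down: delete +write from SP(s_hi, o)
    return SP
```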
2.4 Security Policy Update Semantics
We assume that S and O remain fixed, while the groups, roles, and access policies assigned by the SSO may change. In the initial state, the grant policy is as defined in Section 2.2. The semantics that computes SP′ in a new state FS′ = ⟨S, O, R′, G′, AS′, AR′, AG′, SG′, SR′⟩ is given in Hoare-style logic. Below we give the semantics for two situations; the rest can be dealt with in a similar manner. We borrowed the operators for function overwriting and domain filtering from the Z specification language [1].
1. [A subject s assumes a new role r] The role r and the set AR(r, o) of rights for the role r on object o already exist in the system.
pre: s ∈ S, r ∈ R, r ∉ SR(s).
The policy SP is updated to be consistent with P2 and P4.
post: AR′ = AR, SR′ = SR ⊕ {s ↦ ({r} ∪ SR(s))};
(SG(s) = ∅) ∧ (+a ∈ AR(r, o)) ∧ ∧_{r′ ∈ SR(s), r′ ≠ r} ∼(−a ∈ AR(r′, o))
   ⇒ (SP′ = SP ⊕ {(s, o) ↦ SP(s, o) ∪ {+a}}),
(SG(s) ≠ ∅) ∧ (+a ∈ AR(r, o)) ∧ ∧_{r′ ∈ SR(s), r′ ≠ r} ∼(−a ∈ AR(r′, o)) ∧ ∧_{g ∈ SG(s)} ∼(−a ∈ AG(g, o))
   ⇒ (SP′ = SP ⊕ {(s, o) ↦ SP(s, o) ∪ {+a}}).
2. [A subject s leaves a group g] We consider four situations depending upon the roles and membership of s in other groups.
pre: s ∈ S, g ∈ G.
The property that holds in the post-state of all the situations is
SG′ = SG ⊕ {s ↦ SG(s) − {g}}; |g| = 1 ⇒ (g′ = ∅) ∧ (AG′ = {(g, o)} ⩤ AG) ∧ (G′ = G − {g}).
Factoring out the above property, the post-condition for each of the four distinct situations is discussed below.
(a) [SG(s) = {g} ∧ SR(s) = ∅] The subject s belongs to only one group g and plays no role. After deletion from the group the subject is reassigned the individual rights.
post: SP′ = SP ⊕ {(s, o) ↦ AS(s, o)}
(b) [SG(s) = {g} ∧ SR(s) ≠ ∅] The subject s belongs to only one group and plays a set of roles. While s was a member of group g, whatever rights were not granted due to conflicts with the role set SR(s) must be restored.
post: (−a ∈ AG(g, o)) ∧ (r ∈ SR(s) ∧ +a ∈ AR(r, o)) ⇒ SP′ = SP ⊕ {(s, o) ↦ SP(s, o) ∪ {+a}}
(c) [|SG(s)| > 1 ∧ SR(s) = ∅] Because subject s has no role to play, it is sufficient to ensure that rights denied to it due to its membership in other groups are restored.
post: (−a ∈ AG(g, o)) ∧ g′ ∈ SG′(s) ∧ (+a ∈ AG(g′, o)) ⇒ SP′ = SP ⊕ {(s, o) ↦ SP(s, o) ∪ {+a}}
(d) [|SG(s)| > 1 ∧ SR(s) ≠ ∅] No change in the enforcement of the policy is required.
Theorem 1. At each system state FS the granting policy for access control does not have any conflict.
The theorem can be proved by showing that the predicate ∼(+a ∈ SP(s, o) ∧ −a ∈ SP(s, o)) is an invariant during the evolution of the system. We omit the proof.
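The first update above ("a subject s assumes a new role r") amounts to adding +a to SP(s, o) whenever the new role grants it and neither another role of s nor any group of s denies it. The following sketch mirrors that reading with dictionaries and sets; the function name and the string encoding of rights are assumptions of this example.

```python
def assume_role(s, r, objects, actions, SR, SG, AR, AG, SP):
    SR[s] = SR.get(s, set()) | {r}
    for o in objects:
        for a in actions:
            if ("+" + a) not in AR.get((r, o), set()):
                continue
            # no other role of s may deny the right ...
            roles_ok = all(("-" + a) not in AR.get((r2, o), set())
                           for r2 in SR[s] if r2 != r)
            # ... and no group s belongs to may deny it (trivially true if SG(s) is empty)
            groups_ok = all(("-" + a) not in AG.get((g, o), set())
                            for g in SG.get(s, set()))
            if roles_ok and groups_ok:
                SP.setdefault((s, o), set()).add("+" + a)
    return SP
```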
3 Business Policy Framework
A business rule mentions subjects, objects (resources), and roles, and suggests either directly or indirectly a sequence of actions to be taken when the rule is not followed. Business policies govern the sets S, O, G, and R of subjects, objects, groups of subjects, and roles that have been defined earlier. Each group is associated with a distinguished subject, called the leader, who is responsible for enforcing business rules in all transactions engaged in by the group.
3.1 Formal Notation for Business Policy
We represent a rule as an implication H ⇐ B, where H is called the head (consequent) of the rule and B is called the body (antecedent) of the rule. In general, the body of a rule is a conjunction of one or more conditions; no disjunction is allowed in the body. The head of a rule, expressed declaratively, is an action specification. Negations may appear in the body, but not in the head of a rule. The context for applying a rule can be specified by a guard condition U. As an example, the rule "All designs created and approved within the company remain the property of the company" is represented by the pair (domain = USE POLICY : own(x, ABC) ⇐ design(x) ∧ create(x, ABC)), where domain = USE POLICY is the guard for the rule. A rule (U : H ⇐ B) is applied in two steps. In the first step the rules satisfying the context condition U are selected. In the second step, B is evaluated for each selected rule. If B evaluates to true, then H is accepted for further processing. The set BP of business rules is partitioned so that each subset has the policies associated with a specific domain. The function DP : D → P BP defines for each domain d ∈ D the set DP(d) ⊂ BP of rules that are relevant to that domain. For every pair of domains di, dj ∈ D, di ≠ dj, DP(di) ∩ DP(dj) = ∅ holds. The policy set DP(d) of domain d can be classified according to the context of its use. The function MA : G → S defines for each group g ∈ G a unique leader MA(g) ∈ S. Leadership is a role in the system, and hence leader ∈ SR(MA(g)). For a subject s belonging to more than one group, the set of leaders Ls under whom s will work can be calculated as the set Ls = {MA(g) | g ∈ SG(s)}. An individual subject s (not in any group and not having a role to play) is responsible for following the rules in only one domain. Denoting the domain that is specific to the subject s by sd, sd ∈ D, the set DP(sd) gives the set of rules to be followed by s. In general, a group g ∈ G may be responsible for dealing with business transactions in more than one domain. Consequently, the leader of a group has to ensure that the rules from multiple domains are followed. The function RD : G → P D defines for each group g ∈ G the set RD(g) ⊂ D of domains for business transactions. The set of business rules to be enforced by the leader of the group g is BR(g) = ⋃_{d ∈ RD(g)} DP(d). For a subject s whose status is not an individual, we can associate the set BR(s) = {(MA(g), BR(g)) | g ∈ SG(s)}, where each member of the set is a pair giving the leader of a group to which s belongs and the rules to be followed by the subject s as a member of that group.
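The two-step application of a rule (U : H ⇐ B) can be sketched as below, with the guard, body and head represented as callables over a context and a variable binding; this representation is an assumption chosen for the example.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, List

@dataclass
class Rule:
    guard: Callable[[Dict[str, Any]], bool]   # U: context condition
    body: Callable[[Dict[str, Any]], bool]    # B: conjunction of conditions
    head: Callable[[Dict[str, Any]], Any]     # H: action specification

def apply_rules(rules: List[Rule], context: Dict[str, Any], bindings: Dict[str, Any]):
    selected = [r for r in rules if r.guard(context)]       # step 1: select by the guard U
    for r in selected:
        if r.body(bindings):                                 # step 2: evaluate the body B
            r.head(bindings)                                 # accept H for further processing

# the design-ownership rule of the text, with ABC as the company name
own_rule = Rule(guard=lambda ctx: ctx.get("domain") == "USE POLICY",
                body=lambda b: b["design"] and b["creator"] == "ABC",
                head=lambda b: b.update(owner="ABC"))
apply_rules([own_rule], {"domain": "USE POLICY"}, {"design": True, "creator": "ABC"})
```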
3.2 Business Policy Update Semantics
The state of the business policy framework at any instant is given by the tuple FB = ⟨S, O, R, G, MA, RD, DP, BP⟩. The state FB changes if the leader of a group changes, a group is assigned responsibilities for new domains, new domains and their policies are added, existing policies and domains are removed, rules within an existing domain change, or policies in the policy database BP change with or without new domains being added. We discuss below the semantics for two policy update situations. The sets S, O, R, G are shared between the business and security frameworks, and hence the semantics discussed in Section 2.4 must be included wherever appropriate.
1. [add a new domain] A new domain d1 can be added only with its corresponding business policies BP1 ⊂ BP. The new domain rules are to be followed by the groups G1 ⊂ G.
pre: d1 ∉ D, G1 ⊂ G
post: D′ = D ∪ {d1}
RD′ = RD ⊕ {g ↦ RD(g) ∪ {d1} | g ∈ G1}
DP′ = DP ⊕ {d1 ↦ DP(d1) ∪ BP1}
If the new domain policies require the formation of new groups, then the semantics for group addition from Section 2.4 must be applied prior to applying the above semantics.
2. [new rules are added for an existing domain] Let NR ⊂ BP be a set of existing rules that are to be included in the set of rules for an existing domain d.
pre: d ∈ D, NR ⊂ BP
post: DP′ = DP ⊕ {d ↦ DP(d) ∪ NR}
RD′ = RD ⊕ {g ↦ RD(g) ∪ {d} | g ∈ Gd}, where Gd = {g | {d} ⊂ RD(g)}.
4 Secure Transaction Specification
Each action in an activity is carried out by a subject or a group within the organization. An activity may involve an atomic action, a sub-activity, a dummy activity, or a loop activity. Sub-activities may be executed either sequentially or in parallel, as dictated by the business rule applicable to that activity. Every atomic action is activated by an event and must be controlled by the security policy. A loop activity is a repetition of the activity controlled by a constraint. A dummy activity does the job of a router, with routing decisions consistent with the business policy. A formal specification for a business activity includes a description of participants and actions. An atomic action specification is (s, a), where s ∈ S, and a is
specified in Hoare-style logic. The precondition of the atomic action a is denoted pre(a) and its postcondition is denoted post(a). When the action defined in an activity is non-atomic, it can be performed by an individual subject, a group, or a collaboration of several subjects. An activity requiring the collaboration of several individuals, or of several groups, necessarily involves non-atomic actions. In these situations, the activity specification is a workflow specification, indicating the order of execution of sub-actions, the order in which the participants execute their actions, and constraints, if any. We define a workflow specification as an extended state machine (V, E), where V is a finite set of locations and E is a finite set of transitions. Each location v ∈ V is a pair (s, a) of a subject s and an action a to be performed by s. A transition e ∈ E from location vi to location vj indicates that the activity at location vj may follow the activity at location vi. Every transition has a constraint (guard) which must evaluate to true for the transition to occur. There are three kinds of locations: (i) a simple location where an atomic action is performed; (ii) a simple location where several atomic actions are simultaneously performed by several subjects; and (iii) a location where a non-atomic action is performed. A location of the third kind can be refined into a set of locations of the first two kinds. We follow the statechart syntax [4] for locations of the second kind. For instance, a location in which subject s1 performs atomic action a1 concurrently with subject s2 performing action a2 is written ((s1, a1) AND (s2, a2)). An incoming transition to a location (s, a) of the third kind is assumed to be a transition to the location (s, a0), where a0 is the first atomic sub-action of a. We simplify the model by not allowing nondeterminism in the state machine.
4.1 Semantics for Secure Transactions
A transaction is a finite sequence of the form W1 ◦ W2 ◦ · · · ◦ Wk, where each Wi is a workflow state machine and the symbol ◦ denotes either sequential execution (;) or parallel execution (∥). The parallel operator (∥) binds more tightly than the sequential operator (;). That is, for the transaction W1; W2 ∥ W3; W4 the concurrent execution of W2 ∥ W3 starts after the termination of W1, and the execution of W4 starts after the completion of the concurrent action W2 ∥ W3. To specify the concurrent execution of two sequentially composed actions, one has to write (W1; W2) ∥ (W3; W4). A workflow is safe if, in its state machine specification, the security policy is satisfied at every one of its atomic action locations and every transition is consistent with the information flow policy. A transaction is secure if every workflow state machine in its specification is safe. If the security policy forbids access to a resource at a workflow location, then the security policy is inconsistent with the business policy specification. An inconsistency may be resolved by an institutional policy resolution mechanism. For instance, the subject at that location may be replaced by another subject for whom the grant policy gives access to the resources in that state.
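The safety and security notions just introduced reduce to two universally quantified checks; a compact sketch (with locations as (subject, action) pairs and the grant and information-flow policies supplied as predicates, both of which are representation choices made here) is:

```python
def workflow_safe(locations, transitions, grant_ok, flow_ok):
    """locations: iterable of (subject, action); transitions: iterable of (src_loc, dst_loc)."""
    return (all(grant_ok(s, a) for s, a in locations) and
            all(flow_ok(src, dst) for src, dst in transitions))

def transaction_secure(workflows, grant_ok, flow_ok):
    """A transaction W1 o W2 o ... o Wk (with o either ';' or '||') is secure
    iff every component workflow state machine is safe."""
    return all(workflow_safe(locs, trans, grant_ok, flow_ok) for locs, trans in workflows)
```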
Atomic Activity Execution Semantics (AES). Let (s, a) be an atomic activity. The precondition pre(a) can involve only those objects for which +read has been granted to s. The postcondition post(a) may deliver a result, change state variables, or do both. The subject s must have been granted +write access on such objects. In addition, the subject s must be certified by the business policies for performing action a.
1. [Validate s for activity a] From the domains of activity for the subject s, select the set of rules Lr that can be applied to activity a.
– Lr = DP(sd), if the status of s is an individual.
– Lr = {(U, H ⇐ B) | d ∈ RD(g) matches the domain predicate of U}, if the status of subject s is a group leader.
– Lr = {(U, H ⇐ B) | d ∈ RD(g) ∧ g ∈ SG(s) matches the domain predicate of U}, if the subject s is a member of one or more groups.
If Lr = ∅, the activity a does not belong to any of the domains for which s has responsibility.
2. [Find the business rules applicable to a] Let var(a) denote the set of variables (including input and output variables) in the specification of a. Evaluate the guard U ∈ Lr after binding it with the variables in var(a). If U evaluates to false, the rule associated with the guard U does not apply to action a. If U evaluates to true, evaluate the implications pre(a) ⇒ B and H ⇒ post(a) after binding with the variables in var(a). If both implications are true, apply the security policy in the next step; otherwise ignore the rule.
3. [Apply the grant policy] Validate s and its granted rights.
– The status of s is an individual: s ∈ S and SG(s) = ∅ must hold. If SR(s) = ∅, the subject has no role to play, hence grant policy rule P1 is used to calculate SP. If SR(s) ≠ ∅, the subject s plays one or more roles, hence grant policy P2 is used to calculate SP.
– The status of s is leader of group g: MA(g) ∈ S ∧ SR(MA(g)) = {leader} must be true. Since the subject s belongs to group g and plays the role of leader, grant policy R4 is used to calculate SP.
– The subject s belongs to one or more groups, that is, SG(s) ≠ ∅ holds. If SR(s) = ∅, grant rule P3 is used to compute SP, otherwise grant rule R4 is used to compute SP.
Verify the two properties:
– for every variable x ∈ pre(a), +read ∈ SP(s, x) is true;
– for every variable y ∈ post(a) whose value or state is changed by the action, +write ∈ SP(s, y) is true.
The workflow for any activity satisfying the above semantics is safe, because in the workflow state machine there is only one location and no transition.
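Read operationally, the three AES steps amount to the following check; the helper predicates stand for the constructions in the text, and treating the absence of an applicable rule as a rejection is an assumption of this sketch.

```python
def atomic_activity_allowed(s, a, rules_for, guard_matches, pre_implies_body,
                            head_implies_post, read_ok, write_ok):
    # step 1: rules from the domains for which s is responsible
    Lr = rules_for(s)
    if not Lr:
        return False                       # activity a is outside s's domains
    # step 2: business rules applicable to a (guard true, pre(a) => B and H => post(a))
    applicable = [(U, H, B) for (U, H, B) in Lr
                  if guard_matches(U, a) and pre_implies_body(a, B)
                  and head_implies_post(H, a)]
    if not applicable:
        return False
    # step 3: grant policy on the variables of a
    return (all(read_ok(s, x) for x in a.pre_vars) and
            all(write_ok(s, y) for y in a.post_vars))
```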
Non-atomic Activity Execution Semantics. We consider four situations, each depending on a specific status of the subject performing the activity.
Case 1. A non-atomic activity is performed by a subject s whose status is an individual or leader of a group g. The workflow specification is a state machine whose locations are (s, a0), (s, a1), . . . , (s, an), where ai is an atomic activity. In such a machine there is no AND location, although a loop action is allowed. The non-atomic action is secure if the following semantics is satisfied:
1. Apply the AES semantics to each location. If the action in a location is a loop action, then in every iteration Step 3 of the AES semantics must be applied.
2. If there is a transition from location (s, ai) to location (s, aj), then post(ai) ⇒ pre(aj) must hold to validate the action decomposition.
3. (x ∈ pre(aj) ∧ +read ∈ SP(s, x)) ∧ (y ∈ post(ai) ∧ +write ∈ SP(s, y)) ⇒ olc(x) ≤ olc(y) must be verified to validate secure information flow.
Case 2. A non-atomic activity is performed by a group g. In every location of the workflow state machine, the atomic action is performed by one or more members of group g. A location may be an AND location and may involve a looping action. The AES semantics is applied to each location. If the action in a location is a loop action, then in every iteration Step 3 of the AES semantics is applied. For an AND location, the AES semantics is applied for each pair (si, ai) in the location. For each transition the following semantics is applied to ensure secure information flow:
1. [Both locations are not AND locations]
– A single subject performs the action at the source and destination locations of the transition and the subjects are the same. Apply the semantics from Step 2, Case 1.
– The subjects are different: the transition is from location (si, ai) to (sj, aj).
• Validity of action decomposition: post(ai) ⇒ pre(aj).
• Secure information flow: (x ∈ pre(aj) ∧ +read ∈ SP(sj, x)) ∧ (y ∈ post(ai) ∧ +write ∈ SP(si, y)) ⇒ olc(x) ≤ olc(y) ∧ slc(si) ≤ slc(sj).
2. [At least one of the locations is an AND location] For simplicity we assume that in an AND location only two subjects are performing concurrently. It is easy to generalize.
[1] The transition is from the AND location ((si, ai) AND (sj, aj)) to the location (sk, ak).
– Validity of action decomposition: post(ai) ∧ post(aj) ⇒ pre(ak)
– Secure information flow:
(x ∈ pre(ak) ∧ +read ∈ SP(sk, x)) ∧ (y ∈ post(ai) ∧ +write ∈ SP(si, y)) ∧ (z ∈ post(aj) ∧ +write ∈ SP(sj, z))
⇒ olc(x) ≤ min{olc(y), olc(z)} ∧ max{slc(si), slc(sj)} < slc(sk)
[2] The transition is from location (sk, ak) to the AND location ((si, ai) AND (sj, aj)).
– Validity of action decomposition: post(ak) ⇒ pre(ai) ∧ pre(aj)
– Secure information flow:
(x ∈ pre(ai) ∧ +read ∈ SP(si, x)) ∧ (y ∈ pre(aj) ∧ +read ∈ SP(sj, y)) ∧ (z ∈ post(ak) ∧ +write ∈ SP(sk, z))
⇒ max{olc(x), olc(y)} < olc(z) ∧ slc(sk) ≤ max{slc(si), slc(sj)}
[3] The transition is from the AND location ((si, ai) AND (sj, aj)) to the AND location ((sk, ak) AND (sl, al)).
– Validity of action decomposition: post(ai) ∧ post(aj) ⇒ pre(ak) ∧ pre(al).
– Secure information flow:
(x ∈ pre(ak) ∧ +read ∈ SP(sk, x)) ∧ (y ∈ pre(al) ∧ +read ∈ SP(sl, y)) ∧ (z ∈ post(ai) ∧ +write ∈ SP(si, z)) ∧ (w ∈ post(aj) ∧ +write ∈ SP(sj, w))
⇒ max{olc(x), olc(y)} < min{olc(z), olc(w)} ∧ max{slc(si), slc(sj)} ≤ min{slc(sk), slc(sl)}.
Case 3. A non-atomic activity is performed by a subject belonging to several groups. The atomic actions in different locations may require the application of different business rules from different domains. The AES semantics takes care of this situation. Since the information flow is confined to the same subject, the Case 1 semantics applies for verifying secure information flow.
Case 4. A non-atomic activity is performed by a collection of several subjects. The atomic actions in different locations may require the application of different business rules from different domains. The AES semantics takes care of this situation. If the information flow along a transition is between subjects of the same group, the Case 2 semantics applies. If the information flow along a transition is between two subjects who do not belong to the same group, the Case 3 semantics applies. The semantic steps, when viewed as proof obligations, provide a means of verifying the security properties for a transaction. The proof method is the well-known axiomatic approach to program verification. Based upon a formal verification the following properties can be proved:
– Every executed action complies with the security and business policies.
– For an aborted action, the security and business policies conflict.
– Information flow from one subject to the successor subject in a workflow specification is minimal, in the sense that it does not violate the information flow policy and is necessary for the progression of the transaction.
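Across Cases 1–4, the secure-information-flow conditions compare object and subject classification levels over a transition; the helper below captures the AND-to-AND case [3] above (dictionary-based levels and the handling of empty sets are assumptions of this example).

```python
def and_to_and_flow_secure(writers, readers, written_vars, read_vars, olc, slc):
    """writers/written_vars belong to the source AND location,
    readers/read_vars to the destination AND location."""
    if not (writers and readers and written_vars and read_vars):
        return True
    objects_ok = max(olc[x] for x in read_vars) < min(olc[z] for z in written_vars)
    subjects_ok = max(slc[s] for s in writers) <= min(slc[s] for s in readers)
    return objects_ok and subjects_ok
```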
5 Conclusion
Several works have appeared on formal security models. However, not many formal business models have been discussed [5]. To the best of our knowledge, no formal specification approach has been investigated that combines security and business policies. However, approaches to detecting security inconsistencies have been studied. Bertino [3] has examined several restrictive role types that result in a separation of business actions, avoiding conflicts. The Petri net approach of Atluri and Huang [2] is not easily adaptable to this kind of application, since Petri nets are reactive, whereas the security issues here are largely prescriptive. Conflict resolution and minimization of workflow are basically static. Verification of security properties in Petri net models can be reduced to reachability analysis. Our work uses set theory and logic for formalizing security and business policies, and uses the Statechart formalism for specifying workflows. A simple calculus with two operators is introduced on workflow machines to formalize a transaction. The formal semantics for system evolution is the basis for a verifiable implementation. We are continuing our research in extending the formalism to handle meta policies, delegation of rights, and nondeterminism in workflow execution. Using our approach we are currently working on a prototype implementation of a secure business system.
Acknowledgement The authors wish to thank the referees for their insightful comments which helped to improve the presentation of the paper.
References
[1] V. S. Alagar, K. Periyasamy, Specification of Software Systems, Springer-Verlag, New York, 1998.
[2] V. Atluri, W. K. Huang, An Authorization Model for Workflows, in: Proceedings of the Fourth ESORICS, LNCS, Eds. E. Bertino, H. Kurth, G. Martella, and E. Montolivo, Rome, Italy, 1996, pp. 44–64.
[3] E. Bertino, E. Ferrari, V. Atluri, A Flexible Model for the Specification and Enforcement of Role-Based Authorizations in Workflow Management Systems, in: Proceedings of the 2nd ACM Workshop on Role-Based Access Control (RBAC97), ACM Press, New York, 1997, pp. 6–7.
[4] D. Harel, Statecharts: a visual formalism for complex systems, Science of Computer Programming, 8(3):231–274, June 1987.
[5] F. Mili, On the Formalization of Business Rules: Generic Rules for Composition, and Containment, in: Proceedings of the Second ECOOP Workshop on Precise Behavioral Semantics, June 1998, pp. 122–129.
Agent-Oriented Model of Simulated Evolution

Marek Kisiel-Dorohinicki
Department of Computer Science, University of Mining and Metallurgy, Kraków, Poland
[email protected]
Abstract. The paper deals with a specific class of multi-agent systems, in principle similar to evolutionary algorithms, but utilising a more complex, since decentralised, model of evolution. The proposed layered architecture uses the notion of a profile that models strategies and goals of an agent with respect to some aspect of its operation. The paper presents main ideas of the architecture illustrated by a concrete realisation that is an evolutionary multi-agent system solving a generic optimisation problem.
1 Introduction
During the last years, the idea of an intelligent/autonomous software agent has gained more and more applications in various domains. Agent technology provides concepts and tools for the development of complex, distributed, and decentralised systems [6]. Agents also play a key role in the integration of AI sub-disciplines, which is often necessary to design and build modern intelligent systems. Still, the literature offers a variety of agent definitions, which range from very simple to lengthy and demanding. In fact this should not be considered a problem, since "the notion of an agent is meant to be a tool for analysing systems, not an absolute characterization that divides the world into agents and non-agents" [8]. Indeed, the term multi-agent system has a well-understood meaning and a corresponding definition can be easily formulated: a multi-agent system (MAS) is simply a collection of agents aiming at solving a given problem. Since solving the problem usually stays beyond the individual capabilities or knowledge of each single agent, the key concept here is intelligent interaction (coordination, cooperation, negotiation). Thus multi-agent systems are ideally suited to representing problems that have multiple problem-solving methods, multiple perspectives, and/or multiple problem-solving entities [6]. Of course a multi-agent system may be implemented without any software structures corresponding to agents at all. This often happens for simulation systems, where the introduction of agents facilitates modelling of complex phenomena – natural, social, etc. In such cases agents constitute the building blocks of the simulation model, which may or may not be implemented with the use of agent
This work was partially sponsored by State Committee for Scientific Research (KBN) grant no. 4 T11C 027 22.
technology. An evolutionary multi-agent system (EMAS) has analogous characteristics, even though it is rather a computationally intelligent system that may be considered an extension of classical evolutionary algorithms. The key idea of EMAS is the incorporation of evolutionary processes into a multi-agent system (MAS) at a population level. It means that besides the interaction mechanisms typical for MAS (such as communication), agents are able to reproduce (generate new agents) and may die (be eliminated from the system). A decisive factor of an agent's activity is its fitness, expressed by the amount of a possessed non-renewable resource called life energy. Selection is realised in such a way that agents with high energy are more likely to reproduce, while low energy increases the possibility of death. Although evolutionary computation – a heuristic problem-solving approach based on models of organic evolution – has been successfully used in solving various problems for over 40 years, the model of evolution employed by most evolutionary algorithms is much simplified and lacks many important features observed in organic evolution. These include dynamically changing environmental conditions, many criteria in consideration, neither global knowledge nor generational synchronisation assumed, co-evolution of species, an evolving genotype–phenotype mapping, etc. [1]. That is why many variations of classical evolutionary algorithms were proposed, introducing e.g. some population structure (in parallel evolutionary algorithms) or specialised selection mechanisms (like fitness sharing). The main advantage of EMAS is that it covers various specialised techniques in one coherent model. Yet EMAS is, due to its features and fields of application, a very specific sub-type of multi-agent systems and thus needs a special architecture that would be more adequate and easier to design and implement. Since existing formalisms for MAS may not be easily applied to this kind of agent system, a simple yet extensible model of MAS based on the M-Agent architecture is first proposed in the paper (for further reference see e.g. [2]). This constitutes a basis for a description of evolutionary phenomena at the level of a single agent and its internal architecture. The described concepts are briefly illustrated by an application of EMAS in the field of numerical optimisation.
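A minimal sketch of the energy-driven selection just described: agents with little energy may die, agents with enough energy may reproduce and pass part of their energy to the offspring. The thresholds, probabilities and the way energy is split are assumptions of this example, not part of the EMAS definition.

```python
import random
from dataclasses import dataclass

@dataclass
class Agent:
    genotype: list
    energy: float

def selection_step(population, death_threshold=0.1, repro_threshold=1.0,
                   mutate=lambda g: g):
    survivors, offspring = [], []
    for ag in population:
        if ag.energy < death_threshold and random.random() < 0.5:
            continue                                  # low energy: the agent may die
        if ag.energy > repro_threshold and random.random() < 0.5:
            child_energy = ag.energy / 2              # part of the parent's energy moves to the child
            ag.energy -= child_energy
            offspring.append(Agent(mutate(list(ag.genotype)), child_energy))
        survivors.append(ag)
    return survivors + offspring
```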
2 General Model of MAS
The multi-agent system consists of a set of agents (ag ∈ Ag) and some environment (env) they live in:
MAS ≡ ⟨Ag, env⟩ .    (1)
The environment may have a spatial structure and contain some information and/or resources, which may be observed by the agents:
env ≡ ⟨Res, Inf, sp⟩    (2)
where Res and Inf represent global (available in the whole system) resources and information, respectively. At the same time sp represents all features of EMAS related to the existence of some space (if any) including:
– possible locations of agents and local information or resources (topology),
– actual agents' positions,
– information or resources available in specific regions of the space,
– range of observation and/or activity of agents.

Fig. 1. General structure of a multi-agent system according to the proposed model
The space is most often represented as a graph (Fig. 1), and thus may be described as:
sp ≡ ⟨Pl, Tr, Loc⟩    (3)
where:
Pl – set of possible locations: pl ∈ Pl (nodes of the graph),
Tr – relation of direct neighbourhood between locations: Tr ⊂ Pl × Pl (edges of the graph),
Loc – relation representing positions of agents: Loc : Ag → Pl.
Each location pl ∈ Pl may be described in terms of local (available in this location) resources and information:
pl ≡ ⟨Res_pl, Inf_pl⟩ .    (4)
The state of local resources Res_pl and information Inf_pl may be observed and/or changed only by agents which remain close enough (this should be defined by a particular application) to the location pl.
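Definitions (1)–(4) translate directly into nested data structures; the sketch below mirrors the symbols of the text, while the concrete container types (dictionaries and string identifiers) are assumptions made for the example.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, Set, Tuple

@dataclass
class Location:                  # pl = <Res_pl, Inf_pl>
    resources: Dict[str, float] = field(default_factory=dict)
    information: Dict[str, Any] = field(default_factory=dict)

@dataclass
class Space:                     # sp = <Pl, Tr, Loc>
    places: Dict[str, Location] = field(default_factory=dict)       # Pl (nodes of the graph)
    neighbours: Set[Tuple[str, str]] = field(default_factory=set)   # Tr (edges of the graph)
    positions: Dict[str, str] = field(default_factory=dict)         # Loc : Ag -> Pl

@dataclass
class Environment:               # env = <Res, Inf, sp>
    resources: Dict[str, float] = field(default_factory=dict)
    information: Dict[str, Any] = field(default_factory=dict)
    space: Space = field(default_factory=Space)

@dataclass
class MultiAgentSystem:          # MAS = <Ag, env>
    agents: Dict[str, Any] = field(default_factory=dict)
    env: Environment = field(default_factory=Environment)
```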
3 Profile-Based Architecture of an Agent
The functionality of each agent is defined by a set of actions (act ∈ Act) it is able to perform. Its internal architecture is described in terms of profiles (prf ∈ Prf):
ag ≡ ⟨Act, Prf⟩ .    (5)
The action is an atomic (indivisible) activity which may be executed by the agent in the system. Each profile defines the state of an agent from the point of view of a particular aspect of its functionality. The profile may concern some resource possessed by the agent ("physical" or "energetic" profiles):
prf_res ≡ ⟨res, St, Gl⟩    (6)
where:
res – amount of the possessed resource,
St – set of strategies related to this resource,
Gl – set of goals related to this resource.
The profile may also be dedicated to modelling (a part of) the environment and/or (some features of) other agents ("information" or "intellectual" profiles):
prf_inf ≡ ⟨mdl, St, Gl⟩    (7)
where:
mdl – piece of information representing the agent's knowledge about the world it lives in (the model of the world),
St – set of strategies related to this model,
Gl – set of goals related to this model.
The model is constructed by an agent using the information acquired via observation of its neighbourhood or from other agents via communication. Of course, this information may (in fact must) be incomplete and uncertain. In both cases St denotes a set of strategies (st ∈ St) describing how each action is related to a particular profile. Thus a strategy st, which describes action act, may be defined in a physical profile as:
st : res → res′    (8a)
and in an intellectual profile:
st : mdl → mdl′ .    (8b)
Agent-Oriented Model of Simulated Evolution
prf 5 prf 4
gl ∗4
prf 4
prf 3 prf 2
st ∗5
prf 5
prf 2
prf 1
st ∗3 gl ∗2
(b)
prf 5
prf 2
st ∗5
prf 5
prf 4
prf 4 act 1
prf 2
prf 1
gl 1 (d)
(c)
st ∗5
prf 5
act 2
prf 4 st ∗3
prf 3
gl ∗2
act 1
gl ∗2
prf 1
(a)
prf 3
st ∗3
prf 3 prf 2
prf 1
st ∗3
prf 5 prf 4
prf 3 gl ∗2
257
gl ∗2
prf 3 prf 2
prf 1
gl ∗2
prf 1 (e)
(f)
Fig. 2. An example of agent’s decision making
In the particular case decision making means selection of the strategy to be realised and then the action(s) to be performed. The internal architecture of an agent does not enforce the specific rules of decision making. What is more, without stronger assumptions, this problem is ambiguous because of many profiles, and thus various goals to be achieved by an agent (at the same time). The most important thing to be established seems the selection of an active goal, for which such a strategy exists that actions to be performed do not violate passive goals of remaining profiles. The proposed model of decision making is related to the concept of a layered agent architecture [7] and assumes some order in the set of profiles Prf ≡ (Prf , ≺), which allows for definite selection of the action to perform. This order defines priorities of active goals, as well as the direction of search for appropriate strategy and its verification by passive goals. Based on this assumption the decision making process consists of three stages: 1. selection of the (next) active goal of the lowest priority, 2. search for a strategy which satisfies the selected goal, 3. verification of selected strategy by passive goals of remaining profiles.
258
Marek Kisiel-Dorohinicki
When any stage fails, the process returns to the previous stage looking for the next element to consider. When stage 1 fails an agent remains idle (i. e. performs no actions). This procedure is illustrated by a simple example in Fig. 2: a) selection of an active goal (stage 1), b) search for a strategy (stage 2), c) verification of an action (stage 3), d) action verification failed (return to stage 2), e) repeated search for a strategy (stage 2), f) action successfully verified (stage 3).
4
Modelling Phenomena of Evolution
Following neodarwinian paradigms, two main components of the process of evolution are inheritance (with random changes of genetic information by means of mutation and recombination) and selection. They are realised by the phenomena of death and reproduction, which may be easily modelled as actions executed by agents: – action of death results in the elimination of an agent from the system, – action of reproduction is simply the production of a new agent from its parent(s). Inheritance is to be accomplished by an appropriate definition of reproduction, which is similar to classical evolutionary algorithms. The set of parameters describing core properties of an agent (genotype) is inherited from its parent(s) – with the use of mutation and recombination. Besides, an agent may possess some knowledge acquired during its life, which is not inherited. Both the inherited and acquired information determines the behaviour of an agent in the system (phenotype). Selection is the most important and most difficult element of the model of evolution employed in EMAS. This is due to assumed lack of global knowledge (which makes it impossible to evaluate all individuals at the same time) and autonomy of agents (which causes that reproduction is achieved asynchronously). In such a situation selection mechanisms known from classical evolutionary computation cannot be used. The proposed principle of selection corresponds to its natural prototype and is based on the existence of non-renewable resource called life energy. The energy is gained and lost when agents execute actions in the environment. Increase in energy is a reward for ‘good’ behaviour of an agent, decrease – a penalty for ‘bad’ behaviour (which behaviour is considered ‘good’ or ‘bad’ depends on the particular problem to be solved). At the same time the level of energy determines actions an agent is able to execute. In particular low energy level should increase possibility of death and high energy level should increase possibility of reproduction. To provide a complete description of EMAS in terms of the proposed agent architecture only a few details reflecting evolutionary nature of the system should be completed. These are the mechanisms of selection and reproduction described in energetic and reproductive profiles. As it was already announced, selection in EMAS is based on specific mechanisms, which are mostly driven by an energetic profile (prf eng ) consisting of:
Agent-Oriented Model of Simulated Evolution
259
– resource eng – life energy, – goal to keep the level of energy above minimal value eng min , – strategies describing all agent’s actions in terms of energy gain and loss, particularly the action of death: st die : eng → eng ind
(9)
which is understood in this profile as a change of the state of life energy to indefinite level eng ind > eng min , and thus may be described as: prf eng = eng , St eng = {st die , . . .}, Gl eng = {eng > eng min } .
(10)
As long as the level of life energy is above eng min the goal of energetic profile is conservative and blocks the realisation of actions which may decrease the amount of eng below this limit. When the energetic state drops below eng min the goal energetic profile becomes active and triggers the strategy of death. The agent’s striving for reproduction is modelled by a reproductive profile (prf rp ), which consists of: – resource hr , which determines the agent’s ability to reproduce, – strategy describing the action of reproduction as reducing the level of hr to its minimal value (hr min ): st rp : hr → hr min
(11)
and maybe other strategies related to resource hr , – goal to keep the level of hr below the maximal value hr max > hr min , and thus may be described as: prf rp = hr , St rp = {st rp , . . .}, Gl rp = {hr < hr max } .
(12)
The amount of resource hr may increase (or decrease) depending on the situation of the agent, i.e. its age, interactions with the environment and other agents, etc. When it reaches the level of hr max the agent tries to reproduce, expecting that it should lower the level of hr . The reproduction is successful if the state of the agent (e. g. amount of life energy) and its neighbourhood allows for the generation of a new agent. Conforming to (5) an evolving agent is thus described as: ag = Act = {die , rp , . . .}, Prf = (prf eng , prf rp , . . .) .
(13)
What lacks here is a profile (or profiles) reflecting the problem, which is to be treated (solved) by EMAS, and actions reflecting the solving process. These elements cannot be specified here because they are closely related to a particular application domain. Such specific profile and actions dedicated for optimisation problems are shortly described in the next section.
260
5
Marek Kisiel-Dorohinicki
EMAS for Numerical Optimisation
As an illustration to above considerations the simplest possible practical application of the proposed architecture will be presented: an evolutionary multi-agent system for numerical optimisation (cf. [4]). In the particular EMAS the population of agents represents feasible solutions to the problem defined by (a system of) objective function(s). The key issue here is the design of energetic reward/punishment mechanism, which should prefer better (with respect to the defined criterion or criteria) agents. This is done via energy transfer principle forcing worse agents to give a fixed amount of their energy to the encountered better agents. This may happen, when two agents inhabiting one place communicate with each other and obtain information about their quality with respect to known objective function(s). According to (13) each agent in the system may be described as: ag = Act = {die , rp , ask , td , rd }, Prf = (prf eng , prf rp , prf opt )
(14)
where ask , td , rd denote actions realising the energy transfer principle and prf opt represents a dedicated optimisation profile. An optimisation profile is a problem-dependent profile, which encapsulates information about the solution represented by an agent, which is inherited during reproduction. In fact this is the only component of an agent’s genotype and thus the crucial element of the whole process. This profile also contains strategies describing actions of energy transfer principle: ask (ask for information about the quality of solution represented by another agent), td (transmit energy), and rd (receive energy). Of course the last two actions must be also described in an energetic profile. The flow of energy connected with the transfer principle causes that better agents are more likely to reproduce, whereas worse ones are more likely to die. This way, in successive generations, the agents should represent better approximations of the solution to the problem.
6
Concluding Remarks
The proposed model of EMAS was successfully used as a base for a number for applications. Application areas range from numerical optimisation to hybrid soft computing systems involving fuzzy systems (e. g. data classification) and neural networks (e. g. time-series prediction). Concerning computational systems, EMAS enables the following: – local selection allows for intensive exploration of the search space, which is similar to parallel evolutionary algorithms, – the way phenotype (behaviour of the agent) is developed from genotype (inherited information) depends on its interaction with the environment, – self-adaptation of the population size is possible when appropriate selection mechanisms are used.
Agent-Oriented Model of Simulated Evolution
261
What is more, explicitly defined living space facilitates implementation in a distributed computational environment. As the experimental results show the usefulness of the proposed model, the future research should lead to refining of the architecture based on analysis of the design and implementation process of EMAS applications in a variety of soft computing problems.
References [1] T. B¨ ack, U. Hammel, and H.-P. Schwefel. Evolutionary computation: Comments on the history and current state. IEEE Trans. on Evolutionary Computation, 1(1), 1997. 254 [2] E. Cetnarowicz, E. Nawarecki, and K. Cetnarowicz. Agent oriented technology of decentralized systems based on the M-Agent architecture. In Proc. of the MCPL’97, IFAC/IFIP Conference, 1997. 254 [3] K. Cetnarowicz, M. Kisiel-Dorohinicki, and E. Nawarecki. The application of evolution process in multi-agent world (MAW) to the prediction system. In M. Tokoro, editor, Proc. of the 2nd Int. Conf. on Multi-Agent Systems (ICMAS’96). AAAI Press, 1996. [4] G. Dobrowolski, M. Kisiel-Dorohinicki, and E. Nawarecki. Evolutionary multiagent system in multiobjective optimisation. In M. Hamza, editor, Proc. of the IASTED Int. Symp.: Applied Informatics. IASTED/ACTA Press, 2001. 260 [5] S. Franklin and A. Graesser. Is it an agent, or just a program?: A taxonomy for autonomous agents. In J. P. M¨ uller, M. Wooldridge, and N. R. Jennings, editors, Intelligent Agents III, volume 1193 of Lecture Notes in Artificial Intelligence. Springer-Verlag, 1997. [6] N. R. Jennings, K. Sycara, and M. Wooldridge. A roadmap of agent research and development. Journal of Autonomous Agents and Multi-Agent Systems, 1(1), 1998. 253 [7] J. P. M¨ uller. The Design of Intelligent Agents: A Layered Approach, volume 1177 of Lecture Notes in Artificial Intelligence. Springer-Verlag, 1996. 257 [8] S. J. Russell and P. Norvig. Artificial Intelligence: A Modern Approach. Prentice Hall, 1995. 253
The Reconstruction of Some 3D Convex Polyominoes from Orthogonal Projections Maciej G¸ebala Institute of Mathematics, Wroclaw University of Technology Janiszewskiego 14, 50–370 Wroclaw, Poland [email protected]
Abstract. The reconstruction of discrete two- or three-dimensional sets from their orthogonal projections is one of the central problems in the areas of medical diagnostics, computer-aided tomography, and pattern recognition. In this paper we will give a polynomial algorithm for reconstruction of some class of convex three-dimensional polyominoes that has time complexity O(n7 log n).
1
Introduction
A unit cube is a cube of volume one, whose centre belongs to Z3 and whose vertices belong to the lattice (Z + 1/2)3 . We do not distinguish between a unit cube and its centre, thus (x, y, z) denotes the unit cube of centre (x, y, z). A threedimensional polyomino is a finite connected union of unit cubes. In this paper we consider polyominoes contained in a finite lattice {1, . . . , n}3 . The lattice with a polyomino corresponds to the three-dimensional binary matrix R, when the 1’s correspond to filling positions in the lattice, and 0’s correspond to empty positions. The slices are two-dimensional sections of the matrix R. We define i i , SYj and SZk by SX [j, k] = R[i, j, k], SYj [i, k] = R[i, j, k] the slices SX k and SZ [i, j] = R[i, j, k], respectively, for i, j, k ∈ {1, . . . , n}. The bars are one-dimensional sections of the matrix R. We define j,k j,k , BYi,k and BZi,j by BX [i] = R[i, j, k], BYi,k [j] = R[i, j, k] the bars BX and BZi,j [k] = R[i, j, k], respectively, for i, j, k ∈ {1, . . . , n}. For a polyomino P in the matrix R we define three two-dimensional matrices of orthogonal projections (i. e. the number of 1’s in each bar of matrix R): PT , PF and PS by PT (P )[i, j] =
n
R[i, j, k],
PF (P )[i, k] =
n
R[i, j, k],
j=1
k=1
PS (P )[j, k] =
n
R[i, j, k],
i=1
for i, j, k ∈ {1, . . . , n}.
Supported by the KBN grant No. 7 T11C 03220.
W.I. Grosky and F. Pl´ aˇ sil (Eds.): SOFSEM 2002, LNCS 2540, pp. 262–272, 2002. c Springer-Verlag Berlin Heidelberg 2002
The Reconstruction of Some 3D Convex Polyominoes
263
´½ ½ ½µ
´
½
µ
½
a)
½
½
½
½
b)
Fig. 1. a) three-dimensional matrix R and adequate matrices of orthogonal projections; b) example of full convex 3D polyomino with the matrices of orthogonal projections
Definition 1. The three-dimensional polyomino P in the matrix R is a full convex polyomino if each slice of R contains a two-dimensional hv-convex polyomino (i. e. connected set of 1’s with the property that in every row and every column of the slice the 1’s forms a connected block) and at least one matrix of projections does not contain zeros (is full). We consider that always PT is a full matrix. Thus each bar BZi,j contains at least one 1. We can now define the problem of reconstructing a full convex 3D polyomino P from its orthogonal projections: Given three assigned matrices PF ∈ {0, . . . , n}n×n , PS ∈ {0, . . . , n}n×n and PT ∈ {1, . . . , n}n×n , we examine whether there exists at least one matrix R ∈ {0, 1}n×n×n with a full convex polyomino P such that PT (P ) = PT , PF (P ) = PF and PS (P ) = PS . 1.1
Previous Results
The reconstruction of polyominoes from orthogonal projections is one of classical problems of discrete tomography and has been an intensive study for some years. But all the results concern the two-dimensional case. First Ryser [8], and subsequently Chang [2] and Wang [9] studied the existence of a pattern satisfying orthogonal projections (H, V ) in the class of sets without any conditions. They showed that the decision problem can be solved in time O(mn). These authors also developed some algorithms that reconstruct the pattern from (H, V ). Woeginger [10] proved that the reconstruction problem in the class of polyominoes is an NP-complete problem. Barcucci, Del Lungo, Nivat, Pinzani [1]
264
Maciej G¸ebala
showed that the reconstruction problem is also NP-complete in the class of hconvex polyominoes and in the class v-convex polyominoes. The first algorithm that establishes the existence of an hv-convex polyomino satisfying a pair of assigned vectors (H, V ) in polynomial time was described by Barcucci et al. in [1]. Its time complexity is O(m4 n4 ) and it is rather slow. G¸ebala [5] showed the faster version of this algorithm with complexity O(min(m2 , n2 ) · mn log mn). The latest algorithm described by Chrobak and D¨ urr in [3] reconstructs the hv-convex polyomino from orthogonal projection in time O(min(m2 , n2 ) · mn). Moreover G¸ebala in [6] shows that the problem of the reconstruction twodimensional polyominoes from approximately orthogonal projections is NP-complete in the classes of (1) polyominoes, (2) horizontal convex polyominoes and (3) vertical convex polyominoes. And proves that, for an arbitrary chosen function of error, the problem is in P for the class of hv-convex polyominoes and shows the algorithm with complexity O(m3 n3 ). For three-dimensional case only one result was known. A general problem of the reconstruction of three-dimensional lattice sets from orthogonal projections is NP-complete [7, 4]. 1.2
Our Results
In this paper we show a polynomial algorithm for the reconstruction of full convex 3D polyominoes with complexity O(n7 log n). In this reconstruction we use some properties of 2D convex polyominoes described in [5].
2
Some Properties of 2D hv-Convex Polyominoes
Let Q be a matrix which has n × n cells containing 0’s and 1’s. Let S be a set of cells containing 1’s which represents an hv-convex 2D polyomino. Let hi denotes the projection of i-th row and vj denotes the projection of j-th column. Moreover let k k hi and Vk = vj , Hk = i=1
j=1
for k ∈ {1, . . . , n}, and H0 = 0, V0 = 0 and A = Hn = Vn . Definition 2. For all j ∈ {1, . . . , n} we define Dj = min{ i ∈ {1, . . . , m − 1} : A − Hi < A − Vj Uj = max{ i ∈ {2, . . . , m} : Hi−1 < Vj−1 Dj Uj
∨
i = m},
i = 1},
= min{ i ∈ {1, . . . , m − 1} : A − Hi < Vj−1 = max{ i ∈ {2, . . . , m} : Hi−1 < A − Vj
∨
∨
∨
i = m},
i = 1}.
Lemma 1 ([5]). For all j ∈ {1, . . . , n} we have Uj ≤ Dj and Uj ≤ Dj . And for all j ∈ {1, . . . , n − 1} we have Dj + 1 ≥ Uj+1 and Dj+1 + 1 ≥ Uj .
The Reconstruction of Some 3D Convex Polyominoes
265
Fig. 2. Properties of 2D hv-convex polyomino
Lemma 2 ([5]). Let cells [1, p1 ] and [n, p2 ] belong to the polyomino S in the matrix Q, for the fixed p1 and p2 . Then 1. if p1 ≤ p2 then cells [1, p1 ], . . . , [Dp1 , p1 ], and [Uj , j], . . . , [Dj , j] for j ∈ {p1 + 1, . . . , p2 − 1}, and [Up2 , p2 ], . . . , [n, p2 ] also belong to the polyomino S, or 2. if p1 ≥ p2 then cells [Up2 , p2 ], . . . , [n, p2 ], and [Uj , j], . . . , [Dj , j] for j ∈ {p2 + 1, . . . , p1 − 1}, and [Dp1 , p1 ], . . . , [1, p1 ] also belong to the polyomino S. Corollary 1 ([5]). If we have at least one cell which belongs to S in the first and the last row of the matrix Q then we can compute at least one cell which belongs to S in each row of the matrix Q. These computations cost O(n). We can change symmetrically the word row by the word column in above lemmas and they still hold.
3
Algorithm of Reconstruction
In this section we describe a polynomial algorithm for the reconstruction of full convex 3D polyominoes. The main idea of the algorithm is to test all possible positions of 1’s in four corner bars BZ1,1 , BZ1,n , BZn,1 and BZn,n . If we fix any initial positions in corner bars we will use Lemma 2 for computing positions at least one 1 in each bar BZi,j . If we have at least one 1 in each bar BZi,j we will start filling procedure that return the polyomino or the word fail, when a polyomino with these initial positions does not exist.
266
3.1
Maciej G¸ebala
Initial Positions
We arbitrary fix the positions of 1’s in corner bars BZ1,1 , BZ1,n , BZn,1 and BZn,n . 1 We call these positions by the initial positions. Now we consider two slices SX n and SX . Both will contain, by Definition 1, a two-dimensional hv-convex polyomino. Both contain in the first and the last column at least one 1. Thus from Corollary 1 we can compute positions of 1’s in each column of these slices, i. e. in each bar BZ1,i and BZn,i , for i ∈ {1, . . . , n}. Now we consider all slices SYj , for j ∈ {1, . . . , n}. Each slice SYj contain now at least one 1 in the first and the last column. We can use again Corollary 1 for each slice. Thus we have at least one 1 in each column of each slice SYj . In other words we have at least one 1 in each bar BZi,j , for i, j ∈ {1, . . . , n}. The set of positions computed above we denote by the start positions. It is easy to see that the following lemma is correct Lemma 3. If there exists the full convex 3D polyomino P that satisfies matrices of projections PT , PS and PF , and if the initial positions belong to the polyomino P then all of the start positions also belong to P. Lemma 4. Computing the start positions costs O(n2 ). Proof. We perform n + 2 times Corollary 1. Thus the cost is equal to O(n2 ). 3.2
Filling Procedure
The procedure described below is three-dimensional modifications of the filling procedure described in [5]. We use balanced binary trees (like e. g. AVL) with following operations: – empty(tree) – returning true if tree is empty and false otherwise, with the complexity O(1); – delete(k,tree) – deleting element k from tree, with the complexity O(log |tree|); – insert(k,tree) – inserting element k in tree where k ∈ tree; or doing nothing otherwise, with the complexity O(log |tree|); – min(tree) – returning a minimal element of tree, with complexity O(log |tree|); – max(tree) – returning a maximal element of tree, with complexity O(log |tree|); where |tree| means the size of tree. We have three global variables treeX , treeY and treeZ which are balanced binary trees. In these trees we will store the indices of bars, which we will review in the next step of the main loop of our procedure. For each bar we define the following auxiliary variables: l, r, p, q, ˜l, r˜, p˜, q˜ and free0 . The variable l is a minimal position containing 1, r is a maximal position containing 1, p is a minimal position without 0 and q is a maximal position without 0. The variables ˜l, r˜, p˜ and q˜ are the temporary values of l, r, p and q,
The Reconstruction of Some 3D Convex Polyominoes
267
respectively. The variable free0 is a balanced tree containing positions of 0’s which are between p˜ and q˜. We initialise these variables as follow: l = ˜l = n + 1, r = r˜ = 0, p = p˜ = 1, q = q˜ = n and free0 = nil, where nil means an empty tree. Now we can introduce two auxiliary operations putting 1 and 0 in a bar: j,k put 0 in bar BX at the position i ([i, j, k]): if R[i, j, k] = 1 then exit( fail ) if R[i, j, k] = 0 then R[i, j, k] ← 0, insert((i, k),treeY ), insert((i, j),treeZ ) modify0 bar BYi,k at the position j with the matrix PF i,j modify0 bar BZ at the position k with the matrix PT
j,k put 1 in bar BX at the position i ([i, j, k]): if R[i, j, k] = 0 then exit( fail ) if R[i, j, k] = 1 then R[i, j, k] ← 1, insert((i, k),treeY ), insert((i, j),treeZ ) modify1 bar BYi,k at the position j with the matrix PF i,j modify1 bar BZ at the position k with the matrix PT
Where suboperations modify0 and modify1 are defined consecutively by modify0 bar B j,k at the position i with the matrix P : with B j,k do if r ≥ l then if i < l and i ≥ p˜ then p˜ ← i + 1 if i > r and i ≤ q˜ then q˜ ← i − 1 else if i < p˜ + P [j, k] and i ≥ p˜ then p˜ ← i + 1 while not empty(free0 ) and (m ←min(free0 ))< p˜ + P [j, k] do delete(m,free0 ), p˜ ← m + 1 if i > q˜ − P [j, k] and i ≤ q˜ then q˜ ← i − 1 while not empty(free0 ) and (m ←max(free0 ))> q˜ − P [j, k] do delete(m,free0 ), q˜ ← m − 1 if p˜ + P [j, k] ≤ i ≤ q˜ − P [j, k] then insert(i,free0 ) modify1 bar B j,k at the position i with the matrix P : with B j,k do if r < l then l←r←˜ l ← r˜ ← i if p˜ < i − P [j, k] + 1 then p˜ ← i − P [j, k] + 1 if q˜ < i + P [j, k] − 1 then q˜ ← i + P [j, k] − 1 while not empty(free0 ) do m ←min(free0 ), delete(m,free0 ) if m < i and m + 1 > p˜ then p˜ ← n + 1 if m > i and m − 1 < q˜ then q˜ ← n − 1
268
Maciej G¸ebala
¼ ¼ ¼ ¼
ª
¼ ¼
½ ½ ½ ½ ½ ½
¬
Fig. 3. Operations ⊕, /, ⊗ and in the bar
else if i < ˜ l then ˜ l←i if i > r˜ then r˜ ← i
The operations described above retain in memory the indices of bars, that are modified when we put a new symbol and modify adequate auxiliary variables. Now we define operations putting new symbols in the bar (see Fig. 3): j,k operation ⊕ in bar BX : j,k with BX do if ˜ l < l then j,k at the position i for i ← ˜ l to l − 1 do put 1 in bar BX ˜ l←l if r˜ > r then j,k at the position i for i ← r + 1 to r˜ do put 1 in bar BX r ← r˜
The operation ⊕ fills empty cells between 1’s and integrates the block of 1’s in the bar. j,k operation in bar BX : j,k with BX do if p < p˜ then j,k at the position i for i ← p to p˜ − 1 do put 0 in bar BX p ← p˜ if q > q˜ then j,k at the position i for i ← q˜ + 1 to q do put 0 in bar BX q ← q˜
The operation / integrates the two final blocks of 0’s in the bar. j,k operation ⊗ in bar BX : j,k with BX do if l > r and p + PS [j, k] − 1 ≥ q − PS [j, k] + 1 then
The Reconstruction of Some 3D Convex Polyominoes
269
l←˜ l ← q − PS [j, k] + 1, r ← r˜ ← p + PS [j, k] − 1 j,k at the position i for i ← l to r do put 1 in bar BX if l ≤ r and q − PS [j, k] + 1 < l then j,k at the position i for i ← q − PS [j, k] + 1 to l − 1 do put 1 in bar BX l←˜ l ← q − PS [j, k] + 1 if l ≤ r and p + PS [j, k] − 1 > r then j,k at the position i for i ← r + 1 to p + PS [j, k] − 1 do put 1 in bar BX r ← r˜ ← p + PS [j, k] − 1
The operation ⊗ expands the block of 1’s if the area without 0’s is adequately narrow. j,k operation in bar BX : j,k with BX do if l ≤ r and p ≤ r − PS [j, k] then j,k at the position i for i ← p to r − PS [j, k] do put 0 in bar BX p ← p˜ ← r − PS [j, k] + 1 if l ≤ r and q ≥ l + PS [j, k] then j,k at the position i for i ← l + PS [j, k] to q do put 0 in bar BX q ← q˜ ← l + PS [j, k] − 1
The operation expands the two final blocks of 0’s if the block of 1’s is adequately wide in the bar. It is obvious that we define these operations for bars BY and BZ analogously. Moreover if the bar contains 1’s then it has five following blocks of cells: the block of 0’s, the block of empty cells, the block of 1’s, the block of empty cells and the block of 0’s. The size of each block of empty cells is equal to the number of the deficient 1’s. Each bar containing 1’s has this property after operations ⊕, /, ⊗ and . The main loop of filling procedure has the following form now: repeat while not empty(treeX ) do (j, k) ←min(treeX ), delete((j, k),treeX ) j,k perform operations ⊕, , ⊗, in bar BX while not empty(treeY ) do (i, k) ←min(treeY ), delete((i, k),treeY ) perform operations ⊕, , ⊗, in bar BYi,k while not empty(treeZ ) do (i, j) ←min(treeZ ), delete((i, j),treeZ ) perform operations ⊕, , ⊗, in bar CZi,j until empty(treeX ) and empty(treeY ) and empty(treeZ ) if there exist empty cells in the matrix R then build and solve the adequate 2SAT formula
When we compute the start positions we put neither 0 nor 1 in the matrix R. We only modify variables p˜ and q˜ in bars BZ and put the indices of these bars in treeZ . Operations / and ⊗ put these symbols in the first step of the loop.
270
3.3
Maciej G¸ebala
Correctness and Complexity
Lemma 5. If we have at least one 1 in each bar BZ then the filling procedure works correctly. Proof. When we start the filling procedure we have in each bar BZ at least one 1, thus the number of empty cells in the matrix R will be equal to the double number of the deficient 1’s. And these properties occur in the whole time of the procedure work. If the filling procedure returns fail, we know that a convex 3D polyomino which has projections PT , PS and PF and the fixed initial positions does not exist. Otherwise, if trees treeX , treeY and treeZ are empty, we have two different cases: 1. Each cell of the matrix R contains 0 or 1. We have the solution. The set P of 1’s is a full convex 3D polyomino and satisfies matrices of projections. 2. The matrix R contains empty cells. Its number is equal to the double number of the deficient 1’s. In each bar we have two blocks of empty cells such as the length of the block is equal to the number of the deficient 1’s in this bar (it follows the properties of operations ⊕, /, ⊗ and ). We group these empty cells into labelling cycles. If we have an empty cell we label it by, for example, x. This cell has exactly three empty neighbour cells in the cycle, one in each bar BX or BY or BZ containing this cell. These cells are in other block of empty cells and have the same positions in blocks in the same bar. We label these cells by negation of x (¯ x). And we repeat this step for these cells until we cannot find a new cell for labelling in this way. We create a three-dimensional cycle in the matrix R which cells are labelled by x and x ¯ alternately and each cell has three neighbours in the cycle. We create cycles as long as we have empty cells in matrix R. Each cycle we label by a different letter. Cells in the cycles are valued alternately by 0 and 1 (true and false). But some cycles are valued dependently. Thus we build a 2SAT formula. If in some bar we have two successive cells, labelled by, for example, x and y, we add to the formula the implication x → y if y is between x and the block of 1’s, or implication y → x otherwise. If the bar does not contain the block of 1’s we create cycle of implications for each block of empty cells in this way that variables in the block have the same value. This 2SAT formula has the size at most O(n3 ) and can be solved in the linear time. Thus if the 2SAT formula is satisfiable then there exists an adequate full convex 3D polyomino that satisfies matrices of projections. Lemma 6. The filling procedure costs at most O(n3 log n). Proof. We estimate the complexity of the main loop of the filling procedure. In each position [i, j, k] we perform operation put only three times (one operation in each bar containing this position). Moreover, when we do operations ⊕, /, ⊗
The Reconstruction of Some 3D Convex Polyominoes
271
and in the bar in our algorithm, we execute at least one put operation. Hence, we review only O(n3 ) bars and the review of one bar costs O(log n)+[cost of the put operations]. Therefore, the global cost of the main loop of the procedure is O(n3 log n)+[cost of all put operations]. Now we estimate the global cost of all put operations. In the fixed bar when we perform put operations we execute at most n insert operations in an adequate tree. It costs O(n log n). Since the insert operations in free0 in the bar we have done no more than one time for each position. There are no more than n delete operations, either. We execute the functions min and max only while modifying p˜ or q˜. Hence, the number of these operations is at most n. All operations in the tree free0 cost at most O(n log n). For all 3n2 bars the cost is equal to O(n3 log n). The complexity of all residual operations is at most O(n3 ). Hence, the cost of the filling procedure is O(n3 log n). Theorem 1. The problem of the reconstruction of full convex 3D polyominoes from orthogonal projections has the complexity O(n7 log n). Proof. The number of all different initial positions is equal to n4 and the cost of the reconstruction for fixed initial position is equal to O(n3 log n). Thus the cost of testing all possible initial positions is equal to O(n7 log n).
4
Conclusion
In this paper, we have studied the reconstruction of three-dimensional convex polyominoes from their orthogonal projection. We have designed the polynomialtime algorithm for the reconstruction of some wide class of the 3D polyominoes called full convex. Yet some complexity problems are still unsolved: are the general reconstruction problem for convex three-dimensional polyominoes polynomial or NP-complete?
References [1] Barcucci, E., Del Lungo, A., Nivat, M., Pinzani, R.: Reconstructing Convex Polyominoes from Horizontal and Vertical Projections. TCS 155 (1996) 321– 347. 263, 264 [2] Chang, S. K.: The Reconstruction of Binary Patterns from their Projections. Communications of the ACM 14 (1971) 21–25. 263 [3] Chrobak, M., D¨ urr, Ch.: Reconstructing hv-convex Polyominoes from Orthogonal Projections. IPL 69(6) (1999) 283–289. 264 [4] Gardner R. J., Gritzmann P., Prangenberg D.: On the Computional Complexity of Reconstructing Lattice Sets from Their X-Rays. Discrete Mathematics 202 (1999) 45–71. 264 [5] G¸ebala, M.: The Reconstruction of Convex Polyominoes from Horizontal and Vertical Projections. SOFSEM’98, LNCS 1521 (1998) 350–359. 264, 265, 266 [6] G¸ebala, M.: The Reconstruction of Polyominoes from Approximately Orthogonal Projections. SOFSEM’2001, LNCS 2234 (2001) 253–260. 264
272
Maciej G¸ebala
[7] Irving R. W., Jerrum M. R.: Three-Dimensional statistical data security problems. SIAM Journal on Computing 23 (1994) 170–184. 264 [8] Ryser, H.: Combinatorial Mathematics. The Carus Mathematical Monographs Vol. 14 (The Mathematical Association of America, Rahway, 1963). 263 [9] Wang, X. G.: Characterisation of Binary Patterns and their Projections. IEEE Transactions on Computers C-24 (1975) 1032–1035. 263 [10] Woeginger, G. J.: The Reconstruction of Polyominoes from Their Orthogonal Projections. IPL 77(5-6) (2001) 225–229. 263
The Complexity of Probabilistic versus Quantum Finite Automata Gatis Midrij¯ anis University of Latvia Rai¸ na bulv¯ aris 19, Riga, Latvia [email protected]
Abstract. We present a language Ln which is recognizable by a probabilistic finite automaton (PFA) with probability 1 − ε for all ε > 0 with O(log 2 n) states, with a deterministic finite automaton (DFA) with O(n) states, but a quantum finite automaton (QFA) needs at least 2Ω(n/ log n) states.
1
Introduction
A PFA is generalization of DFA. Many authors have tried to find out (a. o., [2, 5, 7, 4]) the size advantages of PFA over DFA. On the other side it is known [3, 2] that the size of reversible finite automata (RFA) and the size of QFA exceed the size of the corresponding DFA almost exponentially for some regular languages (i. e.for languages recognizable by DFA). And so A. Ambainis, A. Nayak, A. Ta-Shma, U. Vazirani [3] wrote: Another open problem involves the blow up in size while simulating a 1-way PFA by a 1-way QFA. The only known way for doing this is by simulating the PFA by a 1-way DFA and then simulating the DFA by a QFA. Both simulating a PFA by a DFA [5, 6, 7] and simulating DFA by a QFA (this paper) can involve exponential or nearly exponential increase in size. This means that the straightforward simulation of a probabilistic automaton by a QFA (described above) could result in a doubly-exponential increase in size. However, we do not known of any examples where both transforming a PFA into a DFA and transforming a DFA into a QFA cause big increases of size. Better simulations of PFA by QFAs may well be possible. We will solve this problem.
2
Definitions and Known Results
We use the definition of 1-way QFA (further in text simply QFA) as in [2] and [3]. This model was first introduced in [1] and is not the most general
Research supported by Grant No.01.0354 from the Latvian Council of Science, and Contract IST-1999-11234 (QAIP) from the European Commission.
W.I. Grosky and F. Pl´ aˇ sil (Eds.): SOFSEM 2002, LNCS 2540, pp. 273–278, 2002. c Springer-Verlag Berlin Heidelberg 2002
274
Gatis Midrij¯ anis
one, but is easy to implement and deal with. A quantum finite automaton has a finite set of basis states Q, which consists of tree parts: accepting states (Qacc ), rejecting states (Qrej ) and non-halting states (Qnon ). One of the states, qini , is distinguished as the starting state. Inputs to a QFA are words over a finite alphabet Σ. We shall also use the symbols øand $ that do not belong to Σ to denote the left and the right end marker, respectively. The set Γ = Σ ∪ {ø, $} denotes the working alphabet of the QFA. For each symbol σ ∈ Γ , a QFA has a corresponding unitary transformation Uσ on the space CQ . At any time, the state of a QFA is a superposition of basis states in Q. The computation starts in the superposition |qini . Then the transformations corresponding to the left end marker ø, the letters of the input word x and the right end marker $ are applied in succession to the state of the automaton, unless a transformation results in acceptance or rejection of the input. A transformation consists of two steps: 1. First, Uσ is applied to |ψ , the current state of the automaton, to obtain the new state |ψ . 2. Then, |ψ is measured with respect to the observable Eacc ⊕ Erej ⊕ Enon , where Eacc = span{|q | q ∈ Qacc }, Erej = span{|q | q ∈ Qrej }, Enon = span{|q | q ∈ Qnon }. The probability of observing Ei is equal to the squared norm of the projection of |ψ onto Ei . On measurement, the state of the automaton “collapses” to the projection onto the space observed, i. e., becomes equal to the projection, suitably normalized to a unit superposition. If we observe Eacc (or Erej ), the input is accepted (or rejected). Otherwise, the computation continues, and the next transformation, if any, is applied. A QFA is said to accept (or recognize) a language L with probability p > 12 if it accepts every word in L with probability at least p, and rejects every word not in L with probability at least p. A RFA is a QFA with elements only 0 and 1 in the matrices. A PFA is the same as a QFA but only instead of unitary matrices it has stochastic ones. A DFA is a PFA with only 0 and 1 in the matrices. The size of a finite automaton is defined as the number of (basis) states in it. More exact definitions one can find, for example, in [2]. n In [2] there was given a language L× n consisting of one word a in a singleletter alphabet and it was proved: Theorem 1. 1. Any deterministic automaton that recognizes L× n , has at least n states. 2. For any ε > 0, there is a probabilistic automaton with O(log2 n) states recognizing L× n with probability 1 − ε. Sketch of Proof. The first part is evident. To prove the second part, Freivalds [5] used the following construction. O( logloglogn n ) different primes are employed and
The Complexity of Probabilistic versus Quantum Finite Automata
275
O(log n) states are used for every employed prime. At first, the automaton randomly chooses a prime p, and the the remainder modulo p of the length of input word is found and compared with the standard. Additionally, once in every p steps a transition to a rejecting state is made with a “small” probabilp . The number of used primes suffices to assert that, for every input of ity const n length less than n, most of primes p give remainders different from the remainder of n modulo p. The “small” probability is chosen to have the rejection high enough for every input length N such both N = n and ε-fraction of all the primes used have the same remainders mod p as n. In [3] was definition and theorem: n
Definition 1. f : {0, 1}m × R −→ C2 serially encodes m classical bits into n qubits with p success, if for any i ∈ [1..n] and b[i+1,n] = bi+1 . . . bn ∈ {0, 1}n−i , there is a measurement Θi,b[i+1,n] that returns 0 or 1 and has property that ∀b ∈ {0, 1}m : P rob(Θi,b[i+1,n] |f (b, r) = bi ) ≥ p. Theorem 2. Any quantum serial encoding of m bits into n qubits with constant success probability p > 1/2 has n ≥ Ω( logmm ). And also in [3] there was defined an r-restricted 1-way QFA for a language L as a 1-way QFA that recognizes the language with probability p > 1/2, and which halts with non-zero probability before seeing the right end marker only after it has read r letters of the input. The following theorem was proved: Theorem 3. Let M be a 1-way QFA with S states recognizing a language L with probability p. Then there is an r-restricted 1-way QFA M with O(rS) states that recognizes L with probability p.
3
Results
One of the components of the proof of Theorem 4 below is the following lemma: Lemma 1. Language L1 = {ω ∈ {0, 1}∗ : ∃x, y ∈ {0, 1}∗ : ω = x00y} is recognizable by a DFA. Sketch of Proof. The automaton has five states: q0 , q1 , q2 , qacc and qrej . Values of the transition function between states are: f (q0 , 0) = q1 , f (q0 , 1) = q0 , f (q1 , 0) = q2 , f (q1 , 1) = q0 , f (q2 , 0) = q2 , f (q2 , 1) = q2 , f (q0 , $) = qrej , f (q1 , $) = qrej , f (q2 , $) = qacc . Theorem 4. For all k ≥ 1, n = 2k, we define language Ln = {ω ∈ {0, 1}n : ∃x, y ∈ {0, 1}∗ : ω = x00y} .
276
Gatis Midrij¯ anis
qi
qi
0
1
qm qj
qm
0
qj
1
Fig. 1. From different states go to one with the same input symbol qi
0
qm qj
1
Fig. 2. From different states go to one with different input symbols
0. There is a RFA (so also a QFA, a PFA and a DFA) that recognizes Ln . 1. Any RFA that recognizes Ln , has at least 2O(n) states. n 2. Any QFA that recognizes Ln with probability p > 1/2, has at least 2Ω( log n ) states. 3. Any DFA that recognizes Ln , has at least O(n) states. 4. For any ε > 0, there is a PFA with O(log2 n) states recognizing Ln with probability 1 − ε. Proof. Zero part follows from fact that all finite languages are recognizable by some RFA and Ln is finite language. First part: We give to automaton word a1 1a2 1a3 1a4 1a5 1a6 1 . . . ak 1, where ai ∈ {0, 1}. It is obvious that then automaton cannot decide what to answer till the end of word. We prove that automaton always has to branch at every ai . Suppose contrary, there is ai where automaton goes to the same state whether it read ai = 0 or ai = 1. Then forward we give the next symbols 01n−2i and automaton cannot decide what to answer. So it must branch for every ai , we can say it “remembers” this bit. But maybe it can merge (“forget”) afterwards? No, because constructions in Figure 1 are forbidden by reversibility, but construction in Figure 2 by the same reason as branching must occur (for all states qi , qj , qm , i = j). Then it follows that automaton “remembers” all bits and the total number of states is at least 2k . Second part: We use technique introduced by [3]. Let M be any n-restricted QFA accepting Ln with probability p > 1/2. The following claim formalizes the intuition that the state of M after n symbols in form a1 1a2 1a3 1a4 1a5 1a6 1 . . . ak 1 have been read is an “encoding” (in case of RFA, so deterministic, we said “remember”) of the {ai }. Claim. There is a serial encoding of k bits into CQ , and hence into 0log |Q|1 qubits, where Q is the set of basis states of the M .
The Complexity of Probabilistic versus Quantum Finite Automata
277
Proof. Let Qacc and Qrej be the set of accepting and rejecting states respectively. Let Uσ be a unitary operator of M corresponding to the symbol σ ∈ {0, 1, ø, $}. We define an encoding f : {0, 1}k −→ CQ of k-bit strings into unit superpositions over the basis states of the QFA M by letting |f (x) be the state of the automaton M after the input string a1 1a2 1a3 1a4 1a5 1a6 1 . . . ak 1 where ai ∈ {0, 1} has been read. We assert that f is a serial encoding. To show that indeed f is such an encoding, we exhibit a suitable measurement for the ai -th bit for every i ∈ [1..k]. Let, for y ∈ {0, 1}n−2∗i+1 , Vi (y) = U$ U1n−2∗i U0 Uy−1 Uy−1 . . . Uy−1 Uy−1 Uy−1 . 1 2 n−2∗i−1 n−2∗i n−2∗i+1 The i-th measurement then consists of first applying the unitary transformation Vi (1ai+1 1 . . . 1ak 1) to |f (x) , and then measuring the resulting superposition with respect to Eacc ⊗ Erej ⊗ Enon . Since for words with form a1 1a2 1 . . . 1ai 01n−2∗i , containment in Ln is decided by the ai , and because such words are accepted or rejected by then n-restricted QFA M with probability at least p only after the entire input has been read, the probability of observing Eacc if ai = 0, or Erej if ai = 1, is at least p. Thus, f defines a serial encoding. Then it follows from Theorem 2 that 0log |Q|1 = Ω( logk k ), but since k =
n 2,
we
n Ω( log n)
. From Theorem 3 it follows that any quantum automaton have |Q| = 2 n that recognize Ln also require 2Ω( log n ) states. Third part: Easy. Fourth part: The PFA Q in Theorem 1 has one rejecting (qrej ), one accepting (qacc ), one initial (qini ) state and many non-halting states qi . We build PFA Q recognizing language Ln with one rejecting (qrej ), one accepting (qacc ), one starting (qini ) state and several non-halting states qi,0 , qi,1 and qi,2 , where i is from set of states’ indexes from automaton Q. For every transition from state qi to state qj with probability p for the input symbol a (we denote this by f (qi , a, qj , p)) there are 6 transitions in Q (we denote it by f ): 1. 2. 3. 4. 5. 6.
f (qi,0 , 1, qi,0 , p) , p) f (qi,0 , 0, qi,1 f (qi,1 , 1, qi,0 , p) , 0, qi,2 , p) f (qi,1 , 1, qi,2 , p) f (qi,2 , 0, qi,2 , p) f (qi,2
, ø, For every transformation f (qini , ø, qi , p), there is a transformation f (qini qi,0 , p). For every f (qi , a, qrej , p) there is f (qi,k , x, qrej , p) such that for all k ∈ {0, 1, 2}, x ∈ {0, 1}, and for every f (qi , $, qrej , p) there is f (qi,k , $, qrej , p) for all k ∈ {0, 1, 2}, and for any f (qi , $, qacc , p) there are f (qi,2 , $, qacc , p), f (qi,0 , $, qrej , p), f (qi,1 , $, qrej , p). Informally, we make 3 copies from states in Q and their meaning is similar than for states of automaton from Lemma 1. Automata computes parallel two
278
Gatis Midrij¯ anis
things: is length of input word n and is there any adjacent zeroes in it. It is obviously that the accepted words are those whose length is n and there are two adjacent 0 in them.
4
Conclusion
We have shown that sometimes quantum automata must be almost doubly exponential larger than classical automaton. But there still remains open the other question. As follows from result of Ambainis and Freivalds [2], any language accepted by a QFA with high enough probability can be accepted by a RFA which is at most exponentially bigger that minimal DFA accepting the language. Thus follows that Theorem 4 is close to maximal gap between probabilistic and quantum automaton with high enough (this was precisely computed by Ambai√ 7 = 0.7726 . . . ) probability of success. nis and K ¸ ikusts [8] - greater than 52+4 81 But it is not clear how it is when we allow smaller probability of correctness. Author do not now any lower or upper bound in this case.
Acknowledgements I would like to thank R¯ usi¸ nˇs Freivalds for suggesting the problem and help during research.
References [1] A. Kondacs, J. Watrous, On the power of quantum finite state automata, in Proc. 38th FOCS, 1997, pp. 66–75. 273 [2] A. Ambainis, R. Freivalds, 1–way quantum finite automata: strengths, weaknesses, and generalizations, in Proc. 39th FOCS, 1998, pp. 332–341. 273, 274, 278 [3] A. Ambainis, A. Nayak, A. Ta-Shma, U. Vazirani: Dense Quantum Coding and a Lower Bound for 1-way Quantum Automata, quant-ph/9804043, 1998. 273, 275, 276 [4] Z. Rasˇsˇcevskis, The Complexity of Probabilistic versus Deterministic Finite Automata, http://www.ima.mdh.se/personal/rbr/courses/sundbyholmproc/zigmars.ps 273 [5] R. Freivalds, On the growth of the number of states of determinization of probabilistic automata, , Avtomatika i Vychislitelnaja Tehnika (in Russian), 1982, N.3, pp. 39–42. 273, 274 [6] M. O. Rabin, Probabilistic Automata, Information and Control, 6(1963), pp. 230–245. 273 [7] A. Ambainis, The complexity of probabilistic versus deterministic finite automata, Proceedings of ISAAC’96, Lecture Notes in Computer Science, 1178(1996), pp. 233–237. 273 [8] A. Ambainis, A. K ¸ ikusts, Exact results for accepting probabilities of quantum automata, quant-ph/0109136, 2001. 278
Do We Need Liveness? – Approximation of Liveness Properties by Safety Properties Ulrich Ultes-Nitsche Department of Electronics and Computer Science University of Southampton, Southampton, SO17 1BJ, United Kingdom [email protected] phone: +44 23 80593055, fax: +44 23 80593045
Abstract. We explore in this paper the approximation of a liveness property by a safety property (with respect to a given system behaviour). The approximation is such that, if a system satisfies only the approximation of a linear-time property but not the property itself, we will only detect this by observing the system for an infinite time. As an infinitely long observation is practically impossible, we must deem the property to hold as soon as its approximation is satisfied.
1
Introduction
When verifying linear-time properties, the satisfaction relation is defined as the ω-language inclusion B ⊆ P , where B represents the behaviour of a concurrent system and P represents the property to be checked [1]. Properties can be partitioned into safety and liveness properties, and the intersection of a safety and a liveness property [1]. Very frequently, a system – described by its behaviour – satisfies a liveness property (or the liveness part of a property) only when additional fairness constraints [4] are imposed on the behaviour. In [9] it is shown that a satisfaction relation for linear-time properties can be defined that enjoys an inherent fairness condition. In this paper the concepts presented in [8, 9] will be developed further, showing that the satisfaction of any property can be reduced to the satisfaction of a safety property in a practically meaningful way. The constructed safety property approximates the given property: If a system satisfies the safety approximation of a given property but not the property itself, we will only be able to detect this by observing the system infinitely long. As this is practically impossible, the system must be considered correct as soon as the approximation of the given property is satisfied. Hence, checking the safety approximation of a property is sufficient to ensure the correct behaviour of a system. As a consequence, the original linear-time satisfaction relation appears to be too fine-grained as it distinguishes between system behaviours that are practically indistinguishable.
W.I. Grosky and F. Pl´ aˇ sil (Eds.): SOFSEM 2002, LNCS 2540, pp. 279–287, 2002. c Springer-Verlag Berlin Heidelberg 2002
280
2
Ulrich Ultes-Nitsche
Motivation
Before starting with the main results of this paper, let’s consider a motivating example. (The example is designed for a reader with a basic knowledge of lineartime verification.) The example system is an infinite-state system that selects an arbitrary non-negative integer n, then operates n times, performs a complete event, and then idles ad infinitum. We can represent this particular infinitestate system by a finite-state system with acceptance conditions. The resulting B¨ uchi automaton for the example system is represented in Figure 1. The system represented in Figure 1 satisfies the linear-time property that “eventually the complete event occurs” (in linear-time temporal logic: F (complete)). If the system that we consider can initially also decide to produce infinitely many operate events (i. e. it may never execute the complete event), then we obtain the automaton represented in Figure 2. This system does not satisfy the linear-time property F (complete) anymore. However, the only difference between the systems in Figure 1 and Figure 2 is the one behaviour in which the system in Figure 2 produces only operate events. We could spot the difference between the two systems only if we could observe them for an infinite time: If we observed a system, not knowing which one of the two it were, we could never decide which of the two we observe. Only if we could observe the system infinitely long, which is practically impossible, we may see an infinite sequence of operates, which would then enable us to decide that the system under observation is the one in Figure 2. As mentioned before, infinite observations are practically impossible and I will therefore argue in this paper that the two systems are practically indistinguishable. I will discuss a satisfaction relation for linear-time properties that is insensitive to such practically not observable differences and show that it can be reduced to the satisfaction of a safety property, even for liveness properties (note that F (complete) is a liveness property). Property F (complete) will be satisfied within fairness [9] by both systems, the one in Figure 1 and the one in Figure 2 (satisfaction within fairness is the relation discussed in this paper).
Fig. 1. An example system
Fig. 2. An example system
Do We Need Liveness?
281
Fig. 3. An example system To present a system that does not satisfy F (complete) within fairness, let’s look at Figure 3. For this system, whenever the error event occurs, it will be impossible to produce a complete event anymore, which will violate the satisfaction of F (complete) within fairness: Satisfaction within fairness is concerned with the question whether or not it is always possible to satisfy a property, which explains its relation to fairness [4].
3
Linear-Time Properties
The subsequent definitions are basically taken from [1] and ω-language theory (see e. g. [11]). In the verification of linear-time properties, both a system’s behaviour B and the property P are represented as sets of infinite sequences over an alphabet E of events. A single infinite event sequence is called a computation. The set of all infinite sequences over E is denoted by E ω , the set of all finite event sequences by E ∗ . B and P are therefore subsets of E ω . Definition 1. Behaviour B satisfies property P , written B P , if and only if B ⊆ P . (For a computation x ∈ E ω , we write x P for {x} P .) We will need the following auxiliary definitions in subsequent parts of this paper. Definition 2. Let S ⊆ E ω , let x ∈ E ω , let L ⊆ E ∗ and let w ∈ E ∗ . Then: – – – –
pre(S) = {w ∈ E ∗ | ∃x ∈ E ω : wx ∈ S} (finite prefixes of S), pre(x) = pre({x}), cont (w, S) = {x ∈ E ω | wx ∈ S} (continuation of w in S),1 lim(L) = {x ∈ E ω | pre(x) ∩ L is infinite } (Eilenberg-limit of L).
We will use the definitions of safety and liveness as in [1], modifying the notation slightly: 1
In formal language theory, cont (w, S) is called the leftquotient of S by w [6].
282
Ulrich Ultes-Nitsche
Definition 3. P is a safety property if and only if ∀x ∈ E ω : (x P ⇒ ∃w ∈ pre(x) : ∀z ∈ E ω : wz P ) . P is a liveness property if and only if ∀w ∈ E ∗ : ∃x ∈ E ω : wx P . Occasionally, we will use the counterpositive version of the definition of a safety property: ∀x ∈ E ω : ((∀w ∈ pre(x) : ∃z ∈ E ω : wz P ) ⇒ x P ). An immediate consequence of the definition of a liveness property is: P is a liveness property if and only if pre(P ) = E ∗ [2].
4
Satisfaction within Fairness
Subsequent definitions and results are taken from [8, 9]. Satisfaction within fairness is a relaxation of the linear satisfaction relation, replacing “all computation must satisfy the property” by “after each finite computation, it is always possible to satisfy the property”. One of its properties is that inherently it contains a notion of truth under fairness [9], where its name results from. Definition 4. B satisfies P within fairness, written B P in this paper, if and only if ∀w ∈ pre(B) : ∃x ∈ cont(w, B) : wx P . This definition can be rewritten in two different ways that will help us to establish the results of this paper. Lemma 1. B P if and only if pre(B) ⊆ pre(B ∩ P ). Proof. By definition, B P if and only if to each w ∈ pre(B) exists x ∈ cont(w, B) such that wx ∈ P . Therefore all w ∈ pre(B) are also in pre(B ∩ P ). Thus pre(B) ⊆ pre(B ∩ P ). If pre(B) ⊆ pre(B ∩ P ), then w ∈ pre(B ∩ P ), for all w ∈ pre(B). So to all w ∈ pre(B) there exists x ∈ cont(w, B) such that wx ∈ P . Hence B P . A second representation of satisfaction within fairness is based on the following metric on E ω . Definition 5. Let common(x, y) designate the longest common prefix of two computations x and y in E ω . We define the metric d(x, y) by ∀x, y ∈ E ω , x = y : d(x, y) =
1 |common(x, y)| + 1
∀x ∈ E ω : d(x, x) = 0 . Lemma 2. B P if and only if B ∩ P is a dense set in B.
Do We Need Liveness?
283
Proof. Let B P and let x ∈ B. By Lemma 1, pre(x) ⊆ pre(B ∩ P ), implying ∀w ∈ pre(x) : ∃y ∈ B ∩ P : w ∈ pre(y). So, for all x ∈ B and all ε > 0 (ε and 1 1 |w|+1 are related; i. e. we have to select a w ∈ pre(x) such that |w| > ε − 1), there is y ∈ B ∩ P such that d(x, y) < ε. So B ∩ P is a dense set in B. 1 Let B ∩ P be a dense set in B. Let x ∈ B, w ∈ pre(x), and ε = |w|+1 . Let y ∈ B ∩ P such that d(x, y) < ε (y must exist because B ∩ P is a dense set in B). Then w is in pre(y) and therefore in pre(B ∩P ). Because w is an arbitrary element of pre(B), pre(B) ⊆ pre(B ∩ P ). So, by Lemma 1, B P . The previous result says that satisfaction within fairness approximates the linear satisfactions relation. It does this arbitrarily precisely: To detect that a system, which satisfies a property within fairness, does not satisfy the property linearly, we must observe the system for an infinite time – a practically meaningless distinction. Therefore linear satisfaction distinguishes between systems that cannot be distinguished in practice, whereas satisfaction within fairness is insensitive to such a practically irrelevant distinction.
5
Approximation of Liveness by Safety
In this section, which constitutes the main contribution of this paper, we establish that the approximation of linear satisfaction by the satisfaction-within-fairness relation can be reduced to the linear satisfaction of a safety property. This enables us to reduce model-checking liveness properties to a safety check that can be carried out in terms of traces refinement [10]. An implication of this observation is that liveness properties become accessible to refinement-based model checkers such as FDR.²

² FDR is a refinement checker (a kind of model checker) for CSP specifications. It can check traces refinement as well as failures and failures/divergences refinement [10].

Before establishing this result, we prove some auxiliary lemmas. First of all, quite trivially, linear satisfaction implies satisfaction within fairness (whereas the converse does not hold in general):

Lemma 3. B ⊨ P ⇒ B ⊨f P for all B and P.

Proof. Assume B ⊨ P. Let w ∈ pre(B) and let x ∈ cont(w, B). Then wx ∈ B and, because B ⊨ P, wx ∈ P. Therefore w ∈ pre(B ∩ P). Since w was an arbitrary element of pre(B), pre(B) ⊆ pre(B ∩ P) and, by Lemma 1, B ⊨f P.

When dealing with safety properties, satisfaction within fairness and linear satisfaction are equivalent.

Lemma 4. P is a safety property if and only if B ⊨ P ⇔ B ⊨f P holds for all B.

Proof. "if": Let x ∈ E^ω be such that for every w ∈ pre(x) there exists y ∈ E^ω with wy ∈ P. Then the behaviour {x} ∪ {wy | w ∈ pre(x) ∧ wy ∈ P} satisfies P within fairness. Taking into account the assumed equivalence B ⊨ P ⇔ B ⊨f P, we deduce that {x} ∪ {wy | w ∈ pre(x) ∧ wy ∈ P} ⊨ P, which implies x ∈ P. Therefore we obtain
∀x ∈ E^ω : ((∀w ∈ pre(x) : ∃y ∈ E^ω : wy ∈ P) ⇒ x ∈ P), implying that P is a safety property.

"only if": Assume that P is a safety property. Because B ⊨ P always implies B ⊨f P (Lemma 3), we only have to show that B ⊨f P implies B ⊨ P. So let B ⊨f P. We assume B ⊭ P and derive a contradiction. If B ⊭ P, then there would be an x ∈ B such that x ∉ P. But, because B ⊨f P, for every finite prefix w ∈ pre(x) there exists y ∈ cont(w, B) such that wy ∈ P. Since P is assumed to be a safety property, this implies x ∈ P, contradicting the choice of x. Therefore all x ∈ B must be in P and hence B ⊨ P.

The following four quite obvious lemmas state results about the Eilenberg limit (lim) and prefix sets (pre) that we will use to prove the main result of this section. One may note that lim(pre(. . .)) is the topological closure operation in the metric space defined in Definition 5.

Lemma 5. R ⊆ S ⇒ pre(R) ⊆ pre(S) for all R, S ⊆ E^ω.

Proof. Let w ∈ pre(R) and let x ∈ R such that w ∈ pre(x). Then x ∈ S and therefore w ∈ pre(S).

Lemma 6. S ⊆ lim(pre(S)) for all S ⊆ E^ω.

Proof. Let x ∈ S. Then pre(x) ⊆ pre(S), implying x ∈ lim(pre(S)).
Lemma 7. R ⊆ S ⇒ lim(R) ⊆ lim(S) for all R, S ⊆ E^*.

Proof. Let x ∈ lim(R). Infinitely many finite prefixes of x lie in R and hence also in S, so x must be in lim(S).

Lemma 8. pre(lim(pre(S))) = pre(S) for all S ⊆ E^ω.

Proof. By Lemmas 6 and 5, pre(S) ⊆ pre(lim(pre(S))). So we only have to show pre(lim(pre(S))) ⊆ pre(S). Let w ∈ pre(x) such that x ∈ lim(pre(S)). We show w ∈ pre(S), which proves the lemma. There must be infinitely many different prefixes of x (of increasing length) that are in pre(S) (otherwise x would not be in lim(pre(S))). Let v ∈ pre(x) be such that v ∈ pre(S) and v is longer than w. Since w ∈ pre(x), w is a prefix of v and therefore, pre(S) being prefix-closed, w ∈ pre(S).

Putting the previous results together, we obtain another representation of satisfaction within fairness, which is, however, very close to the one given in Lemma 1.

Corollary 1. pre(B) ⊆ pre(B ∩ P) ⇔ lim(pre(B)) ⊆ lim(pre(B ∩ P)).

Proof. "if": Assuming lim(pre(B)) ⊆ lim(pre(B ∩ P)), we use Lemma 5 to conclude pre(lim(pre(B))) ⊆ pre(lim(pre(B ∩ P))). Applying Lemma 8 to both sides, we get pre(B) ⊆ pre(B ∩ P). "only if": If pre(B) ⊆ pre(B ∩ P), then lim(pre(B)) ⊆ lim(pre(B ∩ P)) by Lemma 7.
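A concrete illustration of Lemmas 6 and 8 (our example, not taken from the paper): over E = {a, b}, let S = a^*b^ω, the set of computations consisting of finitely many a's followed by b's only. Then

pre(S) = a^*b^*,
lim(pre(S)) = a^*b^ω ∪ {a^ω} = S ∪ {a^ω},
pre(lim(pre(S))) = a^*b^* ∪ a^* = a^*b^* = pre(S).

So the inclusion of Lemma 6 can be strict (a^ω ∈ lim(pre(S)) \ S), while applying pre once more recovers exactly pre(S), as Lemma 8 states.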
To prove that we are in fact dealing with the linear satisfaction of a safety property when considering satisfaction within fairness, the following lemma establishes a representation of safety properties. It is a direct consequence of lim(pre(. . .)) being the topological closure operation and of safety properties being the closed sets [1].

Lemma 9. lim(pre(S)) is a safety property for all S ⊆ E^ω.

Proof. Let x ∈ E^ω. If x ∉ lim(pre(S)), then there exists a prefix w ∈ pre(x) that is not in pre(S) (otherwise x would be in lim(pre(S))). Then wy ∉ lim(pre(S)) for all y ∈ E^ω (otherwise w would be in pre(S)). So, whenever x ∉ lim(pre(S)), there exists w ∈ pre(x) such that wy ∉ lim(pre(S)) for all y ∈ E^ω, which is exactly the condition for lim(pre(S)) being a safety property.

Finally, we have proved all auxiliary results needed to show that B ⊨f P is equivalent to B satisfying the safety property lim(pre(B ∩ P)) linearly:

Theorem 1. B ⊨f P if and only if B ⊨ lim(pre(B ∩ P)).

Proof. By Lemma 1, B ⊨f P if and only if pre(B) ⊆ pre(B ∩ P), which is equivalent to lim(pre(B)) ⊆ lim(pre(B ∩ P)) by Corollary 1. Together with Lemma 6 (B ⊆ lim(pre(B))), this implies B ⊆ lim(pre(B ∩ P)), i.e. B ⊨ lim(pre(B ∩ P)). So it remains to show that B ⊨ lim(pre(B ∩ P)) implies B ⊨f P. By Lemma 5, B ⊆ lim(pre(B ∩ P)) implies pre(B) ⊆ pre(lim(pre(B ∩ P))), which, by Lemma 8, is equivalent to pre(B) ⊆ pre(B ∩ P). Finally, applying Lemma 1 completes the proof of the theorem.
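Returning to the example used after Lemma 2 (again ours, not from the paper): for B = E^ω over E = {a, b} and P = E^*bE^ω ("eventually b"), we have B ∩ P = P, pre(B ∩ P) = E^* (every finite word can be extended by a b), and therefore lim(pre(B ∩ P)) = E^ω. The safety property constructed by Theorem 1 is thus the trivially true property E^ω, and indeed B ⊨ E^ω, which matches the fact that B ⊨f P even though B ⊭ P.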
6  Conclusion
This paper has shown that the approximation of linear-time satisfaction that is inherent in the definition of satisfaction within fairness [8, 9] can be reduced to the linear-time satisfaction of a safety property. This holds true even when the original property is a liveness property.

Lemma 2 establishes that the satisfaction-within-fairness relation approximates linear-time satisfaction: B ⊨f P (B satisfies P within fairness) if and only if B ∩ P is a dense set in B (the topological notion of approximation). This implies that, if B ∩ P (i.e. the correct subset of B) is dense in B and B and B ∩ P are different, then B ∩ P and B can only be distinguished by observing them for an infinitely long time – they share all finite computations (Lemma 1 implies pre(B) = pre(B ∩ P)). Hence, from a practical point of view, we cannot distinguish a system behaving like B from a system behaving like B ∩ P, as we cannot observe them for more than a finite time. Because satisfaction within fairness is a coarser satisfaction relation than linear-time satisfaction – it distinguishes fewer systems – linear-time satisfaction distinguishes more systems than it reasonably should from a practical perspective.

Additionally, we have seen that, given a behaviour B and a property P, we can construct a safety property Π(B, P), depending on B and P, such that
B ⊨f P if and only if B ⊨ Π(B, P). The structure of Π(B, P) is very simple: Π(B, P) = lim(pre(B ∩ P)) (Theorem 1). As a result, independent of the type of property P (it may even be a liveness property), we can check the satisfaction of the safety property lim(pre(B ∩ P)) to determine whether P is satisfied by B in a practically meaningful way. In the regular case, constructing lim(pre(B ∩ P)) is simple: take the synchronous product of the automata representing B and P, reduce it, and remove all acceptance conditions (i.e. make all states accepting).

The paper's title raises the question whether we still need liveness properties in the verification process. We obviously do, since we construct lim(pre(B ∩ P)) from P (and B); so we need P initially.³ However, the conclusion to be drawn from this paper is that model-checking any linear-time property can be reduced to model-checking a safety property. The result of this safety check tells us whether the original property is satisfied in a practically meaningful way. As model-checking of safety properties can be treated in a way specifically tailored to this property class [7], it will be part of future research to explore how such results can be used to implement the approach presented in this paper reasonably efficiently.

Model-checking safety properties can be reduced to checking traces refinement [10]. However, refinement-based model-checkers such as FDR suffer from a not fully satisfactory treatment of liveness properties (failures and failures/divergences refinement [10] can deal with some aspects of liveness, sometimes in a somewhat peculiar way). Since the results of this paper practically reduce model-checking of, e.g., liveness properties to checking traces refinement, it will be part of future work to experiment with their integration into refinement-based model-checkers.
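As an illustration of the construction just described, the following Python sketch (ours; the function names and data structures are assumptions, not taken from the paper) builds the synchronous product of two finite labelled transition systems representing B and P, restricted to reachable states, and simply forgets all acceptance information. The "reduce" step mentioned above (pruning states from which no accepting computation of B ∩ P remains) is omitted for brevity; after that pruning, the finite-word language of the result is exactly pre(B ∩ P), i.e. an automaton for the safety property lim(pre(B ∩ P)).

from collections import deque

def synchronous_product(init_b, delta_b, init_p, delta_p):
    """Synchronous product of two labelled transition systems.

    delta_b and delta_p map (state, event) -> set of successor states.
    Acceptance conditions are deliberately dropped: every reachable
    product state is kept, which corresponds to making all states
    accepting in the reduced product.
    """
    init = (init_b, init_p)
    delta = {}                       # (pair state, event) -> set of pair states
    seen, queue = {init}, deque([init])
    while queue:
        qb, qp = queue.popleft()
        # Only events enabled in both components survive in the product.
        events = {e for (s, e) in delta_b if s == qb} & \
                 {e for (s, e) in delta_p if s == qp}
        for e in events:
            for sb in delta_b[(qb, e)]:
                for sp in delta_p[(qp, e)]:
                    delta.setdefault(((qb, qp), e), set()).add((sb, sp))
                    if (sb, sp) not in seen:
                        seen.add((sb, sp))
                        queue.append((sb, sp))
    return init, delta, seen

# Hypothetical toy example: B allows any sequence over {a, b};
# the property automaton for "eventually b" merely records whether b occurred.
delta_b = {(0, 'a'): {0}, (0, 'b'): {0}}
delta_p = {(0, 'a'): {0}, (0, 'b'): {1}, (1, 'a'): {1}, (1, 'b'): {1}}
init, delta, states = synchronous_product(0, delta_b, 0, delta_p)
print(len(states), "product states")   # 2 states; their traces generate pre(B ∩ P)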
Acknowledgements

I would like to thank Simon St James for discussions related to the topics of this paper, as well as the anonymous referees for their excellent suggestions to swap the order of Lemma 5 and Lemma 6, to comment on the state-space size in the Conclusion section (the footnote), and to introduce a short motivating example, which is now contained prior to the technical part of the paper. In addition, I want to thank the Kranz family of the Landhaus Börmoos for their hospitality – I had the pleasure to finish off the final version of this paper in their beautiful garden – and my wife Heidrun for allowing the work on this paper to interfere with our summer vacations. Finally, I would be very happy to receive any kind of comments on this paper – in particular from people who have "played" with the presented approach – since, even though I am convinced of the usefulness of the presented results, I do not yet have enough experimental evidence for its full practical justification.
³ One may note that the automaton representation of lim(pre(B ∩ P)) is usually much bigger than that of P. This may seem a significant drawback of the results presented in this paper. However, the usual model-checking algorithm for linear-time temporal properties requires the construction of an automaton for B ∩ ¬P, where ¬P is the negation of property P [3, 5, 13]. Hence the usual model-checking algorithm will not perform better than the one inherent in checking pre(B) ⊆ pre(B ∩ P), which is: construct B ∩ P, ignore the acceptance conditions (pre(. . .)), determinise and minimise the resulting labelled transition system (LTS), and check whether the resulting LTS is the same as the one for B. (Note that pre(B) ⊆ pre(B ∩ P) is equivalent to pre(B) = pre(B ∩ P), since pre(B ∩ P) is trivially a subset of pre(B).)
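The check described in this footnote can be sketched as follows (our sketch, under the assumption that B and B ∩ P are already given as finite LTSs in the same dictionary encoding as in the earlier product sketch). Instead of explicitly determinising and minimising, it performs the subset construction on the fly for both systems simultaneously and reports whether they generate the same finite traces, i.e. whether pre(B) = pre(B ∩ P).

from collections import deque

def same_prefix_language(init1, delta1, init2, delta2):
    """Check prefix-language equality of two nondeterministic LTSs.

    delta maps (state, event) -> set of successor states; all states are
    regarded as accepting, so the recognised language is the prefix set.
    A synchronized subset construction explores pairs of determinised
    state sets and fails as soon as one side enables an event that the
    other side does not.
    """
    def enabled(states, delta):
        return {e for (s, e) in delta if s in states}

    def step(states, delta, e):
        return frozenset(t for s in states for t in delta.get((s, e), ()))

    start = (frozenset([init1]), frozenset([init2]))
    seen, queue = {start}, deque([start])
    while queue:
        s1, s2 = queue.popleft()
        e1, e2 = enabled(s1, delta1), enabled(s2, delta2)
        if e1 != e2:
            return False          # some finite trace exists in only one LTS
        for e in e1:
            nxt = (step(s1, delta1, e), step(s2, delta2, e))
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True

# With the toy automata from the previous sketch, pre(B) = pre(B ∩ P) = E*,
# so the check succeeds, i.e. B satisfies P within fairness.
delta_b = {(0, 'a'): {0}, (0, 'b'): {0}}
delta_bp = {((0, 0), 'a'): {(0, 0)}, ((0, 0), 'b'): {(0, 1)},
            ((0, 1), 'a'): {(0, 1)}, ((0, 1), 'b'): {(0, 1)}}
print(same_prefix_language(0, delta_b, (0, 0), delta_bp))   # True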
References

[1] B. Alpern and F. B. Schneider. Defining liveness. Information Processing Letters, 21(4):181–185, October 1985.
[2] B. Alpern and F. B. Schneider. Recognizing safety and liveness. Technical Report TR 86-727, Department of Computer Science, Cornell University, January 1986.
[3] E. A. Emerson. Temporal and modal logic. In van Leeuwen [12], pages 995–1072.
[4] N. Francez. Fairness. Springer-Verlag, New York, first edition, 1986.
[5] R. Gerth, D. Peled, M. Y. Vardi, and P. Wolper. Simple on-the-fly automatic verification of linear temporal logic. In P. Dembinski and M. Sredniawa, editors, Protocol Specification, Testing, and Verification XV '95, pages 3–18. Chapman & Hall, 1996.
[6] M. A. Harrison. Introduction to Formal Language Theory. Addison-Wesley, Reading, Mass., first edition, 1978.
[7] O. Kupferman and M. Y. Vardi. Model-checking of safety properties. In N. Halbwachs and D. Peled, editors, CAV'99, volume 1633 of Lecture Notes in Computer Science, pages 172–183, Trento, Italy, 1999. Springer-Verlag.
[8] U. Nitsche and P. Ochsenschläger. Approximately satisfied properties of systems and simple language homomorphisms. Information Processing Letters, 60:201–206, 1996.
[9] U. Nitsche and P. Wolper. Relative liveness and behavior abstraction (extended abstract). In Proceedings of the 16th ACM Symposium on Principles of Distributed Computing (PODC'97), pages 45–52, Santa Barbara, CA, 1997.
[10] A. Roscoe. The Theory and Practice of Concurrency. Series in Computer Science. Prentice Hall International, 1997.
[11] W. Thomas. Automata on infinite objects. In van Leeuwen [12], pages 133–191.
[12] J. van Leeuwen, editor. Formal Models and Semantics, volume B of Handbook of Theoretical Computer Science. Elsevier, 1990.
[13] P. Wolper. The tableau method for temporal logic: An overview. Logique et Analyse, 110/111:119–136, June–September 1985.
Author Index
Alagar, Vangalur S.  240
Assfalg, Jürgen  115
Bagdanov, Andrew  135
Barták, Roman  186
Bergeron, R. Daniel  94
Bertini, Marco  115
Bimbo, Alberto Del  115
Colombo, Carlo  115
Dubois, Didier  168
Fouchal, Hacène  1
Gemert, Jan v.  135
Geusebroek, Jan-Mark  135
Golshani, Forouzan  149
Gębala, Maciej  262
Iliopoulos, Costas S.  218
Jančar, Petr  41
Jeffery, Keith G.  174
Kamareddine, Fairouz  74
Katarzyniak, Radoslaw Piotr  231
Kisiel-Dorohinicki, Marek  253
Kocsor, András  181
Kovács, Kornél  181
Kučera, Antonín  41
Kutrib, Martin  208
Löwe, Jan-Thomas  208
Midrijānis, Gatis  273
Minh, Hoang  135
Mohamed, Manal  218
Nguyen, Ngoc Thanh  231
Nunziati, Walter  115
Panchanathan, Sethuraman  149
Park, Youngchoon  149
Periyasamy, Kasi  240
Prade, Henri  168
Reussner, Ralf  20
Rhodes, Philip J.  94
Sapino, Maria Luisa  132
Schreiber, Guus  135
Smeulders, Arnold  135
Snoek, Cees  135
Sparr, Ted M.  94
Subramani, K.  195
Tóth, László  181
Ultes-Nitsche, Ulrich  279
Vendrig, Jeroen  135
Wielemaker, Jan  135
Worring, Marcel  135