Petri Net Primer: A Compendium on the Core Model, Analysis, and Synthesis (Computer Science Foundations and Applied Logic) 3031482778, 9783031482779

Petri nets model concurrent and distributed systems where active components communicate through the production and absorption of resources.



Computer Science Foundations and Applied Logic

Eike Best Raymond Devillers

Petri Net Primer A Compendium on the Core Model, Analysis, and Synthesis

Computer Science Foundations and Applied Logic

Editors-in-Chief
Vijay Ganesh, University of Waterloo, Waterloo, Canada
Ilias S. Kotsireas, Department of Physics and Computer Science, Wilfrid Laurier University, Waterloo, ON, Canada

Editorial Board
Erika Abraham, Department of Computer Science, RWTH Aachen University, Aachen, Nordrhein-Westfalen, Germany
Olaf Beyersdorff, Friedrich Schiller University Jena, Jena, Thüringen, Germany
Jasmin Blanchette, Garching, Germany
Armin Biere, Informatik, ETH Zentrum, RZH, Computer Systems Institute, Zürich, Switzerland
Sam Buss, Department of Mathematics, University of California, San Diego, La Jolla, CA, USA
Matthew England, Engineering and Computing, Coventry University, Coventry, UK
Jacques Fleuriot, The University of Edinburgh, Scotland, Selkirkshire, UK
Pascal Fontaine, University of Lorraine, Villers Les Nancy Cedex, France
Arie Gurfinkel, Pittsburgh, PA, USA
Marijn Heule, Algorithms, University of Texas, Austin, TX, USA
Reinhard Kahle, Departamento de Matematica, Universidade Nova de Lisboa, Caparica, Portugal
Phokion Kolaitis, University of California, Santa Cruz, CA, USA
Antonina Kolokolova, Department of Computer Science, Memorial University of Newfoundland, St. John’s, NL, Canada
Ralph Matthes, Universität München, München, Germany
Assia Mahboubi, Institut National de Recherche en Informatique et en Automatique, Nantes Cedex 3, France
Jakob Nordström, Stockholm, Sweden
Prakash Panangaden, School of Computer Science, McGill University, Montreal, QC, Canada
Kristin Yvonne Rozier, Champaign, IL, USA
Thomas Studer, Institute of Computer Science and Applied Mathematics, University of Berne, Berne, Switzerland
Cesare Tinelli, The University of Iowa, Iowa City, IA, USA

Computer Science Foundations and Applied Logic is a growing series that focuses on the foundations of computing and their interaction with applied logic, including how science overall is driven by this. Thus, applications of computer science to mathematical logic, as well as applications of mathematical logic to computer science, will yield many topics of interest. Among other areas, it will cover combinations of logical reasoning and machine learning as applied to AI, mathematics, physics, as well as other areas of science and engineering. The series (previously known as Progress in Computer Science and Applied Logic) welcomes proposals for research monographs, textbooks and polished lectures, and professional text/references. The scientific content will be of strong interest to both computer scientists and logicians.

Eike Best · Raymond Devillers

Petri Net Primer A Compendium on the Core Model, Analysis, and Synthesis

Eike Best Department of Computing Science Carl von Ossietzky Universität Oldenburg Oldenburg, Germany

Raymond Devillers Département d’Informatique Université Libre de Bruxelles Bruxelles, Belgium

ISSN 2731-5754   ISSN 2731-5762 (electronic)
Computer Science Foundations and Applied Logic
ISBN 978-3-031-48277-9   ISBN 978-3-031-48278-6 (eBook)
https://doi.org/10.1007/978-3-031-48278-6

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2024

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This book is published under the imprint Birkhäuser, www.birkhauser-science.com, by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Paper in this product is recyclable.

Carl Adam Petri 1926–2010

Preface

This text has arisen out of notes for a lecture on Petri nets that has been held regularly at the Carl von Ossietzky University of Oldenburg for more than 20 years, mostly by the first author, but occasionally also by Harro Wimmel. The original idea was to cut the material into digestible portions that could be presented, and discussed, within a week or two, while being as self-contained and easily accessible as possible. What remains of this idea is a series of chapters each of which is only about 20 pages long, and an effort to present key proofs in a detailed and understandable fashion.

We are very gratefully indebted to Harro Wimmel who wrote sizeable portions of this text, especially the chapters on Petri net languages, on Petri net computers, and on the Petri net reachability problem. While we have tampered with the first two chapters, the chapter presenting the Kosaraju / Lambert / Mayr proof of the decidability of reachability is entirely Harro’s. It is included here with his consent. We have only done the translation, some re-drawing of the figures, and added some supplementary comments (summary, literature, and exercises). Any translational mishaps and any remaining errors are ours, of course.

Credit for a considerable part of the material is due to colleagues who we have been lucky to work with closely, and with great pleasure, for many years now: Philippe Darondeau (†), Jörg Desel, Javier Esparza, and Maciej Koutny in particular. The relevant references to their and to our common work are given at the end of each chapter. Our heartfelt thanks go to every one of them, and to all other colleagues we interacted with. It has also been a joy to cooperate with Uli Schlachter, principal developer of apt.

How should this book be read? The novice is advised to start at the beginning and then to proceed sequentially at first and guided by interest later on. Every chapter contains some worked exercises to assist reading and understanding the material. Readers who are already acquainted with Petri nets (or labelled transition systems) may immediately jump to the chapter(s) they are most interested in, browsing unfamiliar definitions as they occur; an extensive index is meant to support such a (non-sequential) way of studying. Every chapter ends with a multitude of quotes which do not just cover the immediate content but also refer to related literature.

Oldenburg and Brussels, December 12, 2023


Contents

1 First Steps in Petri Nets   1
  1.1 Introductory Remarks   1
  1.2 Basic definitions   2
  1.3 Firing transitions   7
  1.4 Bibliographical remarks and further reading   13
  1.5 Exercises   13

2 Languages of Petri Nets   17
  2.1 Introduction   17
  2.2 Labelled Petri nets, and Petri net languages   18
  2.3 Simulation of finite automata   20
  2.4 Petri net languages are context-sensitive   22
  2.5 Closure properties of Petri net languages   33
  2.6 Algebraic characterisation of Petri net languages   36
  2.7 Bibliographical remarks and further reading   37
  2.8 Exercises   38

3 Reachability and Coverability   43
  3.1 Introduction   43
  3.2 Graphs and multigraphs   44
  3.3 The reachability graph   45
  3.4 Boundedness, safeness, and liveness notions   49
  3.5 Generalised markings, and vector comparisons   52
  3.6 Coverability trees and coverability graphs   53
  3.7 Bibliographical remarks and further reading   62
  3.8 Exercises   63

4 Linear-algebraic Structure of Petri Nets   69
  4.1 Introductory remarks   69
  4.2 Structural boundedness and structural liveness   70
  4.3 Incidence matrix, marking equation, realisability   72
  4.4 Transposition lemmata   77
  4.5 Structural boundedness and infinite executions   78
  4.6 S-invariants and T-invariants   80
  4.7 Positive S-invariants and T-invariants   82
  4.8 Bibliographical remarks and further reading   84
  4.9 Exercises   85

5 Graph-theoretical Structure of Petri Nets   89
  5.1 Introduction   89
  5.2 Loops, weights, and border elements   90
  5.3 S-nets and T-nets   92
  5.4 Free-choice nets   96
  5.5 A liveness criterion for FC-systems   98
  5.6 Liveness criteria for S-systems and T-systems   108
  5.7 Boundedness criteria for live FC- (S-/T-) systems   110
  5.8 An algorithm for the construction of T-components   117
  5.9 An algorithm for the construction of S-components   121
  5.10 Bibliographical remarks and further reading   122
  5.11 Exercises   123

6 More Structure Theory   133
  6.1 Introduction   133
  6.2 Conditions for structural liveness   134
  6.3 Rank theorem for free-choice nets   141
  6.4 Duality   145
  6.5 Net reductions   147
  6.6 Home states and reversibility   148
  6.7 Bibliographical remarks and further reading   152
  6.8 Exercises   153

7 Program Verification Using Traps   157
  7.1 Introduction   157
  7.2 Properties of systems   158
  7.3 Underapproximations and overapproximations   160
  7.4 Verification using the marking equation and traps   164
  7.5 Peterson’s mutual exclusion algorithm   167
  7.6 Limits of marking equation and traps   171
  7.7 Bibliographical remarks and further reading   172
  7.8 Exercises   173

8 Fairness, Simulations, and Inhibitor Arcs   177
  8.1 Introduction   177
  8.2 Asymmetric choice nets   178
  8.3 Simulations and net transformations   181
  8.4 Strong fairness and very strong fairness   186
  8.5 Weak fairness and finite delay   191
  8.6 Inhibitor arcs and zero tests   194
  8.7 Bibliographical remarks and further reading   197
  8.8 Exercises   199

9 Unfoldings and Reachability Checking   203
  9.1 Introduction   204
  9.2 An example   204
  9.3 Occurrence nets, and the maximal unfolding   205
  9.4 Finite and canonical prefixes   209
  9.5 Verification methods based on the prefix   215
  9.6 A linear-algebraic method   217
  9.7 A logic-oriented method   220
  9.8 An “on-the-fly” method   221
  9.9 Bibliographical remarks and further reading   222
  9.10 Exercises   224

10 Petri Net Computers   229
  10.1 Introduction   229
  10.2 Strongly Petri net computable functions   230
  10.3 Weakly Petri net computable functions   234
  10.4 Strong versus weak PN-computability   238
  10.5 The Ackermann function   239
  10.6 Sizes of reachability and coverability graphs   243
  10.7 Bibliographical remarks and further reading   244
  10.8 Exercises   245

11 Synthesis of Petri Nets from Labelled Transition Systems   249
  11.1 Introduction   250
  11.2 Labelled transition systems   251
  11.3 Regions   255
  11.4 Separation properties, and the region theorem   257
  11.5 Characterising state and event/state separation   262
  11.6 Solving the coffee machine   266
  11.7 Homomorphisms and overapproximations   269
  11.8 Bibliographical remarks and further reading   270
  11.9 Exercises   272

12 Persistent Transition Systems and Choice-free Petri Nets   279
  12.1 Introductory remarks   279
  12.2 Persistence, and some other lts properties   280
  12.3 Choice-free Petri nets   282
  12.4 Keller’s theorem, and home states   285
  12.5 Small cycle decompositions   290
  12.6 Bibliographical remarks and further reading   293
  12.7 Exercises   294

13 Divide-and-Conquer Methods for Synthesis   297
  13.1 Introduction   297
  13.2 Pre-synthesis   298
  13.3 Extracting concurrency from an lts   300
  13.4 Detecting loose coupling in an lts   308
  13.5 Bibliographical remarks and further reading   318
  13.6 Exercises   318

14 Marked Graph Synthesis   321
  14.1 Introduction   321
  14.2 Some examples and counterexamples   322
  14.3 Short and general paths   324
  14.4 Connected, bounded and live marked graphs   327
  14.5 Distances, restricted lts, and lattices   329
  14.6 Synthesis using sequentialising states   331
  14.7 Minimality, and place bounds   335
  14.8 Bibliographical remarks and further reading   336
  14.9 Exercises   337

15 Bounded Choice-free Net Synthesis   339
  15.1 Introduction   339
  15.2 Concurrent implementations of Petri nets   340
  15.3 Some properties of bounded choice-free nets   342
  15.4 Pre-synthesis   348
  15.5 Choice-free synthesis   349
  15.6 Bibliographical remarks and further reading   351
  15.7 Exercises   351

16 Model Checking Safe, Strongly Persistent Petri Nets   355
  16.1 Introductory Remarks   356
  16.2 Syntax and semantics of S4   356
  16.3 The model checking problem   359
  16.4 Safe, strongly persistent Petri net systems   360
  16.5 Conjunctions of literals   362
  16.6 Elimination of diamonds   364
  16.7 The model checker   366
  16.8 Computing Last χ polynomially for marked graphs   368
  16.9 Bibliographical remarks and further reading   370
  16.10 Exercises   371

17 Semilinearity   375
  17.1 Introduction   376
  17.2 Semilinear sets   376
  17.3 Presburger arithmetic and Presburger sets   378
  17.4 Minimal nondecreasing transition sequences   381
  17.5 Persistent nets have semilinear reachability sets   384
  17.6 Decidability of persistence   387
  17.7 Weakly persistent Petri nets   389
  17.8 Some non-semilinear Petri nets   391
  17.9 Bibliographical remarks and further reading   393
  17.10 Exercises   394

18 Decidability of the Reachability Problem   397
  18.1 Introduction   398
  18.2 Controlled coverability graphs   398
  18.3 Seeds, and seed-transition-sequences   405
  18.4 Reduction of the reachability problem   410
  18.5 The characteristic equation   412
  18.6 A pumping lemma   418
  18.7 Decomposition of seed-transition sequences   421
  18.8 Bibliographical remarks and further reading   424
  18.9 Exercises   425

19 The Box Algebra 1/2: Refinement and Recursion   427
  19.1 Introduction   428
  19.2 Motivation   428
  19.3 Simultaneous refinement and relabelling of nets   431
  19.4 Compositional, denotational Petri net semantics   439
  19.5 Compositional, operational Petri net semantics   443
  19.6 Recursion   447
  19.7 Bibliographical remarks and further reading   451
  19.8 Exercises   452

20 The Box Algebra 2/2: Iteration and Data   457
  20.1 Introductory remarks   457
  20.2 Iteration   458
  20.3 A programming language   465
  20.4 Declarations and variables   466
  20.5 Control flow   469
  20.6 Summary on boundedness and finiteness   474
  20.7 Bibliographical remarks and further reading   474
  20.8 Exercises   475

21 High-level Petri Nets   479
  21.1 Introductory remarks, and an example   479
  21.2 Coloured and multirelational Petri nets   480
  21.3 Algebraic Petri nets   490
  21.4 M-nets   496
  21.5 Bibliographical remarks and further reading   499
  21.6 Exercises   501

Literature   505

Index   535

Chapter 1

First Steps in Petri Nets

Abstract This first chapter contains motivating and historical remarks, as well as some basic definitions, relating to Petri nets. These definitions are divided into structural ones, relating to the static structure of an unmarked Petri net, and behavioural ones, relating to the dynamic structure of a marked Petri net. Not only this chapter, but also this text as a whole, is concerned with place / transition nets, one of the oldest and most basic Petri net models.

Contents

1.1 Introductory Remarks   1
1.2 Basic definitions   2
1.3 Firing transitions   7
1.4 Bibliographical remarks and further reading   13
1.5 Exercises   13

1.1 Introductory Remarks

The reader might remember a lecture in Theoretical Computer Science in which Finite Automata were introduced. Finite Automata accept regular languages. Other classes of languages were also introduced and analysed. This was done for good reasons. For instance, compiler construction is based on a variety of language types.

However, formal languages may also be viewed from a system-oriented perspective. If every letter is interpreted as an atomic activity, then the words of a language might describe feasible sequences of actions. The evolutions permitted in some industrial production process could be described in this way. Its atomic actions could perhaps consist of the basic activities of various machines involved in the process. This idea can be exploited both for the simulation and for the validation of a production process in its planning stage, before it is actually implemented. Much effort (and money) can be saved if design errors are detected and corrected in this way, well before the physical realisation.

However, it should be taken into account that in general, and in particular in such processes, more than just one activity can be executed in parallel. Such parallelism (or concurrency) cannot be described either by a formal language or by a finite automaton, at least not in the form in which they are usually taught in a beginners’ course. Problems concerning parallelism cannot be handled just by using words built as sequences of the letters of an alphabet. Another kind of problem that can hardly be tackled by classical models is the notion of causality, explaining how some action(s) need previous ones in any possible evolution.

In the beginning of the Sixties of the last century, Carl Adam Petri became aware of this two-fold lack of descriptive power of the traditional finite automata model, and of its bias towards sequential rather than concurrent execution. His dissertation, which he completed in the year 1962, was fundamentally concerned with redressing the balance. The idea behind Petri nets (as they were called some years later) is to modify some of the concepts behind finite automata. Most fundamentally, in his view, states are thought to be structured and may consist of smaller parts called local states. Transitions may affect certain local states but may leave remote states unchanged or unaffected. Local states are represented in Petri nets by means of places, while state transitions are simply called transitions. It is this principle of locality, together with a certain duality between states and transitions, which underlies the definition of a Petri net and allows an explicit representation of concurrency and causality.

1.2 Basic definitions

The reader is assumed to be acquainted with the most elementary notions and notations of logic, set theory, integer numbers, linear algebra, graph theory, formal languages, and automata theory, as well as with a basic understanding of algorithms and their complexity. We shall use standard mathematical notation, even if this is sometimes not unique. For instance, 𝑓 : 𝑋 → 𝑌 and 𝑓 ∈ 𝑌^𝑋 are two equivalent ways of expressing the same thing. Both denote the fact that 𝑓 is a function (or a mapping) from 𝑋 to 𝑌. In general, 𝑌^𝑋 is a shorthand way of writing the set of all functions from 𝑋 to 𝑌.

Definition 1.1 Petri nets
A Petri net is a triple (𝑆, 𝑇, 𝐹) consisting of
• a countable set 𝑆 of places and a countable set 𝑇 of transitions with 𝑆 ∩ 𝑇 = ∅;(a)
• a mapping 𝐹 : (𝑆 × 𝑇) ∪ (𝑇 × 𝑆) → N which defines arcs (also called arrows, or edges) between places and transitions.(b) 𝐹(𝑠, 𝑡) defines the number of arcs from 𝑠 to 𝑡. Analogously, 𝐹(𝑡, 𝑠) defines the number of arcs from 𝑡 to 𝑠. These numbers are allowed to be zero.

(a) 𝑆 stands for German: “Stellen”.  (b) 𝐹 stands for “Flow”.

If 𝐹(𝑠, 𝑡) = 0, there is no arc from 𝑠 to 𝑡, and if 𝐹(𝑡, 𝑠) = 0, there is no arc from 𝑡 to 𝑠: this makes it possible to control the locality of actions.

In the following, we will predominantly consider finite Petri nets, that is, Petri nets in which both the set of places and the set of transitions (hence also the set of arcs) are finite. For such nets, we often use finite sets of indices as follows: 𝑆 = {𝑠1, . . . , 𝑠|𝑆|} and 𝑇 = {𝑡1, . . . , 𝑡|𝑇|}. Sometimes transitions are simply denoted by {𝑎, 𝑏, 𝑐, . . .}. This is to be understood such that 𝑎 is 𝑡1, 𝑏 is 𝑡2, etc. In the exceptional cases in which infinite nets are discussed, we will mention this fact explicitly. The reader may assume nets to be finite if nothing is specified to the contrary.

In the graphical representation of a Petri net, every place is drawn as a circle and every transition as a – normally square, and in general, rectangular – box. Furthermore, we draw exactly 𝐹(𝑠𝑖, 𝑡𝑗) arcs from the 𝑖th place 𝑠𝑖 to the 𝑗th transition 𝑡𝑗, and 𝐹(𝑡𝑗, 𝑠𝑖) arcs from the 𝑗th transition 𝑡𝑗 to the 𝑖th place 𝑠𝑖. Places and transitions are enumerated for reasons of convenience, but in general, any naming is allowed.

Enumerations are helpful for an alternative representation of the arcs in the calculus of matrices. In this representation, we replace the mapping 𝐹 by two |𝑆| × |𝑇|-matrices F, B ∈ N^(𝑆×𝑇). The value B𝑖,𝑗 in the 𝑖th row and 𝑗th column is, by definition, the number of arcs from 𝑠𝑖 to 𝑡𝑗, and the value F𝑖,𝑗 is, by definition, the number of arcs from 𝑡𝑗 to 𝑠𝑖. We will also occasionally write F(𝑠) or F(𝑠, .) for the “𝑠th row” in F, given 𝑠 ∈ 𝑆, F(𝑡) or F(., 𝑡) for the “𝑡th column”, given 𝑡 ∈ 𝑇, and F(𝑠, 𝑡) for the entry in row 𝑠 and column 𝑡 of F.

B and F are called backward matrix and forward matrix, respectively. This is to be understood from the point of view of transitions: arcs emanating from a transition (i.e. “forward arcs”, as seen from this transition) are described by the forward matrix. Looking backward from a transition, one encounters its incoming arcs, which are described in the backward matrix. As we will see later, these matrices allow linear algebra to be applied.

We introduce a number of elementary concepts.

Definition 1.2 Preset, postset, loops, simpleness, pureness, plainness
Let (𝑆, 𝑇, 𝐹) be a Petri net. For 𝑥 ∈ 𝑆 ∪ 𝑇, we call •𝑥 = {𝑦 ∈ 𝑆 ∪ 𝑇 | 𝐹(𝑦, 𝑥) ≥ 1} and 𝑥• = {𝑦 ∈ 𝑆 ∪ 𝑇 | 𝐹(𝑥, 𝑦) ≥ 1} the preset (postset, respectively) of 𝑥. Generalising this, •𝑋 = ⋃_{𝑥∈𝑋} •𝑥 and 𝑋• = ⋃_{𝑥∈𝑋} 𝑥•, for 𝑋 ⊆ 𝑆 ∪ 𝑇. An element 𝑥 ∈ 𝑆 ∪ 𝑇 satisfying •𝑥 ∪ 𝑥• = ∅ is called isolated. If there are arcs in both directions between a place 𝑠 and a transition 𝑡, i.e. if 𝐹(𝑠, 𝑡) ≥ 1 ≤ 𝐹(𝑡, 𝑠), then this situation is called a loop or a self-loop or a side condition. A loop is called simple if 𝐹(𝑠, 𝑡) = 1 = 𝐹(𝑡, 𝑠). A net is pure if there are no self-loops, and plain if there are no multiple arcs, that is, if the function 𝐹 returns 0 or 1, but no number greater than 1.
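To make the matrix representation concrete, here is a minimal sketch in Python of how a finite net with its backward and forward matrices B and F, and the presets and postsets of Definition 1.2, could be encoded. The sketch is not taken from the book; the class name PetriNet and all helper names are assumptions made here, and arc multiplicities are stored sparsely as a dictionary from (place, transition) pairs to natural numbers.

```python
# Minimal sketch of Definitions 1.1 and 1.2: a finite Petri net with backward matrix B
# (arcs from places to transitions) and forward matrix F (arcs from transitions to places),
# both stored sparsely; missing entries count as 0.
from typing import Dict, Set, Tuple

Arcs = Dict[Tuple[str, str], int]   # (place, transition) -> multiplicity

class PetriNet:
    def __init__(self, places, transitions, B: Arcs, F: Arcs):
        self.S = list(places)       # places
        self.T = list(transitions)  # transitions
        self.B = B                  # B[s, t] = number of arcs s -> t (consumed when t fires)
        self.F = F                  # F[s, t] = number of arcs t -> s (produced when t fires)

    def b(self, s: str, t: str) -> int:
        return self.B.get((s, t), 0)

    def f(self, s: str, t: str) -> int:
        return self.F.get((s, t), 0)

    def preset(self, x: str) -> Set[str]:
        # •x: the places consumed by transition x, or the transitions feeding place x
        if x in self.T:
            return {s for s in self.S if self.b(s, x) >= 1}
        return {t for t in self.T if self.f(x, t) >= 1}

    def postset(self, x: str) -> Set[str]:
        # x•: the places fed by transition x, or the transitions consuming from place x
        if x in self.T:
            return {s for s in self.S if self.f(s, x) >= 1}
        return {t for t in self.T if self.b(x, t) >= 1}
```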


Definition 1.3 States and markings
Let 𝑁 = (𝑆, 𝑇, 𝐹) be a Petri net. The state set of 𝑁 is defined to be N^𝑆, that is, the set of all functions from 𝑆 to N. This is to be understood as the set of all potential states of 𝑁, of which the actually reachable states – to be defined later – are a subset. A function 𝑀 : 𝑆 → N is called a state, or a marking, of 𝑁. If 𝑀(𝑠) = 𝑚, then we say that “place 𝑠 carries 𝑚 tokens (in the state 𝑀)”, that is, there are 𝑚 copies of the resource kind represented by 𝑠. We may write states as column vectors, indexed by places. Graphically, tokens are represented as solid dots within a place (or by inscribing their numbers in places when they are too numerous). A function 𝑀 : 𝑆 → N can also be understood as a multiset of places.

Example 1.4 A simple Petri net
Using the matrix representation of arcs, the Petri net shown in Figure 1.1 can be described by the quadruple (𝑆, 𝑇, B, F) and the state 𝑀 with 𝑆 = {𝑠1, 𝑠2, 𝑠3}, 𝑇 = {𝑡1, 𝑡2, 𝑡3}, and

      B   𝑡1  𝑡2  𝑡3        F   𝑡1  𝑡2  𝑡3        𝑀
      𝑠1   1   0   0        𝑠1   1   0   1        𝑠1  0
      𝑠2   0   1   0        𝑠2   0   0   0        𝑠2  1
      𝑠3   0   0   1        𝑠3   1   2   0        𝑠3  2

We have
  •𝑠1 = {𝑡1, 𝑡3},  𝑠1• = {𝑡1},
  •𝑠2 = ∅,         𝑠2• = {𝑡2},
  •𝑠3 = {𝑡1, 𝑡2},  𝑠3• = {𝑡3},
  •𝑡1 = {𝑠1},      𝑡1• = {𝑠1, 𝑠3},
  •𝑡2 = {𝑠2},      𝑡2• = {𝑠3},
  •𝑡3 = {𝑠3},      𝑡3• = {𝑠1}.

In multiset notation, the marking 𝑀 is 𝑀 = {𝑠2, 𝑠3, 𝑠3}. The function 𝐹 can also be written down, but this is usually rather cumbersome, compared with the graphical representation. For instance, the net shown in Figure 1.1 has the following arcs:

  𝐹:  (𝑡2, 𝑠3) ↦ 2
      (𝑡3, 𝑠1), (𝑠3, 𝑡3), (𝑡1, 𝑠1), (𝑡1, 𝑠3), (𝑠1, 𝑡1), (𝑠2, 𝑡2) ↦ 1
      (𝑥, 𝑦) ↦ 0 otherwise

This Petri net has a loop between 𝑠1 and 𝑡1, as well as a multiple arc leading from 𝑡2 to 𝑠3; in other words, we have 𝐹(𝑡2, 𝑠3) > 1. A multiple arc will often be represented as a plain arc together with some natural number which is inscribed at it. This number denotes the multiplicity of the arc. A simple loop may also be represented by an arc with two arrow heads leading in both directions.(a)

(a) However, such a representation is discouraged in small figures where loops could easily be mistaken for plain arcs.

Fig. 1.1: Two representations of the same Petri net. A simple loop is represented either by arrows in opposite directions (left-hand side, between 𝑠1 and 𝑡1), or by arrows having double arrow heads (right-hand side). Multiple arcs are represented either directly (left-hand side, between 𝑡2 and 𝑠3), or by a single arc annotated by the multiplicity it represents (right-hand side).
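As a usage example, the net of Example 1.4 / Figure 1.1 can be written down with the hypothetical PetriNet sketch introduced above; the computed presets and postsets match the lists given in the example.

```python
# The net of Example 1.4 (Figure 1.1), encoded with the PetriNet sketch above.
net = PetriNet(
    places=["s1", "s2", "s3"],
    transitions=["t1", "t2", "t3"],
    B={("s1", "t1"): 1, ("s2", "t2"): 1, ("s3", "t3"): 1},
    F={("s1", "t1"): 1, ("s3", "t1"): 1, ("s3", "t2"): 2, ("s1", "t3"): 1},
)
M = {"s1": 0, "s2": 1, "s3": 2}               # the marking (0 1 2)^T of Example 1.4

print(net.preset("s1"), net.postset("s1"))    # the sets {t1, t3} and {t1}
print(net.preset("t1"), net.postset("t1"))    # the sets {s1} and {s1, s3}
```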

Definition 1.5 Petri net with an initial marking
An initial Petri net 𝑁 is defined to be a tuple 𝑁 = (𝑆, 𝑇, B, F, 𝑀0), or 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0), where
• (𝑆, 𝑇, B, F) (respectively, (𝑆, 𝑇, 𝐹)) is a Petri net;
• 𝑀0 ∈ N^𝑆 is an initial state.
We also denote a Petri net 𝑁 = (𝑆, 𝑇, 𝐹) with initial state 𝑀0 by (𝑁, 𝑀0). The initial state is also called initial marking or starting state.

In the following, we will call an initial Petri net simply a Petri net, if it is clear from the context that we are talking about an initially marked net. Also, we often use the word system in an informal way when talking about an initially marked net. Typically, the starting state is included in the graphical representation of a net by means of tokens (solid dots) on the places. In Figure 1.1, for example, 𝑀 = (0 1 2)ᵀ is the starting state. The notation ᵀ means “transposed” and shows, in this case, that the marking is viewed as a column vector.

Sometimes it is desirable to ignore certain parts of a Petri net, for instance a set of transitions and/or places, together with all arcs connected to them. This leads to the notion of a subnet.

Notation 1.6 Restriction | of a function
Let 𝑓 : 𝐴 → 𝐵 be a function, that is, a relation 𝑓 ⊆ 𝐴 × 𝐵 which is left-total (∀𝑎 ∈ 𝐴 ∃𝑏 ∈ 𝐵 : (𝑎, 𝑏) ∈ 𝑓) and functional, i.e. right-unique (∀𝑎 ∈ 𝐴 ∀𝑏, 𝑏′ ∈ 𝐵 : (𝑎, 𝑏) ∈ 𝑓 ∧ (𝑎, 𝑏′) ∈ 𝑓 ⇒ 𝑏 = 𝑏′). Let 𝐴′ ⊆ 𝐴 be a subset of 𝐴. Then 𝑓|𝐴′, the restriction of 𝑓 to 𝐴′, is the function 𝑓 ∩ (𝐴′ × 𝐵).


Definition 1.7 Subnet
Let 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0) be a Petri net and let 𝑆′ ⊆ 𝑆 and 𝑇′ ⊆ 𝑇. The subnet induced (or spanned) by 𝑆′ and 𝑇′ is denoted by 𝑁(𝑆′, 𝑇′) and defined by(a)

  𝑁(𝑆′, 𝑇′) = (𝑆′, 𝑇′, 𝐹|(𝑆′×𝑇′)∪(𝑇′×𝑆′), 𝑀0|𝑆′)

(a) 𝐹|(𝑆′×𝑇′)∪(𝑇′×𝑆′) = 𝐹 ∩ (((𝑆′ × 𝑇′) ∪ (𝑇′ × 𝑆′)) × N) and 𝑀0|𝑆′ = 𝑀0 ∩ (𝑆′ × N).

The idea is that in 𝑁(𝑆′, 𝑇′), the part of the net defined by 𝑆′ and 𝑇′ is left intact, while all elements of 𝑆 \ 𝑆′ and 𝑇 \ 𝑇′, and their adjacent arcs, are neglected. The definition of 𝐹′ means that all arcs between elements of 𝑆′ and 𝑇′ (including their multiplicities) are just inherited from 𝑁 to 𝑁(𝑆′, 𝑇′). All other arcs, that is, arcs with at least one endpoint in 𝑆 \ 𝑆′ or in 𝑇 \ 𝑇′, are ignored. The definition of 𝑀0|𝑆′ means that places in 𝑆′ carry exactly as many tokens in 𝑁(𝑆′, 𝑇′) as they do in 𝑁. Places in 𝑆 \ 𝑆′ and all tokens on them are ignored.

Example 1.8 A simple subnet
In Figure 1.1, the subnet induced by 𝑆′ = {𝑠1} and 𝑇′ = {𝑡1, 𝑡2, 𝑡3} consists of one place, 𝑠1, with zero tokens, three arcs (two of which form a loop), and three transitions (one of which, namely 𝑡2, is isolated). It is shown on the left-hand side of Figure 1.2.

Fig. 1.2: A subnet (shown in blue) and not a subnet (shown in orange).
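A small sketch of Definition 1.7, again using the hypothetical PetriNet encoding and the `net` and `M` objects from the earlier snippets: restricting B, F and M0 to S′ and T′ keeps exactly the arcs between S′ and T′ and the tokens on S′.

```python
# Sketch of Definition 1.7: the subnet induced by S_sub and T_sub.
def subnet(net, M0, S_sub, T_sub):
    keep = lambda arcs: {(s, t): n for (s, t), n in arcs.items()
                         if s in S_sub and t in T_sub}
    return PetriNet(S_sub, T_sub, keep(net.B), keep(net.F)), {s: M0[s] for s in S_sub}

# Example 1.8: the subnet of Figure 1.1 induced by {s1} and {t1, t2, t3}.
sub, M_sub = subnet(net, M, ["s1"], ["t1", "t2", "t3"])
print(sub.B, sub.F, M_sub)
# Only the arcs s1 -> t1, t1 -> s1 and t3 -> s1 survive, s1 carries zero tokens,
# and t2 becomes isolated, as described in Example 1.8.
```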

1.3 Firing transitions

We now describe the dynamics, i.e. the behaviour, of initially marked nets. This will be defined in analogy to the successive execution of state transitions in a finite automaton. In Petri nets, this process is called firing or executing transitions.

Definition 1.9 The transition rule of Petri nets
Let 𝑁 = (𝑆, 𝑇, B, F) be a Petri net, let 𝑀 ∈ N^𝑆 be a state of 𝑁, and let 𝑡 ∈ 𝑇 be a transition of 𝑁. We call 𝑡 𝑀-activated (or enabled, firable, executable in state 𝑀) if 𝑀 ≥ B(𝑡) (that is, ∀𝑠 ∈ 𝑆 : 𝑀(𝑠) ≥ B𝑠,𝑡 = 𝐹(𝑠, 𝑡)). Formally, the fact that 𝑡 is enabled in 𝑀 is denoted by 𝑀 −𝑡→.
A transition 𝑡 fires from state 𝑀 to state 𝑀′ (or is executed in state 𝑀, leading to state 𝑀′, or simply leads from 𝑀 to 𝑀′) if:
• 𝑀 ≥ B(𝑡) (that is, 𝑡 is activated in 𝑀), and
• 𝑀′ = 𝑀 − B(𝑡) + F(𝑡).
This rule is called the transition (or firing) rule, and it is the basic behavioural (state change) rule for Petri nets. Formally, the fact that 𝑡 is firable in 𝑀 and 𝑡 leads from 𝑀 to 𝑀′ is denoted by 𝑀 −𝑡→ 𝑀′.

Informally, when a transition fires, it consumes tokens from every place of its preset (hence there needs to be at least one token on every such place just prior to firing) and produces tokens on every place of its postset. The numbers of tokens consumed and produced are calculated according to the multiplicity of arcs around the transition. More precisely, if a place 𝑠 has an outgoing arc of multiplicity 𝑘 towards 𝑡, then every single firing of 𝑡 needs at least 𝑘 tokens on 𝑠 and consumes exactly 𝑘 tokens from 𝑠. Similarly, if 𝑡 is connected to a place 𝑠′ of its postset by an arc of multiplicity 𝑚, then every single firing of 𝑡 produces exactly 𝑚 tokens on 𝑠′, which are added to the already existing ones.

In the case of self-loops, tokens are first taken from a place and later reproduced. That is, if 𝐹(𝑠″, 𝑡) = 𝑘 > 0 and 𝐹(𝑡, 𝑠″) = 𝑚, then 𝑘 tokens on 𝑠″ are necessary for firing 𝑡. When firing 𝑡, we might think of it as the 𝑘 tokens being removed from 𝑠″ first, and then, in a second step, 𝑚 tokens being added to 𝑠″ again. In the special case of a simple loop (𝑘 = 𝑚 = 1), the effect of firing is that the number of tokens on such a place is neither decreased nor increased, because the single token that is taken away by firing is put back by the same firing.

This defines the sequential dynamics of Petri nets, which will be enough for most parts of the present book. However, when useful, we shall see how explicit concurrency and causality features may be defined and exploited.

Example 1.10 “Playing the token game”
Let us reconsider the Petri net from Figure 1.1. We have, on the one hand, that

  (0 1 2)ᵀ −𝑡2→ (0 0 4)ᵀ   and   (0 1 2)ᵀ −𝑡3→ (1 1 1)ᵀ.

On the other hand, firing 𝑡1 in state (0 1 2)ᵀ is not possible, since there is no token on 𝑠1. We may also fire from other states. For instance,

  (7 3 5)ᵀ −𝑡1→ (7 3 6)ᵀ   and   (2 1 0)ᵀ −𝑡2→ (2 0 2)ᵀ.

Simulating possible firings in such a way is sometimes called playing the token game. This refers to the idea that movable markers could be used on top of the graphical representation of a Petri net, in order to understand which states are reachable from given ones.

Of course, it is usually possible to execute a sequence of transitions, one after another, instead of just one of them. This naturally leads to two interesting questions:
• How can the set of states that are reachable from the initial state through such sequences be characterised?
• How can the set of firable sequences be characterised?
As we will see, these two questions are of a rather different nature. Moreover, the answers to both of them are non-trivial.

Definition 1.11 Firing sequences, and reachability set
Let 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0) be a Petri net. We define inductively, for all 𝑡 ∈ 𝑇 and 𝜎 ∈ 𝑇∗:

  𝑀 −𝜀→ 𝑀′    iff   𝑀 = 𝑀′
  𝑀 −𝜎𝑡→ 𝑀′   iff   ∃𝑀″ ∈ N^𝑆 : 𝑀 −𝜎→ 𝑀″ −𝑡→ 𝑀′

where 𝜀 is the empty sequence and 𝑀 −𝜎→ 𝑀″ −𝑡→ 𝑀′ is a shorthand notation for 𝑀 −𝜎→ 𝑀″ ∧ 𝑀″ −𝑡→ 𝑀′.
We read 𝑀 −𝜎→ 𝑀′ as 𝜎 fires from 𝑀 to 𝑀′, or 𝜎 is executed from 𝑀 and leads to 𝑀′, or, more simply, 𝜎 leads from 𝑀 to 𝑀′. Alternatively, 𝑀′ is called reachable from 𝑀 (by 𝜎). We also define:

  𝑀 −𝜎→   :⇐⇒   ∃𝑀′ ∈ N^𝑆 : 𝑀 −𝜎→ 𝑀′

Thus, a sequence 𝜎 ∈ 𝑇∗ is called a firing sequence or an execution (sequence) from 𝑀 (or executable/firable/enabled at 𝑀), denoted by 𝑀 −𝜎→, if there is some marking 𝑀′ with 𝑀 −𝜎→ 𝑀′.
Further, we call [𝑀⟩ = {𝑀′ | ∃𝜎 ∈ 𝑇∗ : 𝑀 −𝜎→ 𝑀′} the reachability set generated by (or the state space of) 𝑀; and [𝑁⟩ := [𝑀0⟩ is the reachability set generated by the initial marking 𝑀0 of 𝑁.
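The transition rule of Definition 1.9 and the firing sequences of Definition 1.11 translate almost literally into code. The sketch below again uses the hypothetical PetriNet encoding and the `net` object from the earlier snippets; markings are dictionaries from places to token counts, and the helper names are assumptions of this sketch.

```python
# Sketch of Definition 1.9 (transition rule) and Definition 1.11 (firing sequences).
def enabled(net, M, t):
    # t is M-activated iff M >= B(t), i.e. M(s) >= B[s,t] for every place s
    return all(M[s] >= net.b(s, t) for s in net.S)

def fire(net, M, t):
    # M --t--> M' with M'(s) = M(s) - B[s,t] + F[s,t]
    assert enabled(net, M, t), f"{t} is not enabled"
    return {s: M[s] - net.b(s, t) + net.f(s, t) for s in net.S}

def fire_sequence(net, M, sigma):
    # M --sigma--> M', firing the transitions of sigma one after the other
    for t in sigma:
        M = fire(net, M, t)
    return M

# The firings of Example 1.10 on the net of Figure 1.1:
M0 = {"s1": 0, "s2": 1, "s3": 2}
print(fire(net, M0, "t2"))        # {'s1': 0, 's2': 0, 's3': 4}
print(fire(net, M0, "t3"))        # {'s1': 1, 's2': 1, 's3': 1}
print(enabled(net, M0, "t1"))     # False: there is no token on s1
```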

Example 1.12 Finite and infinite firing sequences, and boundedness
In Figure 1.1, we have the following firing sequences: 𝜎1 = 𝑡3 𝑡3 𝑡1 𝑡3 𝑡2 𝑡3 𝑡3 and 𝜎2 = 𝑡3 𝑡1 𝑡1 𝑡1 𝑡1 𝑡3, amongst others. More precisely, we have (0 1 2)ᵀ −𝜎1→ (5 0 0)ᵀ and (0 1 2)ᵀ −𝜎2→ (2 1 4)ᵀ. By contrast, 𝜎3 = 𝑡3 𝑡3 𝑡1 𝑡3 𝑡3 𝑡2 𝑡3 is not firable from 𝑀0 = (0 1 2)ᵀ, since the fourth instance of 𝑡3 cannot be executed.
The reachability set, [𝑀0⟩, of this example is as follows:

  { (0 1 2)ᵀ, (0 0 4)ᵀ } ∪ { (𝑖 𝑗 𝑘)ᵀ | 𝑖 ≥ 1 ∧ 𝑗 ≤ 1 ∧ 𝑖 + 2𝑗 + 𝑘 ≥ 4 }      (1.1)

Formula (1.1) does not follow immediately from the net and its initial marking; it is left to the reader to check carefully that all – and only – reachable markings are in the set specified in (1.1). Generally, the reachability set does not have to be representable so (relatively) smoothly by means of a closed formula, even if the given Petri net is small.
Note that this net allows infinitely long “plays” of the token game. For example, from the initial marking, we may fire 𝑡3, and then 𝑡1 𝑡1 𝑡1 . . . arbitrarily many times. The sequence 𝑡3 𝑡1 𝑡1 𝑡1 . . . is called an infinite firing sequence, and it is written formally as 𝑡3(𝑡1)∞. Note also that during this sequence, more and more tokens are assembled on place 𝑠3 (but on no other place). Places on which an indefinitely increasing number of tokens can be collected from the initial marking are called unbounded. Other places are called bounded.(a)

(a) These notions are defined more formally later (see Definition 3.9 in Chapter 3).
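The reachability set [M0⟩ of Definition 1.11 can also be explored explicitly by a breadth-first search over markings. Since the net of Figure 1.1 is unbounded (the set (1.1) is infinite), the sketch below cuts the search off at an arbitrary token bound; the bound, like the helper names and the earlier `net`, `enabled` and `fire` sketches it relies on, is an assumption of this illustration and not part of the definition.

```python
# Sketch: explicit (truncated) exploration of the reachability set of Definition 1.11.
from collections import deque

def reachable(net, M0, max_tokens=6):
    start = tuple(M0[s] for s in net.S)
    seen, queue = set(), deque([start])
    while queue:
        m = queue.popleft()
        if m in seen or sum(m) > max_tokens:   # cut-off: ignore markings above the bound
            continue
        seen.add(m)
        M = dict(zip(net.S, m))
        for t in net.T:
            if enabled(net, M, t):
                M2 = fire(net, M, t)
                queue.append(tuple(M2[s] for s in net.S))
    return seen

for m in sorted(reachable(net, {"s1": 0, "s2": 1, "s3": 2})):
    print(m)    # every printed marking lies in the set described by formula (1.1)
```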

Firing enjoys several basic properties that one should know about. If a transition can be fired in some state, then it can also be fired in every larger state (where “larger” means that no place contains less tokens and at least one place more tokens); this property is called the monotonicity of firing. More formally:

Lemma 1.13 Monotonicity of firing
Transition firing is monotonic, i.e.:

  ∀𝑀, 𝑀′, 𝑀″ ∈ N^𝑆 ∀𝑡 ∈ 𝑇 : ( 𝑀 −𝑡→ 𝑀′ =⇒ (𝑀 + 𝑀″) −𝑡→ (𝑀′ + 𝑀″) ).

Proof: By 𝑀 −𝑡→, 𝑀 ≥ B(𝑡), hence 𝑀 + 𝑀″ ≥ B(𝑡) + 𝑀″ ≥ B(𝑡). By 𝑀 −𝑡→ 𝑀′, 𝑀′ + 𝑀″ = 𝑀 − B(𝑡) + F(𝑡) + 𝑀″ = (𝑀 + 𝑀″) − B(𝑡) + F(𝑡).  □ 1.13

The state reached after firing a transition depends only on the previous state, rather than on the (possibly large) history by which this state has been reached; this property is called memorylessness or local determinism of firing. If two transitions can be fired in arbitrary order, then the resulting marking after firing both does not depend on their ordering; this is called the commutativity of firing. More formally, we have:

Lemma 1.14 Memorylessness and commutativity of firing
Transition firing is
• locally determined, i.e., for all 𝑡 ∈ 𝑇 and for all 𝑀, 𝑀′, 𝑀″ ∈ N^𝑆:
  (𝑀 −𝑡→ 𝑀′ ∧ 𝑀 −𝑡→ 𝑀″) implies 𝑀′ = 𝑀″;
• and commutative, i.e., if 𝑀, 𝑀1, 𝑀2, 𝑀3, 𝑀4 are states and 𝑡, 𝑡′ are transitions with 𝑀 −𝑡→ 𝑀1 −𝑡′→ 𝑀2 and 𝑀 −𝑡′→ 𝑀3 −𝑡→ 𝑀4, then 𝑀2 = 𝑀4.

Proof: Local determinism: We have 𝑀′ = 𝑀 − B(𝑡) + F(𝑡) = 𝑀″.
Commutativity: 𝑀2 = 𝑀1 − B(𝑡′) + F(𝑡′) = 𝑀 − B(𝑡) + F(𝑡) − B(𝑡′) + F(𝑡′) = 𝑀 − B(𝑡′) + F(𝑡′) − B(𝑡) + F(𝑡) = 𝑀3 − B(𝑡) + F(𝑡) = 𝑀4.  □ 1.14

In fact, both parts of Lemma 1.14 can be viewed as a special case of a more general property of determinism valid for all Petri nets. To understand this property, consider a marking 𝑀′ reachable from a marking 𝑀 by 𝜎, that is: 𝑀 −𝜎→ 𝑀′. Then 𝑀′ can be calculated from 𝑀 and from the number of times each transition occurs in 𝜎, rather than from the exact order in which they occur. In order to formulate this precisely, Parikh vectors are useful.

Definition 1.15 Occurrence count, and Parikh vector
Let 𝜏 = 𝑡1 . . . 𝑡𝑘 ∈ 𝑇∗ be a sequence of transitions from 𝑇. Let #(𝑡, 𝜏) denote the number of times transition 𝑡 occurs in 𝜏. The Parikh vector or occurrence count vector of 𝜏 is defined as a (column) T-vector (that is, a 𝑇-based column vector) P(𝜏) ∈ N^𝑇 which contains, at entry 𝑡, the occurrence count #(𝑡, 𝜏).

Example 1.16 Parikh vectors
For four transitions {𝑡1, 𝑡2, 𝑡3, 𝑡4},

  P(𝜀) = (0 0 0 0)ᵀ,   P(𝑡2) = (0 1 0 0)ᵀ,   P(𝑡1 𝑡2 𝑡4 𝑡2 𝑡3 𝑡4) = (1 2 1 2)ᵀ.
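A small sketch of Definition 1.15: the Parikh vector simply counts how often each transition occurs in a sequence. The Counter-based helper below is not from the book.

```python
# Sketch of Definition 1.15: the Parikh vector counts occurrences of each transition.
from collections import Counter

def parikh(sigma, transitions):
    counts = Counter(sigma)
    return {t: counts[t] for t in transitions}

print(parikh(["t1", "t2", "t4", "t2", "t3", "t4"], ["t1", "t2", "t3", "t4"]))
# {'t1': 1, 't2': 2, 't3': 1, 't4': 2}  -- the vector (1 2 1 2)^T of Example 1.16
```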

Using Parikh vectors, Lemma 1.14 can be extended and generalised:

Lemma 1.17 General determinism of firing
Transition firing is fully forward (a) and fully backward (b) deterministic, in the following sense:
(a) for all 𝜎, 𝜎′ ∈ 𝑇∗ and for all 𝑀, 𝑀′, 𝑀″ ∈ N^𝑆:
  (𝑀 −𝜎→ 𝑀′ ∧ 𝑀 −𝜎′→ 𝑀″ ∧ P(𝜎) = P(𝜎′)) implies 𝑀′ = 𝑀″;
(b) for all 𝜎, 𝜎′ ∈ 𝑇∗ and for all 𝑀, 𝑀′, 𝑀″ ∈ N^𝑆:
  (𝑀′ −𝜎→ 𝑀 ∧ 𝑀″ −𝜎′→ 𝑀 ∧ P(𝜎) = P(𝜎′)) implies 𝑀′ = 𝑀″.

Proof:
(a): 𝑀′ = 𝑀 − Σ_{𝑡∈𝑇} #(𝑡, 𝜎)·B(𝑡) + Σ_{𝑡∈𝑇} #(𝑡, 𝜎)·F(𝑡)       (firing rule)
        = 𝑀 − Σ_{𝑡∈𝑇} #(𝑡, 𝜎′)·B(𝑡) + Σ_{𝑡∈𝑇} #(𝑡, 𝜎′)·F(𝑡)     (by P(𝜎) = P(𝜎′))
        = 𝑀″
(b): 𝑀′ = 𝑀 + Σ_{𝑡∈𝑇} #(𝑡, 𝜎)·B(𝑡) − Σ_{𝑡∈𝑇} #(𝑡, 𝜎)·F(𝑡)       (firing rule)
        = 𝑀 + Σ_{𝑡∈𝑇} #(𝑡, 𝜎′)·B(𝑡) − Σ_{𝑡∈𝑇} #(𝑡, 𝜎′)·F(𝑡)     (by P(𝜎) = P(𝜎′))
        = 𝑀″  □ 1.17
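The calculation used in the proof of Lemma 1.17 already contains the marking equation studied later in Chapter 4: the marking reached by σ equals M plus the combined effect F(t) − B(t) of each transition, weighted by its occurrence count. A small sketch, built on the hypothetical `net`, `parikh` and `fire_sequence` helpers above, checks this against step-by-step firing on the running example.

```python
# Sketch: the marking reached by sigma depends only on M and P(sigma)
# (the calculation in the proof of Lemma 1.17).
def marking_from_parikh(net, M, p):
    return {s: M[s] + sum(p[t] * (net.f(s, t) - net.b(s, t)) for t in net.T)
            for s in net.S}

M0 = {"s1": 0, "s2": 1, "s3": 2}
sigma = ["t3", "t1", "t1", "t1", "t1", "t3"]                # sigma_2 of Example 1.12
print(fire_sequence(net, M0, sigma))                        # {'s1': 2, 's2': 1, 's3': 4}
print(marking_from_parikh(net, M0, parikh(sigma, net.T)))   # the same marking
```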


It is important to note that commutativity (in the sense of Lemma 1.14) does not mean that 𝑡′𝑡 is firable whenever 𝑡𝑡′ is. Thus, the above concept of commutativity is not exactly the same as that used frequently in mathematics. Mathematical commutativity is not normally satisfied in Petri net firings. More precisely:

Lemma 1.18 Non-properties of firing: persistence and confluence
The following properties are not normally satisfied:
• persistence, i.e., for all 𝑡, 𝑡′ ∈ 𝑇 and for all 𝑀 ∈ N^𝑆:
  (𝑡 ≠ 𝑡′ ∧ (𝑀 −𝑡→) ∧ (𝑀 −𝑡′→)) ⇒ ((𝑀 −𝑡𝑡′→) ∧ (𝑀 −𝑡′𝑡→));
• confluence, i.e., for all 𝑀, 𝑀′, 𝑀″ ∈ N^𝑆 and for all 𝜎, 𝜎′ ∈ 𝑇∗:
  ((𝑀 −𝜎→ 𝑀′) ∧ (𝑀 −𝜎′→ 𝑀″)) ⇒ (∃𝑀̂ ∈ N^𝑆 ∃𝜎″, 𝜎‴ ∈ 𝑇∗ : (𝑀′ −𝜎″→ 𝑀̂) ∧ (𝑀″ −𝜎‴→ 𝑀̂)).

Proof: Both properties can be disproved by the following simple Petri net shown in Figure 1.3: 𝑆 = {𝑠1, 𝑠2, 𝑠3}, 𝑇 = {𝑡1, 𝑡2}, 𝐹 = {(𝑠1, 𝑡1), (𝑠1, 𝑡2), (𝑡1, 𝑠2), (𝑡2, 𝑠3)} and 𝑀(𝑠1) = 1, 𝑀(𝑠2) = 𝑀(𝑠3) = 0.  □ 1.18

Fig. 1.3: A Petri net which is neither persistent nor confluent.

Persistence means that an activated transition cannot be deactivated by other transitions. In general, however, it is possible for some transition to deactivate another one; such a situation is called a conflict. In the example given in Figure 1.3, 𝑡2 is activated in the initial marking. However, firing 𝑡1 deactivates 𝑡2 (as well as 𝑡1 itself). Confluence means that executions drifting apart in a net can be brought together again. In general, however, this may not be the case. In Figure 1.3, the markings reached after firing 𝑡1 and 𝑡2, respectively, have no common successor marking. For example, if 𝑡2 denoted an erroneous execution, then there would be no way of “correcting” the error by reaching a state that could also have been reached after 𝑡1. Later, however, we shall investigate circumstances which guarantee persistence and confluence.

Lemma 1.14 allows one to generalise the notion of a transition being enabled (Definition 1.9) to sets, and even to multisets, of transitions. It has already been remarked that a marking 𝑀 can be understood as a multiset 𝑀 : 𝑆 → N of places, assigning a natural number to every place. In the same way, a multiset of transitions is a function 𝜇 : 𝑇 → N, assigning a natural number to every transition.


Definition 1.19 Concurrent enabling, steps, and auto-concurrency
Let 𝑁 = (𝑆, 𝑇, B, F) be a Petri net, let 𝑀 : 𝑆 → N be a state of 𝑁, and let 𝜇 : 𝑇 → N be a multiset of transitions of 𝑁. We call 𝜇 concurrently 𝑀-activated (or concurrently enabled, concurrently firable, concurrently executable in state 𝑀) if 𝑀 ≥ B·𝜇 (that is, ∀𝑠 ∈ 𝑆 : 𝑀(𝑠) ≥ Σ_{𝑡∈𝑇} B𝑠,𝑡·𝜇(𝑡)). Formally, the fact that 𝜇 is enabled in 𝑀 is denoted by 𝑀 −𝜇→.
If 𝑀 −𝜇→, then 𝜇 is also called a step (from 𝑀). If 𝑀 −{𝑡,𝑡}→, then 𝑡 is called auto-concurrent (at 𝑀).

It can be seen that “concurrent enabling” is an extension of “enabling”, since if 𝜇 = {𝑡} is a singleton set, then the condition given in Definition 1.19 reduces to the condition given in Definition 1.9.

Lemma 1.20 Commutativity of concurrently enabled transitions
Let 𝜇 : 𝑇 → N be a multiset of transitions of a net 𝑁 and 𝑀 : 𝑆 → N be a marking of 𝑁 such that 𝑀 −𝜇→. Then, for every sequence 𝜏 ∈ 𝑇∗ with P(𝜏) = 𝜇, we have 𝑀 −𝜏→.

Proof: Using Lemma 1.14 repeatedly.  □ 1.20

In other words, concurrently enabled transitions can be fired in some sequence in any arbitrary order, reaching a unique (by Lemma 1.17) follower marking.

Lemma 1.20 is the basis (i) for being able to write 𝑀 −𝜇→ 𝑀′ with a unique follower marking 𝑀′, and (ii) for several semantics of Petri nets which extend the firing sequence semantics described in the present chapter, such as step semantics or partial order semantics (for which see Chapter 9 and Sections 7.7, 9.9, and 19.5). Figure 1.4 explains both Definition 1.19 and Lemma 1.20.

Fig. 1.4: A Petri net and a marking 𝑀. The multiset {𝑡1, 𝑡2, 𝑡2} is concurrently enabled, and the multiset {𝑡1, 𝑡1, 𝑡2} is not concurrently enabled. All sequences 𝑡1𝑡2𝑡2, 𝑡2𝑡1𝑡2, and 𝑡2𝑡2𝑡1 are enabled. Each of them leads to the marking (1 0 0)ᵀ.

1.5 Exercises

13

1.4 Bibliographical remarks and further reading Petri nets were conceived in [Pet62] and brought into the form presented in this book, called the place/transition nets, by several research teams, prominent amongst whom were Hartmann J. Genrich, Kurt Lautenbach, Fred G. Commoner, Anatol W. Holt, Amir Pnueli, et al. (see [Com+71; GL73]). The basic motivation of Petri nets, relating to locality and duality as mentioned above in Section 1.1, is described more fully in [Smi15], by Einar Smith. At around the same time (the late Sixties of the past century), a closely related model, the vector addition systems, was put forward and investigated by Richard M. Karp and Raymond E. Miller [KM69]. A difference is that vector addition systems cannot express self-loops. For more information on vector addition systems, see also Sections 5.10 and 10.6. Several basic textbooks and surveys articles are concerned with Petri nets (and place/transition nets in particular), for instance the ones by Wolfgang Reisig [Rei85; Rei13], James L. Peterson [Pet77], G.W. Brams [GB+82], and Tadao Murata [Mur89], or, slightly more recently, [PW03] (by Lutz Priese and Harro Wimmel), and even more recently, [Liu22] (by Guanjun Liu).

1.5 Exercises Exercise 1.1 Drawing a Petri net Draw the Petri net 𝑁 = (𝑆, 𝑇, B, F, 𝑀0 ) with 𝑆 = {𝑠1 , 𝑠2 , 𝑠3 , 𝑠4 }, 𝑇 = {𝑡 1 , 𝑡2 , 𝑡3 , 𝑡4 }, and 𝑡1 𝑡2 𝑡3 𝑡4

© ­ B=­ ­ «

0 0 1 0

0 0 1 2

0 0 0 3

1 ª 0® ® 0® 1¬

𝑡1 𝑡2 𝑡3 𝑡4

𝑠1 𝑠2 𝑠3 𝑠4

,

011 © ­ 000 F=­ ­ 012 « 101

0 ª 0® ® 1® 0¬

𝑠1 𝑠2 𝑠3 𝑠4

,

0 © ª ­0® 𝑀0 = ­ ® ­0® «3¬

Exercise 1.2 Understanding basic concepts Answer the following questions about the Petri net 𝑁 shown in Exercise 1.1. Justify your answers. a) What is • 𝑠3 ? What is B(𝑠3 )? b) Where does the net have side-loops? Which are its isolated elements? c) Which transitions cannot fire in the initial state 𝑀0 ? d) Specify a firing sequence that can be fired in the initial state 𝑀0 and contains every transition at least once. e) Determine a marking 𝑀 ∈ N𝑆 , which is as small as possible, and in which every one of the four transitions in 𝑇 as well as the sequence 𝜎 = 𝑡 1 𝑡2 𝑡3 𝑡4 are firable. f) Is there a firing sequence from 𝑀0 after which no transition is firable?

14

1 First Steps in Petri Nets 𝜎𝑖

g) For which places 𝑠𝑖 do there exist firing sequences 𝜎𝑖 with 𝑀0 −→ 𝑀𝑖 and 𝑀𝑖 (𝑠𝑖 ) = 𝑀0 (𝑠𝑖 ) + 1 as well as 𝑀𝑖 (𝑠) = 𝑀0 (𝑠) for 𝑠 ≠ 𝑠𝑖 ? h) Are the markings (1 0 0 0) T or (0 0 1 1) T reachable from 𝑀0 ? i) Let 𝑘 ∈ N be arbitrary. Specify a firing sequence from 𝑀0 in which 𝑡4 occurs exactly 𝑘 times. j) How should an initial state look like so that no arbitrarily long firing sequences exist? k) Are there two different multisets which are enabled at the initial marking? ℓ) Is there a reachable marking at which two different singleton sets are enabled? Exercise 1.3 Reachability set Determine the reachability set [𝑁⟩ of the Petri net 𝑁 shown in Exercise 1.1. Exercise 1.4 Constructing a net 𝑡1 𝑡2 Find a Petri net with two transitions 𝑡 1 , 𝑡2 and a marking 𝑀 such that 𝑀 −→ and 𝑡2 𝑡1 ¬(𝑀 −→ ). Exercise 1.5 Constructing nets For every one of the following two items, construct a Petri net having the required properties. a) In the initial state of the Petri net 𝑁1 , the sequences 𝜎1 = 𝑡 1 𝑡 2 𝑡 3 𝑡 2 and 𝜎2 = 𝑡3 𝑡 2 (and their prefixes) are firable, but no other sequences (such as, for instance, 𝑡 2 or 𝑡3 𝑡2 𝑡1 or 𝑡 1 𝑡 1 ). Also construct the initial state. b) The Petri net 𝑁2 has exactly three places, 𝑠1 , 𝑠2 and 𝑠3 , and it satisfies [(3 3 3) T⟩ = {(𝑖 𝑗 𝑘) T | 𝑖 + 𝑗 + 𝑘 = 9 ∧ 𝑖 < 9 ∧ 𝑗 < 9 ∧ 𝑘 < 9}. Exercise 1.6 Firing rule (an exchange lemma) Let (𝑁, 𝑀) = (𝑆, 𝑇, 𝐹, 𝑀) be a Petri net with a marking 𝑀. Let 𝑢, 𝑣 ∈ 𝑇 be two 𝑣𝑢 𝑢𝑣 transitions with • 𝑢 ∩ 𝑣• = ∅. Prove that if 𝑀 −→ then 𝑀 −→ . Solution 1.1 (Exercise 1.1) See Figure 1.5. Solution 1.2 (Exercise 1.2) a): • 𝑠3 = {𝑡 2 , 𝑡3 , 𝑡4 }, B(𝑠3 ) = (1 1 0 0) b): simple loop: between 𝑠3 and 𝑡2 and between 𝑠4 and 𝑡3 (only the former is plain); isolated: 𝑠2 c): 𝑡 1 , 𝑡2 , 𝑡4 cannot be executed in the initial state. d) 𝑡 3 𝑡 1 𝑡2 𝑡1 𝑡4 e): (1 0 2 4) T

1.5 Exercises

15

𝑠2

𝑠3

𝑡2 2 2 𝑡3

𝑡1

3 𝑠1

𝑡4

𝑠4

Fig. 1.5: Solution of Exercise 1.1. f): 𝑡 3 𝑡 4 𝑡 1 𝑡1 𝑡2 𝑡1 𝑡4 𝑡 1 , leading to the marking (0 0 0 1) T g): Only for 𝑠1 (𝑡3 𝑡1 𝑡 1 ). For 𝑠2 , 𝑠3 , 𝑠4 , see Exercise 1.3. h): No for both i) (𝑡3 𝑡4 𝑡 1 𝑡 1 𝑡 2 𝑡 1 ) 𝑘 j): (𝑖 𝑗 𝑘 ℓ) T with 𝑘 + ℓ < 3 (see Exercise 1.3) k): Yes. The initial marking enables both the empty multiset of transitions and the singleton set {𝑡 3 }. ℓ): Yes. After firing 𝑡3 , a marking is reached which enables the singleton set {𝑡1 } and the singleton set {𝑡 4 } (and even the multiset {𝑡1 , 𝑡4 }). Solution 1.3 (Exercise 1.3) Claim: [𝑁⟩ = { (𝑖 𝑗 𝑘 ℓ) T | 𝑖 ∈ N, 𝑗 = 0, (𝑘 + ℓ = 1 ∨ 𝑘 + ℓ = 3) } 𝑡3 𝑡4 𝑡1 𝑡1 𝑡1

Place 𝑠2 remains token-free throughout any firings. By 𝑀 −−−−−→ −−− , it is possible to cycle through all possible markings where the number of tokens on place 𝑠1 stays the same and the sum of the tokens on places 𝑠3 and 𝑠4 is 3. This can be done regardless 𝑡3 𝑡1 𝑡1 of the number of tokens on 𝑠1 (including zero). By 𝑀 −→ from a marking in which place 𝑠3 has zero tokens and place 𝑠4 has 3 tokens, the number of tokens on place 1 can be increased by 1. Firing 𝑡 2 kills 𝑡 3 , and henceforth, only one token is ever on places 𝑠3 and 𝑠4 ; again, all numbers of tokens on 𝑠1 are possible. The claim follows. Solution 1.4 (Exercise 1.4) Set of places {𝑠}, set of transitions {𝑡1 , 𝑡2 }, marking 𝑀 (𝑠) = 1, and flow relation 𝐹 (𝑠, 𝑡1 ) = 𝐹 (𝑡 1 , 𝑠) = 𝐹 (𝑠, 𝑡2 ) = 1, 𝐹 (𝑡2 , 𝑠) = 0. Solution 1.5 (Exercise 1.5) See Figure 1.6. b): Every transition consumes as many tokens as it produces, so that the number of tokens (9) remains constant. Furthermore, it is impossible to move all tokens onto

16

1 First Steps in Petri Nets

a single place, since every transition produces tokens on different places. As long as no place is token-free, all transitions can fire and essentially distribute tokens arbitrarily, as long as every place retains at least one token. It remains to show that markings in which one place is token-free are also reachable. Assume that we want to have 𝑥 tokens on 𝑠1 and 9 − 𝑥 tokens on 𝑠2 (for 0 < 𝑥 < 9). To reach this marking, we may fire (3 3 3) T

𝑡1 𝑡1 𝑡2 𝑡2 𝑡2 𝑡2

−−−−−→ −−−

(7 1 1) T

𝑡38− 𝑥

𝑡2

and then (7 1 1) T −→ (𝑥 −1 9−𝑥 1) T −→ (𝑥 9−𝑥 0) T

𝑠1 b): 𝑡1

a):

𝑡3

𝑡2 𝑡2

𝑠3

𝑡3

𝑠2

𝑡1

Fig. 1.6: Solution of Exercise 1.5.

Solution 1.6 (Exercise 1.6) 𝑣

𝑢

𝑢

𝑣

Suppose that 𝑀 −→ 𝐾 −→ . We want to show that 𝑀 −→ 𝐾 ′ −→ . To this end, we need to show that 𝑀 enables 𝑢 and that 𝐾 ′ enables 𝑣. • 𝑀 enables 𝑢: Let 𝑠 ∈ • 𝑢. By • 𝑢 ∩ 𝑣• = ∅, 𝑠 ∉ 𝑣• , and therefore, 𝑀 (𝑠) ≥ 𝐾 (𝑠). 𝑢 By 𝐾 −→ , 𝐾 (𝑠) ≥ 𝐹 (𝑠, 𝑢). By transitivity, 𝑀 (𝑠) ≥ 𝐹 (𝑠, 𝑢). Since this is true for every 𝑠 ∈ • 𝑢, 𝑀 enables 𝑢. 𝑣𝑢

• 𝐾 ′ enables 𝑣: Let 𝑞 ∈ • 𝑣. First, observe that because of 𝑀 −→ and • 𝑢 ∩ 𝑣• = ∅, 𝑀 (𝑞) ≥ 𝐹 (𝑞, 𝑣) + 𝐹 (𝑞, 𝑢). Then, 𝐾 ′ (𝑞) = 𝑀 (𝑞) − 𝐹 (𝑞, 𝑢) + 𝐹 (𝑢, 𝑞) ( by the firing rule ) ≥ 𝐹 (𝑞, 𝑣) + 𝐹 (𝑢, 𝑞) ( by 𝑀 (𝑞) ≥ 𝐹 (𝑞, 𝑣) + 𝐹 (𝑞, 𝑢) ) ≥ 𝐹 (𝑞, 𝑣) ( by 𝐹 (𝑢, 𝑞) ≥ 0 ) Since this is true for every 𝑞 ∈ • 𝑣, 𝐾 ′ enables 𝑣.

Chapter 2

Languages of Petri Nets

Abstract If the transitions of a Petri net are associated with the letters of some alphabet, then the firing sequences correspond to words over that alphabet. The set of firing sequences is clearly prefix-closed, but if a notion of final state (in addition to the initial marking) is also introduced, some interesting, not necessarily prefixclosed, sets of accepted words (i.e., languages) can be investigated. In this chapter, the languages which arise through such an extension are characterised.

Contents 2.1 2.2 2.3 2.4 2.5 2.6 2.7 2.8

Introduction . . . . . . . . . . . . . . . . . . . . Labelled Petri nets, and Petri net languages . . . Simulation of finite automata . . . . . . . . . . . Petri net languages are context-sensitive . . . . . Closure properties of Petri net languages . . . . Algebraic characterisation of Petri net languages Bibliographical remarks and further reading . . . Exercises . . . . . . . . . . . . . . . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

17 18 20 22 33 36 37 38

2.1 Introduction The idea of expressing concurrency explicitly in the Petri net model is appealing, but nevertheless it is also important to study Petri net languages, in which concurrency information is absent. We will once again resort to the analogy to finite automata in order to understand what concepts may still be missing for Petri nets to function “like finite automata”. Concepts that have already been defined are: states; state transitions; and the starting state. What we still need are: end states; and inscriptions © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 E. Best and R. Devillers, Petri Net Primer, Computer Science Foundations and Applied Logic, https://doi.org/10.1007/978-3-031-48278-6_2

17

18

2 Languages of Petri Nets

for transitions. In a finite automaton, each transition is effected by some letter of its alphabet, without constraint. So far, however, transitions in Petri nets have unique names. We will now relax this requirement slightly and allow transitions to be labelled by letters of some fixed, given alphabet. When we finally define the set of words accepted by such a Petri net, what counts are the labellings of transitions, rather than their names. To start this section, we introduce a few concepts from the theory of formal languages such as alphabets and homomorphisms, and then we enrich Petri nets by end states and transition labellings.

2.2 Labelled Petri nets, and Petri net languages An alphabet is simply a finite, nonempty set. We will use uppercase Greek letters for alphabets, such as Σ (standing for “symbols”). If Σ is an alphabet, then Σ∗ denotes the set of finite words (including the empty word 𝜀) over Σ. The set Σ∗\{𝜀} is denoted by Σ+ . A language (over Σ) is a subset of Σ∗ . Let Γ and Σ be two alphabets. In general, a homomorphism is a structure-preserving mapping. In the case of formal languages, we consider mappings from Γ∗ to Σ∗ . The algebraic structure of the set of words is determined by the empty word, 𝜀, and by the concatenation of words. These must be preserved by a word homomorphism ℎ. Consequently, a function ℎ : Γ∗ → Σ∗ will be called a word homomorphism, if ℎ(𝜀) = 𝜀, and ℎ(𝑣𝑤) = ℎ(𝑣)ℎ(𝑤) for arbitrary words 𝑣 and 𝑤 in Γ∗ . A homomorphism ℎ is completely determined by the images of letters 𝑎 ∈ Γ, since the images of words 𝑤 = 𝑎 1 . . . 𝑎 𝑛 can be computed as follows:  𝜀 if 𝑛 = 0 ℎ(𝑎 1 . . . 𝑎 𝑛 ) = (2.1) ℎ(𝑎 1 ) . . . ℎ(𝑎 𝑛 ) if 𝑛 > 0 By (2.1), it follows that every function ℎ : Γ → Σ∗ can uniquely be extended to a homomorphism from Γ∗ to Σ∗ . A homomorphism ℎ is called non-erasing, if ℎ(𝑎) ≠ 𝜀 for all 𝑎 ∈ Γ, and fine, if ℎ(𝑎) ∈ Σ for all 𝑎 ∈ Γ. Definition 2.1 labelled Petri net with end states A labelled Petri net 𝑁 with end states is a tuple of the form 𝑁 = (𝑆, 𝑇, B, F, 𝑀0 , M 𝑓 , Σ, ℎℓ ) where (𝑆, 𝑇, B, F, 𝑀0 ) is an initial Petri net, Σ is an alphabet, ℎℓ : 𝑇 → Σ is a labelling and M 𝑓 ⊆ N𝑆 is a finite set of end states (or final states).

2.2 Labelled Petri nets, and Petri net languages

19

As in (2.1), ℎℓ can be extended to a (fine, hence non-erasing) homomorphism ℎℓ : 𝑇 ∗ → Σ ∗ : ℎℓ (𝜀) := 𝜀 ℎℓ (𝜎𝑡) := ℎℓ (𝜎)ℎℓ (𝑡) and further, for languages 𝐿 ⊆ 𝑇 ∗ : ℎℓ (𝐿) =

Ø

ℎℓ (𝑤)

𝑤∈ 𝐿

A labelled Petri net with final states is called free if Σ = 𝑇 and ℎℓ = id𝑇 (i.e. if ℎℓ is the trivial labelling, which means that the net is de facto not labelled). Given a labelled Petri net 𝑁 = (𝑆, 𝑇, B, F, 𝑀0 , M 𝑓 , Σ, ℎℓ ) with final states, the free version of 𝑁 is defined to be the net 𝑁 𝑓 := (𝑆, 𝑇, B, F, 𝑀0 , M 𝑓 , 𝑇, id𝑇 ) Throughout this chapter, we will mean a labelled Petri net with final states whenever we simply speak of a “Petri net”. Definition 2.2 Language of a Petri net Let 𝑁 = (𝑆, 𝑇, B, F, 𝑀0 , M 𝑓 , Σ, ℎℓ ) be a labelled Petri net with final states. The set 𝜎

𝐿(𝑁) := { 𝑤 ∈ Σ∗ | ∃𝜎 ∈ 𝑇 ∗ ∃𝑀 𝑓 ∈ M 𝑓 : 𝑀0 −→ 𝑀 𝑓 ∧ 𝑤 = ℎℓ (𝜎) } is called the language of 𝑁. We define L to be the class of all languages of Petri nets. Moreover, L 1 is defined to be the class of all languages of Petri nets having exactly one final state (i.e., |M 𝑓 | = 1). If 𝑁 is free, then we have 𝐿(𝑁) ⊆ 𝑇 ∗ because of Σ = 𝑇. If, additionally, [𝑀0 ⟩ is finite (i.e., if 𝑁 is bounded) and M 𝑓 = [𝑀0 ⟩, then 𝐿(𝑁) is simply the set of firing sequences of 𝑁. In that case, 𝐿(𝑁) is also prefix-closed, i.e., if 𝑤 ∈ 𝐿(𝑁) and 𝑣 is a prefix of 𝑤, then also 𝑣 ∈ 𝐿(𝑁). Observation 2.3 By the definitions of 𝐿(𝑁), ℎℓ and 𝑁 𝑓 , we have 𝐿 (𝑁) = ℎℓ (𝐿 (𝑁 𝑓 )). Clearly, L 1 ⊆ L holds true. We shall later see that the inclusion is strict. Example 2.4 A context-sensitive language Consider the Petri net in Figure 2.1 with the initial state 𝑀0 = (1 0 0 1 0) T and the (only) final state 𝑀 𝑓 = (0 0 0 0 1) T . The transitions have not been named explicitly, but their respective labelling is shown in this figure. The

20

2 Languages of Petri Nets

language of the net, 𝑁, is 𝐿 (𝑁) = {𝑎 𝑚 𝑏 𝑚 𝑐 𝑚 | 𝑚 ∈ N, 𝑚 ≥ 1} It is well-known, from any classical textbook on Automata Theory, that this language is context-sensitive but not context-free. It follows immediately that Petri nets accept a different class of languages than finite automata or than pushdown automata. If we choose, alternatively, M 𝑓 = {(0 0 0 0 1) T , (0 0 1 1 0) T }, then 𝐿 (𝑁) = {𝑎 𝑚 𝑏 𝑚 𝑐 𝑚 | 𝑚 ∈ N, 𝑚 ≥ 1} ∪ {𝑎}

𝑡1

𝑠2

𝑎

𝑡3

𝑠4

𝑡5 𝑐

𝑏

Initial marking: 𝑀0 = (1 0 0 1 0) T Set of final markings:

𝑎 𝑠1

𝑡2

{ (0 0 0 0 1) T }

𝑏 𝑠3

𝑡4

𝑠5

Fig. 2.1: A Petri net whose language is context-sensitive, but not context-free. In order to reach the marking which has a single token on 𝑠5 and nowhere else from the initial marking, a nonzero number of 𝑎’s must be followed by the same number of 𝑏’s, and then by the same number of 𝑐’s. This is effected by the place 𝑠2 (which counts the number of 𝑎’s except the last one) and by the place 𝑠4 (which counts the number of 𝑏’s except the last one). Since Petri nets accept neither exactly all regular nor exactly all context-free languages, the question arises how powerful they are in respect to language acceptance. Our next goal is to examine their position in the well-known Chomsky hierarchy of formal language theory.

2.3 Simulation of finite automata We start with the observation that every regular language can, in fact, be accepted by a Petri net. The idea is very simple. Let REG be the set of regular languages. As is well known, every language in REG can be accepted by a finite automaton. We draw this automaton as usual, and then we replace every state of the automaton by a place and every arrow of the automaton by a sequence arc–transition–arc. In this way, we not only introduce the transitions of a Petri net, but we also turn every arc inscription of the finite automaton into a transition label. Further, we mark the place

2.3 Simulation of finite automata

21

corresponding to the initial state of the automaton by one token, and we inherit its final states. In this way, a labelled Petri net with final states is obtained. We may formalise this as follows: Proposition 2.5 Regular languages are Petri net languages REG ⊆ L Proof: We show that every finite deterministic automaton 𝐴 = (𝑆, Σ, 𝛿, 𝑠0 , F ) (with state set 𝑆, state transition function 𝛿 : 𝑆 × Σ → 𝑆, start state 𝑠0 ∈ 𝑆 and set F ⊆ 𝑆 of final states) can be transformed into a Petri net 𝑁 with 𝐿(𝑁) = 𝐿( 𝐴). We re-interpret every element 𝑠 from 𝑆 as a unit vector 𝑀 = (0 0 . . . 1 . . . 0 0) T with |𝑆| entries, whose entry 1 is exactly at the position corresponding to 𝑠. Then we define 𝑁 := (𝑆, 𝑇, 𝐹, 𝑀0 , M 𝑓 , Σ, ℎℓ ) where 𝑀0 is the unit vector belonging to 𝑠0 , M 𝑓 is the set of unit vectors belonging to F , and 𝑇 is the set {(𝑀, 𝑎, 𝑀 ′ ) | 𝑎 ∈ Σ ∧ 𝛿(𝑀, 𝑎) = 𝑀 ′ }. Further, let 𝐹 (𝑀, (𝑀, 𝑎, 𝑀 ′ )) := 1 and 𝐹 ((𝑀, 𝑎, 𝑀 ′ ), 𝑀 ′ ) := 1, 𝐹 equal 0 elsewhere, and let ℎℓ ((𝑀, 𝑎, 𝑀 ′ )) := 𝑎 for all (𝑀, 𝑎, 𝑀 ′ ) ∈ 𝑇. The following equivalence can be deduced directly from these definitions: b 𝑎, 𝑀 b′) ∈ 𝑇 : ∀𝑀, 𝑀 ′ ∈ 𝑆 ∀( 𝑀, 𝑀

c 𝑀 c′ ) ( 𝑀,𝑎, −→ −−− 𝑁

b ∧ 𝑀′ = 𝑀 b ′ ∧ 𝛿(𝑀, 𝑎) = 𝑀 ′ 𝑀 ′ ⇐⇒ 𝑀 = 𝑀

Since the start state and the final states were translated one-by-one, 𝐿 (𝑁) = 𝐿 ( 𝐴) 2.5 follows from this equivalence. Thus, Petri nets are at least as powerful as (deterministic) finite automata, considering their language acceptance behaviour. All regular languages are also Petri net languages. Now we clarify their relationship to the other language classes of the Chomsky hierarchy. We have already seen that a non-context-free language, namely {𝑎 𝑚 𝑏 𝑚 𝑐 𝑚 | 𝑚 ≥ 1}, can be generated by a Petri net. The following result states that there is also a context-free language which cannot be generated by a Petri net: Proposition 2.6 2-letter palindromes are not Petri net acceptable The language PAL = {𝑤𝑤 𝑅 | 𝑤 ∈ {𝑎, 𝑏}∗ }, where 𝑤 𝑅 is the reverse of 𝑤, cannot be generated by a Petri net. The idea is that a Petri net cannot store unlimited information about the ordering of letters thus far generated in its finite set of places. But such information is needed in order to generate the reverse of some word. By contrast, a Petri net can store a finite set of unlimited numbers, viz. one for each place. This is why the language {𝑎 𝑚 𝑏 𝑚 𝑐 𝑚 | 𝑚 ≥ 1} can be generated. Proof: Suppose that PAL can be generated by a Petri net with 𝑚 transitions and initial state 𝑀0 . The proof consists of two steps.

22

2 Languages of Petri Nets

Step 1: We estimate how many states can at most be reached from 𝑀0 with firing sequences 𝜎 whose lengths do not exceed some number ℓ ∈ N, i.e., for which |𝜎| ≤ ℓ. Since according to Lemma 1.17, the reached states depend only on P (𝜎), we need to estimate how many Parikh vectors P (𝜎) with |𝜎| ≤ ℓ exist. In such a Parikh vector, every transition can occur from zero to ℓ times, that is, there are at most (ℓ + 1) 𝑚 Parikh vectors corresponding to sequences of length at most ℓ. It follows that sequences of length less or equal to ℓ can lead to at most (ℓ + 1) 𝑚 states. Step 2: Choose ℓ to be such that (ℓ + 1) 𝑚 < 2ℓ (which is always possible since exponentials grow faster than polynomials). Then, by Step 1 and by the fact that the number of {𝑎, 𝑏}-words of length ℓ is 2ℓ , there are two different words 𝑤, 𝑤′ of length ℓ such that after reading them, 𝑁 reaches the same marking. But then 𝑁 accepts not only the word 𝑤𝑤 𝑅 , but also the word 𝑤′ 𝑤 𝑅 , which is not in PAL. 2.6 Thus, it is already clear that Petri net languages and context-free languages are askew; neither class is contained in the other. In the Chomsky hierarchy, context-sensitive languages are the next bigger class properly containing the context-free languages. In the following, we will show that Petri net languages are indeed context-sensitive. Proposition 2.6 implies that this containment is proper, since PAL is context-sensitive. In order to prove that every Petri net language is also context-sensitive, we need to go through several steps. At the end, we will also have proved an algebraic characterisation of the class of Petri net languages; and as corollaries, some closure properties will be revealed.

2.4 Petri net languages are context-sensitive 2.4.1 The Petri net 𝑵 𝑫 , the Dyck language, and the shuffle We start our investigation about the context-sensitivity of Petri net languages with a very simple Petri net, called 𝑁 𝐷 and shown in Figure 2.2. The index 𝐷 stands for Dyck language. This language describes bracketing structures. It is well-known from the theory of context-free languages. 𝑠 𝑎

𝑏

𝑁𝐷

Fig. 2.2: The Petri net 𝑁 𝐷 , shown here without (or with empty) initial state. Let us assume for a start that there is only one final state and that both start state and final state are the empty marking (i.e. zero tokens on place 𝑠). Then it is clear that every word in 𝐿 (𝑁 𝐷 ) must contain exactly the same number of 𝑎’s and 𝑏’s, since place 𝑠 is empty in the initial state and must be empty in the final state. Further, there

2.4 Petri net languages are context-sensitive

23

can be no 𝑏 if previously there were as many 𝑎’s as 𝑏’s, or put differently: every prefix of a word in 𝐿(𝑁 𝐷 ) must contain at least as many 𝑎’s as 𝑏’s. But this is exactly the description of correct bracketing (with only one kind of brackets), where 𝑎 is the opening bracket and 𝑏 is the closing bracket. Formally: Definition 2.7 Dyck language 𝐷 Let Pref 𝑤 denote the set of all prefixes of a word 𝑤. The Dyck language 𝐷 is defined as 𝐷 := {𝑤 ∈ {𝑎, 𝑏}∗ | #𝑎 (𝑤) = #𝑏 (𝑤) ∧ ∀𝑢 ∈ Pref 𝑤 : #𝑎 (𝑢) ≥ #𝑏 (𝑢)} where #𝑎 (𝑤) is the number of occurrences of 𝑎 in 𝑤 (see Definition 1.15). We now consider an arbitrary start state and an arbitrary (but singular) end state of 𝑁 𝐷 . This complicates the situation slightly. We define the number of tokens on 𝑠 in the start state, 𝑖, and the number of tokens on 𝑠 in the final state, 𝑗, as parameters of 𝑁 𝐷 , obtaining an infinite collection of nets. Definition 2.8 The net 𝑁 𝐷 , with start state and final state For 𝑖, 𝑗 ∈ N, define 𝑁 𝐷 (𝑖, 𝑗) := (𝑆, 𝑇, 𝐹, 𝑀0 , {𝑀 𝑓 }, Σ, ℎℓ ), with 𝑆 = {𝑠}, 𝑇 = {𝑡1 , 𝑡2 }, 𝐹 (𝑡 1 , 𝑠) = 1 = 𝐹 (𝑠, 𝑡2 ), 𝐹 (𝑠, 𝑡1 ) = 0 = 𝐹 (𝑡 2 , 𝑠), 𝑀0 = (𝑖), 𝑀 𝑓 = ( 𝑗), Σ = {𝑎, 𝑏}, ℎℓ (𝑡 1 ) = 𝑎 and ℎℓ (𝑡2 ) = 𝑏.

Example 2.9 𝑁 𝐷 (0, 0) and 𝑁 𝐷 (1, 1) Definition 2.8 includes the case where 𝑖 = 𝑗 = 0. Then 𝑁 𝐷 (0, 0) is the Petri net shown in Figure 2.2 with initial state (0) and final state (0). Let us consider, by contrast, the language generated by 𝑁 𝐷 (1, 1). Apparently, this language strongly resembles the language of 𝑁 𝐷 (0, 0), but there is a difference; namely, in any prefix of a word 𝑤 ∈ 𝐿 (𝑁 𝐷 (1, 1)), the number of 𝑏’s may exceed the number of 𝑎’s by at most 1. But still, both letters must occur equally often in any word of the language. There exists an operation on languages, called the shuffle, that allows such situations to be expressed conveniently. Intuitively, the shuffle works almost as it would for a pack of cards. We may imagine some pack of cards to be divided into two halves and the second half be merged into the first. Instead of cards we may think of letters, and the two half-packs then correspond to words. Shuffling these two words leaves the order of letters stemming from one of the words unchanged. If two letters are not from the same word, however, we cannot predict their order in the resulting word. Formally, we may define the shuffle of two words as follows:

24

2 Languages of Petri Nets

Definition 2.10 The shuffle operation Let 𝑣, 𝑤 ∈ Σ∗ be two words and let Σ′ := {𝑎 ′ | 𝑎 ∈ Σ} be a new alphabet with Σ ∩ Σ′ = ∅. The shuffle of 𝑣 and 𝑤 is defined as 𝑣

𝑤

:= { ℎ(𝑢) ∈ Σ∗ | 𝑢 ∈ (Σ ∪ Σ′ ) ∗ ∧ ℎ1 (𝑢) = 𝑣 ∧ ℎ2 (𝑢) = 𝑤 }

where ℎ, ℎ1 and ℎ2 are homomorphisms from (Σ ∪ Σ′ ) ∗ to Σ∗ , such that for each 𝑎 ∈ Σ ℎ(𝑎) = ℎ(𝑎 ′ ) = 𝑎 ℎ1 (𝑎) = 𝑎, ℎ1 (𝑎 ′ ) = 𝜀 and ℎ2 (𝑎) = 𝜀, ℎ2 (𝑎 ′ ) = 𝑎 The shuffle can be extended canonically to languages as follows: Ø Ø 𝐿 1 𝐿 2 := 𝑢 𝑣





𝑢∈ 𝐿1 𝑣∈ 𝐿2

for two languages 𝐿 1 and 𝐿 2 on a same alphabet. This definition needs to be explained. The idea is to attach a mark (e.g., a superscript dash) to every letter of the word 𝑤, simply in order to be able to tell the letters of 𝑤 apart from the letters of 𝑣. The two words are then shuffled, obtaining, amongst others, a word 𝑢. If we erase the superscripted letters in 𝑢 (this being the function of ℎ1 ), we obtain 𝑣. On the other hand, if we erase the unmarked letters in 𝑢 (using ℎ2 ), then 𝑤 is obtained. Finally, if we simply erase all markers (without erasing any letters), then a word out of the shuffle of 𝑣 and 𝑤 is obtained. This is done by ℎ. Example 2.11 An example of the shuffle operation Let 𝑣 := 𝑎𝑏 and 𝑤 := 𝑐𝑏𝑑. Then 𝑣 𝑤 = {𝑎𝑏𝑐𝑏𝑑, 𝑎𝑐𝑏𝑏𝑑, 𝑎𝑐𝑏𝑑𝑏, 𝑐𝑎𝑏𝑏𝑑, 𝑐𝑎𝑏𝑑𝑏, 𝑐𝑏𝑎𝑏𝑑, 𝑐𝑏𝑎𝑑𝑏, 𝑐𝑏𝑑𝑎𝑏}. For instance, 𝑐𝑎𝑏𝑏𝑑 is obtained in two possible ways. We may put 𝑢 := 𝑐 ′ 𝑎𝑏𝑏 ′ 𝑑 ′ , or we may put 𝑢 := 𝑐 ′ 𝑎𝑏 ′ 𝑏𝑑 ′ . In both cases, we have ℎ1 (𝑢) = 𝑎𝑏, ℎ2 (𝑢) = 𝑐𝑏𝑑, and ℎ(𝑢) = 𝑐𝑎𝑏𝑏𝑑.



How can the shuffle be used in order to represent 𝐿 (𝑁 𝐷 (1, 1))? It appears that 𝐿(𝑁 𝐷 (1, 1)) is the same as 𝐷 = 𝐿 (𝑁 𝐷 ) = 𝐿(𝑁 𝐷 (0, 0)), except that one of the 𝑏’s may occur before, rather than after, its corresponding 𝑎. Therefore, we can write 𝐿(𝑁 𝐷 (1, 1)) = 𝐷 {𝑏𝑎, 𝜀}. The 𝑏𝑎 indicates that once (but not twice) a 𝑏 may occur before the corresponding 𝑎. The 𝜀 indicates that such an exchange of ordering does not necessarily have to take place; it is always the case that 𝑤 𝜀 = 𝑤, and hence also 𝐷 𝜀 = 𝐷. We need the shuffle in 𝐷 {𝑏𝑎, 𝜀}, because we don’t know where exactly the reverse-order pair 𝑏, 𝑎 is contained in the resulting word; all positions are equally possible.









2.4 Petri net languages are context-sensitive

25

Example 2.12 A word in 𝐿 (𝑁 𝐷 (1, 1)) We may check that 𝑎𝑏𝑏𝑎𝑏𝑎 ∈ 𝐿(𝑁 𝐷 (1, 1)), by inspection of the net. Let us see how it comes about that 𝑎𝑏𝑏𝑎𝑏𝑎 ∈ 𝐷 {𝑏𝑎, 𝜀}. Choose 𝑢 := 𝑎𝑏𝑏 ′ 𝑎𝑏𝑎 ′ . Then ℎ1 (𝑢) = 𝑎𝑏𝑎𝑏 ∈ 𝐷, ℎ2 (𝑢) = 𝑏𝑎 ∈ {𝑏𝑎, 𝜀}, and ℎ(𝑢) = 𝑎𝑏𝑏𝑎𝑏𝑎, and thus, 𝑎𝑏𝑏𝑎𝑏𝑎 ∈ 𝐷 {𝑏𝑎, 𝜀}.









Analogously, we see that 𝐿(𝑁 𝐷 (2, 2)) = 𝐷 {𝑏𝑎, 𝜀} {𝑏𝑎, 𝜀}, 𝐿 (𝑁 𝐷 (3, 3)) = 𝐷 {𝑏𝑎, 𝜀} {𝑏𝑎, 𝜀} {𝑏𝑎, 𝜀}, and so on. In general, the shuffle operation is commutative and associative.1 This motivates and justifies the definition of a derived operation as follows.







Definition 2.13 The big shuffle operation Let 𝐿 be a language and let 𝑛 ∈ N. Then

 𝐿 := {𝜀},  𝐿 := 𝐿  ( 𝐿), and 𝐿 := Ø  𝐿 For example,  𝐿 = {𝜀},  𝐿 = 𝐿,  𝐿 = 𝐿  𝐿, and so on. Using the big shuffle, we can thus write 𝐿(𝑁 (𝑖, 𝑖)) = 𝐷  (  {𝑏𝑎, 𝜀}) for any 𝑖∈N. This includes 𝑖=0, since {𝜀} 𝐿=𝐿=𝐿 {𝜀} for every language 𝐿. 0

𝑛+1

𝑛

𝑛

𝑛∈N

0

1

2

𝑖

𝐷

The net 𝑁 𝐷 (3, 2) has one token more on 𝑠 initially than 𝑁 𝐷 (2, 2). Therefore, the only difference between 𝐿(𝑁 𝐷 (2, 2)) and 𝐿 (𝑁 𝐷 (3, 2)) is that the additional token on place 𝑠 has to be consumed at some arbitrary point in time. Therefore, we have 𝐿 (𝑁 𝐷 (3, 2)) = 𝐿(𝑁 𝐷 (2, 2)) {𝑏}. More generally:



𝐿 (𝑁 𝐷 ( 𝑗 + 𝑘, 𝑗)) = 𝐿 (𝑁 𝐷 ( 𝑗, 𝑗))

 (𝑘 {𝑏})

We proceed in an analogous way if the second parameter is bigger than the first. In that case, there must be a corresponding number of excess 𝑎’s. In general, we have the following result for arbitrary nets 𝑁 𝐷 (𝑖, 𝑗): Proposition 2.14 Language of 𝑁 𝐷 (𝑖, 𝑗) Let 𝑖, 𝑗 ∈ N. If 𝑖 ≥ 𝑗, then 𝐿(𝑁 𝐷 (𝑖, 𝑗)) = 𝐷 ( and if 𝑖 ≤ 𝑗,

  𝑗 {𝑏𝑎, 𝜀})  (𝑖− 𝑗 {𝑏}) then 𝐿 (𝑁 𝐷 (𝑖, 𝑗)) = 𝐷  ( 𝑖 {𝑏𝑎, 𝜀})  (  𝑗 −𝑖 {𝑎})







Note that if 𝑖 = 𝑗, both formulas reduce to 𝐿 (𝑁 𝐷 (𝑖, 𝑖)) = 𝐷 ( 𝑖 {𝑏𝑎, 𝜀}). Also, for any 𝑘 ∈ N and 𝑎 ∈ Σ, ( 𝑘 {𝑎}) = {𝑎 𝑘 }, but we preferred to use consistently the shuffle operation. 1 See https://encyclopediaofmath.org/wiki/Shuffle_algebra.

26

2 Languages of Petri Nets

2.4.2 Inverse homomorphisms and elementary nets In the net 𝑁 𝐷 , place 𝑠 is connected to exactly two transitions, with two single arcs. Usually, however, this is not the case. We will now extend our considerations in order to describe the language of a net with exactly one place and arbitrarily (but finitely) many transitions and arcs. Such nets will be called elementary, provided that, in addition, they have a single end state. Definition 2.15 Elementary net A Petri net 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0 , M 𝑓 , Σ, ℎℓ ) is called elementary if |𝑆| = 1 and |M 𝑓 | = 1. In order to describe the languages of such nets, we need the results of the previous section (in particular, Proposition 2.14) and one further operation, the inverse homomorphism. Definition 2.16 Inverse homomorphism Let Σ and Γ be alphabets and let ℎ : Σ∗ → Γ∗ be a homomorphism (with the usual properties, i.e., ℎ(𝜀) = 𝜀 and ℎ(𝑣𝑤) = ℎ(𝑣)ℎ(𝑤) for all words 𝑣, 𝑤 ∈ Σ∗ ). For a word 𝑢 ∈ Γ∗ , we define ℎ −1 (𝑢) := {𝑤 ∈ Σ∗ | ℎ(𝑤) = 𝑢} This formula describes the calculation of all words that are mapped to 𝑢 by ℎ, i.e., the domain of 𝑢 under ℎ. The calculation is called an application of the inverse homomorphism ℎ −1 . It can be extended canonically to languages 𝐿 ⊆ Γ∗ : Ø ℎ −1 (𝐿) := ℎ −1 (𝑢) 𝑢∈ 𝐿

What is the effect of such an inverse homomorphism? Let us consider an example. Example 2.17 An inverse homomorphism Let ℎ : {𝑎, 𝑏, 𝑐, 𝑑}∗ → {𝑎, 𝑏}∗ be the homomorphism defined (uniquely) by ℎ(𝑎) := 𝑎, ℎ(𝑏) := 𝑏, and ℎ(𝑐) := 𝑏𝑏𝑎 and ℎ(𝑑) := 𝜀. For example, we have ℎ −1 (𝑎𝑏𝑏𝑎𝑏𝑎𝑏𝑏𝑎) = {𝑎𝑏𝑏𝑎𝑏𝑎𝑏𝑏𝑎, 𝑎𝑐𝑏𝑎𝑏𝑏𝑎, 𝑎𝑏𝑏𝑎𝑏𝑎𝑐, 𝑎𝑐𝑏𝑎𝑐}

 ({𝑑})

(2.2)

How can this be verified? Apparently, every 𝑎 and every 𝑏 may stay. Any sequence 𝑏𝑏𝑎 in the original word might, however, also be replaced by a letter 𝑐. This yields the four words, 𝑎𝑏𝑏𝑎𝑏𝑎𝑏𝑏𝑎, . . . , 𝑎𝑐𝑏𝑎𝑐, shown in (2.2). Since there are “invisible” 𝜀’s between any two letters of (as well as before and after) 𝑎𝑏𝑏𝑎𝑏𝑎𝑏𝑏𝑎, we may also insert arbitrarily many 𝑑’s in any word of

2.4 Petri net languages are context-sensitive

27



the domain of 𝑎𝑏𝑏𝑎𝑏𝑎𝑏𝑏𝑎. This is achieved by shuffling all resulting words with {𝑑}.a a

The latter happens to be the regular language 𝑑 ∗ .

An inverse homomorphism can be applied to the language of a Petri net, for instance to the language of the net 𝑁 𝐷 (3, 2), which happens to contain the word 𝑎𝑏𝑏𝑎𝑏𝑎𝑏𝑏𝑎 from Example 2.17. A more interesting question is whether it is somehow possible to apply an inverse homomorphism “to a Petri net”. We will start with a restricted question: Example 2.18 Example 2.17 continued How should a Petri net look like if its language is required to be ℎ −1 (𝐿 (𝑁 𝐷 (3, 2)))? If we take the homomorphism ℎ from Example 2.17, then the answer is given in Figure 2.3. It can be checked that on the righthand side, 𝑎𝑏𝑏𝑎𝑏𝑎𝑏𝑏𝑎, 𝑎𝑐𝑏𝑎𝑏𝑏𝑎, 𝑎𝑏𝑏𝑎𝑏𝑎𝑐 and 𝑎𝑐𝑏𝑎𝑐 are indeed firable and lead to the final state (with 2 tokens on 𝑠). Moreover, it can be checked that the transition labelled 𝑑 can always be executed.

𝑐

𝑑

2 𝑠

𝑠

𝑎

𝑏

𝑁 𝐷 (3, 2)

𝑎

𝑏

ℎ −1 (𝑁 𝐷 (3, 2))

Fig. 2.3: The Petri net 𝑁 𝐷 (3, 2) (l.h.s.) and the Petri net ℎ −1 (𝑁 𝐷 (3, 2)) (r.h.s.) (in both cases, at the end, 𝑠 must contain two tokens). It is now not difficult to get a construction for a relatively general case. Definition 2.19 The net ℎ −1 (𝑁 𝐷 ) Let 𝑖, 𝑗 ∈ N, let Σ be an alphabet, and let ℎ : Σ∗ → {𝑎, 𝑏}∗ be a homomorphism with ℎ(𝑥) ∈ 𝑏 ∗ 𝑎 ∗ for all 𝑥 ∈ Σ. Then we define ℎ −1 (𝑁 𝐷 (𝑖, 𝑗)) := ({𝑠}, 𝑇, 𝐹, (𝑖), {( 𝑗)}, Σ, ℎℓ ) with 𝑇 = {𝑡 𝑥 | 𝑥 ∈ Σ}, ℎℓ (𝑡 𝑥 ) = 𝑥 for 𝑡 𝑥 ∈ 𝑇, 𝐹 (𝑠, 𝑡 𝑥 ) = #𝑏 (ℎ(𝑥)), and 𝐹 (𝑡 𝑥 , 𝑠) = #𝑎 (ℎ(𝑥)).

28

2 Languages of Petri Nets

Example 2.20 Example 2.17 continued If, for instance, ℎ(𝑐) = 𝑏𝑏𝑎 (as in the above example), then this means that firing 𝑐 must have the same effect as firing 𝑏 two times in a row, followed by an 𝑎. Therefore, the transition labelled 𝑐 must have 𝑠 twice in its pre-set and once in its post-set. This explains the arc weight 2 from 𝑠 to 𝑐 and the arc weight 1 from 𝑐 to 𝑠. Analogously, the transition labelled 𝑑 has empty pre- and post-sets, because ℎ(𝑑) = 𝜀 contains neither 𝑏 nor 𝑎. Practically, 𝑏 stands for the removal of a token from 𝑠 and 𝑎 stands for the insertion of a token into 𝑠. Because the removal of a token always precedes the production of a token in firing a transition, we require ℎ(𝑥) ∈ 𝑏 ∗ 𝑎 ∗ to hold for all 𝑥 ∈ Σ in Definition 2.19. Note that in Definition 2.19, the net ℎ −1 (𝑁 𝐷 (𝑖, 𝑗)) is always elementary, since there is only one place, 𝑠. The arguments just given can be stated and proved more formally as follows. Proposition 2.21 Properties of ℎ −1 (𝑁 𝐷 (𝑖, 𝑗)) Let 𝑖, 𝑗 ∈ N, let Σ be an alphabet, and let ℎ : Σ∗ → {𝑎, 𝑏}∗ be a homomorphism such that ℎ(𝑥) ∈ 𝑏 ∗ 𝑎 ∗ for each 𝑥 ∈ Σ. Then 𝐿(ℎ −1 (𝑁 𝐷 (𝑖, 𝑗))) = ℎ −1 (𝐿(𝑁 𝐷 (𝑖, 𝑗))). Proof: (⊆:) Let 𝑤 ∈ 𝐿 (ℎ −1 (𝑁 𝐷 (𝑖, 𝑗)). Assume 𝑤 = 𝑎 1 . . . 𝑎 𝑛 with 𝑛 ≥ 0 and 𝑎 𝑖 ∈ Σ for 1 ≤ 𝑖 ≤ 𝑛. Then 𝜎 = 𝑡 𝑎1 . . . 𝑡 𝑎𝑛 by the definition of a firing sequence of ℎ −1 (𝑁 𝐷 (𝑖, 𝑗)). By ℎ(𝑎 𝑖 ) = 𝑏 𝐹 (𝑠,𝑡𝑎𝑖 ) 𝑎 𝐹 (𝑡𝑎𝑖 ,𝑠) and because start and end states of ℎ −1 (𝑁 𝐷 (𝑖, 𝑗)) and 𝑁 𝐷 (𝑖, 𝑗) are equal, ℎ(𝑤) = 𝑏 𝐹 (𝑠,𝑡𝑎1 ) 𝑎 𝐹 (𝑡𝑎1 ,𝑠) . . . 𝑏 𝐹 (𝑠,𝑡𝑎𝑛 ) 𝑎 𝐹 (𝑡𝑎𝑛 ,𝑠) is in the language of 𝑁 𝐷 (𝑖, 𝑗). Hence 𝑤 ∈ ℎ −1 (ℎ(𝑤)) ⊆ ℎ −1 (𝐿(𝑁 𝐷 (𝑖, 𝑗))). (⊇:) Let 𝑤 ∈ ℎ −1 (𝐿 (𝑁 𝐷 (𝑖, 𝑗))). Then ℎ(𝑤) ∈ 𝐿 (𝑁 𝐷 (𝑖, 𝑗)) ⊆ Σ∗ . Let 𝑤 = 𝑐 1 . . . 𝑐 𝑛 for some 𝑛 ≥ 0. Then ℎ(𝑤) = 𝑏 𝑖1 𝑎 𝑜1 . . . 𝑏 𝑖𝑛 𝑎 𝑜𝑛 with some 𝑖 𝑘 ∈ N and 𝑜 𝑘 ∈ N. Since 𝑏 can fire 𝑖1 times, followed by 𝑜1 times 𝑎, and since 𝐹 (𝑠, 𝑡 𝑐1 ) = 𝑖1 and 𝐹 (𝑡 𝑐1 , 𝑠) = 𝑜1 , 𝑡 𝑐1 in ℎ −1 (𝑁 𝐷 (𝑖, 𝑗)) can also fire. Firing 𝑡 𝑐1 in ℎ −1 (𝑁 𝐷 (𝑖, 𝑗)) leads to the same state as firing 𝑏 𝑖1 𝑎 𝑜1 in 𝑁 𝐷 (𝑖, 𝑗). We can argue analogously for the second letter of 𝑤 and for every subsequent letter. Hence 𝑡 𝑐1 . . . 𝑡 𝑐𝑛 is a firing sequence in ℎ −1 (𝑁 𝐷 (𝑖, 𝑗)), and 𝑤 = 𝑐 1 . . . 𝑐 𝑛 ∈ 𝐿 (ℎ −1 (𝑁 𝐷 (𝑖, 𝑗))). 2.21 By this result, and using Proposition 2.14, we can determine, without further proof, all languages of free, elementary nets. Corollary 2.22 Languages of elementary nets For every free and elementary net 𝑁 with transition set 𝑇, there exist a homomorphism ℎ with domain 𝑇 ∗ and range {𝑎, 𝑏}∗ (and ℎ(𝑥) ∈ 𝑏 ∗ 𝑎 ∗ for all 𝑥 ∈ 𝑇) and two numbers 𝑖, 𝑗 ∈ N such that either

2.4 Petri net languages are context-sensitive

29

𝐿 (𝑁) = ℎ −1 (𝐷

 ( 𝑗 {𝑏𝑎, 𝜀})  (𝑖− 𝑗 {𝑏}))

𝐿(𝑁) = ℎ −1 (𝐷

 (𝑖 {𝑏𝑎, 𝜀})  ( 𝑗 −𝑖 {𝑎}))

or

(2.3)

(2.4)

depending on whether 𝑖 ≥ 𝑗 or 𝑗 ≥ 𝑖.a a

If 𝑖 = 𝑗, both lines come to the same.

The conclusion of this corollary is not valid for arbitrarily labelled elementary nets, because in this case, the homomorphism cannot be determined uniquely. For some label 𝑐 there might have to be a number of different images ℎ(𝑐), in order to model the (possibly very different) behaviours of all transitions labelled by 𝑐. This is impossible to achieve with a single homomorphism, since ℎ(𝑐) is exactly one word. Instead, the notion of a substitution could be used (but we will not pursue this any further).

2.4.3 Compositions of elementary nets Every Petri net can be decomposed in a very simple way into a set of elementary nets. By singling out some place and omitting all other places, as well as all arcs which are not connected to the chosen place, we obtain an elementary subnet. Definition 2.23 Elementary subnet Let 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0 , {𝑀 𝑓 }, Σ, ℎℓ ) be a Petri net with a single final state and let 𝑠 ∈ 𝑆 be a place. The elementary subnet 𝑁 𝑠 generated by 𝑁 is defined as follows: 𝑁 𝑠 := ({𝑠}, 𝑇, 𝐹 ′ , (𝑀0 (𝑠)), {(𝑀 𝑓 (𝑠))}, Σ, ℎℓ ) with 𝐹 ′ := 𝐹 | ( {𝑠}×𝑇 )∪(𝑇 ×{𝑠} ) . As the name suggests, an elementary subnet 𝑁 𝑠 is a subnet of 𝑁 in the sense of Definition 1.7 (Chapter 1). More precisely, 𝑁 𝑠 = 𝑁 ({𝑠}, 𝑇). As previously, the vertical bar used in this definition denotes the restriction of the domain to the set given in its index. A sequence is firable in a Petri net 𝑁 if and only if it is firable in all elementary subnets of 𝑁: Lemma 2.24 Characteristion of firability by subnets Let 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0 , {𝑀 𝑓 }, Σ, ℎℓ ) be a Petri net with |𝑆| ≥ 1 and let 𝑁 𝑓 be the free version of 𝑁. Then we have, for all 𝑤 ∈ 𝑇 ∗ , 𝑓

𝑤 ∈ 𝐿(𝑁 𝑓 ) ⇐⇒ ∀𝑠 ∈ 𝑆 : 𝑤 ∈ 𝐿(𝑁 𝑠 )

30

2 Languages of Petri Nets

Proof: The language of 𝑁 𝑓 consists of the firing sequences of 𝑁. Hence it is sufficient to analyse these sequences. The lemma is then automatically valid for the firing sequences of 𝑁, but not necessarily for the language of 𝑁. (=⇒:) 𝑡

𝑡

Let 𝑀 ∈ N𝑆 be a state and let 𝑡 ∈ 𝑇 be a transition. If 𝑀 −→ 𝑁 𝑓 𝑀 ′ (where −→ 𝑁 𝑓 𝑡 refers to the fireability relation in 𝑁 𝑓 ), then also 𝑀 (𝑠) −→ 𝑁 𝑓 𝑀 ′ (𝑠) for every place 𝑠

𝑓

𝑓

𝑠 ∈ 𝑆, since in 𝑁 𝑠 , transition 𝑡 is connected (at most) with 𝑠. Because in 𝑁 𝑠 , 𝑡 is connected with 𝑠 in the same way as in 𝑁 𝑓 , the token change on 𝑠 is exactly the same 𝑓 in 𝑁 𝑠 as in 𝑁 𝑓 if 𝑡 fires. Since this observation holds for arbitrary states, it remains valid for arbitrary firing sequences. (⇐=:) 𝑡

Let 𝑀, 𝑀 ′ ∈ N𝑆 be two states and let 𝑡 ∈ 𝑇 be a transition. Suppose that 𝑀 (𝑠) −→ 𝑓 ′ 𝑓 𝑀 (𝑠) for all 𝑠 ∈ 𝑆. This implies that 𝑀 (𝑠) ≥ 𝐹 (𝑠, 𝑡) in 𝑁 𝑠 , and hence also 𝑁 𝑠

𝑓

in 𝑁 𝑓 , because 𝑁 𝑠 and 𝑁 𝑓 have the same arcs with respect to place 𝑠. Hence ∀𝑠 ∈ 𝑆 : 𝑀 (𝑠) ≥ 𝐹 (𝑠, 𝑡), implying immediately that 𝑀 ≥ B(𝑡), hence that 𝑡 is firable 𝑓 in 𝑁 𝑓 . Furthermore, for every 𝑠 ∈ 𝑆 in 𝑁 𝑠 we have 𝑀 ′ (𝑠) = 𝑀 (𝑠) − 𝐹 (𝑠, 𝑡) + 𝐹 (𝑡, 𝑠). This is separately passed on to 𝑁 𝑓 for every 𝑠, entailing 𝑀 ′ = 𝑀 −B(𝑡) +F(𝑡). Hence 𝑡 𝑀 −→ 𝑁 𝑓 𝑀 ′ . It follows that a transition can fire in 𝑁 𝑓 , provided it can fire in every elementary subnet. As 𝑀 was arbitrary, the property is also true for arbitrary firing sequences. 2.24 From this lemma, we can instantly determine the languages of all free Petri nets with a single final state. Proposition 2.25 Languages of free nets with a single final state Let 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0 , {𝑀 𝑓 }, Σ, ℎℓ ) be a Petri net with one final state and let 𝑁 𝑓 be the free version of 𝑁. Then we have: Ù 𝑓 𝐿(𝑁 𝑓 ) = 𝐿(𝑁 𝑠 ) (2.5) 𝑠∈𝑆

Proof: Let 𝑤 ∈ 𝑇 ∗ . Then 𝑓

𝑤 ∈ 𝐿 (𝑁 𝑓 ) ⇐⇒ ∀𝑠 ∈ 𝑆 : 𝑤 ∈ 𝐿(𝑁 𝑠 ) ( Lemma 2.24 ) Ñ 𝑓 ⇐⇒ 𝑤 ∈ 𝑠∈𝑆 𝐿(𝑁 𝑠 ) ( logic ) 2.25 This statement is not true for arbitrary (not necessarily free) Petri nets. A counterexample is shown in Figure 2.4. With final state (0 0) T , the net 𝑁 shown on the left-hand side generates the language {𝜀}. With final states (0), the elementary subnets 𝑁 𝑠1 and 𝑁 𝑠2 of 𝑁 shown in the remaining part of the figure both generate the language {𝑎 𝑛 | 𝑛 ∈ N}. Therefore, the intersection of the languages generated by 𝑁 𝑠1 and by 𝑁 𝑠2 is {𝑎 𝑛 | 𝑛 ∈ N}, which is different from 𝐿 (𝑁) = {𝜀}.

2.4 Petri net languages are context-sensitive

𝑠1

𝑠1

𝑎

31

𝑎

𝑁 𝑠1

𝑁 𝑠2

𝑎

𝑎

𝑁 𝑠2 𝑠2

𝑎

𝑎

Fig. 2.4: The net 𝑁 with final state (0 0) T , and two subnets 𝑁 𝑠1 and 𝑁 𝑠2 with final states (0).

2.4.4 Homomorphic images of free Petri nets In the next step, we will determine the languages of all Petri nets, not necessarily freely labelled, having a single final state. This is very easy to derive from the previous considerations. We only need to re-introduce the labelling that has been removed when the free version of a net was considered. Since the labelling induces a homomorphism, we may simply apply the latter to the derived language. Proposition 2.26 Languages of nets with a single final state Let 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0 , {𝑀 𝑓 }, Σ, ℎℓ ) be a Petri net and let 𝑁 𝑓 be its free version. Then Ù 𝑓 (2.6) 𝐿(𝑁) = ℎℓ (𝐿(𝑁 𝑓 )) = ℎℓ ( 𝐿 (𝑁 𝑠 )) 𝑠∈𝑆

Proof: By Observation 2.3, 𝐿(𝑁) = ℎℓ (𝐿 (𝑁 𝑓 )). The rest follows from Proposition 2.25. 2.26 Let us examine, for instance, what happens with the counterexample of Figure 2.4, assuming the transition connected to 𝑠1 is 𝑡1 and the transition connected to 𝑠2 is 𝑡2 : 𝑓 𝑓 the language of 𝑁 𝑠1 is 𝑡 2∗ and the language of 𝑁 𝑠2 is 𝑡 1∗ , with an empty intersection as expected.

2.4.5 Petri nets with more than one final state The last step of our investigation concerns the number of final states. So far, we know how to describe languages of Petri nets with exactly one final state. If 𝑁 has no final state, then 𝐿 (𝑁) = ∅. The case that 𝑁 has two or more final states is more interesting, but it is still easy to analyse: Proposition 2.27 Languages of general Petri nets Let 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0 , M 𝑓 , Σ, ℎℓ ) be a Petri net. Further, let 𝑁 𝑀 𝑓 := (𝑆, 𝑇, 𝐹, 𝑀0 , {𝑀 𝑓 }, Σ, ℎℓ )

32

2 Languages of Petri Nets

be the Petri net 𝑁, except that its only final state is 𝑀 𝑓 . Then 𝐿(𝑁) =

Ø 𝑀𝑓 ∈M 𝑓

𝐿(𝑁 𝑀 𝑓 )

(2.7)

𝜎

Proof: (⇐=:) If 𝑤 ∈ 𝐿(𝑁 𝑀 𝑓 ), then 𝑀0 −→ 𝑁𝑀 𝑓 𝑀 𝑓 for a firing sequence 𝜎 with ℎℓ (𝜎) = 𝑤. But 𝜎 is also a firing sequence in 𝑁, and because of 𝑀 𝑓 ∈ M 𝑓 , we have 𝑤 = ℎℓ (𝜎) ∈ 𝐿(𝑁). (=⇒:) Let 𝑤 ∈ 𝐿 (𝑁). Then there is a firing sequence 𝜎 with ℎℓ (𝜎) = 𝑤 and Ð 𝜎 𝑀0 −→ 𝑀 for some 𝑀 ∈ M 𝑓 . Hence 𝑤 ∈ 𝐿 (𝑁 𝑀 ) ⊆ 𝑀 𝑓 ∈ M 𝑓 𝐿 (𝑁 𝑀 𝑓 ). 2.27 Note that M 𝑓 is required to be finite by definition, so that the union in (2.7) is actually a finite union.

2.4.6 Petri net languages are context-sensitive Using the results so far, we can give a description for the languages of Petri nets that does not resort (as opposed to Proposition 2.27) to the languages of other Petri nets, but only to some fixed languages. The next corollary comprises all previous results in a single statement. Corollary 2.28 Languages of Petri nets Let 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0 , M 𝑓 , Σ, ℎℓ ) be a Petri net. For every 𝑠 ∈ 𝑆, there is a homomorphism ℎ 𝑠 , and for every 𝑠 ∈ 𝑆 and 𝑀 ∈ M 𝑓 , there are numbers 𝑖 𝑀,𝑠 , 𝑗 𝑀,𝑠 ∈ N and some 𝑥 𝑀,𝑠 ∈ {𝑎, 𝑏}, such that the following is true: 𝐿(𝑁) =

Ø 𝑀∈M 𝑓

ℎℓ (

Ù

ℎ −1 (𝐷 𝑠∈𝑆 𝑠

 (𝑖

𝑀,𝑠

{𝑏𝑎, 𝜀})

 ( 𝑗

𝑀,𝑠

{𝑥 𝑀,𝑠 })))

Proof: Using (2.3), (2.4), (2.5), (2.6), and (2.7). 2.28 Strictly speaking, the homomorphisms ℎ 𝑠 should perhaps formally be made dependent on 𝑀. However, this would be superfluous since the final state does not influence the choice of the homomorphism at this point. From the above representation of Petri net languages, we can derive the consequence that such languages are indeed context-sensitive. Proposition 2.29 Context-sensitivity of Petri net languages All languages that are generated by Petri nets are context-sensitive. Proof: We consider the formula representing all Petri net languages in Corollary 2.28. Basic languages in this formula are 𝐷, {𝑏𝑎, 𝜀}, {𝑎}, and {𝑏}. The language 𝐷

2.5 Closure properties of Petri net languages

33

is context-free (using, for instance, the grammar 𝑆 → 𝑎𝑆𝑏𝑆|𝜀). The other languages are finite and hence regular. Thus, all the basic languages are context-sensitive. Furthermore, it is known that context-sensitive languages are closed under shuffle, inverse homomorphisms, intersection, non-erasing homomorphisms, and union.2 It follows by the formula in Corollary 2.28 that 𝐿(𝑁) is context-sensitive for every Petri net 𝑁, since 𝐿(𝑁) is built from context-sensitive languages by means of operations with respect to which the class of context-sensitive languages is closed. 2.29 This proof is not valid for context-free languages, because the class of context-free languages is neither closed with respect to intersection nor closed with respect to shuffle.

2.5 Closure properties of Petri net languages It is not only possible to describe every individual language of a Petri net in this way, but it is also possible to capture the language classes L 1 and L in a closure-oriented, formal way. Recall, from Definition 2.2, that L 1 and L are the classes of languages of Petri nets with exactly one final state and arbitrarily (but finitely) many final states, respectively. First, we introduce some notation. Definition 2.30 Closure operation Let 𝐿 1 , . . . , 𝐿 𝑖 be formal languages and let 𝑜 𝑝 1 , . . . , 𝑜 𝑝 𝑗 be some operations (with 𝑖, 𝑗 ∈ N). Then Cl

𝑜 𝑝1 ,...,𝑜 𝑝 𝑗

(𝐿 1 , . . . , 𝐿 𝑖 )

denotes, by definition, the smallest class of languages that contains all languages 𝐿 𝑘 (1 ≤ 𝑘 ≤ 𝑖) and is closed with respect to all operations 𝑜 𝑝 ℓ (1 ≤ ℓ ≤ 𝑗). Operations in the sense of this definition are, for instance, intersection or shuffle, but also the application of arbitrary inverse homomorphisms (written as ℎ −1 ) as well as the application of arbitrary fine homomorphisms (written as ℎℓ ). Recall that a fine homomorphism is a homomorphism ℎℓ : Σ∗ → Γ∗ with the special property that ℎℓ (𝑎) ∈ Γ for all 𝑎 ∈ Σ. Examples of fine homomorphisms are the labelling functions of Petri nets. Corollary 2.28 allows us to formulate the following corollary:

2 These facts are not proved here; interested readers may work out the details by themselves, or consult the literature.

34

2 Languages of Petri Nets

Corollary 2.31 Algebraic characterisation of Petri net languages With the preceding definitions and notations, ∀𝐿 ∈ L : 𝐿 ∈ Cl

 (𝐷, {𝑏𝑎, 𝜀}, {𝑎}) , (𝐷, {𝑏𝑎, 𝜀}, {𝑎})

∪,𝐻𝐿 ,∩,𝐼𝐿 ,

∀𝐿 ∈ L 1 : 𝐿 ∈ Cl

𝐻𝐿 ,∩,𝐼𝐿

where, for each language 𝐿, 𝐻 𝐿 is some finite set of homomorphisms, and 𝐼 𝐿 is some finite set of inverse homomorphisms. This provides a kind of inclusion property, and hence is slightly annoying, because we would prefer to have an equivalence. In that case we would have a full algebraic characterisation of the Petri net language classes. In order to have such an equivalence, we still have to prove the following: (i) The languages 𝐷, {𝑏𝑎, 𝜀} and {𝑎} are, in fact, Petri net languages. (ii) Petri net language classes are closed with respect to the operations used in Corollary 2.31, that is, with respect to each ℎ ∈ 𝐻 𝐿 , ∩, each ℎ −1 ∈ 𝐼 𝐿 , and (in the case of L) the union ∪. We already know that the language 𝐷 is generated by the Petri net 𝑁 𝐷 (0, 0). Figure 2.5 shows two nets generating {𝑏𝑎, 𝜀} and {𝑎}, respectively.



𝐿 = {𝑏𝑎, 𝜀}:

𝑎

𝑏 𝑠1

𝑠2

Final states: (0 0) T and (1 0) T

𝐿 = {𝑎}:

𝑎 𝑠1 Final state: (0) T

Fig. 2.5: Petri nets generating {𝑏𝑎, 𝜀} (l.h.s.) and {𝑎} (r.h.s.). Note that the Petri net generating {𝑏𝑎, 𝜀} has two final states. This cannot be accepted if we aim for equality in the second line of Corollary 2.31, that is, if we aim at a full algebraic characterisation of L 1 . Unfortunately, this problem cannot be solved, because of the following lemma. Lemma 2.32 {𝑏𝑎, 𝜀} cannot be generated with only one final state There exists no Petri net with exactly one final state which generates the language {𝑏𝑎, 𝜀}. In particular, we have L 1 ⊆ L and L 1 ≠ L. Proof: Assume that there is a net with one final state which generates {𝑏𝑎, 𝜀}. Since in 𝜀, no transition at all fires, the initial state 𝑀0 of this net must also be a final state, and it is the only final state by assumption. Since 𝑀0 is the only final state and 𝑏𝑎 is 𝜎 in the language, there is some firing sequence 𝜎 with ℎℓ (𝜎) = 𝑏𝑎 and 𝑀0 −→ 𝑀0 . 𝜎𝜎 But then we also have 𝑀0 −→ 𝑀0 , which implies that the word ℎℓ (𝜎𝜎) = 𝑏𝑎𝑏𝑎 is

2.5 Closure properties of Petri net languages

35

also in the language of this net. This contradicts the property that the net generates {𝑏𝑎, 𝜀}, since 𝑏𝑎𝑏𝑎 is not contained in this language. The fact that L 1 ⊆ L follows directly from the definitions, and L 1 ≠ L follows from Figure 2.5 where a Petri net generating {𝑏𝑎, 𝜀} has been given. Thus {𝑏𝑎, 𝜀} ∈ L but {𝑏𝑎, 𝜀} ∉ L 1 . 2.32 Hence we can, at most, prove the full algebraic characterisation of L, not that of L 1 . What is missing are the closure properties of L. In the next subsections, we will give sketches of their proofs.

2.5.1 Closure with respect to shuffle The shuffle operator leaves the “internal structure” of two objects unchanged while merging their behaviour in arbitrary ways. Suppose that we are given two Petri nets describing two languages and we are looking for another Petri net generating the shuffle of the two languages. This can be defined by simply laying the two given Petri nets side by side, without any connections between them. The initial state is the union of the two individual initial states, and the final states arise similarly as unions of two individual states, for any such possibility. By this construction (usually called disjoint sum or disjoint parallel composition), the internal structures of the two nets are preserved, while neither of them constrains the other one in any way. The language of the net so constructed is the shuffle of the two individual languages.

2.5.2 Closure with respect to fine homomorphisms A fine homomorphism simply changes every letter to another letter (i.e., neither to 𝜀 nor to a word with two or more letters). The effect of such a homomorphism to the language generated by a Petri net can be modelled on the net itself. To this end, we may simply re-label every transition according to the given homomorphism.

2.5.3 Closure with respect to inverse homomorphisms Let a net 𝑁 and a homomorphism ℎ be given. We are looking for a net 𝑁 ′ satisfying 𝐿(𝑁 ′ ) = ℎ −1 (𝐿 (𝑁)). For example, if ℎ(𝑎) = 𝑏𝑐𝑑, then a transition labelled 𝑎 must be firable in 𝑁 ′ if and only if a sequence with labels 𝑏𝑐𝑑 is firable in 𝑁. To model this, we look for all combinations of transitions generating the sequence of labels 𝑏𝑐𝑑 in 𝑁. For each such combination, we calculate its effect (i.e., which tokens are needed for firing it, and what is its marking change), and we introduce a new transition

36

2 Languages of Petri Nets

labelled 𝑎 which has the same effect. This construction is repeated for all images of letters under ℎ. Note that it resembles the construction given in Section 2.4.2.

2.5.4 Closure with respect to intersection In principle, the construction achieving the intersection of two Petri net languages has already been presented in Section 2.4.3 where a net was decomposed into elementary subnets, and its behaviour be analysed as the intersection of the behaviours of these subnets. The general construction is very similar to this. Let two nets be given and let a net be sought whose language is the intersection of their languages. If the two participating nets have two transitions with the same label, say 𝑎, then the new net gets a transition labelled 𝑎 whose pre- and post-sets are the unions of the respective pre-sets and post-sets of the two original nets. This is done for all combinations of such transitions. That is, if one of the nets has 3 transitions labelled 𝑎 and the other net has 5 such transitions, then the resulting net has 15 transitions labelled 𝑎 (while the original transitions are erased). In principle, we are constructing a kind of cross product. The resulting net generates the intersection of the two original languages.

2.5.5 Closure with respect to union In order to create a net which generates the union of the languages of two given nets, we first construct a normal form for the original nets. This normal form has three properties: • The start state has exactly one token. • The first firable transition removes the token from the start place. • No transition can subsequently put any token on the start state. Given two nets with this property, we can simply merge their two starting states (and nothing else), yielding a new starting state. The new set of final states is the union of the two individual sets (rather than, as before, taking the union of the markings in these sets). The new net generates the union of the two original languages.

2.6 Algebraic characterisation of Petri net languages Combining the previous results, we obtain the following full algebraic characterisation of L.

2.7 Bibliographical remarks and further reading

37

Theorem 2.33 Algebraic characterisation of Petri net languages L = Cl

∪,ℎℓ ,∩,ℎ −1 ,

 (𝐷, {𝑏𝑎, 𝜀}, {𝑎})

2.7 Bibliographical remarks and further reading One of the earliest, and a very influential, source on Petri net languages is [Hac75b], written by Michel Henri Théodore Hack. There has been plenty of work on the subject since then. To mention only one paper, R. Valk and G. Vidal-Naquet prove the decidability of the regularity of a Petri net language in [VV81]. At around the same time, the same result was proved by A. Ginzburg and M. Yoeli for vector addition systems in [GY80]. There are also textbooks in which Petri net languages play a role, e.g. [PW03] by L. Priese and H. Wimmel. See also [Pet77] by James Lyle Peterson and the survey article by Matthias Jantzen [Jan86]. These works, and several others (e.g. [Pel87], by Elisabeth Pelz) can be consulted for the closure results mentioned in Section 2.5. For Dyck languages, see https://en.wikipedia.org/wiki/Dyck_language. From the literature, several slight variants of the characterisation given in Section 2.6 are known. For example, we may trade the two finite languages {𝑏𝑎, 𝜀} and {𝑎} against a weaker notion of fine homomorphism, by which letters may be erased completely (or mapped to letters, but not to longer words). Also, in general, the shuffle can be expressed by means of ℎℓ , ∩ and ℎ −1 . It may also be of interest that in the literature, e.g. in Peterson’s [Pet77], other classes of Petri net languages have been defined which have similar characterisations. For example, we may investigate what happens if a final state is not necessarily reached. Another possibility is the consideration of 𝜏-labelled transitions, the idea being that the firing of such transitions does not contribute to the generated language, that is, that such transitions behave as if they were labelled by the empty word 𝜀. The resulting language classes have similar properties as the class L, but they are certainly not always identical. Had we investigated the language of firing sequences (or, in other words, the class of free Petri net languages), we would have arrived at a rather different characterisation. This is due to the fact that most closure properties of labelled Petri nets do not hold anymore for free Petri nets.

38

2 Languages of Petri Nets

2.8 Exercises Exercise 2.1 Homomorphisms Let ℎ : {𝑎, 𝑏, 𝑐, 𝑑, 𝑒}∗ → {𝑎, 𝑏, 𝑐, 𝑑, 𝑒}∗ be the homomorphism defined by ℎ(𝑎) = 𝑏𝑐, ℎ(𝑏) = 𝑐, ℎ(𝑐) = 𝑎𝑒𝑏, ℎ(𝑑) = 𝑐𝑎𝑒, ℎ(𝑒) = 𝑎𝑒. a) What is ℎ(𝑤), for 𝑤 = 𝑎𝑏𝑐𝑑𝑑? For 𝑤 = 𝑒𝑒𝑎𝑏𝑐𝑎𝑑? b) Which words 𝑣 satisfy 𝑤 = ℎ(𝑣) for 𝑤 = 𝑐𝑎𝑒𝑏𝑐? For 𝑤 = 𝑎𝑒𝑏𝑐𝑐𝑎𝑒? c) Modify the definition of ℎ by ℎ(𝑑) := 𝜀 (everything else remaining the same). Which words 𝑣 satisfy ℎ(𝑣) = 𝑎𝑒𝑏𝑐? (As there are infinitely many such words, please do not enumerate them.) d) Let Σ be an alphabet, let ℎ : Σ∗ → Σ∗ be an arbitrary homomorphism, and let 𝐿 ⊆ Σ∗ be an arbitrary language. Do we have ℎ(ℎ −1 (𝐿)) = 𝐿? Or ℎ −1 (ℎ(𝐿)) = 𝐿? Or neither? Exercise 2.2 Net construction Consider the language 𝐿 = {𝑎 𝑖 (𝑏𝑐) 𝑗 𝑑 𝑖− 𝑗 | 𝑖, 𝑗 ∈ N ∧ 𝑖 ≥ 𝑗 } over the alphabet Σ = {𝑎, 𝑏, 𝑐, 𝑑}. Construct a labelled Petri net 𝑁 with final states that generates this language (𝐿(𝑁) = 𝐿). Explain your construction. Hints: (a) N includes the number 0. (b) Unlabelled transitions or transitions labelled by the empty word are not allowed. (c) Do not forget to write down the set of final states explicitly. Exercise 2.3 Synthesis of elementary Petri nets a) For the following expressions 𝜚𝑖 (1 ≤ 𝑖 ≤ 6), determine elementary Petri nets 𝑁𝑖 satisfying 𝐿(𝑁𝑖 ) = 𝜚 𝑖 : 𝜚1 = 𝑎 ∗ 𝑏 𝑐∗ , 𝜚2 = 𝑎∗ 𝑏 ∗ 𝑐𝑐, 𝜚3 = {𝑎𝑐, 𝑐𝑐} 𝑏∗ , ∗ ∗ 𝜚4 = 𝐷 ℎ(𝐷) (ℎ(𝑎) = 𝑎, ℎ(𝑏) = 𝑐), 𝜚5 = (𝑎 ∪ 𝑏) 𝑐𝑏 𝑐, 𝜚 6 = 𝑎𝑎(𝑏 ∪ 𝑐) ∗ . Recall that a net is elementary if has a single place. b) Using the nets from a), and considering Proposition 2.25, construct two Petri nets that have the languages determined by 𝜅 1 = 𝜚1 ∩ 𝜚2 ∩ 𝜚3 , and by 𝜅 2 = 𝜚4 ∩ 𝜚 5 ∩ 𝜚6 , respectively. c) Specify the languages of the nets constructed in b) as simply as possible. In particular, do not use intersection.









Exercise 2.4 Language of an elementary Petri net Consider the Petri net 𝑁 shown in Figure 2.6. Let 𝑀 𝑓 = (1) be the (only) final state. a) Determine all words in the language 𝐿 (𝑁) which have 5 or less letters. b) Determine the homomorphism ℎ which satisfies 𝐿(𝑁) = ℎ −1 (𝐿 (𝑁 𝐷 (0, 1))). (referring to Definition 2.8) c) Specify the language 𝐿(𝑁) as a closed formula in which no Petri net occurs. Exercise 2.5 What happens to 𝑑? Let 𝑁 be the Petri net from Exercise 2.4. Construct a Petri net 𝑁 ′ with 𝐿 (𝑁 ′ ) = ℎ(𝐿(𝑁)), where ℎ(𝑎) = 𝑑, ℎ(𝑏) = 𝑏, ℎ(𝑐) = 𝑎, and ℎ(𝑑) = 𝜀. Explain the idea behind your construction. Note: Transitions may not be labelled by 𝜀.

2.8 Exercises

39

𝑑 3 𝑠1 2

𝑎

𝑏

2 𝑐

Fig. 2.6: Petri net of Exercise 2.4.

Exercise 2.6 Petri net languages Determine the language of the labelled Petri net shown in Figure 2.7. Final state(s) are deadlock state(s). (This is mainly an exercise in “playing the token game”.)

𝑠

𝑡

𝑐 𝑚 ℎ 2 2 𝑎 𝑒 3 3 𝑟

2 2 𝑦

3

𝑖

3

Fig. 2.7: Petri net of Exercise 2.6.

40

2 Languages of Petri Nets

Solution 2.1 (Exercise 2.1) a) ℎ(𝑎𝑏𝑐𝑑𝑑) = 𝑏𝑐𝑐𝑎𝑒𝑏𝑐𝑎𝑒𝑐𝑎𝑒, ℎ(𝑒𝑒𝑎𝑏𝑐𝑎𝑑) = 𝑎𝑒𝑎𝑒𝑏𝑐𝑐𝑎𝑒𝑏𝑏𝑐𝑐𝑎𝑒. b) ℎ −1 (𝑐𝑎𝑒𝑏𝑐) = {𝑏𝑐𝑏, 𝑏𝑒𝑎, 𝑑𝑎}, ℎ −1 (𝑎𝑒𝑏𝑐𝑐𝑎𝑒) = {𝑐𝑏𝑏𝑒, 𝑐𝑏𝑑, 𝑒𝑎𝑏𝑒, 𝑒𝑎𝑑}. c) ℎ −1 (𝑎𝑒𝑏𝑐) = 𝑑 ∗ 𝑐𝑑 ∗ 𝑏𝑑 ∗ ∪ 𝑑 ∗ 𝑒𝑑 ∗ 𝑎𝑑 ∗ . d) Neither. With ℎ from c), we have ℎ(ℎ −1 ({𝑏}) = ∅ and ℎ −1 (ℎ({𝑑})) = 𝑑 ∗ . Solution 2.2 (Exercise 2.2) See Figure 2.8.

𝑠2

M𝑓 = { (0 1 0 0 0) T , (0 0 0 1 0) T , (0 0 0 0 1) T }

𝑎

𝑑

𝑠3

𝑏

𝑠1 𝑠5

𝑐

𝑑

𝑏

𝑠4

𝑑

Fig. 2.8: Solution of Exercise 2.2. Solution 2.3 (Exercise 2.3) For a), see Figure 2.9. For b), see Figure 2.10. For c): 𝐿 (𝑁 (𝜅1 )) = {𝑏𝑐𝑐}, 𝐿(𝑁 (𝜅 2 )) = {𝑎𝑎𝑐𝑐}. Solution 2.4 (Exercise 2.4) a): 𝑎, 𝑎𝑎𝑐, 𝑎d𝑐𝑐, 𝑎d𝑏, 𝑎d𝑎𝑐𝑏, 𝑎d𝑎𝑏𝑐, 𝑎𝑎d𝑐𝑏, 𝑎𝑎d𝑏𝑐, 𝑎d𝑐𝑎𝑏, 𝑎d𝑏𝑎𝑐, 𝑎d𝑐𝑏𝑎, 𝑎𝑎𝑎𝑏, 𝑎𝑎𝑐𝑎𝑐, 𝑎𝑎𝑎𝑐𝑐, 𝑎dd𝑏𝑏, 𝑎𝑎d𝑐𝑏, 𝑎𝑎𝑐d𝑏, 𝑎d𝑏d𝑏 b): ℎ(𝑎) ℎ(𝑏) ℎ(𝑐) ℎ(𝑑)

= = = =

𝑎 𝑏𝑏 𝑏𝑏𝑎 𝑏𝑎𝑎𝑎

c): 𝐿(𝑁) = ℎ −1 (𝐷

 {𝑎})

Solution 2.5 (Exercise 2.5) See Figure 2.11. The three non-greyed transitions are taken straight from Exercise 2.4 (after applying ℎ). The two greyed transitions “simulate” the sequences 𝜀𝑎 (previously d𝑐) and 𝜀𝑏 (previously d𝑏), respectively.

2.8 Exercises

41

𝑁1 : 𝑎

𝑐

𝑏

𝑏

2 2

M 𝑓 = { (0) T }

𝑐

𝑎

𝑎

𝑐 M 𝑓 = { (0) T }

𝑁5 : 𝑐

𝑎

𝑎

M 𝑓 = { (0) T }

𝑏

2 M 𝑓 = { (0) T }

𝑁4 : 𝑏

𝑁3 :

𝑁2 :

𝑁6 :

2 2

𝑐

𝑏

2 𝑐 2

2 2

𝑏

M 𝑓 = { (0) T }

𝑎

M 𝑓 = { (2) T }

Fig. 2.9: Solution of Exercise 2.3(a). 𝑠1

𝑠1 𝑁 (𝜅2 ):

𝑁 (𝜅1 ):

2 𝑎

𝑐

𝑏

2

2

2

𝑐

2

2

2 𝑠3

2 𝑠3

𝑠2

𝑎

𝑏

2 M 𝑓 = { (0 0 0) T }

M 𝑓 = { (0 0 2) T }

Fig. 2.10: Solution of Exercise 2.3(b). 𝑎

2 𝑠1 2

𝑑

𝑏

2 𝑏 𝑎

Fig. 2.11: Solution of Exercise 2.5.

𝑠2

42

2 Languages of Petri Nets

Solution 2.6 (Exercise 2.6) The solution is {merrychristmas}. The Petri net having this language was constructed by the apt tool, for which see Section 11.6.

Chapter 3

Reachability and Coverability

Abstract The reachable markings of a Petri net can be represented as the vertices of a directed graph whose edges are labelled by transitions and whose paths correspond to firing sequences. This graph can be infinite for unbounded nets. Coverability trees and graphs are also defined for every net. They can be understood as finite approximations of the potentially infinite reachability graph. These objects are distinguished in terms of the Petri net properties, such as boundedness and liveness, that can be tested on them.

Contents 3.1 3.2 3.3 3.4 3.5 3.6 3.7 3.8

Introduction . . . . . . . . . . . . . . . . . . . Graphs and multigraphs . . . . . . . . . . . . The reachability graph . . . . . . . . . . . . . Boundedness, safeness, and liveness notions . . Generalised markings, and vector comparisons Coverability trees and coverability graphs . . . Bibliographical remarks and further reading . . Exercises . . . . . . . . . . . . . . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

43 44 45 49 52 53 62 63

3.1 Introduction In this chapter, the focus changes from discussing the set of firing sequences (as in Chapter 1) or the set of words (as in Chapter 2) generated by a Petri net 𝑁 to investigating the set of states that can be reached in 𝑁. The reachable markings 𝑡 𝑀 may be regarded as the vertices of a graph, such that if 𝑀 −→ 𝑀 ′ , then a directed edge labelled 𝑡 is drawn from vertex 𝑀 to vertex 𝑀 ′ . This graph is called © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 E. Best and R. Devillers, Petri Net Primer, Computer Science Foundations and Applied Logic, https://doi.org/10.1007/978-3-031-48278-6_3

43

44

3 Reachability and Coverability

the reachability graph. It will obviously be infinite if the set of reachable markings is infinite (such as in the nets shown in Figures 1.1 and 2.1). It is possible to approximate the reachability graph by finite trees and graphs. They are called coverability trees and coverability graphs, respectively. This comes at the price that the notion of a marking is slightly generalised and that not all reachable markings are represented precisely. Also, such approximations reflect several, but not all, interesting properties that one might want to check for a Petri net. Consequently, the present chapter has five sections: two technical sections explaining graphs and generalised markings (Sections 3.2 and 3.5); two sections describing the reachability graph (Section 3.3) and the coverability trees and coverability graphs (Section 3.6); and one section defining the properties that one can check (or not) on these different types of graphs (Section 3.4).

3.2 Graphs and multigraphs A Petri net can be understood as a (multi-) graph in the mathematical sense. Various other graphs can be derived from it. Indeed, graphs are about to play a major role throughout this book. In this section, we recall some general notions pertaining to graphs and multigraphs. A (directed) graph is a structure (𝑋, 𝐸) where 𝐸 ⊆ 𝑋 × 𝑋. An element of 𝑋 is called a vertex, or a node. An element 𝑒 = (𝑥, 𝑦) ∈ 𝐸 is called an edge, or an arc, or sometimes also an arrow, leading from 𝑥 to 𝑦. The union of a family of graphs is simply obtained by taking the union of the vertex sets and of the arc sets: ∪𝑖 ∈ 𝐼 (𝑋𝑖 , 𝐸 𝑖 ) = (∪𝑖 ∈ 𝐼 𝑋𝑖 , ∪𝑖 ∈ 𝐼 𝐸 𝑖 ); the union is disjoint if the vertex sets are pairwise disjoint. An arc-labelled graph is a structure (𝑋, 𝐿, 𝐸) where 𝐸 ⊆ 𝑋 × 𝐿 × 𝑋. For 𝑒 = (𝑥, ℓ, 𝑦) ∈ 𝐸, ℓ is also called the label (or the inscription) of the arc 𝑒 from 𝑥 to 𝑦. A multigraph is a structure (𝑋, E) where E is a multiset of pairs from 𝑋 × 𝑋. Thus, we may have several arrows in parallel from one node to another one. Likewise, a labelled multigraph is a structure (𝑋, 𝐿, E) where E is a multiset of triples from 𝑋 × 𝐿 × 𝑋.1 A subgraph of a (multi-)graph with vertex set 𝑋 is a graph with vertex set 𝑋 ′ ⊆ 𝑋 and (labelled) arcs restricted to those between nodes in 𝑋 ′ . A (directed) path is a directed sequence of edges, such that the endpoint of one edge is the beginning of the next one. A path is a cycle if its starting vertex equals its end vertex. The length of a path is the number of edges in it. A single node without any edge is called a path of length 0. A path is simple or elementary if no vertex appears twice in it, except possibly that the very beginning is also the very end, in which case it is called a simple cycle or an elementary cycle. We also consider infinite paths. By this, we mean an infinite directed sequence of edges such that, as before, the endpoint of one edge is the beginning of the next one, 1 We may also define undirected graphs, whose edges are of the form { 𝑥, 𝑦 } (sets, rather than pairs). However, undirected graphs will only play a minor role in this text.

3.3 The reachability graph

45

but also such that it has a starting vertex but no end vertex. The length of such a path is defined to be ∞, which stands for “infinite”. Formally, an infinite path can be understood as a function from N (the indices) to 𝐸 (the edges). A (multi-)graph 𝐺 is called strongly connected if for any two vertices 𝑥 and 𝑦, there is a directed path from 𝑥 to 𝑦. 𝐺 is called weakly connected if for any two nodes 𝑥, 𝑦, there is some chain of arrows (not necessarily pointing in the same direction) from 𝑥 to 𝑦.2 𝐺 is called covered by (directed) cycles if for any arrow from 𝑥 to 𝑦, there is a directed path from 𝑦 to 𝑥. It is clear that if a graph is strongly connected, then it is also weakly connected and covered by cycles. Conversely, if a graph is covered by cycles and weakly connected, then it is also strongly connected. However, a graph may be covered by cycles and still not be strongly connected. A strongly connected component (weakly connected component) of a graph 𝐺 is a maximal subset 𝑋 of vertices such that the subgraph 𝐺 ′ with vertex set 𝑋 is strongly (weakly) connected. If a graph is covered by cycles, then it is the disjoint union of a set of strongly connected components. Example 3.1 The graph shown in Figure 3.1 Consider the directed graph shown in Figure 3.1. In this graph, (𝑣1 , 𝑣2 )(𝑣2 , 𝑣3 ) is a path, (𝑣1 , 𝑣2 )(𝑣2 , 𝑣3 )(𝑣3 , 𝑣1 ) is a cycle, (𝑣1 , 𝑣1 )(𝑣1 , 𝑣1 ) . . . and (𝑣1 , 𝑣2 )(𝑣2 , 𝑣3 )(𝑣3 , 𝑣1 )(𝑣1 , 𝑣2 ) . . . are infinite paths, and (𝑣1 , 𝑣3 )(𝑣2 , 𝑣3 ) is an (undirected) chain of edges (but (𝑣1 , 𝑣1 )(𝑣2 , 𝑣3 ) is not). It is strongly connected.

𝑣2

𝑣1

𝑉 = {𝑣1 , 𝑣2 , 𝑣3 } 𝐸 = { (𝑣1 , 𝑣1 ) , (𝑣1 , 𝑣2 ) , (𝑣1 , 𝑣3 ) , (𝑣2 , 𝑣3 ), (𝑣3 , 𝑣1 ) } 𝑣3

Fig. 3.1: A graph with 3 vertices and 5 edges. It is shown graphically (so to speak) on the left-hand side, and formally, on the right-hand side.

3.3 The reachability graph The concept of reachability set of an initially marked Petri net has already been introduced. It comprises all states that can be reached from the initial state after the execution of arbitrary firing sequences, including the initial state which is reached 2 A chain is like a path, except that the direction of its arrows plays no role.

46

3 Reachability and Coverability

after “firing” the empty sequence. This set can be provided with some structure. Instead of just recording the reachable states, we may also record the information which transition can be fired in order to get from one state to another one. In this way, we obtain the reachability graph. Definition 3.2 The reachability graph Let 𝑁 = (𝑆, 𝑇, 𝐹) be a Petri net, let 𝑀 ∈ N𝑆 and let [𝑀⟩ be the reachability set of 𝑀 in 𝑁. The reachability graph 𝑅𝐺 (𝑁, 𝑀) is defined to be an arc-labelled graph ([𝑀⟩, 𝐸) with the following set of arcs: 𝑡

𝐸 = { (𝑀1 , 𝑡, 𝑀2 ) | 𝑀1 ∈ [𝑀⟩ ∧ 𝑀1 −→ 𝑀2 } The second component 𝑡 of an arc (𝑀1 , 𝑡, 𝑀2 ) is just the inscription of this arc. If 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0 ) is a Petri net with an initial marking, then the reachability graph of 𝑁 is defined as 𝑅𝐺 (𝑁) = 𝑅𝐺 (𝑁, 𝑀0 ). The reachability graph is always weakly, but not necessarily strongly, connected. 𝑡′

𝑡

Since we may have two arrows 𝑀1 −→ 𝑀2 and 𝑀1 −→ 𝑀2 with 𝑡 ≠ 𝑡 ′ , the reachability graph may be a true multigraph if its labelling is ignored. It is not, however, a true labelled multigraph; there may be several arrows leading from 𝑀1 to 𝑀2 , but they are all differently labelled. 𝑡3

𝑡3 𝑁′

𝑁 𝑡1

𝑡1 𝑠3

𝑠1

𝑠2

2 𝑡2

𝑠3

𝑠1

𝑠2

2 𝑡2

𝑁′

Fig. 3.2: The net is a modification of the net 𝑁 shown in Figure 1.1 (which is reproduced on the right-hand side). The initial marking is 𝑀0 = (0 1 2) T .

Example 3.3 A finite reachability graph The Petri net 𝑁 ′ shown on the left-hand side of Figure 3.2 is a modification of the net 𝑁 shown previously in Figure 1.1. The loop between 𝑠1 and 𝑡 1 was replaced by a single arc. The modified net has the reachability graph shown in Figure 3.3. It has two strongly connected components and three edges that are not covered by cycles. The terminal strongly connected component has 5 states and 8 edges.

3.3 The reachability graph 𝑀0

47

𝑡3 (0 1 2) T

𝑡3 (1 1 1) T

𝑡1 𝑡2

𝑡2

𝑡2

𝑡3 (0 0 4)

(2 1 0) T 𝑡1

T

𝑡3 (1 0 3)

𝑡1

T

𝑡3 (2 0 2)

𝑡1

T

𝑡3 (3 0 1)

𝑡1

Fig. 3.3: The reachability graph of the Petri net

T

(4 0 0) T 𝑡1

𝑁′

shown in Figure 3.2.

The graph in Figure 3.3 looks much like a finite automaton. Indeed, if we designate (0 1 2) T as initial state and all reachable states as final states, then it accepts exactly all firing sequences of the net. We know already that Petri nets can generate nonregular languages, while all languages of finite automata are regular. Consequently, there must be some Petri nets whose reachability graphs cannot be viewed as finite automata. This is indeed the case. Example 3.4 An infinite reachability graph Consider the net 𝑁 which was shown originally in Figure 1.1 and reproduced in Figure 3.3. It has a loop (instead of just a single arc) between 𝑠1 and 𝑡 1 . Its reachability graph 𝑅𝐺 (𝑁, 𝑀0 ) with 𝑀0 = (0 1 2) T is sketched in Figure 3.4 by depicting a tiny initial part. Since it is infinite, it cannot be included fully in the figure. The representation ends at some arbitrarily chosen (sufficiently convenient) point. In this case, the targets of the arcs at the bottom of the figure are not shown explicitly, because it is (hopefully) easy to guess how they should be continued. This graph has infinitely many strongly connected components (each node being one, in fact). An infinitely large reachabilibity graph is, of course, not a finite automaton. In an infinite graph, the task of searching whether a given state is reachable is burdensome. Nevertheless, we may discover quickly that in our example, for instance, state (1 0 9) T is reachable while state (0 0 9) T is not. It suffices to examine the systematic structure of the graph in order to answer this or similar questions. In general, however, reachability graphs are much more complex, and the question whether a given state is reachable is very hard to answer. In fact, it has been a famous open question for several years whether or not this question is even decidable. Decision Problem 3.5 The reachability problem RP for Petri nets Input: A Petri net 𝑁 = (𝑆, 𝑇, 𝐹) and two markings 𝑀, 𝑀 ′ ∈ N𝑆 . To decide: Does 𝑀 ′ ∈ [𝑀⟩?

48

3 Reachability and Coverability (0 1 2) T

𝑀0 𝑡2

𝑡3

(1 1 1) T

𝑡3

𝑡2

(2 1 0) T 𝑡2

𝑡1 (0 0 4) T

𝑡3

(1 0 3) T

𝑡3

𝑡1

𝑡1

𝑡2 𝑡3

(2 1 1) T 𝑡2

(2 0 3) T 𝑡1

••• •••

(3 1 0) T 𝑡2

𝑡1 𝑡3

𝑡1

𝑡3

(3 0 2) T 𝑡1

••• •••

(4 0 0) T

𝑡1

𝑡3

𝑡1

𝑡3

(3 0 1) T

𝑡1 (1 1 2) T

(1 0 4) T 𝑡1

𝑡3

(2 0 2) T

𝑡1 𝑡3

(4 0 1) T 𝑡1

𝑡3

(5 0 0) T 𝑡1

••• •••

•••

•••

Fig. 3.4: Part of the reachability graph of the net 𝑁 shown in Figure 3.2.

Theorem 3.6 Reachability is decidable The reachability problem RP is decidable. This means that there is a terminating algorithm which, when fed with a net 𝑁 and two markings 𝑀 and 𝑀 ′ , returns “reachable” if 𝑀 ′ ∈ [𝑀 ⟩ and “not reachable” if 𝑀 ′ ∉ [𝑀⟩. Such an algorithm will be described in Chapter 18. An algorithm building the reachability graph 𝑅𝐺 (𝑁, 𝑀) is not good enough in general. If 𝑀 ′ is encountered during this process, it is certainly reachable and “reachable” can be output. But it may be the case that 𝑀 ′ is not encountered while building 𝑅𝐺 (𝑁, 𝑀) continues indefinitely, so that at no finite point of the construction, the answer “not reachable” can be ascertained for sure. There are, however, cases where the answer “not reachable” can be obtained very easily. The idea is to search for a weighting function of the places of the Petri net with adequate properties. Definition 3.7 Invariance, subvariance, survariance Let 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0 ) be a Petri net. We shall consider an integer function 𝜇 : 𝑆 → Z; it may also be considered a vector in Z |𝑆 | . If 𝑠 ∈ 𝑆 is a place, 𝜇(𝑠) may be seen as the weight of aÍ token in 𝑠, and this allows to associate a weight to any marking 𝑀: 𝜇(𝑀) = 𝑠∈𝑆 𝜇(𝑠) · 𝑀 (𝑠). Í Such a function will be called an invariant if ∀𝑡 ∈ 𝑇 we have 𝑠∈𝑆 𝜇(𝑠) · Í 𝐹 (𝑠, 𝑡) = 𝑠∈𝑆 𝜇(𝑠) · 𝐹 (𝑡, 𝑠), meaning that the total weight of the tokens absorbed by an execution of 𝑡 is the same as the total weight of the tokens produced by it.a

3.4 Boundedness, safeness, and liveness notions

49

Í It Í will be called a subvariant if ∀𝑡 ∈ 𝑇 we have 𝑠∈𝑆 𝜇(𝑠) · 𝐹 (𝑠, 𝑡) ≥ 𝑠∈𝑆 𝜇(𝑠) · 𝐹 (𝑡, 𝑠), meaning that the total weight of the tokens absorbed by an execution of 𝑡 is not less than the total weight of theÍtokens produced by it. It Í will be called a survariant if ∀𝑡 ∈ 𝑇 we have 𝑠∈𝑆 𝜇(𝑠) · 𝐹 (𝑠, 𝑡) ≤ 𝑠∈𝑆 𝜇(𝑠) · 𝐹 (𝑡, 𝑠), meaning that the total weight of the tokens absorbed by an execution of 𝑡 is not larger than the total weight of the tokens produced by it. a

Later, such invariants shall be called S-invariants; see Definition 4.15.

Note that, if 𝜇 is a subvariant iff −𝜇 is a survariant. As an immediate corollary, we get Proposition 3.8 Non-reachability Let 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0 ) be a Petri net and 𝜇 be a weighting function on it. • If 𝜇 is an invariant of 𝑁 and 𝜇(𝑀) ≠ 𝜇(𝑀0 ), then 𝑀 may not be reached from 𝑀0 . • If 𝜇 is an subvariant of 𝑁 and ¬(𝜇(𝑀) ≤ 𝜇(𝑀0 )), then 𝑀 may not be reached from 𝑀0 . • If 𝜇 is an survariant of 𝑁 and ¬(𝜇(𝑀) ≥ 𝜇(𝑀0 )), then 𝑀 may not be reached from 𝑀0 . Unfortunately, it is not always possible to find an adequate (in-, sub-, or sur-) variant for a reachability problem. For instance, the null function is an invariant, but it is of no use.

3.4 Boundedness, safeness, and liveness notions The reachability graph reveals more information than just the set of reachable states. We discuss some interesting and relevant properties that can be inferred from a (finite) reachability graph. Definition 3.9 Boundedness and safeness Let 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0 ) be a Petri net. A place 𝑠 ∈ 𝑆 is called safe if 𝑀 (𝑠) ≤ 1 𝜎 whenever 𝜎 is a firing sequence and 𝑀 is a state with 𝑀0 −→ 𝑀; 𝑠 is called 𝜎 𝑚-bounded (for 𝑚 ∈ N), if 𝑀0 −→ 𝑀 always entails 𝑀 (𝑠) ≤ 𝑚. A place 𝑠 is bounded if it is 𝑚-bounded, for some 𝑚 ∈ N, otherwise it is unbounded. A Petri net 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0 ) is called safe (bounded) if all places 𝑠 ∈ 𝑆 are safe (bounded, respectively). 𝑁 is called 𝑚-bounded, for some 𝑚 ∈ N, if every place 𝑠 ∈ 𝑆 is 𝑚-bounded. 𝑁 is called unbounded if it contains an unbounded place.

50

3 Reachability and Coverability

The safeness (𝑚-boundedness) of 𝑁 can be deduced by inspecting the reachability graph 𝑅𝐺 (𝑁). Similarly, the safeness (𝑚-boundedness) of any place can be deduced by inspection. We simply need to check all states which occur in 𝑅𝐺 (𝑁). This is practical only if 𝑅𝐺 (𝑁) is finite (and is, even then, likely to be extremely timeconsuming). If 𝑁 is finite and bounded, then the maximum of the bounds of all places is a general bound for the net. If 𝑁 is infinite, then it may happen that there is no such general bound, even if the net is bounded; compare Figure 3.5.

2

2

•••

Fig. 3.5: An infinite Petri net in which every place is bounded but for which there is no finite overall bound.

Definition 3.10 Liveness, deadlock-freeness, and reversibility Let 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0 ) be a Petri net. • A transition 𝑡 ∈ 𝑇 is called singly live or not dead, if there is a firing 𝜎𝑡 sequence 𝜎 with 𝑀0 −→ . • A transition 𝑡 is called weakly live, if there is an infinite word 𝑤 ∈ 𝑇 ∞ such 𝑤 𝜎 that 𝑡 occurs infinitely often in 𝑤 and 𝑀0 −→ (meaning that 𝑀0 −→ ab holds for every finite prefix 𝜎 of 𝑤). • A transition 𝑡 is called live or strongly live, if for every reachable marking 𝜎𝑡 𝑀 ∈ [𝑀0⟩, there is some firing sequence 𝜎 with 𝑀 −→ . • A transition 𝑡 is called reversible, if for every reachable marking 𝑀 ∈ 𝑡 [𝑀0⟩, if 𝑀 −→ 𝑀 ′ , then 𝑀 ∈ [𝑀 ′⟩. The Petri net 𝑁 is called (singly / weakly / strongly) live or reversible, if every transition in the net is (singly / weakly / strongly) live or reversible, respectively. 𝑁 is called dead if 𝑁 contains no singly live transition. A reversible net is sometimes also called cyclic. A reachable marking 𝑀 ∈ [𝑀0⟩ is called a deadlock if no transition is activated at 𝑀. 𝑁 is called deadlock-free if there is no marking 𝑀 ∈ [𝑀0⟩ which is a deadlock. 𝑇 ∞ denotes the set of infinite words with letters from 𝑇. For instance, 𝑡3 𝑡2 𝑡1 𝑡3 𝑡1 𝑡3 . . . is an infinite word labelling an infinite path in the graph shown in Figure 3.4. b The condition on prefixes means that 𝑤 is an “infinitely long firing sequence”. a

It is easy to check on the reachability graph whether or not a transition is singly live. If it contains some arc (𝑀, 𝑡, 𝑀 ′ ), then every path from 𝑀0 to 𝑀 determines a firing 𝜎 𝑡 sequence 𝜎 with 𝑀0 −→ 𝑀, and in state 𝑀 we have 𝑀 −→ . That is, 𝑡 is singly live if and only if 𝑡 occurs as the label of some arc in the reachability graph.

3.4 Boundedness, safeness, and liveness notions

51

If the reachability graph is finite, weak liveness can be reduced to cycle checking. If a transition 𝑡 is weakly live, then it can be fired arbitrarily often, and since the reachability graph is finite, it must contain a cycle containing an arc of the form (𝑀, 𝑡, 𝑀 ′ ). Conversely, if the reachability graph contains such a cycle, then 𝑡 is weakly live, since we can fire into the cycle, and then along the cycle arbitrarily often. Checking strong liveness of a transition 𝑡 is not so easy, but it can also, in principle, be done on the reachability graph. For every marking 𝑀 contained in it, it must be checked whether a path leads from 𝑀 to an arc inscribed by 𝑡. For a finite reachability graph this means that every terminal strongly connected component (i.e., with no arcs going out to other strongly connected components) must contain an edge labelled 𝑡. Checking deadlock-freeness can be done by examining the reachability graph for vertices which have no output arc. The net is deadlock-free if and only if such vertices are absent. Similarly, persistence and confluence (Lemma 1.18) can be checked. For persistence to be violated, there must be a reachable marking 𝑀 enabling two different transitions 𝑡

𝑡′

𝑡′

𝑡

𝑡 ≠ 𝑡 ′ , such that 𝑀 −→ 𝑀 ′ and 𝑀 −→ 𝑀 ′′ , but either ¬(𝑀 −→ ) or ¬(𝑀 ′ −→ ), or both. For confluence to be violated, there must be two reachable markings 𝑀1 , 𝑀2 such that no reachable marking 𝑀 satisfies 𝑀 ∈ [𝑀1⟩ ∩ [𝑀2⟩. Reversibility can be checked on the reachability graph as well. Recall that the reachability graph is always weakly connected. For a transition 𝑡 to be reversible, each edge labelled with 𝑡 must lie on some cycle (or equivalently, within some strongly connected component) of the reachability graph, and for the whole net to be reversible, the entire reachability graph must be strongly connected. Conversely, if the reachability graph is strongly connected, then the net is reversible. Note that this holds even if the initial marking is a deadlock. The considerations of this section are summarised as follows: Proposition 3.11 All that can be checked on the reachability graph If the reachability graph of a Petri net 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0 ) is finite, then there exist terminating algorithms which decide the following properties: • whether a place is safe; • whether a place is 𝑚-bounded; • whether a place is bounded; • whether a transition is dead (i.e., not singly live); • whether a transition is weakly live; • whether a transition is strongly live; • whether a transition is reversible; • whether a pair of transitions is non-persistent; • and whether the net 𝑁 is safe (𝑚-bounded, bounded, dead, singly live, weakly live, live, deadlock-free, reversible, or persistent).

52

3 Reachability and Coverability

What happens if the reachability graph is not finite? Then none of the above questions can be solved efficiently by some algorithm. We now address the following question. Given that the reachability graph may be infinite and therefore unmanageable, does there perhaps exist a similar construction which always leads to a finite structure from which at least some interesting information about safeness, boundedness and liveness can be inferred? There are indeed such constructions, leading to objects called the coverability tree and the coverability graph. Before we can define these objects in Section 3.6, however, we need to generalise the notion of a marking in the way shown in the next section.

3.5 Generalised markings, and vector comparisons So far, a marking always assigns a definite number of tokens to a given place. Thus, a marking is a vector indexed by places with entries in N. We shall also define generalised markings which may or may not assign the “generalised number” 𝜔 to a given place. Intuitively, the symbol 𝜔 stands for “arbitrarily many tokens”. However, we will use it only in very well-defined circumstances. In particular, the coverability graph, to be defined in a moment, may contain generalised markings. Formally, generalised markings are vectors over N ∪ {𝜔} (instead of just N, as in markings so far). For the sake of brevity, we write N 𝜔 instead of N ∪ {𝜔}. The “number” 𝜔 obeys, by definition, the following rules: 𝜔+𝑛 𝜔−𝑛 𝑛 𝜔

= = < ≤

𝜔 = 𝑛+𝜔 𝜔 𝜔 𝜔

for all 𝑛 ∈ N 𝜔 for all 𝑛 ∈ N for all 𝑛 ∈ N (and, of course, 𝜔 = 𝜔)

(3.1)

As usual, ≤, > and ≥ are derived notions. In particular, by the third line, ¬∃𝑛 ∈ N : 𝜔 ≤ 𝑛. For subtraction, we will make sure that we will never be asked to compute 𝜔 − 𝜔 or 𝑛 − 𝜔. Vectors with entries in N or Z (or in N 𝜔 ) occur often in this text, and it will turn out to be important to be able to compare such vectors, as follows. Definition 3.12 Relations between vectors, and (semi-) positivity Let 𝑣, 𝑣′ be two vectors over the same index set 𝑋 and with entries in N 𝜔 or in Z (or in Q, for that matter). 𝑣 ≤ 𝑣′ iff ( ∀𝑥 ∈ 𝑋 : 𝑣(𝑥) ≤ 𝑣′ (𝑥) ) 𝑣 < 𝑣′ iff ( 𝑣 ≤ 𝑣′ ) ∧ ( 𝑣 ≠ 𝑣′ ) 𝑣 ≪ 𝑣′ iff ( ∀𝑥 ∈ 𝑋 : 𝑣(𝑥) < 𝑣′ (𝑥) )

3.6 Coverability trees and coverability graphs

53

If the index set of a vector 𝑣 is the set of places, it is called an S-vector, for short, and if it is the set of transitions, 𝑣 is called a T-vector. A vector 𝑣 is called semipositive if 0 ≤ 𝑣, and positive if 0 ≪ 𝑣. If 𝑣 < 𝑣′ , we shall also say that 𝑣′ dominates or covers 𝑣. The symbols ≥, > and ≫ are defined symmetrically. Note that these comparisons are well-defined, even if 𝜔 occurs as an entry in either 𝑣 or 𝑣′ . Note also that markings, whether generalised or not, are semipositive by nature.

3.6 Coverability trees and coverability graphs The notion of firability and of firing (of a transition 𝑡 of a net 𝑁) can be extended to generalised markings 𝑀 in (N 𝜔 ) 𝑆 , as follows: 𝑡

𝑀 −→ 𝑡

𝑀 −→

⇐⇒ (B(𝑡) ≤ 𝑀) 𝑀′

⇐⇒ (B(𝑡) ≤ 𝑀) ∧ (𝑀 ′ = 𝑀 + F(𝑡) − B(𝑡))

With this definition and with the definitions given in Section 3.5, it is now known what 𝑀 ′ ∈ [𝑀⟩ means for 𝑀, 𝑀 ′ ∈ (N 𝜔 ) 𝑆 and a Petri net 𝑁 = (𝑆, 𝑇, B, F); namely, 𝜎 𝜎 it simply means that there is some 𝜎 ∈ 𝑇 ∗ with 𝑀 −→ 𝑀 ′ . If 𝑀 −→ 𝑀 ′ and 𝑀 (𝑠) = 𝜔 for some 𝑠 ∈ 𝑆, then we also have 𝑀 ′ (𝑠) = 𝜔, since firing cannot decrease an entry 𝜔; by (3.1), 𝜔 + F(𝑠, 𝑡) − B(𝑠, 𝑡) always equals 𝜔. The monotonicity and determinism of the firing rule (see Lemmata 1.13 and 1.17) immediately extend to general markings. Lemma 3.13 Monotonicity and determinism of general firings For all 𝑀, 𝑀 ′ , 𝑀 ′′ ∈ (N 𝜔 ) 𝑆 and for all 𝑡 ∈ 𝑇: 𝑡 𝑡 (𝑀 −→ 𝑀 ′ ) ⇒ ((𝑀 + 𝑀 ′′ ) −→ (𝑀 ′ + 𝑀 ′′ )); For all 𝜎, 𝜎 ′ ∈ 𝑇 ∗ and for all 𝑀, 𝑀 ′ , 𝑀 ′′ ∈ (N 𝜔 ) 𝑆 : 𝜎

𝜎′

(𝑀 −→ 𝑀 ′ ∧ 𝑀 −→ 𝑀 ′′ ∧ P (𝜎) = P (𝜎 ′ )) ⇒ (𝑀 ′ = 𝑀 ′′ ). Coverability trees and coverability graphs of a Petri net 𝑁 are intimately related. Coverability trees will be defined first. Definition 3.14 Coverability tree Let 𝑁 = (𝑆, 𝑇, B, F, 𝑀0 ) be a Petri net. A coverability tree of 𝑁 is defined as a node-labelled and arc-inscripted tree 𝐺 = (𝑉, 𝐸, 𝑇) (with node labels 𝜆 from the set of generalised markings (N 𝜔 ) 𝑆 of 𝑁, and arc inscriptions from the set 𝑇), constructed as in Algorithm 3.15. If 𝜆(𝑣)(𝑠) = 𝜔 for some place 𝑠 of 𝑁 and some node 𝑣 of this tree, then 𝑠 is called an 𝜔-place or an 𝜔-coordinate.

54

3 Reachability and Coverability

Algorithm 3.15 Construction of a coverability tree 𝑉 := {𝑣0 }; 𝜆(𝑣0 ) := 𝑀0 ; 𝐸 := ∅; New := {𝑣0 }; while New ≠ ∅ do choose some node 𝑣 ∈ New; 𝑀 := 𝜆(𝑣); 𝑡 for all 𝑡 ∈ 𝑇 with (𝑀 −→ ) do 𝑡 compute 𝑀 ′ ∈ N𝑆𝜔 with 𝑀 −→ 𝑀 ′ ; compute 𝑣′′ as the first node met on the (unique) backward path from 𝑣 to 𝑣0 satisfying 𝜆(𝑣′′ ) < 𝑀 ′ ; if 𝑣′′ exists then for all 𝑠 ∈ 𝑆 with 𝜆(𝑣′′ )(𝑠) < 𝑀 ′ (𝑠) do 𝑀 ′ (𝑠) := 𝜔 endfor fi; create a new node 𝑣′ ; 𝑉 := 𝑉 ∪ {𝑣′ }; 𝜆(𝑣′ ) := 𝑀 ′ ; 𝐸 := 𝐸 ∪ {(𝑣, 𝑡, 𝑣′ )}; if no node 𝑤 ≠ 𝑣′ with 𝜆(𝑤) = 𝑀 ′ exists then New := New ∪ {𝑣′ } % comment: otherwise 𝑣′ is a duplicate fi endfor; New := New \ {𝑣} endwhile Since such a coverability tree may contain two or more different nodes with the same marking, it is defined in terms of a set 𝑉 of nodes 𝑣 which are labelled by a marking 𝑀 in (N 𝜔 ) 𝑆 , formalised by 𝜆(𝑣) = 𝑀. It is non-deterministic since, depending on how the choice in line 3 of the algorithm is resolved, and on the order of scanning transitions in line 4, different coverability trees may result (in particular, in case of duplicates, only the first one found may have descendants). However, they all share the same relevant properties as detailed below.

𝑡1 𝑠3

𝑁 2 𝑡2 4

𝑠1

𝑠2

𝑡3 4

3 3

𝑡4

𝑠4

Fig. 3.6: A Petri net 𝑁.

4

3.6 Coverability trees and coverability graphs

55

(3 0 0 0)

𝑣0

𝑡1 (1 1 1 0)

𝑣1

𝑡2 (0 4 1 0)

𝑣2

𝑡3

𝑡4 (3 0 𝜔 𝜔)

(1 1 𝜔 0)

(1 1 𝜔 𝜔)

𝑣7

𝑡3 𝑣11

(3 0 𝜔 𝜔)

(0 4 𝜔 0)

𝑡3

(3 0 𝜔 0)

𝑣9

𝑡4

𝑡2 𝑣8

𝑣5

𝑡2

𝑡1 𝑣6

𝑣3

𝑡1

1 1

𝑣4

1

(3 0 𝜔 0)

1

(3 0 𝜔 𝜔)

(0 4 𝜔 𝜔)

𝑣10

𝑡4 (3 0 𝜔 𝜔)

𝑣12

Fig. 3.7: A coverability tree of the net shown in Figure 3.6. Node 𝑣9 duplicates node 𝑣3 , and nodes 𝑣10 , 𝑣11 and 𝑣12 duplicate node 𝑣4 .

Example 3.16 An example of a coverability tree Figure 3.7 shows a coverability tree of the Petri net 𝑁 shown in Figure 3.6. The numbering of the nodes 𝑣0 , 𝑣1 , . . . indicates that the tree has been built in a breadth-first fashion, and the transitions have been processed in the sequence of increasing indices. The nodes drawn in double ellipses are duplicates of previously defined nodes (and the construction stops there). The values shown in red have been superseded by 𝜔. For instance, the marking reached at 𝑣4 after firing 𝑡 4 from node 𝑣2 is (3 0 1 1) T . This marking dominates the initial marking (3 0 0 0) T at the third and fourth coordinates, and the initial marking, of course, lies on a directed path from 𝑣0 to 𝑣4 . Hence two 𝜔 “values” are introduced at the corresponding coordinates of the label of 𝑣4 , replacing the two numbers 1. Something similar happened at nodes 𝑣3 (covering 𝑣0 ) and 𝑣10 (covering 𝑣3 ).

56

3 Reachability and Coverability

We get a coverability graph by collapsing equally labelled nodes of a coverability tree. Definition 3.17 Coverability graph Let 𝑁 = (𝑆, 𝑇, B, F, 𝑀0 ) be a Petri net. A coverability graph Cov(𝑁) of 𝑁 is defined as an arc-inscripted graph 𝐺 = (𝑉, 𝐸, 𝑇) (with 𝑉 being a set of generalised markings of 𝑁, and with arc inscriptions from the set 𝑇). It is constructed by collapsing all nodes of a coverability tree with the same (generalised) marking into a single node.a Alternatively, it may be constructed directly as in Algorithm 3.18. a

This is why 𝑉 can now be the identifed with its set of labels, making 𝜆 unnecessary.

Algorithm 3.18 Construction of a coverability graph 𝑉 := {𝑀0 }; 𝐸 := ∅; New := {𝑀0 }; while New ≠ ∅ do b ∈ New; choose some 𝑀 for all 𝑡 ∈ 𝑇 do 𝑡 b −→ if ( 𝑀 ) then 𝑡 e ∈ N𝑆𝜔 with 𝑀 b −→ e compute 𝑀 𝑀; if there is an (already computed) path b and there is some 𝑀 ∈ N𝑆𝜔 leading from 𝑀0 to 𝑀 e (𝑠′ ) 1 can be dealt with by induction and componentwise consideration. From an infinite sequence of vectors with 𝑛 components, we first choose an infinite subsequence that is (weakly) monotonically increasing with respect to components 1 to 𝑛 − 1. This can be done by induction hypothesis. From this subsequence, we then choose another subsequence which (weakly) increases with respect to the last (𝑛 th) component. This can be done as in the case that 𝑛 = 1. The resulting sub-subsequence is (weakly) monotonically increasing with respect to all 𝑛 components. 3.20 It may be observed that this result is also valid for vectors in (N 𝜔 ) 𝑛 . Later, Dickson’s lemma will also be used in a slightly different form: Corollary 3.21 Dickson’s corollary Let 𝑋 be a set of mutually incomparable vectors in N𝑛 (or (N 𝜔 ) 𝑛 ), i.e., for any two different 𝑥, 𝑦 ∈ 𝑋: neither 𝑥 ≤ 𝑦 nor 𝑦 ≤ 𝑥. Then 𝑋 is a finite set. Proof: By contradiction. Assume 𝑋 is infinite. Then there is an infinite sequence 𝑥1 , 𝑥2 , 𝑥3 , . . . with 𝑥𝑖 ∈ 𝑋. By Lemma 3.20, there are indices 𝑖1 , 𝑖2 such that 𝑥 𝑖1 ≤ 𝑥𝑖2 . 3.21 Another useful lemma is known as König’s lemma. Lemma 3.22 König’s lemma In an infinite tree in which every node has only finitely many direct successors, there exists an infinite path starting at the root node. Proof: Since the root node has only finitely many direct successors and the tree is infinite, one of these successors starts an infinite subtree. Choose this subtree and repeat the argument. The sequence of offsprings chosen in this way is infinite. 3.22 König’s lemma also has several variants. In a moment, we shall apply it in the domain of generalised markings. Proposition 3.23 Finiteness of the coverability trees and graphs Every Petri net has only finite coverability trees and graphs. Proof: By contradiction. Assume that a coverability tree of a Petri net 𝑁 = (𝑃, 𝑇, B, F, 𝑀0 ) is infinite. Since each node has at most |𝑇 | successors, König’s lemma implies that there exists an infinite path starting at 𝑀0 . The sequence of all states on such a path is an infinitely long sequence of generalised states. No state (label) may occur twice, since the second one will be detected as duplicate and has no successor. By (the variant on N 𝜔 of) Dickson’s lemma, we find a weakly monotonically (with regard to ≤) increasing subsequence. Because no node label occurs more than once, this subsequence is even strongly monotonically increasing. That is, for any two successive elements 𝑀 and 𝑀 ′ occurring in this order in the subsequence, we have

3.6 Coverability trees and coverability graphs

59

𝑀 < 𝑀 ′ . Because 𝑀 lies on a path from 𝑀0 to 𝑀 ′ , all entries in 𝑀 ′ that are larger than the corresponding entries in 𝑀 have been replaced by 𝜔, and this has been done for at least one of the entries. Hence in the chosen subsequence, there is always (between any two subsequent elements) at least one component in which some non-𝜔 entry is replaced by an 𝜔-entry. However, since there are only finitely many (namely exactly |𝑆|) entries, the (|𝑆|+1)’th element of the subsequence must, at the latest, be equal to (𝜔, . . . , 𝜔) T , and since there is no larger vector, the subsequence ends there, contradicting its infinite length. For a coverability graph, either occurring from the collapsing of a tree or from a direct application of Algorithm 3.18, we may apply almost the same proof to any spanning tree, again leading to finiteness 3.23 Now that we know that any coverability graph is finite, we may try to use it in order to solve the same problems (safeness etc.) as can be done using the reachability graph in Proposition 3.11. We will see that this can largely, but not completely, be achieved. First, we relate the generalised markings in a coverability graph to the markings in the reachability graph. More generally, we show how paths in the two graphs can be connected. In order to distinguish the two, we call sequences and markings in the considered reachability graph 𝜎 and 𝐿, respectively, and we call sequences and markings in the coverability graph, respectively, 𝜏 and 𝑀. 𝐿 0 is the start marking of the reachability graph and 𝑀0 is the start marking of the coverability graph, and 𝐿 0 = 𝑀0 by definition. All the 𝐿 will be non-generalised markings while the 𝑀 may or may not be generalised markings. Lemma 3.24 Firing sequences in 𝑅𝐺 (𝑁) and in Cov(𝑁) Let 𝑁 = (𝑆, 𝑇, B, F, 𝑀0 ) be a Petri net and 𝐿 0 = 𝑀0 . Let 𝑥1

𝑥2

𝑥𝑚

𝜎 = 𝐿 0 −→ 𝐿 1 −→ . . . −→ 𝐿 𝑚 be a firing sequence, i.e., a path in 𝑅𝐺 (𝑁). Then there is also a path 𝑥1

𝑥2

𝑥𝑚

𝜏 = 𝑀0 −→ 𝑀1 −→ . . . −→ 𝑀𝑚 in Cov(𝑁) satisfying 𝐿 𝑖 ≤ 𝑀𝑖 for all 0 ≤ 𝑖 ≤ 𝑚, and for each 𝑠 ∈ 𝑆, 𝑀𝑖 (𝑠) = 𝐿 𝑖 (𝑠) unless 𝑀𝑖 (𝑠) = 𝜔. Proof: From the definitions, and from monotonicity, it follows that every path in 𝑅𝐺 (𝑁) is also a path in Cov(𝑁). If we trace simultaneously the two paths in the reachability graph and in the coverability graph, then the state in the reachability graph is always less than or equal to the corresponding state in the coverability graph. It can be the case that some entries which are finite in the reachability graph have become infinite in the coverability graph, but no finite entry in the coverability graph differs from the corresponding entry in the reachability graph. 3.24

60

3 Reachability and Coverability

A converse relationship is not quite as easy to establish. Not every path in the coverability graph needs to be a path in the reachability graph, but the markings reached by it can be “over-approximated” in the following sense. Lemma 3.25 Markings in Cov(𝑁) and in 𝑅𝐺 (𝑁) e be a (possibly generalised) Let 𝑁 = (𝑆, 𝑇, B, F, 𝑀0 ) be a Petri net. Let 𝑀 marking in Cov(𝑁). Then for every number 𝑘 ∈ N there is a marking e 𝐿 in 𝑅𝐺 (𝑁) such that e e (𝑠)=𝜔 and e e (𝑠) for all 𝑠 with 𝑀 e (𝑠) 0. a

𝐴T is the transposed of 𝐴.

Proof: (Sketch). It is easy to see that (i) and (ii) cannot be true at the same time, because (i) ∧ (ii) entails 0 ≥ 𝑦 T · 𝐴T · 𝑥 > 0 but 0 > 0 is a contradiction. The first inequality comes from 𝑦 ≥ 0 (ii) and from 𝐴T ·𝑥 ≤ 0 (i), if the middle product is associated as 𝑦 T · ( 𝐴T ·𝑥). The second inequality comes from 𝑥 ≫ 0 (i) and 𝐴 · 𝑦 > 0 (ii), if the product is associated as (𝑦 T · 𝐴T ) · 𝑥. It is not so easy to see that (i) ∨ (ii) holds. 4.11 Just “for fun”, the next table shows three variations.4 If 𝐴 is some rational (or integer) matrix, then in every line of the following table, exactly one of the two systems of inequalities has a rational (integer, respectively) solution. 3 Apologies for sounding redundant in the last few lines. This just serves to emphasise “the message”. 4 Hold on. Variation 3 will be used later (in Section 6.2).

78

4 Linear-algebraic Structure of Petri Nets

Variation 1: 𝐴T · 𝑥 ≪ 0

𝐴 · 𝑦 = 0, 𝑦 ≥ 0, 𝑦 ≠ 0

Variation 2: 𝐴T · 𝑥 ≪ 0, 𝑥 ≥ 0

𝐴 · 𝑦 ≥ 0, 𝑦 ≥ 0, 𝑦 ≠ 0

(4.2)

Variation 3: 𝐴T · 𝑥 ≤ 0, 𝐴T · 𝑥 ≠ 0 𝐴 · 𝑦 = 0, 𝑦 ≫ 0

4.5 Structural boundedness and infinite executions This section contains two applications of the linear-algebraic machinery developed so far. Dickson’s lemma also plays a useful role. First, we give a linear-algebraic characterisation of structural boundedness. Then, we characterise the existence of infinite firing sequences. Proposition 4.12 Characterisation of structural boundedness Let 𝑁 be a net and let 𝐶 be the incidence matrix of 𝑁. The following statements are equivalent: (A) 𝑁 = (𝑆, 𝑇, 𝐹) is structurally bounded. (B) There exists a vector 𝑥 ∈ N |𝑆 | with 𝑥 ≫ 0 and 𝐶 T · 𝑥 ≤ 0. Proof: (A)⇒(B) can be shown by contraposition. Assume ¬(B), i.e., there is no vector 𝑥 as in (B). By Lemma 4.11, there is some vector 𝑦 ∈ N |𝑇 | with 𝐶 · 𝑦 > 0. Choose some marking 𝑀 which guarantees that a firing sequence 𝜏 with P (𝜏) = 𝑦 is firable from it, for instance the following: ∑︁ 𝑀 (𝑠) = (𝐹 (𝑠, 𝑡) · 𝑦(𝑡)), for 𝑠 ∈ 𝑆 𝑡 ∈𝑠 • 𝜏

Let 𝑀 ′ be defined by 𝑀 −→ 𝑀 ′ . The firing lemma yields 𝑀 ′ = 𝑀 + 𝐶 · P (𝜏) ( by Lemma 4.8 ) = 𝑀 +𝐶 · 𝑦 ( by P (𝜏)=𝑦 ) >𝑀 ( by 𝐶 · 𝑦 > 0 ) Furthermore, 𝑀 ′ > 𝑀 entails the existence of a place 𝑟 with 𝑀 ′ (𝑟) > 𝑀 (𝑟). Since 𝜏 can be fired arbitrarily often from 𝑀 ′ because of 𝑀 ′ > 𝑀 and the monotonicity of firing (Lemma 1.13), at least the place 𝑟 is unbounded. Hence ¬(A) holds. To show (B)⇒(A), choose 𝑥 such that property (B) is satisfied. Let 𝑀1 be an arbitrary 𝜏 marking of 𝑁 and let 𝑀1 −→ 𝑀2 with an arbitrary firing sequence 𝜏. Using (B) and P (𝜏) ≥ 0 one gets: 𝑥 T · 𝑀2 = 𝑥 T · (𝑀1 + 𝐶 · P (𝜏)) = 𝑥 T · 𝑀1 + 𝑥 T · (𝐶 · P (𝜏)) ≤ 𝑥 T · 𝑀1

4.5 Structural boundedness and infinite executions

79

where the first equality follows from the firing lemma and the last inequality from (B). For 𝑠 ∈ 𝑆, ∑︁ 𝑥(𝑠) 𝑀2 (𝑠) ≤ ( by 𝑥 ≫ 0 ) 𝑥(𝑟) 𝑀2 (𝑟) = 𝑥 T · 𝑀2 ≤ ( by the above ) 𝑥 T · 𝑀1 𝑟 ∈𝑆

Therefore, (𝑥 T · 𝑀1 )/𝑥(𝑠) is an upper bound for the number of tokens on an arbitrary place 𝑠 in 𝑀2 , depending neither on 𝑀2 nor on 𝜏. Therefore, place 𝑠 is bounded. Since the above is true for arbitrary 𝑀1 and for arbitrary 𝑠, Property (A) is satisfied. Note that this may be considered as a special case of Proposition 3.28. 4.12 The next result characterises the existence of an infinite firing sequence (i.e., in terms of the previous chapter, the existence of a weakly live transition) linear-algebraically. Proposition 4.13 Existence of an infinite firing sequence For an unmarked net 𝑁, there is some marking 𝑀0 such that an infinite firing sequence from 𝑀0 exists, if and only if the system of inequalities 𝐶 · 𝑦 ≥ 0, 𝑦 > 0 has a solution. 𝑡1

𝑡3

𝑡2

Proof: (⇒): Let 𝑀0 be a marking of 𝑁 with 𝑀0 −→ 𝑀1 −→ 𝑀2 −→

...

𝑡𝑖+1 ...𝑡 𝑗

By Lemma 3.20, there exist indices 𝑖 < 𝑗 with 𝑀𝑖 ≤ 𝑀 𝑗 and 𝑀𝑖 −→ 𝑀 𝑗 . The vector 𝑦 defined by 𝑦 = P (𝑡 𝑖+1 . . . 𝑡 𝑗 ) solves the system of inequalities given in the proposition, since: 𝑦 ≥ 0, because 𝑦 is a Parikh vector; 𝑦 ≠ 0, because 𝑖 < 𝑗; and 𝐶 · 𝑦 ≥ 0 by 𝑀 𝑗 = 𝑀𝑖 + 𝐶 · 𝑦 (firing lemma) and by 𝑀 𝑗 ≥ 𝑀𝑖 . (⇐): Let 𝑦 be a solution of the system of inequalities given in the lemma. As in the proof of Proposition 4.12, we can find a “sufficiently large” marking 𝑀 that activates 𝜎 a firing sequence 𝜎 with P (𝜎) = 𝑦. Let 𝑀 ′ be the marking defined by 𝑀 −→ 𝑀 ′ . By the firing lemma, 𝑀 ′ = 𝑀 + 𝐶·𝑦. By 𝐶 · 𝑦 ≥ 0, we have 𝑀 ′ ≥ 𝑀. Hence 𝜎 can be iterated indefinitely by the monotonicity of firing (Lemma 1.13), and 𝜎𝜎𝜎 . . . is firable from 𝑀. Moreover, 𝜎𝜎𝜎 . . . is an infinite sequence since 𝜎 ≠ 𝜀 because of 𝑦 ≠ 0. 4.13 The next result has a similar flavour, and a similar proof. Proposition 4.14 Well-formed nets are covered by positive T-vectors Let 𝑁 be a well-formed net. Then there is a T-vector 𝑦 ≫ 0 with 𝐶 · 𝑦 = 0.a a

Later, T-vectors 𝑦 ≫ 0 satisfying 𝐶 · 𝑦 = 0 will be called positive T-invariants.

Proof: Let 𝑀 be a live and bounded marking of 𝑁. By liveness, there is an infinite 𝜏 firing sequence 𝑀 −→ in which every transition occurs infinitely often. By boundedness, there is some marking 𝑀 ′ which occurs infinitely often during 𝜏. Thus, our infinite firing sequence 𝜏 is of the form 𝜏 = 𝜎𝜏1 𝜏2 𝜏3 . . ., as follows: 𝜎

𝜏1

𝜏2

𝜏3

𝑀 −→ 𝑀 ′ −→ 𝑀 ′ −→ 𝑀 ′ −→ 𝑀 ′ . . .

80

4 Linear-algebraic Structure of Petri Nets

Because every transition occurs infinitely often in 𝜏1 𝜏2 𝜏3 . . ., we can find indices 𝜏𝑖 ...𝜏 𝑗

𝑖 ≥ 1 and 𝑗 ≥ 𝑖 such that 𝑀 ′ −→ −−− 𝑀 ′ and all transitions occur in 𝜏𝑖 . . . 𝜏 𝑗 . Then 𝑦 = P (𝜏𝑖 . . . 𝜏 𝑗 ) satisfies the claim, i.e., 𝑦 ≫ 0 and 𝐶 · 𝑦 = 0. 4.14 However, unfortunately, the existence of a T-vector 𝑦 ≫ 0 satisfying 𝐶 · 𝑦 = 0 implies neither structural liveness, nor, a fortiori, well-formedness.

4.6 S-invariants and T-invariants In the previous section, linear algebra was used for two behavioural characterisations: structural boundedness in Proposition 4.12 and the existence of an infinite firing sequence in Proposition 4.13. The first characterisation involved the inequality 𝐶 T · 𝑥 ≤ 0 (multiplying the transposed of the incidence matrix 𝐶 with an S-vector 𝑥). The second characterisation involved the inequality 𝐶 · 𝑦 ≥ 0 (a multiplication between 𝐶 and a T-vector 𝑦). Before trying to generalise and widen these results, let us have a look at some strengthenings or special cases. The case that the above inequalities become actual equalities, 𝐶 T · 𝑥 = 0 and 𝐶 · 𝑦 = 0 (as in Proposition 4.14), is of particular importance. As we shall discover, structural boundedness then becomes a sort of token preservation property while the existence of an infinite firing sequence becomes a kind of repetition property. Definition 4.15 Place invariants and transition invariants A vector 𝑥 ∈ Z |𝑆 | is called S-invariant or place invariant, ifa 𝐶 T · 𝑥 = 0. The set {𝑠 ∈ 𝑆 | 𝑥(𝑠) ≠ 0} is called the support of 𝑥. A semipositive S-invariant 𝑥 is minimal if there is no S-invariant 𝑥 ′ with 0 < 𝑥 ′ < 𝑥. A vector 𝑦 ∈ Z |𝑇 | is called T-invariant or transition invariant, if 𝐶 · 𝑦 = 0. The set {𝑡 ∈ 𝑇 | 𝑦(𝑡) ≠ 0} is called the support of 𝑦. Minimality is defined in the same way as for S-invariants. a

This may be seen as a reformulation of invariance as defined in Definition 3.7.

Example 4.16 Some S- and T-invariants Consider Figure 4.8. 𝑥1 is a minimal semipositive S-invariant. 𝑥 2 is a nonsemipositive S-invariant. 𝑥3 is a semipositive S-invariant which arises from another semipositive S-invariant, namely (0, 0, 1, 1, 0, 0) T , by multiplication with 2; thus, it is not minimal. 𝑥3 is also the sum of 𝑥 1 and 𝑥 2 . 𝑦 1 and 𝑦 2 are minimal semipositive T-invariants which are (for typographic reasons) represented as row vectors. The following proposition is a direct consequence of the firing lemma.

4.6 S-invariants and T-invariants 𝑠3

𝑠1

𝑎

𝑏

𝑠2

81

𝑐

𝑠5

𝑑

𝑠4

𝐶 𝑎 𝑏 𝑐 𝑑 𝑒 𝑠1 1 −1 −1 0 0 𝑠2 −1 1 1 0 0 𝑠3 0 0 1 −1 0 𝑠4 0 0 −1 1 0 𝑠5 0 0 0 −1 1 𝑠6 0 0 0 1 −1

𝑒

𝑦1T 𝑦2T

𝑠6

1 1

1 0

0 1

0 1

𝑥1 1 1 0 0 0 0

𝑥2 −1 −1 2 2 0 0

𝑥3 0 0 2 2 0 0

0 1

Fig. 4.8: The example of Figure 4.4, with some S- and T-invariants.

Proposition 4.17 Basic properties of S- and T-invariants 𝜏 Let 𝑥 be an S-invariant of 𝑁 and let 𝑀1 , 𝑀2 be markings of 𝑁 with 𝑀1 −→ 𝑀2 , for some sequence 𝜏. Then 𝑥 T · 𝑀1 = 𝑥 T · 𝑀2 . 𝜏 Let 𝑀 be a marking of 𝑁 with 𝑀 −→ 𝑀 for some sequence 𝜏. Then P (𝜏) 𝜏 is a T-invariant of 𝑁. Conversely, if 𝑀 −→ and P (𝜏) is a T-invariant of 𝑁, 𝜏 then 𝑀 −→ 𝑀. 𝜏

Proof: For the first claim, suppose that 𝑀1 −→ 𝑀2 . The firing lemma yields 𝑀2 = 𝑀1 + 𝐶·P (𝜏). Then, 𝑥 T ·𝑀2 = 𝑥 T ·𝑀1 + 𝑥 T ·(𝐶·P (𝜏)) ( by 𝑀2 = 𝑀1 + 𝐶·P (𝜏) and distributivity ) = 𝑥 T ·𝑀1 + (𝑥 T ·𝐶)·P (𝜏) ( by associativity ) = 𝑥 T ·𝑀1 ( by 𝐶 T ·𝑥 = 0 ) 𝜏

For the first part of the second claim, 𝑀 −→ 𝑀 implies 𝑀 = 𝑀 + 𝐶·P (𝜏) by the firing lemma, hence 𝐶·P (𝜏) = 0. For the other part of the second claim, 𝜏 𝜏 𝑀 −→ means that 𝑀 −→ 𝑀 ′ for some marking 𝑀 ′ . The firing lemma yields ′ 𝑀 = 𝑀 + 𝐶·P (𝜏), and then 𝑀 ′ = 𝑀 because P (𝜏) is a T-invariant. 4.17 Intuitively, the first part of this proposition states that the 𝑥-weighted marking on any S-invariant 𝑥 is constant. This does not mean that the number of tokens remains constant; see Figure 4.9. In particular, if a net has a positive S-invariant, then it is necessarily structurally bounded. The second part of the proposition states that any reproduction sequence generates a T-invariant. In particular, any reproduction sequence containing every transition at least once, generates a positive T-invariant. A weak converse also holds true: if P (𝜏) is a T-invariant, then 𝜏 is a reproduction sequence of any marking from which 𝜏 can be fired. It can be reproduced ad infinitum (“eventually” leading to an infinite firing sequence) whenever it is enabled. The semipositive, the positive, and the minimal invariants will turn out to be of outstanding interest. For example, they are intimately related to the cycle-coveredness and / or the strong connectedness of a Petri net. This relationship is explored in the next (and last) section of this chapter.

82

4 Linear-algebraic Structure of Petri Nets 𝑠1

𝑡1

𝑡2 2

2

𝐶 𝑡1 𝑡2 𝑥 𝑠1 −1 1 2 𝑠2 2 −2 1 𝑦T

1

1

𝑠2

Fig. 4.9: The number of tokens changes: after firing 𝑡 1 , two tokens are in the net while after firing 𝑡1 𝑡 2 , there is (again) only one. But the weighted sum of tokens 𝑥(𝑠1 )·𝑀 (𝑠1 ) + 𝑥(𝑠2 )·𝑀 (𝑠2 ), corresponding to the S-invariant 𝑥, is constant throughout the set of reachable markings 𝑀.

4.7 Positive S-invariants and T-invariants The remark after Proposition 4.17 indicates that positive S-invariants, that is, ones which assigns a positive number to every place, are particularly nice to have. Their existence guarantees structural boundedness. A positive S-invariant is said to cover the net. Similarly, a positive T-invariant is said to cover a net. For instance, the net shown in Figure 4.9 is covered by the positive S-invariant 𝑥 = (2 1) T and by the positive T-invariant 𝑦 = (1 1) T . As these properties occur frequently, often in connection with well-formedness, we abbreviate them as follows: (PS): The net under consideration is covered by a positive S-invariant. (PT): The net under consideration is covered by a positive T-invariant. (WF): The net under consideration is well-formed, i.e., it has a live and bounded marking. By Proposition 4.14, (WF) is stronger than (PT). However, (WF) is not stronger than (PS). Nevertheless, (WF) implies the cycle-coveredness of a net. Proposition 4.18 Cycle-coveredness of well-formed nets Let 𝑁 be a well-formed Petri net. Then 𝑁 is covered by cycles. Proof: (Sketch.) Let (𝑎, 𝑏) ∈ ((𝑆 × 𝑇) ∪ (𝑇 × 𝑆)) be some arc in 𝑁, i.e., 𝐹 (𝑎, 𝑏) ≥ 1. Case 1: (𝑎, 𝑏) ∈ 𝑆 × 𝑇. If there was no path from 𝑏 to 𝑎, then no transition following 𝑏 may help to put tokens in 𝑎 and they may be dropped without affecting the liveness of 𝑏, and any token absorbed by 𝑏 may not be reinjected in 𝑎. That means that the liveness of 𝑏 could only be guaranteed if arbitrarily many tokens could be assembled on 𝑎, contradicting boundedness. Case 2: (𝑎, 𝑏) ∈ 𝑇 × 𝑆. If there was no path from 𝑏 to 𝑎, then liveness would allow the part of the net which does not depend on 𝑏 (in particular, 𝑎) to fire sufficiently many times in order to put arbitrarily many tokens on 𝑏, again prompting a contradiction. Hence there is a directed path from 𝑏 to 𝑎. 4.18

4.7 Positive S-invariants and T-invariants

83

Corollary 4.19 Strong connectedness of well-formed nets Let 𝑁 be a weakly connected, well-formed Petri net. Then 𝑁 is strongly connected. Proof: This follows from Proposition 4.18 and the fact that every cycle-covered, weakly connected, graph is strongly connected. 4.19 Together, (PS) and (PT) yield another sufficient conditon for cycle-coveredness. Proposition 4.20 Cycle-coveredness of nets satisfying (PS) and (PT) Let 𝑁 be a Petri net containing a positive S-invariant and a positive Tinvariant. Then 𝑁 is covered by cycles. Proof: Let 𝑁 = (𝑆, 𝑇, 𝐹), let 𝑥 ≫ 0 be an S-vector such that 𝑥 T ·𝐶 = 0, and let 𝑦 ≫ 0 be a T-vector such that 𝐶·𝑦 = 0. Let (𝑎, 𝑏) ∈ ((𝑆 × 𝑇) ∪ (𝑇 × 𝑆)) be some arc in 𝑁, i.e., 𝐹 (𝑎, 𝑏) ≥ 1. We prove that there is a directed path from 𝑏 to 𝑎. Case 1: (𝑎, 𝑏) = (𝑠, 𝑡) ∈ (𝑆 × 𝑇). Define 𝑧 : 𝑇 → Z to be the following T-vector:  𝑦(𝑢) if there is a directed path from 𝑡 to 𝑢 (including the case 𝑡=𝑢) 𝑧(𝑢) = 0 otherwise We show that 𝑧 satisfies ∑︁ 𝑧(𝑢) ≤ 𝑢∈ • 𝑟

∑︁

𝑧(𝑢),

for every place 𝑟 ∈ 𝑆

(4.3)

𝑢∈𝑟 •

To see this, let 𝑟 be an arbitrary place. If every transition 𝑢 ∈ • 𝑟 satisfies 𝑧(𝑢) = 0, then Í 𝑢∈ • 𝑟 𝑧(𝑢) = 0, which entails (4.3), because all entries of 𝑧 are ≥ 0, by definition. Otherwise, if there is some transition 𝑢 ′ ∈ • 𝑟 with 𝑧(𝑢 ′ ) > 0, then, by definition of 𝑧, there exists a directed path from 𝑡 to 𝑢 ′ . Thus, there is a directed path from 𝑡 to 𝑣, for every 𝑣 ∈ 𝑟 • . Hence 𝑧(𝑣) = 𝑦(𝑣) > 0 for all 𝑣 ∈ 𝑟 • . Consequently, Í ′ • ′ 0 < 𝑢∈ • 𝑟 𝑧(𝑢) ( since ∃𝑢 ∈ 𝑟 : 𝑧(𝑢 ) > 0, and by 𝑧 ≥ 0 ) Í ≤ 𝑢∈ • 𝑟 𝑦(𝑢) ( since the definition of 𝑧 and 𝑦 ≫ 0 implies 𝑧 ≤ 𝑦 ) Í = 𝑢∈𝑟 • 𝑦(𝑢) ( since 𝑦 is a T-invariant ) Í • = 𝑢∈𝑟 • 𝑧(𝑢). ( since 𝑧(𝑣) = 𝑦(𝑣) > 0 for all 𝑣 ∈ 𝑟 ) Again, we get (4.3). In the place vector 𝐶 · 𝑧, the entry for a place 𝑟 is given by ∑︁ ∑︁ 𝑧(𝑢) − 𝑧(𝑢) 𝑢∈ • 𝑟

𝑢∈𝑟 •

84

4 Linear-algebraic Structure of Petri Nets

Hence, by (4.3), 𝐶 · 𝑧 has no positive entry. Since 𝑥 is an S-invariant, 𝑥T · 𝐶 · 𝑧 = 0 and since 𝑥 has only positive components, 𝐶 · 𝑧 does not have negative entries. Hence 𝐶 · 𝑧 = 0, which implies that 𝑧 is a T-invariant. In particular, for 𝑠, we get: Í Í 𝑢∈ • 𝑠 𝑧(𝑢) = 𝑢∈𝑠 • 𝑧(𝑢) ( since 𝑧 is a T-invariant ) ≥ 𝑧(𝑡)

( because of 𝑡 ∈ 𝑠• )

= 𝑦(𝑡)

( by the definition of 𝑧 )

> 0

( since 𝑦 is a positive T-invariant )

Thus, there exists a transition 𝑢 ∈ • 𝑠 satisfying 𝑧(𝑢) > 0. By the definition of 𝑧, there is a directed path from 𝑡 to 𝑢, hence also a path from 𝑡 to 𝑠. Case 2: (𝑎, 𝑏) = (𝑡, 𝑠) ∈ (𝑇 × 𝑆). We consider the dual net 𝑁 𝑑 = (𝑇, 𝑆, 𝐹), in which transitions and places are swapped, but edges are not reversed. The incidence matrix of 𝑁 𝑑 is −𝐶 T . Hence, in 𝑁 𝑑 , 𝑥 is a positive T-invariant, and 𝑦 is a positive S-invariant. Moreover, in 𝑁 𝑑 , 𝑡 is a place, and 𝑠 is a transition. We apply Case 1, getting a directed path from (transition) 𝑠 to (place) 𝑡 in 𝑁 𝑑 . This path is also present in 𝑁 and leads from place 𝑠 to transition 𝑡. 4.20 Both premises (PS) and (PT) are necessary. Consider the Petri nets 𝑁1 = ({𝑠1 , 𝑠2 }, {𝑡}, {(𝑠1 , 𝑡), (𝑡, 𝑠2 )}) 𝑁2 = ({𝑠}, {𝑡1 , 𝑡2 }, {(𝑡 1 , 𝑠), (𝑠, 𝑡2 )}) 𝑁1 satisfies only (PS) and is not covered by cycles, while 𝑁2 satisfies only (PT) and is not covered by cycles. Corollary 4.21 Strong connectedness of nets satisfying (PS) and (PT) Let 𝑁 be a weakly connected Petri net containing a positive S-invariant and a positive T-invariant. Then 𝑁 is strongly connected. Proof: This follows from Proposition 4.20 and the fact that every cycle-covered, weakly connected, graph is strongly connected. 4.21

4.8 Bibliographical remarks and further reading By Proposition 4.12, checking whether a Petri net is structurally bounded can be viewed as an integer linear programming problem. Structural liveness is far more difficult and challenging. It has been located in the algorithmic complexity grid only

4.9 Exercises

85

fairly recently as being decidable [Jan17] (Petr Jancar) and EXPSPACE-hard [JP19] (Petr Jancar, David Purser). The use of the incidence matrix, of S- and T-invariants, and of Dickson’s and Farkas’ lemma, date back to early work in [Com+71] (Fred G. Commoner, Anatol W. Holt, Shmuel Even, Amir Pnueli), [GL73] (Hartmann J. Genrich, Kurt Lautenbach), and [KM69] (R.M. Karp, R.E. Miller). The linear-algebraic approach has been standard in Petri net theory for some time [GB+82] (G.W. Brams), [MR79] (Gérard Memmi, Gérard Roucairol), [Pet77] (James L. Peterson), and [Rei85] (Wolfgang Reisig). In French and Spanish parlance, semi-positive invariants are often called semiflows. Nets satisfying (PT) are called consistent, and nets satisfying (PS) are called conservative. Farkas’ lemma5 and its variations go back to Paul A. Gordan [Gor73], Julius Farkas [Far02], Jean Ville [Vil38], and others.6 For a proof of ((i) ∨ (ii)) of Lemma 4.11, see, e.g., [Sch99] by Alexander Schrijver. Often in the literature, “Dickson’s lemma” denotes a statement which may be more general or slightly different from the one(s) described in this chapter. Innocent as it might seem, this lemma can be viewed as a very restricted special case of one of the most famous new results in graph theory, the graph minor theorem; see, for instance, [Die12] (Reinhard Diestel).

4.9 Exercises Exercise 4.1 Realisability of T-vectors Let 𝐽1 and 𝐽2 be two nonnegative T-invariants of a Petri net system (𝑁, 𝑀0 ). Assume that the T-invariant 𝐽1 + 𝐽2 is realisable from 𝑀0 , that is, there is a firing sequence 𝜎 𝑀0 −→ such that its Parikh vector satisfies P (𝜎) = 𝐽1 + 𝐽2 . Show that 𝐽1 and 𝐽2 are not necessarily realisable from 𝑀0 . You get extra points for finding an example in which neither 𝐽1 nor 𝐽2 is realisable (but 𝐽1 + 𝐽2 is). Exercise 4.2 Well-formedness Construct a Petri net which is well-formed but not structurally bounded. Exercise 4.3 (No) Positive S-invariant Construct a Petri net 𝑁 with initial marking 𝑀0 , so that 𝑀0 is live and safe (1bounded) and such that 𝑁 has no positive S-invariant. Solution 4.1 (Exercise 4.1) See Figure 4.10. 5 See also https://en.wikipedia.org/wiki/Farkas_lemma. 6 See http://wwwbiz.meijo-u.ac.jp/SEBM/ronso/no6_4/ozaki.pdf and http://www. m-hikari.com/ams/ams-2017/ams-41-44-2017/p/perngAMS41-44-2017.pdf. Plenty of variations of Farkas’ lemma are described in these papers.

86

4 Linear-algebraic Structure of Petri Nets

𝑡3

𝑡2

𝑡1

𝑡4

Fig. 4.10: Solution of Exercise 4.1. Let 𝐽1 = (1 1 0 0) 𝑇 and 𝐽2 = (0 0 1 1) 𝑇 . 𝐽1 + 𝐽2 = (1 1 1 1) 𝑇 is realisable by 𝑡 1 𝑡 3 𝑡 2 𝑡 4 , but 𝐽1 is not realisable since neither 𝑡 1 𝑡 2 nor 𝑡2 𝑡1 are firable, and analogously, 𝐽2 is not realisable since neither 𝑡3 𝑡4 nor 𝑡 4 𝑡 3 are firable. Solution 4.2 (Exercise 4.2) See Figure 4.11. In fact, this net is live, except for the initial markings (0 0 0) T , (0 1 0) T , (0 1 1) T , (0 0 1) T , and (0 0 2) T . The net is unbounded for initial markings 𝑀0 satisfying 6·𝑀0 (𝑠1 ) + 3·𝑀0 (𝑠2 ) + 2·𝑀0 (𝑠3 ) > 7. Otherwise, it is bounded.

𝑡1

2

𝑠2 2

𝑠1

𝑡3

𝑡2 2 𝑡4

3

2 𝑠3

Fig. 4.11: Solution of Exercise 4.2. This net is well-formed since (1 0 0) T is a live and bounded marking (from (1 0 0) T , only (𝑡1 𝑡 2 𝑡 3 𝑡 4 ) ∗ can be fired, and we have 𝑡1 𝑡2 𝑡3 𝑡4 (1 0 0) T −−−−−→ −−− (1 0 0) 𝑇 ). This system is 3-bounded and live. On the other hand, T 𝑡1 𝑡1 𝑡2 𝑡2 𝑡2 𝑡3 𝑡4 𝑡4 (2 0 0) −−−−−→ −−− (2 0 1) T , and thus, the system with initial marking (2 0 0) T is not bounded.

Solution 4.3 (Exercise 4.3) See Figure 4.12. From the initial marking 𝑀0 = (1 1 0 1 0 0) T , both 𝑀0 𝑡5 𝑡6 𝑡7 𝑡8 𝑡2 𝑡3 𝑡4

𝑡1 𝑡2 𝑡3 𝑡4

−→ −−−

𝑀0 and 𝑀0 −−−−−→ −−− 𝑀0 are firable. The intermediate markings enable exactly one transition. As these two firing sequences contain all transitions and since in 𝑀0 , only 𝑡1 and 𝑡 5 are firable, the net is live. The reachable markings are 𝑀0 , 𝑀1 = (1 1 1 0 0 0) T , 𝑀2 = (0 1 1 1 0 0) T , 𝑀3 = (1 0 1 1 0 0) T , 𝑀4 = (1 0 0 0 1 0) T , 𝑀5 = (0 0 0 0 1 1) T and 𝑀6 = (0 1 0 0 0 1) T . Hence the net is safe. We now check that there is no positive S-invariant.

4.9 Exercises

87

Assume that 𝐼 is a positive S-invariant. By 𝐼 T · 𝑀0 = 𝐼 T · 𝑀1 = 𝐼 T · 𝑀2 = 𝐼 T · 𝑀3 , necessarily 𝐼 (𝑠1 ) = 𝐼 (𝑠2 ) = 𝐼 (𝑠3 ) = 𝐼 (𝑠4 ). 𝐼 T · 𝑀4 = 𝐼 T · 𝑀5 entails 𝐼 (𝑠1 ) = 𝐼 (𝑠6 ), and 𝐼 T · 𝑀5 = 𝐼 T · 𝑀6 entails 𝐼 (𝑠2 ) = 𝐼 (𝑠5 ). Hence all entries in 𝐼 are equal. But then, 𝐼 T · 𝑀0 = 3 · 𝐼 (𝑠1 ) ≠ 2 · 𝐼 (𝑠1 ) = 𝐼 T · 𝑀6 . This is a contradiction; hence no positive S-invariant exists.

𝑡5 𝑠1 𝑠4

𝑡4

𝑡3

𝑠3

𝑡6

𝑠5

𝑡7

𝑠6

𝑡1

𝑡2 𝑠2 𝑡8

Fig. 4.12: A solution of Exercise 4.3.

Chapter 5

Graph-theoretical Structure of Petri Nets

Abstract By imposing restrictions on the bipartite graphs that may occur as Petri nets, it is possible to distinguish certain classes of nets whose behavioural analysis is more easily possible than for the full class of nets. In this chapter, the focus is on three such classes: S-systems, T-systems, and FC- (free-choice) systems.

Contents 5.1 5.2 5.3 5.4 5.5 5.6 5.7 5.8 5.9 5.10 5.11

Introduction . . . . . . . . . . . . . . . . . . . . . Loops, weights, and border elements . . . . . . . . S-nets and T-nets . . . . . . . . . . . . . . . . . . Free-choice nets . . . . . . . . . . . . . . . . . . A liveness criterion for FC-systems . . . . . . . . Liveness criteria for S-systems and T-systems . . . Boundedness criteria for live FC- (S-/T-) systems . An algorithm for the construction of T-components An algorithm for the construction of S-components Bibliographical remarks and further reading . . . . Exercises . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . .

. . . . . . . . . . .

. . . . . . . . . . .

. . . . . . . . . . .

. . . . . . . . . . .

. . . . . . . . . . .

. . . . . . . . . . .

. . . . . . . . . . .

. . . . . . . . . . .

89 90 92 96 98 108 110 117 121 122 123

5.1 Introduction In the previous section, we have defined two structural counterparts of the boundedness and the liveness of a Petri net, and we have succeeded in characterising one of them (structural boundedness) by linear-algebraic means. As for the other property, structural liveness, only a small aspect (the existence of a weakly live transition) was characterised linear-algebraically. Indeed, the liveness problem has long been © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 E. Best and R. Devillers, Petri Net Primer, Computer Science Foundations and Applied Logic, https://doi.org/10.1007/978-3-031-48278-6_5


intriguing, both because of its difficulty (in particular, it is not monotonic: it may happen that a live net loses its liveness if we add tokens) and because of its practical relevance. If a general solution seems out of reach, what is usually done in mathematics is to look at some special cases in order to get a first grip on the problem. This is what happened to the liveness problem of Petri nets. By the fact that Petri nets are graphs, it is possible to delineate structural classes of nets that are easier to analyse than Petri nets in general. In this chapter, we are concerned with three such classes. For each class, we strive to find – preferably structural, that is, relating to the graph-theoretical structure of a net – criteria allowing the following two types of results: • A net is live iff ⟨liveness criterion⟩. • A live net is bounded iff ⟨boundedness criterion⟩. Of the three Petri net classes presented in this chapter, two are subclasses of the third one. We shall proceed “top down”, meaning that the liveness and boundedness criteria will first be given for the larger class, and then, the criteria for the two smaller classes will be derived as special cases.1

5.2 Loops, weights, and border elements

While linear algebra has no problems at all with the arc weights of a Petri net (only with its loops), graph theory does, in a sense. The true multigraph structure of a net is embodied by arc weights greater than 1. Such arc weights fit smoothly into an incidence matrix, but have to be taken into account very carefully if, for instance, the sets •𝑥 and 𝑥• are involved in defining net classes. This is because only the existence of an arc, rather than its weight, is represented by these sets. Since we are interested in true subclasses of nets, we shall, for the moment, rule out arc weights > 1.

Provision 5.1 Plainness In this chapter, all Petri nets considered, either in definitions or in results or in examples, will be required to be plain.

Actually, there is a construction transforming an arc-weighted Petri net into an "equivalent" (whatever this means, exactly) plain Petri net. This construction is shown for arc weight 2 in Figure 5.1. This construction should be applied with care. For instance, in a labelled net context, what should be the labels of the new transitions 𝑎 and 𝑏? Also, there is an asymmetry with respect to arcs from transitions to places and arcs from places to transitions. In the latter case, it has to be avoided that one of the additional transitions 𝑎, 𝑏 is preferred over the other one. This could lead to a deadlock where the original 2-weighted arc would not. The situation is not unlike the "killing sequence" in Figure 4.2.

¹ Definitions will be given the other way round: first the smaller classes, then the larger one.



Fig. 5.1: A construction turning a non-plain Petri net into a plain net. The additional marked cycle around 𝑎 and 𝑏 on the second line is necessary in order to avoid undesired deadlocks.

In a choice, the wrong alternative can be chosen. The absence of undesirable choices is enforced by a small, 1-token circuit that usually goes by the name of "regulation circuit". With Provision 5.1 in force, there is no need to worry about transforming arc weights greater than 1. Incidentally, there is also a transformation which allows one to get rid of side loops. It is exemplified in Figure 5.2. Similar words of caution apply.


Fig. 5.2: A construction turning a Petri net with a loop into a loop-free Petri net. Transition 𝑡 is split into a beginning 𝑡1 and an end 𝑡 2 , with |𝑡1• ∩ • 𝑡2 | = 1. Let us first consider some special configurations. An isolated place 𝑠 (i.e., such that • 𝑠 = ∅ = 𝑠• ) neither loses any of the tokens it has initially, nor does it gain any new tokens. Therefore, such a place affects neither liveness nor boundedness, and we might as well (and will, for the time being) exclude such places. More generally, if 𝑠 is only connected to the rest of the net through loops (∀𝑡 ∈ 𝑇 : 𝑡 ∈ • 𝑠 ∨ 𝑡 ∈ 𝑠• ⇒ 𝑡 ∈ • 𝑠 ∧ 𝑡 ∈ 𝑠• ), the marking of 𝑠 never changes. If 𝑠 is initially marked, it has no impact on the rest of the net and we may drop it. If it is initially empty, all the transitions connected to it are dead and we may drop 𝑠 as well as its connected transitions. An isolated transition 𝑡 (i.e., such that • 𝑡 = ∅ = 𝑡 • ) can occur indefinitely often at any time (we have a kind of shuffle as in Chapter 2), neither needing any input tokens nor producing any output tokens. As such, it also has no impact on (strong or weak) liveness or on boundedness and we will exclude isolated transitions from consideration as well. If 𝑡 is only connected to the rest of the net through loops (∀𝑠 ∈ 𝑆 : 𝑠 ∈ • 𝑡 ∨ 𝑠 ∈


𝑡 • ⇒ 𝑠 ∈ • 𝑡 ∧ 𝑠 ∈ 𝑡 • ), the situation is slightly more subtle. Firing 𝑡 never changes the marking and does not impact the rest of the net. On the contrary, the rest of the net may impact the behaviour of 𝑡. If 𝑡 is disabled at some point, it will remain so until (if ever) all the places connected to 𝑡 have at least one token; then 𝑡 may occur indefinitely until (if ever) another transition empties one or more of the places connected to 𝑡, in which case the situation is resumed. Hence, we may analyse the system without 𝑡 and later wonder when exactly its connected places are marked simultaneously. We shall thus assume the following: Provision 5.2 Isolated nodes In this chapter, the Petri nets considered are assumed to have no nodes connected only through loops to the rest of the net (and to have both transitions and places). Next, just in order to illustrate the possible interplay between liveness and boundedness, consider places and transitions with mixed empty and non-empty pre- or postsets. A place 𝑠 with • 𝑠 ≠ ∅ = 𝑠• destroys either liveness or boundedness, because if the transitions in • 𝑠 are live, then 𝑠 is most certainly not bounded. A place 𝑠 with • 𝑠 = ∅ ≠ 𝑠• destroys liveness, because the transitions in 𝑠• can fire at most as many times as there are tokens on 𝑠 initially. A transition 𝑡 with • 𝑡 ≠ ∅ = 𝑡 • destroys either liveness or boundedness, because it is not live, unless unboundedly many tokens can be assembled on the places in • 𝑡. A transition 𝑡 with • 𝑡 = ∅ ≠ 𝑡 • destroys boundedness, because it can fire indefinitely often in isolation, putting unboundedly many tokens on every place in 𝑡 • .2 Observe, however, that a net may be free of border places or transitions, even if it is weakly but not strongly connected.
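As an illustration, the checks behind Provisions 5.1 and 5.2 and the border-element discussion are purely structural and easy to mechanise. The following Python sketch assumes a plain net given by the pre- and post-sets of its transitions (a hypothetical representation chosen only for these examples); it detects nodes that are isolated or connected only through loops, as well as border places and transitions.

```python
# Sketch: structural checks from Section 5.2 (hypothetical representation).
# A plain net is given by pre[t] and post[t] (sets of places) for every transition t.

def place_sets(pre, post, places):
    """Derive the pre-set and post-set of every place from the transition arcs."""
    pre_p = {s: set() for s in places}   # pre_p[s]  = the set of transitions producing into s
    post_p = {s: set() for s in places}  # post_p[s] = the set of transitions consuming from s
    for t in pre:
        for s in post[t]:
            pre_p[s].add(t)
        for s in pre[t]:
            post_p[s].add(t)
    return pre_p, post_p

def loop_only_nodes(pre, post, places):
    """Nodes connected to the rest of the net only through loops (including isolated
    nodes); by Provision 5.2 they are excluded from consideration."""
    pre_p, post_p = place_sets(pre, post, places)
    bad_places = {s for s in places if pre_p[s] == post_p[s]}
    bad_transitions = {t for t in pre if pre[t] == post[t]}
    return bad_places, bad_transitions

def border_elements(pre, post, places):
    """Places/transitions with exactly one of the pre-set, post-set empty; each of
    them destroys liveness or boundedness, as discussed above."""
    pre_p, post_p = place_sets(pre, post, places)
    places_bad = {s for s in places if (len(pre_p[s]) == 0) != (len(post_p[s]) == 0)}
    trans_bad = {t for t in pre if (len(pre[t]) == 0) != (len(post[t]) == 0)}
    return places_bad, trans_bad

# Hypothetical three-place example: t1 moves a token from s1 to s2; s3 is isolated.
pre = {"t1": {"s1"}}
post = {"t1": {"s2"}}
places = {"s1", "s2", "s3"}
print(loop_only_nodes(pre, post, places))   # s3 is isolated, no loop-only transitions
print(border_elements(pre, post, places))   # s1 and s2 are border places
```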

5.3 S-nets and T-nets S-nets forbid synchronisation and splitting as shown on the top of Figure 5.3. T-nets prevent merging and branching as shown on the bottom of Figure 5.3. Definition 5.3 S-nets and T-nets, S-systems and T-systems A plain net 𝑁 = (𝑆, 𝑇, 𝐹) is called • an S-net if ∀𝑡 ∈ 𝑇 : | • 𝑡| ≤ 1 ≥ |𝑡 • |; • a T-net if ∀𝑠 ∈ 𝑆 : | • 𝑠| ≤ 1 ≥ |𝑠• |. A plain marked net 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0 ) is called • an S-system if (𝑆, 𝑇, 𝐹) is an S-net; • a T-system if (𝑆, 𝑇, 𝐹) is a T-net.

2 These observations are, in fact, special cases of Proposition 4.18 and Corollary 4.19 of the previous chapter.


Fig. 5.3: Forbidden structures: synchronisation (join) and splitting (fork), forbidden in S-nets (above); backward conflict (merge) and conflict (branch), forbidden in T-nets (below).
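The defining conditions of Definition 5.3 can be checked mechanically from the pre- and post-sets. The following Python sketch does so; the representation (dictionaries of sets) is a hypothetical choice made only for illustration, and the example net is the one read off the incidence matrix of Figure 5.4 below.

```python
# Sketch: checking the S-net and T-net conditions of Definition 5.3.

def is_s_net(pre, post):
    """S-net: every transition has at most one input and at most one output place."""
    return all(len(pre[t]) <= 1 and len(post[t]) <= 1 for t in pre)

def is_t_net(pre, post, places):
    """T-net: every place has at most one input and at most one output transition."""
    n_pre = {s: 0 for s in places}   # |pre-set of s|
    n_post = {s: 0 for s in places}  # |post-set of s|
    for t in pre:
        for s in post[t]:
            n_pre[s] += 1
        for s in pre[t]:
            n_post[s] += 1
    return all(n_pre[s] <= 1 and n_post[s] <= 1 for s in places)

# The net of Figure 5.4: a cycle s1 -> t1 -> s2 -> t2 -> s1, plus s -> t1 and t2 -> s'.
pre = {"t1": {"s1", "s"}, "t2": {"s2"}}
post = {"t1": {"s2"}, "t2": {"s1", "s'"}}
places = {"s1", "s2", "s", "s'"}
print(is_t_net(pre, post, places))  # True: each place has at most one input/output transition
print(is_s_net(pre, post))          # False: t1 has two input places
```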

Since we are already equipped with linear algebraic machinery, we may check what this definition means in terms of the incidence matrix. Remark 5.4 Definition 5.3 in terms of the incidence matrix of a net First of all, no incidence matrix has a row or a column that consists only of zeroes. This is a consequence (even a reformulation) of Provision 5.2. Next, all incidence matrices have entries in the set {−1, 0, 1}. This corresponds precisely to Provision 5.1. In addition, Definition 5.3 requires that in a T-net’s incidence matrix, all rows have either a single +1, or a single −1, or a pair of −1, +1 entries. Similarly, the incidence matrix of an S-net may have only columns with a single +1, a single −1 or a pair −1, +1. Remark 5.4 has consequences for the S- and T-invariants of such nets. A minimal semipositive S-invariant of a T-net is supported by a set of places spanning either a simple cycle, or a simple path starting with a place 𝑠 satisfying • 𝑠 = ∅ and ending with a place 𝑠′ satisfying 𝑠′ • = ∅. A minimal semipositive T-invariant of an S-net is supported by a set of transitions spanning either a simple cycle, or a simple path starting with a transition 𝑡 satisfying • 𝑡 = ∅ and ending with a transition 𝑡 ′ satisfying 𝑡 ′ • = ∅. Figure 5.4 gives an example.3 Remembering the remarks on border elements in Section 5.2, no T-net with an S-invariant that is not a cycle can be structurally live. Similarly, no S-net with a T-invariant which is not a cycle can be structurally bounded. This leads directly to the following lemma: Lemma 5.5 Structurally live, and well-formed, T-nets A structurally live T-net 𝑁 = (𝑆, 𝑇, 𝐹) satisfies ∀𝑠 ∈ 𝑆 : • 𝑠 ≠ ∅, and additionally, ∀𝑠 ∈ 𝑆 : 𝑠• ≠ ∅ if it is well-formed. 3 It is arbitrary where the cycle 𝑠1 𝑡1 𝑠2 𝑡2 𝑠1 starts and ends in this net. It is customary to say that, for example, 𝑡1 𝑠2 𝑡2 𝑠1 𝑡1 , or 𝑠2 𝑡2 𝑠1 𝑡1 𝑠2 , is really the same cycle.

𝐶     𝑡1    𝑡2    𝑥1    𝑥2
𝑠1    −1     1     1     0
𝑠2     1    −1     1     1
𝑠     −1     0     0     1
𝑠′     0     1     0     1

Fig. 5.4: Minimal semipositive S-invariants 𝑥1 and 𝑥2 in a T-net. The support of 𝑥 1 is the set {𝑠1 , 𝑠2 }, which spans the simple cycle 𝑠1 𝑡 1 𝑠2 𝑡 2 𝑠1 . The support of 𝑥2 is the set {𝑠, 𝑠2 , 𝑠′ }, which spans the simple path 𝑠 𝑡1 𝑠2 𝑡2 𝑠′ . Proof: The existence of a live marking entails ∀𝑠 ∈ 𝑆 : • 𝑠 ≠ ∅. The existence of a live and bounded marking entails ∀𝑠 ∈ 𝑆 : 𝑠• ≠ ∅. 5.5 Lemma 5.6 Structurally bounded, and well-formed, S-nets A structurally bounded S-net 𝑁 = (𝑆, 𝑇, 𝐹) satisfies ∀𝑡 ∈ 𝑇 : • 𝑡 ≠ ∅, and additionally, ∀𝑡 ∈ 𝑇 : 𝑡 • ≠ ∅ if it is well-formed. Proof: By contraposition. If • 𝑡 0 = ∅ for some 𝑡 0 ∈ 𝑇, all markings are unbounded because 𝑡0• ≠ ∅ by Provision 5.2. This proves the first part of the claim. If 𝑁 is wellformed, then ∀𝑡 ∈ 𝑇 : • 𝑡 ≠ ∅ because a bounded marking exists. If • 𝑡 0 ≠ ∅ ∧ 𝑡 0• = ∅, then no marking can be both live and bounded, since the number of tokens on an S-system never increases except through transitions with • 𝑡 = ∅, and the liveness of 𝑡0 leads to a token decrease. This proves the second part of the claim. 5.6 These results imply neither that a live T-system has to be covered by cycles, or even be strongly connected, nor that a bounded S-system has to be covered by cycles. However, as we shall discover later on, this changes when we consider well-formed S- or T-systems. In general, out of the cycle-covered Petri nets, the strongly connected ones are of primary interest. The others are just disjoint unions of strongly connected components which can normally be analysed separately. As far as boundedness and liveness are concerned, S-systems are not hard to analyse while T-systems are not quite as straightforward. Lemma 5.5 indicates that the following class of T-nets is particularly interesting. Definition 5.7 Marked graphs A plain net 𝑁 = (𝑆, 𝑇, 𝐹) with initial marking 𝑀0 is called a marked graph if ∀𝑠 ∈ 𝑆 : | • 𝑠| = 1 = |𝑠• |.a a

The name "marked graph" comes from the fact that in early visual representations, an arc sequence 𝑎 → 𝑠 → 𝑏 (with a place 𝑠 between the two transitions) was simply drawn as an arc 𝑎 → 𝑏. That is, tokens were placed on an arc from 𝑎 to 𝑏, rather than on a place between 𝑎 and 𝑏.

Every (marked) strongly connected T-net is also a marked graph. Likewise, every strongly connected S-net satisfies the symmetrical property ∀𝑡 ∈ 𝑇 : | • 𝑡| = 1 = |𝑡 • |.


Such S-nets do not appear to have a nice name of their own, possibly because they are relatively easy to analyse and are of less theoretical significance than T-nets. As we shall see, however, they may easily occur as important substructures in larger Petri nets. T-systems satisfy a basic token conservation property on cycles.

Definition 5.8 Characteristic vector of a set of places Let 𝑁 = (𝑆, 𝑇, 𝐹) be a plain net and let 𝑅 ⊆ 𝑆 be a set of places. The S-vector 𝜒𝑅 : 𝑆 → {0, 1} defined by 𝜒𝑅(𝑠) = 1 if 𝑠 ∈ 𝑅 and 𝜒𝑅(𝑠) = 0 if 𝑠 ∉ 𝑅 is called the characteristic vector of 𝑅. Further, let 𝛾 be a path in 𝑁. The set of places on 𝛾 is denoted by 𝑆(𝛾).

Using this definition and the linear algebra we already know about from the previous section, the quantity

𝑀(𝑅) = 𝑀ᵀ · 𝜒𝑅    (5.1)

simply counts the number of tokens on 𝑅 under a marking 𝑀 : 𝑆 → N of a net 𝑁 = (𝑆, 𝑇, 𝐹).

Definition 5.9 Token count on a set of places With the terminology introduced in the previous definition, we say that a place set 𝑅 is (token-)empty (token-free) or marked under 𝑀, depending on whether 𝑀(𝑅) = 0 or 𝑀(𝑅) > 0.ᵃ

From here on, a place set being "empty" is treacherous terminology. It could mean "... is the empty set", but it could also mean "... is free of tokens". We shall try to be unambiguous whenever the word "empty" is used.

For T-systems, the quantity (5.1) is invariant under firing: Lemma 5.10 Token conservation property of T-systems Let 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0 ) be a T-system and let 𝑀 ∈ [𝑀0 ⟩. For every simple cycle 𝛾 of (𝑆, 𝑇, 𝐹), 𝑀 (𝑆(𝛾)) = 𝑀0 (𝑆(𝛾)). Proof: Consider a simple cycle 𝛾 and the effect of firing a transition 𝑡. If 𝑡 lies on 𝛾, firing 𝑡 moves exactly one token on 𝛾. If 𝑡 does not lie on 𝛾, firing 𝑡 does not affect the tokens on 𝛾. 5.10
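Lemma 5.10 can also be observed experimentally by simulating a T-system. The sketch below fires enabled transitions at random in the net of Figure 5.4, with a hypothetical initial marking (none is prescribed there), and asserts that the token count on the cycle 𝑠1 𝑡1 𝑠2 𝑡2 𝑠1 never changes.

```python
# Sketch: simulating Lemma 5.10 on the T-net of Figure 5.4 (hypothetical marking).
import random

pre = {"t1": {"s1", "s"}, "t2": {"s2"}}
post = {"t1": {"s2"}, "t2": {"s1", "s'"}}
M = {"s1": 1, "s2": 0, "s": 2, "s'": 0}     # hypothetical initial marking M0
cycle_places = {"s1", "s2"}                  # S(gamma) for the cycle s1 t1 s2 t2 s1

def enabled(M, t):
    return all(M[s] >= 1 for s in pre[t])    # plain net: all arc weights are 1

def fire(M, t):
    M2 = dict(M)
    for s in pre[t]:
        M2[s] -= 1
    for s in post[t]:
        M2[s] += 1
    return M2

count0 = sum(M[s] for s in cycle_places)
for _ in range(20):
    choices = [t for t in pre if enabled(M, t)]
    if not choices:
        break                                # here a deadlock occurs once s is exhausted
    M = fire(M, random.choice(choices))
    # token conservation on the cycle (Lemma 5.10)
    assert sum(M[s] for s in cycle_places) == count0
print("token count on the cycle stayed", count0)
```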


5.4 Free-choice nets

Now we define a class of nets that encompasses both S-nets and T-nets, called free-choice nets (or FC-nets, for short). Originally, FC-nets have been introduced as a "smallest common generalisation" of S-nets and T-nets. In FC-nets, all structures shown in Figure 5.3 are allowed, but a combination of two of them, such as shown on the left-hand side of Figure 5.5, is disallowed. The seemingly more complicated structure on the right-hand side of Figure 5.5 is still permitted in FC-nets.

Fig. 5.5: Left-hand side: •𝑎 ∩ •𝑏 ≠ ∅ and •𝑎 ≠ •𝑏 (forbidden structure in free-choice nets). Right-hand side: •𝑎 = •𝑏 (still allowed in free-choice nets).

Definition 5.11 Free-choice nets (FC-nets) A plain net 𝑁 = (𝑆, 𝑇, 𝐹) is called an FC-net if ∀𝑡 1 , 𝑡2 ∈ 𝑇 : • 𝑡1 ∩ • 𝑡 2 ≠ ∅ ⇒ • 𝑡1 = • 𝑡 2

(5.2)

A plain marked net 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0 ) is called an FC-system if (𝑆, 𝑇, 𝐹) is an FC-net. Originally, the class of free-choice nets was defined more restrictively, as follows. Let a plain net (𝑆, 𝑇, 𝐹) be called fc (small letters) if ∀𝑡1 , 𝑡2 ∈ 𝑇 : (𝑡 1 ≠ 𝑡 2 ∧ • 𝑡 1 ∩ • 𝑡 2 ≠ ∅) ⇒ | • 𝑡 1 | = | • 𝑡2 | = 1

(5.3)

Of course, every fc-net is an FC-net. The class of nets defined in Definition 5.11 was originally called extended free-choice. Since most important properties and results either hold for both classes or are easily transferred from one to the other, we ignore this distinction for the time being. In fact, the more restrictive property (5.3) plays no further role, except in the following remark. Remark 5.12 Transforming an FC-net into an fc-net It is possible to simulate an FC-net by an fc-net, using the construction shown in Figure 5.6. The 𝜏 denotes a “silent” (or “internal”) transition. This is a simulation in a sense which will be discussed in Section 8.3 below. Every free-choice net satisfies the following property which is symmetric to its defining property (5.2):



Fig. 5.6: Left hand side: an FC- but non-fc Petri net. Right hand side: a simulating fc-net.

𝑠1• ∩ 𝑠2• ≠ ∅ ⇒ 𝑠1• = 𝑠2•    (5.4)

In fact, (5.4) is equivalent to (5.2) and could have been used as an alternative definition of the FC-property. A non-FC-net is shown in Figure 5.7. Property (5.2) is violated both because •𝑡3 ∩ •𝑡1 ≠ ∅ ∧ •𝑡3 ≠ •𝑡1 and because •𝑡3 ∩ •𝑡2 ≠ ∅ ∧ •𝑡3 ≠ •𝑡2. Under the marking shown in the figure, transition 𝑡3 is dead, despite the fact that on its input places, a token can be moved cyclically from 𝑠1 to 𝑠2 and back. We will see that it is impossible to construct such a situation in a free-choice net.


Fig. 5.7: A non-live system violating the free-choice property.

The term “free choice” can be explained by Lemma 5.13. Suppose that in a freechoice net, some marking 𝑀 activates a transition 𝑡. Then all transitions in ( • 𝑡) • are activated, and one may freely choose between firing any of them. Lemma 5.13 Elementary property of FC-systems Let 𝑡 be a transition of an FC-net and let 𝑀 be a marking. Then 𝑀 activates 𝑡 if and only if 𝑀 activates all transitions in ( • 𝑡) • . Proof: This follows immediately from (5.2). 5.13 Lemma 5.13 holds even if • 𝑡 = ∅. In that case, every marking activates 𝑡, and also, every marking activates all transitions in ( • 𝑡) • = ∅. From here onwards, we embark on: first, developing liveness criteria for the three Petri net classes defined thus far; and then, secondly, obtaining boundedness criteria for live nets in these three classes. As announced, the largest class is first.
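Both the defining condition (5.2) and the "free choice" reading of Lemma 5.13 are easy to test mechanically. The following sketch assumes the same hypothetical representation as before (pre-sets as sets); the first example is the net of Figure 5.7 as described above, the second a hypothetical two-transition cluster.

```python
# Sketch: the FC condition (5.2) and the free-choice property of Lemma 5.13.

def is_free_choice(pre):
    ts = list(pre)
    return all(pre[t1] == pre[t2]
               for t1 in ts for t2 in ts
               if pre[t1] & pre[t2])          # shared input place => identical pre-sets

def enabled(M, pre, t):
    return all(M.get(s, 0) >= 1 for s in pre[t])

def cluster(pre, t):
    """All transitions sharing an input place with t, i.e. (pre-set of t)-postset."""
    return {u for u in pre if pre[u] & pre[t]} | {t}

# Figure 5.7: t1 and t2 each share an input place with t3, but their pre-sets differ.
pre_fig57 = {"t1": {"s1"}, "t2": {"s2"}, "t3": {"s1", "s2"}}
print(is_free_choice(pre_fig57))              # False

# A free-choice cluster (hypothetical): a and b have the same pre-set, so a marking
# enabling one of them enables both, and one may freely choose which one to fire.
pre_fc = {"a": {"p", "q"}, "b": {"p", "q"}}
M = {"p": 1, "q": 1}
assert is_free_choice(pre_fc)
assert all(enabled(M, pre_fc, u) for u in cluster(pre_fc, "a"))
```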


5.5 A liveness criterion for FC-systems This criterion uses two graph-theoretical structures, siphons and traps. Both are sets of places, but they have different functionalities. Token-empty siphons are forever token-empty. Marked traps remain marked forever. Definition 5.14 Siphons and traps Let 𝑁 = (𝑆, 𝑇, 𝐹) be a plain Petri net. A set 𝐷 ⊆ 𝑆 is called siphon if • 𝐷 ⊆ 𝐷 • . A set 𝑄 ⊆ 𝑆 is called trap if 𝑄 • ⊆ • 𝑄. The set ∅ is called a trivial siphon; all other siphons ≠ ∅ are called proper. The set ∅ is called a trivial trap; all other traps ≠ ∅ are called proper.
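Definition 5.14 translates directly into code. The sketch below (hypothetical representation and example net, chosen only for illustration) checks the siphon and trap conditions for a given place set.

```python
# Sketch: siphon and trap checks of Definition 5.14.

def pre_of_set(R, pre, post):
    """Transitions having an output place in R (the pre-set of R)."""
    return {t for t in pre if post[t] & R}

def post_of_set(R, pre, post):
    """Transitions having an input place in R (the post-set of R)."""
    return {t for t in pre if pre[t] & R}

def is_siphon(R, pre, post):
    return pre_of_set(R, pre, post) <= post_of_set(R, pre, post)

def is_trap(R, pre, post):
    return post_of_set(R, pre, post) <= pre_of_set(R, pre, post)

# Hypothetical two-transition example: t moves a token from s1 to s2, u moves it back.
pre = {"t": {"s1"}, "u": {"s2"}}
post = {"t": {"s2"}, "u": {"s1"}}
print(is_siphon({"s1", "s2"}, pre, post))   # True (the set is also a trap)
print(is_trap({"s1"}, pre, post))           # False: s1 feeds t, but only u refills s1
```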

Lemma 5.15 The union of siphons (traps) is a siphon (trap) Let 𝐷 1 and 𝐷 2 be two siphons and let 𝑄 1 and 𝑄 2 be two traps. 𝐷 1 ∪ 𝐷 2 is a siphon, and 𝑄 1 ∪ 𝑄 2 is a trap. Proof: Directly from Definition 5.14. This property is, however, not valid for the intersection (see Figure 5.8).


𝐷1 = {𝑠1 , 𝑠2 } and 𝐷2 = {𝑠2 , 𝑠3 } are siphons as well as traps but 𝐷1 ∩ 𝐷2 = {𝑠2 } is neither a siphon nor a trap

Fig. 5.8: Counterexample to an analogue of Lemma 5.15 for intersection.

Lemma 5.16 Elementary properties of siphons and traps
• Let 𝐷 be a siphon, let 𝑀(𝐷) = 0 and let 𝑀′ ∈ [𝑀⟩. Then 𝑀′(𝐷) = 0.
• Let 𝑄 be a trap, let 𝑀(𝑄) > 0 and let 𝑀′ ∈ [𝑀⟩. Then 𝑀′(𝑄) > 0.

Proof: Assume 𝑀 −𝑡→ 𝑀′ with 𝑀(𝐷)=0 and 𝑀′(𝐷)>0. Then necessarily 𝑡 ∈ •𝐷. By •𝐷 ⊆ 𝐷•, also 𝑡 ∈ 𝐷•, contradicting (𝑀 −𝑡→) ∧ 𝑀(𝐷)=0. Thus if 𝐷 is token-empty, 𝐷 remains token-empty.

Assume 𝑀 −𝑡→ 𝑀′ with 𝑀(𝑄)>0 and 𝑀′(𝑄)=0. Then necessarily 𝑡 ∈ 𝑄•. By 𝑄• ⊆ •𝑄, also 𝑡 ∈ •𝑄, contradicting (𝑀 −𝑡→ 𝑀′) ∧ 𝑀′(𝑄)=0. Thus once 𝑄 is marked, 𝑄 remains marked. 5.16


This lemma can be applied in a special circumstance. Consider a net with initial marking 𝑀0 which has a proper siphon 𝐷, and inside 𝐷, some (proper) trap 𝑄 ⊆ 𝐷 with 𝑀0(𝑄)>0. By Lemma 5.16, such a siphon 𝐷 can never be completely emptied of tokens. It turns out that for FC-nets, liveness is already guaranteed if this condition holds for every proper siphon 𝐷:

Theorem 5.17 The Commoner/Hack Criterion CHC
Let 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0) be a free-choice system. The following two properties are equivalent:
(i) (CHC) For all siphons 𝐷 ⊆ 𝑆 with 𝐷 ≠ ∅, there is a trap 𝑄 ⊆ 𝐷 such that 𝑀0(𝑄) > 0.
(ii) 𝑁 is live.

Proof: The proof is split into (i)⇒(ii) (Section 5.5.1) and (ii)⇒(i) (Section 5.5.2). 5.17

5.5.1 Proof of Theorem 5.17, (i)⇒(ii)

By contraposition. Let 𝑁 be non-live. We will construct a siphon that can be made token-empty, starting from 𝑀0.

𝑁 not live
⇒ (definition of liveness) there are 𝑡 ∈ 𝑇 and 𝑀 ∈ [𝑀0⟩ such that 𝑡 is dead at 𝑀
⇒ (elementary property of FC-systems) all 𝑡′ ∈ (•𝑡)• are dead at 𝑀
⇒ (firing rule and, by the deadness of 𝑡, •𝑡 ≠ ∅) there is a place 𝑠 ∈ •𝑡 which is token-empty at all markings in [𝑀⟩
⇒ (firing rule) every transition in •𝑠 is dead at 𝑀
⇒ (continuation of this argument) we may construct a siphon which is token-empty at 𝑀.

The third implication holds because if there were no such place, then we could successively fill all input places of 𝑡 with tokens and eventually fire 𝑡, in contradiction to 𝑡 being dead at 𝑀.⁴

4 This argument recurs in a more general setting as Theorem 8.2 in Section 8.2.


The siphon constructed by this algorithm does not contain a trap 𝑄 with 𝑀0(𝑄)>0, because such a trap cannot completely be emptied of tokens. That is: CHC fails to hold. (i)⇒(ii)

Figure 5.9 illustrates this part of the proof. Suppose that 𝑡 is the original non-live transition and 𝑠 is its input place which remains token-empty. Starting from 𝑠, we go backward to 𝑠′ and to 𝑠′′. From 𝑠′′, we go backward to 𝑠′′′, and then back to 𝑠, obtaining the token-empty siphon {𝑠, 𝑠′, 𝑠′′, 𝑠′′′}.


Fig. 5.9: Illustration of the proof of Theorem 5.17 ((i)⇒(ii)). The backtracking argument leads to a necessary condition for deadlock-freeness in general nets. Corollary 5.18 Deadlocks and emptiable siphons Let 𝑁 = (𝑆, 𝑇, 𝐹) be a net. If 𝑀 is a deadlock of 𝑁 then there is a siphon 𝐷 ⊆ 𝑆 which is token-empty at 𝑀. Proof: By backtracking, as in the proof of Theorem 5.17 ((i)⇒(ii)). 5.18 As a consequence, the Commoner/Hack Criterion CHC implies deadlock-freeness in general; in fact, liveness already does. The difference between deadlock-freeness and liveness is shown, for instance, in Figure 5.7. The marking shown there is deadlock-free but not live. There is only one proper siphon, {𝑠1 , 𝑠2 }, which is not emptiable.
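Corollary 5.18 also has a simple computational witness: at a deadlock, every transition has some token-free input place, so the set of all token-free places is itself a token-empty siphon. The sketch below checks this witness directly (it is not necessarily the siphon produced by the backtracking construction above; the example net is hypothetical).

```python
# Sketch: at a deadlock M, the token-free places form a token-empty siphon.

def empty_places_form_siphon(M, pre, post):
    D = {s for s, k in M.items() if k == 0}
    # siphon check: every transition producing into D also consumes from D
    producers = {t for t in pre if post[t] & D}
    consumers = {t for t in pre if pre[t] & D}
    return D, producers <= consumers

# Hypothetical deadlocked system: t needs s1 and s2, but s2 is empty and is never refilled.
pre = {"t": {"s1", "s2"}}
post = {"t": {"s3"}}
M_dead = {"s1": 1, "s2": 0, "s3": 0}
D, ok = empty_places_form_siphon(M_dead, pre, post)
print(sorted(D), ok)   # ['s2', 's3'] True: the empty places form a (token-empty) siphon
```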

5.5.2 Proof of Theorem 5.17, (ii)⇒(i)

By contraposition. Suppose that 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0) is an FC-system which does not satisfy the Commoner/Hack Criterion CHC, that is, there is a proper siphon 𝐷 with no marked trap in it, i.e., where the maximal trap 𝑄 in it (which may be empty) is unmarked. We shall show that it is then possible to reach a marking with sufficiently


few tokens in 𝐷 \𝑄, and none in 𝑄, so that all the transitions in 𝐷 • are simultaneously dead (and 𝐷 • ≠ ∅ since 𝐷 is proper and there is no isolated place). This is done by means of allocations. An allocation is essentially a conflict resolution rule, picking exactly one transition out of a set of transitions sharing a pre-place. If transitions sharing an input place are enabled simultaneously, firing according to an allocation means that the allocated transition will be chosen, rather than any other transition. For the proof, it can be shown that there exists an allocation 𝛼 which keeps moving tokens in 𝐷 \𝑄 following a coherent partial order, without touching 𝑄. Firing according to 𝛼 will eventually make 𝐷 \ 𝑄 sufficiently token-free while keeping 𝑄 token-free. Definition 5.19 Allocation Let 𝑋 ⊆ 𝑆 be a set of places. An allocation of 𝑋 is a map 𝛼 : 𝑋 → 𝑇 with ∀𝑠1 , 𝑠2 ∈ 𝑋 : 𝑠1• ∩ 𝑠2• ≠ ∅ ⇒ 𝛼(𝑠1 ) = 𝛼(𝑠2 ) ∈ 𝑠1• 𝛼 is called cycle-free if there is no set ∅ ≠ 𝑋 ′ ⊆ 𝑋 with 𝑋 ′ ⊆ (𝛼(𝑋 ′ )) • . The set 𝑠2• could have been used instead of 𝑠1• on the right-hand side of the implication, since these two sets are equal for FC-nets. An allocation essentially determines a fixed conflict-resolution rule; it points in the direction of the transition to be chosen in case of a conflict. We will need to define our allocation of the set 𝐷 \ 𝑄 in such a way that it ”decreases” in some sense the marking on 𝐷. The following lemma is the key to the proof. Lemma 5.20 Existence of a cycle-free allocation There exists a cycle-free allocation of the set 𝐷 \ 𝑄 which satisfies 𝛼(𝐷 \ 𝑄) ⊆ 𝑇 \ • 𝑄 (i.e.: 𝛼 “does not lead into 𝑄”). Proof: We show that for any set 𝑌 ⊆ 𝑆 with 𝑌 ≠ ∅ and for the largest trap 𝑄 ⊆ 𝑌 , there is a cycle-free allocation 𝛼 : 𝑌 \ 𝑄 → 𝑇 with 𝛼(𝑌 \ 𝑄) ⊆ 𝑇 \ • 𝑄

(5.5)

The lemma follows by setting 𝑌 = 𝐷, for 𝐷 being the siphon in the main proof. We proceed by induction over the size of 𝑌 \ 𝑄. Base: 𝑌 \ 𝑄 = ∅. Then 𝛼 = ∅ already satisfies the requirements (i.e., cycle-freeness and (5.5)). Step: 𝑌 \ 𝑄 ≠ ∅. We split 𝑌 into two sets:


𝑌1′ = { 𝑠 ∈ 𝑌 | ∃𝑡 ∈ 𝑠• : 𝑡 • ∩ 𝑌 = ∅ } and 𝑌1 = 𝑌 \ 𝑌1′ Because 𝑄 is the largest trap in 𝑌 and because 𝑌 ≠ 𝑄, 𝑌 is not a trap. No place in 𝑌1′ can belong to a trap, since at least one of its output transitions leads out of 𝑌 entirely. Hence 𝑌1′ is not the empty set and 𝑌1 is a proper subset of 𝑌 . Moreover, we have 𝑄 ⊆ 𝑌1 because the places in 𝑌1′ do not belong to a trap in 𝑌 . Therefore: 𝑌 \ 𝑄 = (𝑌1 \ 𝑄) ∪ 𝑌1′ We construct the required allocation 𝛼 of 𝑌 \ 𝑄 as the union of an allocation 𝛼1 of 𝑌1 \ 𝑄 and an allocation 𝛼1′ of 𝑌1′ . By 𝑄 ⊆ 𝑌1 ⊂ 𝑌 , we may apply the induction hypothesis to 𝑌1 , obtaining an allocation 𝛼1 : 𝑌1 \ 𝑄 → 𝑇 with the required properties. It remains to extend this allocation to 𝑌1′ . This is possible because 𝑌1′ was defined in such a way that its places have a suitable output transition. Thus, we put 𝛼1′ : 𝑌1′ → 𝑇 and for 𝑠 ∈ 𝑌1′ : 𝛼1′ (𝑠) = some 𝑡 ∈ 𝑠• with 𝑡 • ∩ 𝑌 = ∅ It is not hard to check that the required properties are indeed valid. 5.20 It may be observed that, in a free-choice net, such an acyclic allocation 𝛼 defines a partial order on the places in 𝐷 \ 𝑄 in such a way that, for any place 𝑠 ∈ 𝐷 \ 𝑄, the places in 𝛼(𝑠) • come after 𝑠. While there is a transition 𝑡 ∈ 𝐷 • which may be fired (possibly after firing some transitions outside 𝐷 • , which does not impact the marking on 𝐷 since the latter is a siphon), if 𝑡 is not in the range of 𝛼, then the transition 𝑡 ′ ∈ ( • 𝑡) • ∩ 𝛼(𝐷 \ 𝑄) will be fired instead of 𝑡. This is always possible because of the FC-property. This reduces the token count on • 𝑡 = • 𝑡 ′ while the one on 𝑡 ′• may increase, but while respecting the partial order mentioned above, so that the marking globally decreases following this order. From Dickson’s lemma (Lemma 3.20), it is is not possible to continue indefinitely, meaning that at some point all the transitions in 𝐷 • (and there are some) are dead, as claimed, and the net was not live. This finishes the proof of Theorem 5.17. (ii)⇒(i) In a proof by contradiction, one might assume that the net is live and fails to satisfy CHC. Using the allocation and the arguments as in the preceding proof, one can show that all tokens can be removed completely from 𝐷, contradicting liveness. Remark 5.21 Token-free isolated places Note that this theorem goes wrong if there are unmarked isolated places. However, places with (only) self-loops (see Provision 5.2) are not harmful: if they are unmarked, the net cannot be live.


Remark 5.22 Monotonicity of liveness for FC-systems From Theorem 5.17, it results that liveness is monotonic for FC-systems, since adding tokens may only mark more traps. Condition CHC mentions only the initial marking and the two graph-theoretical structures of trap and siphon. In particular, it refers neither to the reachability set [𝑀0⟩ nor to the reachability graph of 𝑁. When property CHC is tested algorithmically, it suffices to consider only the minimal proper siphons and in each of them, the maximal trap. Still, in the worst case there may be exponentially many minimal proper siphons. Example 5.23 The FC property is necessary for Theorem 5.17 The premise of free-choiceness cannot be omitted in any of the two directions of the liveness theorem. Figure 5.7 presents a non-FC-system satisfying Condition CHC but failing to be live. This system is deadlock-free, though; a property that holds for all systems satisfying CHC in general, even if they are non-FC-systems. Figure 5.10 shows a non-FC-system which is live but fails to satisfy condition CHC. The set of places {𝑠1 , 𝑠2 , 𝑠3 , 𝑠4 } is a siphon which does not contain any marked trap. The FC property is violated at place 𝑠4 and its output transitions.


Fig. 5.10: A non-free-choice example, showing that the free choice property is essential for Theorem 5.17(⇐).

5.5.3 Some remarks on allocations Since allocations play a key role in the proof of Theorem 5.17, and later on as well, we will now give some examples.


Fig. 5.11: An allocation 𝛼1 (shown in blue) which is not cycle-free.

Example 5.24 Illustrating the cycle-freeness of allocations Figure 5.11 shows an allocation 𝛼1 (represented graphically by thick, blue arrows) of the set 𝑋 = {𝑠2 , 𝑠3 } of places: 𝛼1 : 𝑠2 ↦→ 𝑡4 , 𝑠3 ↦→ 𝑡3 The firing sequence 𝑡 2 𝑡 1 𝑡 2 𝑡4 𝑡3 does not agree with 𝛼1 , because 𝛼1 demands that 𝑡 4 rather than 𝑡 1 should fire in case both are enabled. The firing sequence 𝑡 2 𝑡 4 𝑡 3 𝑡 4 𝑡3 𝑡4 , on the other hand, does agree with 𝛼1 . To see that 𝛼1 is not cycle-free, consider the set 𝑋 ′ = {𝑠2 , 𝑠3 } which satisfies 𝑋 ′ ⊆ (𝛼1 (𝑋 ′ )) • = {𝑡 3 , 𝑡4 }• = {𝑠2 , 𝑠3 } and which is contained in 𝑋.
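Cycle-freeness of an allocation (Definition 5.19) can be tested by looking for a directed cycle in the graph that links each place 𝑠 ∈ 𝑋 to the places of 𝛼(𝑠)• lying in 𝑋; a non-empty set 𝑋′ ⊆ (𝛼(𝑋′))• exists exactly when such a cycle exists. The sketch below implements this test on a hypothetical fragment mimicking the cyclic situation of Example 5.24.

```python
# Sketch: testing cycle-freeness of an allocation (Definition 5.19).

def is_cycle_free(alpha, post, X):
    # edge s -> s' whenever s' lies in the post-set of alpha(s) and s' is in X
    edges = {s: (post[alpha[s]] & X) for s in X}
    # repeatedly remove places without an incoming edge; a cycle survives this process
    remaining = set(X)
    changed = True
    while changed:
        changed = False
        for s in list(remaining):
            if not any(s in edges[r] for r in remaining):
                remaining.discard(s)
                changed = True
    return not remaining          # empty <=> no cycle <=> cycle-free

# Hypothetical fragment mimicking Example 5.24: alpha maps s2 to t4 and s3 to t3,
# and the allocated transitions feed s3 and s2 again, so the allocation is cyclic.
post = {"t3": {"s2"}, "t4": {"s3"}}
alpha = {"s2": "t4", "s3": "t3"}
print(is_cycle_free(alpha, post, {"s2", "s3"}))   # False
```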


Fig. 5.12: A cycle-free allocation 𝛼2 .


Example 5.25 Illustrating a trivial maximal trap 𝑄 = ∅ The net in Figure 5.12 arises from the net in Figure 4.3 by exchanging places and transitions. It is also free-choice. The set 𝐷 = {𝑠1 , 𝑠2 , 𝑠4 , 𝑠5 , 𝑠7 } of places is a siphon, and the maximal trap contained in it is the empty set 𝑄 = ∅. Hence 𝐷 = 𝐷 \ 𝑄, and by Lemma 5.20, we should be able to find a cycle-free allocation of 𝐷. Indeed, 𝛼2 , as shown in Figure 5.12 by thick blue arrows, is such an allocation of 𝐷. Formally: 𝛼2 : 𝑠1 ↦→ 𝑡 2 , 𝑠2 ↦→ 𝑡5 , 𝑠4 ↦→ 𝑡 7 , 𝑠5 ↦→ 𝑡6 , 𝑠7 ↦→ 𝑡1 If the mapping 𝑠2 ↦→ 𝑡 5 is changed to 𝑠2 ↦→ 𝑡4 , then we get a cycle with 𝑋 ′ = {𝑠2 , 𝑠5 , 𝑠7 }. For this net, no initial marking can possibly satisfy condition CHC, because the siphon 𝐷 contains no nontrivial trap. Hence the net is not even structurally live. Indeed, it is easy to see that whatever the initial marking, we can create a deadlock by firing the allocated transitions 𝑡 1 , 𝑡 2 , 𝑡5 𝑡 6 and 𝑡7 (but no other transition and in particular, neither 𝑡3 nor 𝑡 4 ) sufficiently many times.


Fig. 5.13: Another cycle-free allocation 𝛼3 .

Example 5.26 Illustrating a non-trivial maximal trap In the FC-net shown in Figure 5.13, the only minimal nontrivial siphon is 𝐷 = {𝑠1, 𝑠2, 𝑠3}, and the maximal trap in this siphon is 𝑄 = {𝑠1, 𝑠2}. This implies that the net is live as soon as one or more tokens are on the set 𝑄 = {𝑠1, 𝑠2}, and non-live for every marking 𝑀 with 𝑀(𝑄) = 0 (which is the case for the marking shown in the figure). This emphasizes the importance of the allocation constructed in the proof of Theorem 5.17 not leading "into 𝑄". Indeed, 𝛼3 : 𝑠3 ↦→ 𝑡4 is a cycle-free allocation of 𝐷 \ 𝑄 = {𝑠3}. It is possible to reach a deadlock by firing 𝑡4 twice. It would be wrong to allocate 𝛼3′ : 𝑠3 ↦→ 𝑡3 because this would not lead to a deadlock, but on the contrary, to a live marking through firing 𝑡3. This example shows the significance of condition


𝛼(𝐷 \ 𝑄) ⊆ 𝑇 \ • 𝑄 in Lemma 5.20. 𝛼3 satisfies this condition, but 𝛼3′ does not. (If we omitted 𝑡 4 and 𝑠4 in this example, 𝑄 = 𝐷 would become the largest trap in 𝐷, and the resulting net would be live.)

Example 5.27 Illustrating the reason why 𝑄 needs to be maximal The example shown in Figure 5.14 is a modification of the net shown in Figure 5.13. The two tokens on 𝑠3 have been “distributed” on a prolonged path from 𝑠3 to 𝑠1 . This system is live, in contrast to the system shown in Figure 5.13. Indeed, the siphon 𝐷 = {𝑠1 , 𝑠2 , 𝑠3 , 𝑠5 } cannot be made token-empty. This is due to the fact that the maximal trap inside 𝐷 is 𝑄 = {𝑠1 , 𝑠2 , 𝑠5 } (not just {𝑠1 , 𝑠2 }), and that this set does have a token on it. Only the set 𝐷 \ 𝑄 can be completely drained of tokens, and in both examples, we have 𝐷 \ 𝑄 = {𝑠3 }.


Fig. 5.14: Why 𝑄 needs to be maximal.

5.5.4 Complexity of checking CHC Here we turn to the question whether some algorithmic advantages were gained by considering the Commoner/Hack liveness criterion, CHC. Straightforwardly testing CHC involves searching all subsets of 𝑆 until all minimal proper siphons have been scanned. For every such siphon 𝐷, the maximal trap inside 𝐷 must be found. The latter algorithm is polynomial in the size of 𝐷: start with 𝐷 and keep erasing places that cannot possibly belong to the maximal trap inside 𝐷. However, there may be exponentially many minimal siphons. In general, we have the following:


Proposition 5.28 NP-completeness of non-liveness for FC-systems
The following problem is NP-complete:
Given: An FC-system Σ.
To decide: Is Σ not live?

This sounds slightly better than the complexity of the general liveness problem, which is EXPSPACE-hard (see Section 3.7).

Proof: 1) The problem is in NP: guess a set 𝐷 of places; check in polynomial time whether 𝐷 is a siphon; compute (in polynomial time) the maximal trap 𝑄 ⊆ 𝐷; answer "not live" if and only if 𝑀0(𝑄) = 0.

2) The problem is NP-hard: given a propositional formula 𝜑 in conjunctive normal form, we construct an FC-system Σ of polynomial size such that 𝜑 is satisfiable if and only if Σ is not live. This reduces the problem SAT (satisfiability of propositional formulas in CNF), which is already known to be NP-hard, to the non-liveness problem of FC-systems. Hence the latter is also NP-hard.

To see how this construction works, let 𝜑 be a formula in CNF, that is, a conjunction of disjunctions of literals. We construct the following marked FC-net (compare Figure 5.15):
• For every variable 𝑥𝑖, a place 𝐴𝑖 is introduced, along with two transitions 𝑥𝑖 and 𝑥̄𝑖, as well as arrows from 𝐴𝑖 to 𝑥𝑖 and from 𝐴𝑖 to 𝑥̄𝑖.
• For every clause 𝐶𝑗, a transition 𝐶𝑗 is introduced; for every literal 𝑙𝑖 in 𝐶𝑗, a place (𝑙𝑖, 𝐶𝑗) is introduced; and there will be an arrow from 𝑙𝑖 to (𝑙𝑖, 𝐶𝑗) and another arrow from (𝑙𝑖, 𝐶𝑗) to 𝐶𝑗.⁵
• There will be a place False and arrows from 𝐶𝑗 to False. Moreover, there will be a transition Back, an arrow from False to Back, and arrows from Back to every 𝐴𝑖.
• Finally, by definition, 𝑀0 places one token on each of the places 𝐴𝑖 and no tokens elsewhere.

The so constructed system is polynomial in the size of 𝜑, and 𝜑 is satisfiable if and only if it is non-live. 5.28

Example 5.29 A sample formula in CNF and its FC-net
Figure 5.15 shows the result of this construction applied to
𝜑 = (𝑥1 ∨ 𝑥3) ∧ (𝑥1 ∨ 𝑥2 ∨ 𝑥3) ∧ (𝑥2 ∨ 𝑥3)

(5.6)

For instance, the firing sequence 𝑥 1 𝑥2 𝑥3 leads to a deadlock while the corresponding assignment (𝑥1 = 1, 𝑥2 = 0, 𝑥3 = 0) satisfies formula (5.6).

5 As usual, it is assumed that complementing a literal twice gives back the literal itself.
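Part 1) of the proof of Proposition 5.28 rests on two polynomial subroutines: checking that a guessed set 𝐷 is a siphon, and computing the maximal trap inside 𝐷 by repeatedly erasing places with an output transition that puts no token back into the current set. The sketch below implements this certificate check on a small hypothetical FC-system (the representation and the example are illustrative choices, not the net of Figure 5.15).

```python
# Sketch: the polynomial certificate check from part 1) of the proof.

def is_siphon(D, pre, post):
    # every transition producing into D must also consume from D
    return all(pre[t] & D for t in pre if post[t] & D)

def maximal_trap(D, pre, post):
    Q = set(D)
    changed = True
    while changed:
        changed = False
        for s in list(Q):
            # s can stay only if every output transition of s returns a token to Q
            if any(s in pre[t] and not (post[t] & Q) for t in pre):
                Q.discard(s)
                changed = True
    return Q

def chc_witness(D, M0, pre, post):
    """True iff D is a siphon whose maximal trap carries no token under M0."""
    Q = maximal_trap(D, pre, post)
    return is_siphon(D, pre, post) and sum(M0.get(s, 0) for s in Q) == 0

# Hypothetical FC-system: a, b shuttle a token between s1 and s2; c refills the trap
# {s1, s2} from s3, while d drains s3 to the outside place s4.
pre = {"a": {"s1"}, "b": {"s2"}, "c": {"s3"}, "d": {"s3"}}
post = {"a": {"s2"}, "b": {"s1"}, "c": {"s1"}, "d": {"s4"}}
M0 = {"s1": 0, "s2": 0, "s3": 1, "s4": 0}
D = {"s1", "s2", "s3"}
print(is_siphon(D, pre, post), sorted(maximal_trap(D, pre, post)))  # True ['s1', 's2']
print(chc_witness(D, M0, pre, post))   # True: D certifies non-liveness (fire d to deadlock)
```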


Fig. 5.15: An FC-net for formula (5.6), i.e. for (𝑥1 ∨ 𝑥3 ) ∧ (𝑥1 ∨ 𝑥2 ∨ 𝑥3 ) ∧ (𝑥 2 ∨ 𝑥3 ).

5.6 Liveness criteria for S-systems and T-systems The liveness criterion CHC holds for S-systems and for T-systems, since they are subclasses of FC-systems. What happens to the CHC condition in these two special cases? If Σ = (𝑆, 𝑇, 𝐹, 𝑀0 ) is a connected S-system, then we may examine two cases. Case 1: (𝑆, 𝑇, 𝐹) is strongly connected. Then 𝑆 is the only nontrivial siphon and 𝑆 is also the maximal trap contained in it. Hence Σ is live exactly when 𝑀0 (𝑆) > 0 holds true, i.e., if there is at least one token in the net. Case 2: (𝑆, 𝑇, 𝐹) is weakly, but not strongly, connected. Then there exists at least one nontrivial siphon containing no nontrivial trap whatsoever. To see this, take the place set of any non-final minimal strongly connected component, where “non-final” means that there is some directed path to another strongly connected component. Hence CHC cannot be satisfied in this case. Combining the two cases and considering also completely disconnected S-systems, we get:


Corollary 5.30 Liveness of S-systems
A plain S-system Σ = (𝑆, 𝑇, 𝐹, 𝑀0) is live if and only if it is covered by cycles and every strongly connected component carries at least one token under 𝑀0.

Let Σ = (𝑆, 𝑇, 𝐹, 𝑀0) be a T-system. Observe first that condition CHC is true whenever it is true for all minimal proper siphons. But the minimal proper siphons of (𝑆, 𝑇, 𝐹) are either singleton sets or the places on its simple cycles. Such cycles also coincide with the maximal traps contained in them. Adding, for later use, a part about realisability, the following result can be obtained:

Theorem 5.31 Liveness and realisability in T-systems
Let 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0) be a plain T-system. The following three conditions are equivalent:
a) 𝑁 is live;
b) all places 𝑠 ∈ 𝑆 satisfy •𝑠 ≠ ∅ and all (elementary) cycles carry at least one token under 𝑀0;
c) all places 𝑠 ∈ 𝑆 satisfy •𝑠 ≠ ∅ and the Parikh vector 1 = (1, . . . , 1) is realisable, that is, there is some firing sequence 𝜏 such that 𝑀0 −𝜏→ and every transition occurs exactly once in 𝜏.

Proof: a)⇔b): In a T-system, minimal proper siphons are either singletons {𝑠} for a place 𝑠 ∈ 𝑆 with •𝑠 = ∅, or (the set of places on) elementary cycles. The former cannot contain any trap because of the absence of isolated places, and the maximal trap in an elementary cycle is the cycle itself. Thus condition b) is what CHC reduces to for T-systems, and the equivalence between a) and b) turns out to be the counterpart of Theorem 5.17 for T-systems.

c)⇒b): If the Parikh vector 1 is realisable, there can be no token-free cycles.

a)⇒c): If a place 𝑠 ∈ 𝑆 satisfies •𝑠 = ∅, consider 𝑡 ∈ 𝑠•. This transition exists due to the absence of isolated places. Then 𝑡 can fire at most 𝑀0(𝑠) times, i.e., there is some reachable marking at which 𝑡 is dead. Hence if a) is true, all places 𝑠 satisfy •𝑠 ≠ ∅. The realisability of 1 can be shown by induction on the number of transitions. If |𝑇| = 1, say 𝑇 = {𝑡}, liveness implies that 𝑡 can be fired once from 𝑀0.

If |𝑇| > 1, choose any 𝑡 ∈ 𝑇 such that 𝑀0 −𝑡→. 𝑁 can be transformed into another live T-system 𝑁′ by erasing 𝑡 and merging input places and output places of 𝑡 in an appropriate way. By induction hypothesis, a suitable firing sequence 𝜏′ exists in 𝑁′. Then 𝜏 = 𝑡𝜏′ is a suitable firing sequence in 𝑁. "Appropriate merging" is done in the following way. Let 𝑟1, . . . , 𝑟𝑛 be the places in •𝑡\𝑡• and let 𝑠1, . . . , 𝑠𝑚 be the places in 𝑡•\•𝑡. The places in •𝑡 ∩ 𝑡•, i.e., the side places of 𝑡, will be treated as irrelevant (since they are – and remain – marked, by liveness). Merging means that 𝑡 along with all of its input and output places are replaced by a


set of places containing, for every pair (𝑖, 𝑗) (1 ≤ 𝑖 ≤ 𝑛 and 1 ≤ 𝑗 ≤ 𝑚), a new place (𝑟𝑖, 𝑠𝑗) with marking 𝑀0′(𝑟𝑖, 𝑠𝑗) = 𝑀0(𝑟𝑖) + 𝑀0(𝑠𝑗). Input transitions of 𝑟𝑖 become, by definition, input transitions of all places (𝑟𝑖, 𝑠𝑗). Output transitions of 𝑠𝑗 become output transitions of all (𝑟𝑖, 𝑠𝑗). Since 𝑀0 enables 𝑡, none of the new places is token-empty. Thus, no empty cycle is introduced by this construction, which implies the liveness of 𝑁′. 5.31

The cross product construction in the last part of the proof is known as place multiplication. It will be reconsidered, in a different context, in Chapter 19. For S-systems and T-systems, there exist polynomial algorithms checking condition CHC. Corollary 5.30 suggests constructing the strongly connected components of an S-net and checking whether all of them are marked. To apply Theorem 5.31, it may either be checked whether the Parikh vector 1 = (1, . . . , 1) is realisable, or whether all simple cycles are marked and all places 𝑠 satisfy •𝑠 ≠ ∅.
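For T-systems, condition b) of Theorem 5.31 yields a particularly simple polynomial test: every place must have an input transition, and no cycle may consist of unmarked places only, so it suffices to look for a directed cycle among the unmarked places. The sketch below implements this test (same hypothetical representation as in the earlier sketches).

```python
# Sketch: polynomial liveness test for T-systems (Theorem 5.31 b)).

def t_system_is_live(pre, post, places, M0):
    pre_of = {s: set() for s in places}
    for t in pre:
        for s in post[t]:
            pre_of[s].add(t)
    if any(not pre_of[s] for s in places):      # some place has an empty pre-set
        return False
    unmarked = {s for s in places if M0.get(s, 0) == 0}
    # edge s -> s' if some transition consumes s and produces s', with both unmarked
    succ = {s: set() for s in unmarked}
    for t in pre:
        for s in pre[t] & unmarked:
            succ[s] |= post[t] & unmarked
    # acyclicity check: repeatedly remove places without outgoing edges;
    # whatever remains lies on a token-free cycle
    remaining = set(unmarked)
    changed = True
    while changed:
        changed = False
        for s in list(remaining):
            if not (succ[s] & remaining):
                remaining.discard(s)
                changed = True
    return not remaining

# Two-place cycle s1 -t-> s2 -u-> s1 (a marked graph): live iff the cycle is marked.
pre = {"t": {"s1"}, "u": {"s2"}}
post = {"t": {"s2"}, "u": {"s1"}}
print(t_system_is_live(pre, post, {"s1", "s2"}, {"s1": 1, "s2": 0}))  # True
print(t_system_is_live(pre, post, {"s1", "s2"}, {"s1": 0, "s2": 0}))  # False
```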

5.7 Boundedness criteria for live FC- (S-/T-) systems This section describes a structural criterion for the boundedness of a live FC-net. Two particular kinds of subnets play a role. Definition 5.32 S-components and T-components Let 𝑁 = (𝑆, 𝑇, 𝐹) be a plain net and let 𝑁1 be the subnet 𝑁 (𝑆1 , 𝑇1 ) for some 𝑆1 ⊆ 𝑆 and 𝑇1 ⊆ 𝑇. • 𝑁1 is called an S-component of 𝑁 if 𝑇1 = • 𝑆1 ∪ 𝑆1• and for all 𝑡 ∈ 𝑇1 : | • 𝑡 ∩ 𝑆1 | ≤ 1 ≥ |𝑡 • ∩ 𝑆1 |. • 𝑁1 is called a T-component of 𝑁 if 𝑆1 = •𝑇1 ∪ 𝑇1• and for all 𝑠 ∈ 𝑆1 : | • 𝑠 ∩ 𝑇1 | ≤ 1 ≥ |𝑠• ∩ 𝑇1 |.a 𝑁1 is called strongly connected (inside 𝑁) if 𝑁1 is strongly connected (as a separate net). a

In both cases, presets and postsets are taken in 𝑁 .
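The two conditions of Definition 5.32 are again purely structural. The following sketch checks them for a candidate subnet (𝑆1, 𝑇1); as footnote a requires, the pre- and post-sets of the whole net are used. The representation and the example net are hypothetical, chosen only for illustration.

```python
# Sketch: checking the S-component and T-component conditions of Definition 5.32.

def is_s_component(S1, T1, pre, post):
    env = {t for t in pre if (pre[t] | post[t]) & S1}            # transitions touching S1
    return T1 == env and all(len(pre[t] & S1) <= 1 and
                             len(post[t] & S1) <= 1 for t in T1)

def is_t_component(S1, T1, pre, post, places):
    pre_of = {s: set() for s in places}
    post_of = {s: set() for s in places}
    for t in pre:
        for s in post[t]:
            pre_of[s].add(t)
        for s in pre[t]:
            post_of[s].add(t)
    env = {s for s in places if (pre_of[s] | post_of[s]) & T1}   # places touching T1
    return S1 == env and all(len(pre_of[s] & T1) <= 1 and
                             len(post_of[s] & T1) <= 1 for s in S1)

# Hypothetical net: a cycle s1 -a-> s2 -b-> s1, plus a transition c emptying s2 into s3.
pre = {"a": {"s1"}, "b": {"s2"}, "c": {"s2"}}
post = {"a": {"s2"}, "b": {"s1"}, "c": {"s3"}}
places = {"s1", "s2", "s3"}
print(is_s_component({"s1", "s2"}, {"a", "b", "c"}, pre, post))                  # True
print(is_t_component({"s1", "s2"}, {"a", "b"}, pre, post, places))               # True
print(is_t_component({"s1", "s2", "s3"}, {"a", "b", "c"}, pre, post, places))    # False: s2 has two outputs in T1
```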

Figure 5.16 shows a net 𝑁 and some of its S- and T-components. There are structural connections between S- and T-components, S- and T-invariants, and siphons and traps. Lemma 5.33 S/T-components, S/T-invariants, siphons, and traps Let 𝑁1 = (𝑆1 , 𝑇1 , 𝐹1 ) be a strongly connected subnet of 𝑁 = (𝑆, 𝑇, 𝐹) with 𝑆1 ≠ ∅ and 𝑇1 ≠ ∅. (a) If 𝑁1 is an S-component, then the S-vector 𝑥 = 𝜒𝑆1 is an S-invariant. If 𝑁1 is a T-component, then the T-vector 𝑦 = 𝜒𝑇1 is a T-invariant. (b) If 𝑁1 is an S-component, then the place set 𝑆1 is both a minimal proper siphon and a minimal proper trap.


Fig. 5.16: Illustrating S- and T-components. 𝑁1 is a non-strongly-connected T-component, but not an S-component. 𝑁1′ is a strongly connected S-component, but not a T-component (since the property 𝑆1 = •𝑇1 ∪ 𝑇1• is violated in 𝑁). 𝑁1′′ is a strongly connected S-component. There are no strongly connected T-components.

Proof: (a): Because 𝑁1 is a strongly connected S-component, every transition 𝑡 of 𝑁 satisfies either ( • 𝑡 ∪ 𝑡 • ) ∩ 𝑆1 = ∅ or | • 𝑡 ∩ 𝑆1 | = 1 ∧ |𝑡 • ∩ 𝑆1 | = 1. Therefore, the incidence matrix 𝐶 of 𝑁 satisfies 𝑥 T ·𝐶 = 0. The argument for T-components is symmetrical. (b): Let 𝑠 be any place of 𝑆1 and let 𝑡 ∈ • 𝑠. Because 𝑁1 is a strongly connected S-component, there is some place 𝑠′ ∈ • 𝑡 with 𝑠′ ∈ 𝑆1 . Therefore, 𝑆1 is a siphon, and it is proper because of 𝑆1 ≠ ∅. It is also a minimal proper siphon because taking away any place 𝑠′ ∈ 𝑆1 destroys the siphon property for some place 𝑠 ∈ 𝑆1 ∩ (𝑠′• ) • , since no place in • ( • 𝑠)) (other than 𝑠′ ) is in 𝑆1 , by the S-component property of 𝑁1 . The argument for traps is symmetrical. 5.33 Strong connectedness is essential in this lemma. In the following, unless specified otherwise, we consider only strongly connected S- and T-components. Theorem 5.34 A boundedness criterion for live FC-systems Let 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0 ) be a live FC-system. (1) A place 𝑠 is 𝑚-bounded (𝑚 ∈ N, 𝑚 ≥ 1) if and only if there exists a strongly connected S-component (𝑆1 , 𝑇1 , 𝐹1 ) with 𝑠 ∈ 𝑆1 and 𝑀0 (𝑆1 ) ≤ 𝑚. (2) For a place 𝑠, there exists a marking 𝑀 ∈ [𝑀0 ⟩ satisfying 𝑀 (𝑠) ≥ 𝑚 (𝑚 ≥ 1) if and only if 𝑀0 (𝑆1 ) ≥ 𝑚 is true for all strongly connected S-components (𝑆1 , 𝑇1 , 𝐹1 ) with 𝑠 ∈ 𝑆1 .

Remark 5.35 𝑀 (𝑠) ≥ 𝑚 cannot be strengthened to 𝑀 (𝑠) = 𝑚 in (2) With 𝑀 (𝑠) = 𝑚 instead of 𝑀 (𝑠) ≥ 𝑚, Part (2)(⇐) of Theorem 5.34 becomes false. An example is shown in Figure 5.17. With 𝑚 = 1, the only strongly connected S-component containing 𝑠3 (the simple cycle 𝑠1 𝑡1 𝑠3 𝑡 2 𝑠1 ) contains 3 ≥ 𝑚 tokens, but no reachable marking 𝑀 satisfies 𝑀 (𝑠3 ) = 𝑚 = 1.






Fig. 5.17: A 3-bounded T- (and hence, FC-) system.

Proof: (1)(⇒): Assume that 𝑠 is 𝑚1-bounded at (𝑁, 𝑀0) (with a tight bound 𝑚 = 𝑚1).⁶ The proof is divided into two steps: (a) there is an S-component covering 𝑠, and (b) this S-component is strongly connected.

Step (a): By Theorem 5.17, the (unique) maximal trap in any minimal (proper) siphon is marked under 𝑀0 as well as under every successor marking of 𝑀0. Let 𝑀0 −𝜎1→ 𝑀1 with 𝑀1(𝑠) = 𝑚1 and let 𝐿1 be as 𝑀1, except 𝐿1(𝑠) = 0.

𝐿1 is not live, because if it were, then we could have 𝐿1 −𝜎1′→ 𝐿1′ with 𝐿1′(𝑠)=1, and thus (by monotonicity) also 𝑀0 −𝜎1→ 𝑀1 −𝜎1′→ 𝑀1′ with 𝑀1′(𝑠) = 𝑚1 + 1, contradicting the bound. Hence at 𝐿1, there is some minimal proper siphon 𝐷1 with a token-free maximal trap 𝑄1 inside. Since 𝐿1 differs from 𝑀1 only at 𝑠 and 𝑀1 is live, we have 𝑠 ∈ 𝑄1.

Now let us examine what 𝑄 = 𝑄1 looks like. Is it already an S-component? Take any transition 𝑡. Of 𝑡's input places, at most one can be in 𝐷, since 𝐷 is minimal. Hence also, at most one of them can be in 𝑄, since 𝑄 is a subset of 𝐷. Since 𝑄 is a trap, if 𝑡 has some input place in 𝑄, then 𝑡 also has at least one output place in 𝑄. Hence |•𝑡 ∩ 𝑄| ≤ |𝑡• ∩ 𝑄|, and the firing of 𝑡 can never decrease the number of tokens on 𝑄. If |•𝑡 ∩ 𝑄| < |𝑡• ∩ 𝑄|, then 𝑡 even properly increases the number of tokens on 𝑄.

If there is no transition 𝑡 with |•𝑡 ∩ 𝑄| < |𝑡• ∩ 𝑄|, all transitions 𝑡 satisfy either (|•𝑡 ∩ 𝑄| = 0) ∧ (0 = |𝑡• ∩ 𝑄|) or (|•𝑡 ∩ 𝑄| = 1) ∧ (1 = |𝑡• ∩ 𝑄|). We have the S-component properties for 𝑄 = 𝑄1 and are finished. However, if there is such a transition 𝑡, we can continue as follows. At 𝑀1, there are precisely 𝑚 tokens on 𝑄1, since so many are on 𝑠, and 𝑄1 is token-empty if they are taken away at 𝐿1. Since (𝑁, 𝑀1) is live, we can eventually fire 𝑡 and thus properly increase the number of tokens on 𝑄1: 𝑀1 −𝜏1→ 𝑀1′ with 𝑀1′(𝑄1) > 𝑚1. Place 𝑠 is still bounded under (𝑁, 𝑀), but it could happen that 𝑚1 is no longer tight. Let the new tight bound be 𝑚2 ≤ 𝑚1. Let 𝑀1 −𝜏1→ 𝑀1′ −𝜎2→ 𝑀2 with 𝑀2(𝑠) = 𝑚2 and let 𝐿2 be as 𝑀2, except 𝐿2(𝑠) = 0. Again 𝐿2 is not live, because otherwise, the bound 𝑚1 could be transcended.

6 A tight bound is a minimal one, i.e., one which can actually be achieved by some reachable marking.

113

Hence at 𝐿 2 there is some minimal proper siphon 𝐷 2 with a token-free maximal trap 𝑄 2 inside. But 𝐷 2 and 𝑄 2 cannot be the same as 𝐷 1 and 𝑄 1 , since 𝑄 2 now has more than 𝑚 1 ≥ 𝑚 2 tokens and cannot, therefore, be token-free if 𝑚 2 of them are taken away. Hence 𝐷 2 is different from 𝐷 1 , and we can repeat the entire argument all over until we reach some minimal proper siphon 𝐷 𝑗 that is considered twice, i.e. 𝐷 𝑗 = 𝐷 𝑖 for some 𝑖 < 𝑗. In that case, 𝑄 𝑗 having more than 𝑚 𝑗 tokens leads to a contradiction, and the argument is finished with 𝑄 𝑗 being an S-component containing 𝑠 and having 𝑚 𝑗 ≤ 𝑚 2 ≤ 𝑚 tokens. This is eventually the case, since the number of minimal proper siphons is finite. Step (b): The S-component 𝑁1 = (𝑆1 , 𝑇1 , 𝐹1 ) produced as a result of (a) is strongly connected. The proof is by contradiction. Suppose 𝑁1 is not strongly connected. Then there is a strongly connected nonempty proper subset 𝑆1′ ⊆ 𝑆1 which is “extremal” in the sense of 𝐹, more precisely, satisfies the following: • there are two places 𝑠1 ∈ 𝑆1 \𝑆1′ and 𝑠2 ∈ 𝑆1′ with 𝑠2 ∈ (𝑠1• ) • ; • no two places 𝑠1 ∈ 𝑆1′ and 𝑠2 ∈ 𝑆1 \𝑆1′ satisfy 𝑠2 ∈ (𝑠1• ) • . Then 𝑆1 \𝑆1′ is also a siphon, contradicting the minimality of 𝑆1 . (1)(⇐): Suppose there exists a strongly connected S-component (𝑆1 , 𝑇1 , 𝐹1 ) with 𝑠∈𝑆1 and 𝑀0 (𝑆1 ) ≤ 𝑚. By Lemma 5.33(a), 𝑆1 is a {0, 1}-S-invariant in 𝑁. Hence the number of tokens on it is constant, implying that at most 𝑚 tokens can be accumulated on 𝑠. (2)(⇒): By contraposition. If 𝑚 tokens can be accumulated on 𝑠, then, by (1)(⇐), no 𝑆component containing 𝑠 can contain less than 𝑚 tokens. (2)(⇐): Assume 𝑀0 (𝑆1 ) ≥ 𝑚 for all S-components (𝑆1 , 𝑇1 , 𝐹1 ) with 𝑠 ∈ 𝑆1 . We prove ∃𝑀 ∈ [𝑀0 ⟩ : 𝑀 (𝑠) ≥ 𝑚 by contradiction. Suppose ∀𝑀 ∈ [𝑀0 ⟩ : 𝑀 (𝑠) < 𝑚. By (1)(⇒), there is an S-component (𝑆1 , 𝑇1 , 𝐹1 ) with 𝑠 ∈ 𝑆1 and 𝑀0 (𝑆1 ) ≤ 𝑚 − 1, contradicting the assumption. 5.34 Corollary 5.36 Exact bounds for live FC-systems Let 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0 ) be a live FC-system, let 𝑠 ∈ 𝑆, and let 𝑆𝐶𝑆𝐶 (𝑠) be the set of strongly connected S-components of 𝑁 containing 𝑠. Then max{ 𝑀 (𝑠) | 𝑀 ∈ [𝑀0⟩ } = min{ 𝑀0 (𝑆 ′ ) | 𝑆 ′ ∈ 𝑆𝐶𝑆𝐶 (𝑠) }

Corollary 5.37 Covering by S-components A well-formed FC-net is covered by a set of strongly connected Scomponents. A live FC-system is 𝑚-bounded if and only if it is covered by a set of strongly connected S-components with at most 𝑚 tokens.

114

5 Graph-theoretical Structure of Petri Nets

Example 5.38 Checking Theorem 5.34 on Figure 5.18 The marking of the system shown – twice – in Figure 5.18 is live, and the system is also safe. According to the theorem, it should therefore be covered by strongly connected S-components carrying one token each. There exist two such S-components. One of them is shown in blue on the left-hand side of the figure, the other one is symmetrical and covers the places 𝑠1 , 𝑠3 , 𝑠5 , 𝑠7 . The system is also covered by T-components, one of which is shown in green on the right-hand side of the figure (cf. Example 5.43 below).

𝑠1

𝑠1

𝑡1

𝑡2

𝑡1

𝑡2

𝑠2

𝑠3

𝑠4

𝑠5

𝑠2

𝑠3

𝑠4

𝑠5

𝑡3

𝑡4

𝑡5

𝑡6

𝑡3

𝑡4

𝑡5

𝑡6

𝑠6

𝑠7

𝑡7

𝑠6

𝑠7

𝑡7

Fig. 5.18: An initially marked FC-net, an S-component drawn in blue, and a Tcomponent drawn in green. Theorem 5.34 can be refined to S-systems and to T-systems, as follows. Observe first that by Corollary 5.30, a live S-system is a collection of mutually disjoint strongly connected S-systems. Thus, every place is in a unique strongly connected S-component. Proposition 5.39 A boundedness criterion for live S-systems Let 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0 ) be a live S-system and let 𝑠 be a place in 𝑆. Then 𝑠 is 𝑚-bounded with 𝑚 = 𝑀0 (𝑆1 ), where 𝑁1 = (𝑆1 , 𝑇1 , 𝐹1 ) is the (unique, strongly connected) S-component with 𝑠 ∈ 𝑆1 . Moreover, there exists a marking 𝑀 ∈ [𝑀0⟩ satisfying 𝑀 (𝑠) = 𝑚. Proof: Since 𝑁1 is a strongly connected S-sytem, the number of tokens on 𝑆1 stays constant over transition firings. Thus, at most 𝑚 = 𝑀0 (𝑆1 ) tokens can be assembled on 𝑠; hence 𝑠 is 𝑚-bounded. We need to show that there is a marking 𝑀 ∈ [𝑀0 ⟩ with 𝑀 (𝑠) = 𝑚. We have 𝑀0 (𝑠) ≤ 𝑀0 (𝑆1 ) = 𝑚. If 𝑀0 (𝑠) = 𝑚, there is nothing to prove. If 𝑀0 (𝑠) < 𝑚, we may use strong connectedness which shows that tokens still missing on 𝑠 can be moved there. 5.39

5.7 Boundedness criteria for live FC- (S-/T-) systems

115

The strongly connected S-components of a T-system are its simple cycles. Hence a live T-system is 𝑚-bounded if and only if there exists a covering by simple cycles which carry 𝑚 or less tokens. Proposition 5.40 A boundedness criterion for live T-systems Let 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0 ) be a live T-system and let 𝑠 ∈ 𝑆. (1) A place 𝑠 is 𝑚-bounded (𝑚 ∈ N, 𝑚 ≥ 1) if and only if there exists an elementary cycle 𝛾 containing 𝑠 ∈ 𝛾 and satisfying 𝑀0 (𝛾) ≤ 𝑚. (2) There exists a marking 𝑀 ∈ [𝑀0⟩ satisfying 𝑀 (𝑠) = 𝑚 (𝑚 ≥ 1) if and only if 𝑀0 (𝑆1 ) ≥ 𝑚 is true for every elementary cycle 𝛾 with 𝑠 ∈ 𝛾. Proof: (1)(⇒): By backtracking from 𝑠, we might hit a place without pre-transition, or a token-empty cycle (but in eiher case, the net is not live), or we might not be able to come back to 𝑠 (but then, 𝑠 is not bounded). Similarly, if on every backward path from 𝑠 back to 𝑠, we encounter more than 𝑚 tokens, then the transitions on such paths can eventually be fired, and 𝑠 is not 𝑚-bounded. Hence, there must be at least one such path from 𝑠 back to 𝑠 with 𝑚 or less tokens. (2)(⇐): The same argument shows that if a cycle containing 𝑠 and carrying less than 𝑠 tokens does not exist, then we can fire the backward paths and assemble 𝑚 (and possibly more) tokens on 𝑠. 5.40 Both parts of Proposition 5.40 also follow more directly from Theorem 5.34. Example 5.41 Figure 5.19 In Figure 5.19, places 𝑠1 and 𝑠6 are on an S-component with 2 tokens, and they are, moreover, not on any other (strongly connected) S-component. Hence by Proposition 5.40(⇐), there must be some firing sequence putting two tokens on 𝑠1 , and another firing sequence putting two tokens on 𝑠6 . Indeed, 𝑡1 𝑡4 𝑡3 𝑡 1 results in two tokens on 𝑠1 , while 𝑡1 𝑡4 𝑡2 𝑡4 results in two tokens on 𝑠6 . These sequences can be found by backtracking (possibly repeatedly) from 𝑠1 and 𝑠6 , respectively.

𝑡1

𝑡2 𝑠1 𝑠3

𝑠2

𝑠5

𝑠4 𝑠6

𝑡3

𝑡4

Fig. 5.19: A 2-bounded T-system. Well-formed FC-nets also satisfy a T-component covering property.

116

5 Graph-theoretical Structure of Petri Nets

Theorem 5.42 Covering by T-components A live and bounded FC-system 𝑁 is covered by strongly connected Tcomponents. Moreover, for every strongly connected T-component 𝑁1 in the cover, there exists a reachable marking 𝑀 such that 𝑀, restricted to 𝑁1 , is a live and bounded marking of 𝑁1 (as a separate net). Proof: (Sketch.) Analogously to Lemma 5.33, it can be shown that the transitions of a T-component form a minimal T-invariant. There is also a converse: if 𝑁 is well-formed, then the support7 of a minimal T-invariant spans a T-component. The basic proof ideas are as follows. First, we know from Corollary 4.19 that 𝑁 is strongly connected. (If not, its strongly connected components can be considered separately.) By Proposition 4.14, 𝑁 is covered by a positive T-invariant. Thus, every transition lies on a minimal T-invariant. For a minimal T-invariant 𝑦 with support 𝑌 , it is possible to define an allocation 𝛼 : 𝑋 ↦→ 𝑌 where 𝑋 is the set of incoming places of 𝑌 and every 𝛼(𝑠), for 𝑠 ∈ 𝑋, lies on 𝑌 , such that 𝑋 ⊆ (𝛼(𝑋)) • (that is, 𝛼 is cyclic). It is further possible to find an infinite firing sequence which agrees with 𝛼 (i.e., contains no non-allocated transition of 𝑋 • ) and contains every transition of 𝑌 infinitely often (for this, one needs liveness and the FC property). If this sequence contains transitions not in 𝑌 , then it is possible to use an exchange argument (similar to Exercise 1.6) in order to separate them from the transitions in 𝑌 in the following way. Transitions ∉ 𝑌 which occur finitely often in the sequence can be moved “to the left”, i.e., they can be executed before any 𝑌 -transitions. Transitions ∉ 𝑌 which occur infinitely often in the sequence can be moved “to the right”, i.e., 𝑌 transitions can always be fired before them (for this, one needs boundedness). Thus, an infinite sequence can be constructed in which a finite number of non-𝑌 -transitions occur first, after which an infinite sequence of 𝑌 -transitions occurs. From this, one can prove that 𝑌 spans a T-component. 5.42 Example 5.43 Figure 5.18 This net is covered by two strongly connected T-components, one of which is shown in green on the right-hand side of the figure. The cyclic allocation mentioned in the proof is 𝛼 : 𝑋 = {𝑠1 , 𝑠2 , 𝑠3 , 𝑠6 , 𝑠7 } ↦→ 𝑌 = {𝑡1 , 𝑡3 , 𝑡4 , 𝑡7 } with 𝛼(𝑠1 ) = 𝑡 1 etc. and a resulting infinite firing sequence is 𝑀0 only non-𝑌 -transition.

𝑡6 (𝑡3 𝑡7 𝑡1 𝑡4 ) ∞

−−−−−→ −−−

, with 𝑡 6 being the

The following corollary strengthens Proposition 4.18 – considerably – for FCsystems. Its proof follows directly from the definition of well-formedness and from the preceding covering results. 7 See Definition 4.15.

5.8 An algorithm for the construction of T-components

117

Corollary 5.44 S- and T-component coverings of well-formed FC-nets Let 𝑁 be a well-formed free-choice net. 𝑁 is covered by strongly connected S-components as well as by strongly connected T-components.

Example 5.45 Relevance of the free-choice premise Figure 5.20 shows that the FC premise is essential for T-component coverings. There are also live and safe Petri nets which do not satisfy (PS), and, a fortiori, are not covered by strongly connected S-components.

𝑎

𝑐

𝑏

Fig. 5.20: A live and safe Petri net which is not covered by strongly connected T-components. The net is not FC, and Corollary 5.44 does not apply.

5.8 An algorithm for the construction of T-components In this chapter, an algorithm is described which produces, given a transition 𝑎 of some well-formed free-choice net, a T-component covering 𝑎. First, call a Petri net 𝑁 choice-free if ∀𝑠 ∈ 𝑆 : |𝑠• | ≤ 1.8 Clearly, plain choice-free nets are also free-choice, and T-nets are choice-free. For well-formed choice-free nets, there is also a converse:

Proposition 5.46 Choice-free nets and T-nets A well-formed, plain, choice-free Petri net is a T-net. Proof: Let 𝑁 = (𝑆, 𝑇, 𝐹) be a well-formed, plain, choice-free net. By Proposition 4.18, 𝑁 is covered by cycles. Suppose there is a place 𝑠 ∈ 𝑆 with two distinct input transitions 𝑡1 , 𝑡2 ∈ • 𝑠. Pick any directed cycle through 𝑡 1 and 𝑠. In any live marking, 𝑡2 can put ever more tokens on the places of this cycle, but none of these tokens can be taken away from the cycle by plainness and choice-freeness. Hence no live 8 See also Chapter 12 later in the book.

118

5 Graph-theoretical Structure of Petri Nets

marking can be bounded, contradicting well-formedness. This contradiction shows that no such place exists, and the net is, in fact, a T-net. 5.46

𝑔

𝑁4

𝑠6

𝑎 𝑠1

𝑓

𝑠2

𝑠3

𝑁5

𝑁1

𝑑

𝑏

𝑠5

𝑠4 𝑐

𝑒

𝑁2

𝑁3

Fig. 5.21: A well-formed free-choice Petri net. A live and 1-bounded (safe) marking is specified by hollow gray tokens on 𝑠5 and 𝑠6 . This net is covered by three Tcomponents 𝑁1 , 𝑁2 , 𝑁3 and two S-components 𝑁4 , 𝑁5 . There are two non-singleton transition clusters, {𝑎, 𝑔} and {𝑐, 𝑒}. The other transition clusters, {𝑏}, {𝑑}, and { 𝑓 }, are singletons. There is one non-singleton place cluster, {𝑠2 , 𝑠3 }. The other place clusters, {𝑠1 }, {𝑠4 }, {𝑠5 }, and {𝑠6 }, are singletons. Figure 5.21 depicts a well-formed free-choice net with three T-components and two S-components. It will be used to illustrate the algorithm. Let us first define a transition cluster as a maximal set of transitions {𝑡1 , . . . , 𝑡 𝑛 } whose presets agree, i.e., • 𝑡 = . . . = • 𝑡 .9 In a free-choice net, two transitions belong to the same transition 1 𝑛 cluster if their presets are non-disjoint. In a marking enabling one of these transitions, all others are enabled as well (and could be chosen to be fired instead of the first one). It may help to imagine the transitions of a cluster to be “rivals”. The following algorithm provides a full alternative proof of Theorem 5.42: Proof: Let 𝑁 be a well-formed free-choice net, with an initial live and bounded marking 𝑀0 . Let us pick and fix 𝑢 to be any transition of 𝑁. We want to show that there is a Tcomponent covering 𝑢. The property is trivial if 𝑢 is isolated. From Proposition 4.18, we may also restrict our attention to the strongly connected component containing 𝑢. If all transitions of 𝑁 are singleton clusters (i.e., there is no proper rivalry), then we are already done since 𝑁 is then also choice-free and a T-net, from Proposition 5.46, and because 𝑁 is plain and well-formed.

9 See also Definition 6.3 (conflict cluster) in the next chapter.


Let 𝑡 be a transition with rivals such that there is no rivalry strictly between 𝑡 and 𝑢: from strong connectedness and the existence of rivalries, this is always possible¹⁰ (we allow 𝑡 = 𝑢). We thus have a path 𝑠 → 𝑢0 = 𝑡 → 𝑠1 → 𝑢1 → 𝑠2 → 𝑢2 → . . . → 𝑠𝑛 → 𝑢𝑛 = 𝑢 where for all 𝑖 ∈ {1, . . . , 𝑛 − 1} : |𝑠𝑖•| = 1, 𝑠• = {𝑡, 𝑡1, . . . , 𝑡𝑘} with |𝑠•| > 1, and 𝑘 > 0 (i.e., the {𝑡1, . . . , 𝑡𝑘} are the rivals of 𝑡, and there is at least one such rival).
The idea is to see what happens if we consider the net 𝑁′ which is just like 𝑁, except that all transitions in {𝑡1, . . . , 𝑡𝑘} and all their surrounding arcs are deleted. Then 𝑁′ is again free-choice; 𝑀0 is again a bounded marking of 𝑁′; all firing sequences of (𝑁′, 𝑀0) are also firing sequences of (𝑁, 𝑀0); and the set of reachable markings of (𝑁′, 𝑀0) is a subset of the set of reachable markings of (𝑁, 𝑀0). However, some transitions may no longer be live.
We claim, first of all, that not all transitions are non-live in (𝑁′, 𝑀0); in fact, 𝑡 is amongst the live transitions of (𝑁′, 𝑀0). For consider any marking 𝑀 reachable in (𝑁′, 𝑀0). Then 𝑀 is also reachable in (𝑁, 𝑀0), and since 𝑡 is live in (𝑁, 𝑀0), there is some sequence 𝜏 such that 𝑀 −𝜏→ 𝑀′ and 𝑀′ enables 𝑡 in (𝑁, 𝑀0). If 𝜏 does not contain any of the {𝑡1, . . . , 𝑡𝑘}, it is also firable from 𝑀 in (𝑁′, 𝑀0), enabling 𝑡 there as well. But if some 𝑡𝑖 is contained in 𝜏, say 𝜏 = 𝜏1 𝑡𝑖 𝜏2, then 𝜏1 already enables 𝑡 from 𝑀 (by the free-choice property, since 𝑡 is in the same cluster as 𝑡𝑖). In both cases, 𝑡 can be enabled after 𝑀 in (𝑁′, 𝑀0); hence, 𝑡 is live in (𝑁′, 𝑀0).
But all the transitions in the selected path between 𝑡 and 𝑢 are also live from 𝑀0. Indeed, if 𝑢𝑗 is live while 𝑢𝑗+1 is not, let us consider a marking reachable from 𝑀0 where 𝑢𝑗+1 is dead; then, by activating 𝑢𝑗, 𝑠𝑗+1 may receive as many tokens as we want while they may not be absorbed, contradicting the boundedness assumption. Hence, in particular, 𝑢 remains live.
Let 𝑇𝑑 denote the set of transitions which are not live in (𝑁′, 𝑀0) and let 𝑀1 be a marking reached in (𝑁′, 𝑀0) in which all transitions in 𝑇𝑑 are dead (i.e., cannot be fired after any firing sequence). (Of course, 𝑀1 can also be reached in (𝑁, 𝑀0).) Now take all transitions in 𝑇𝑑, and all of their surrounding arcs, out of 𝑁′, keep the strongly connected component of 𝑁′ containing 𝑢 (suppressing 𝑇𝑑 will in general disconnect the previously connected net under consideration) and call the new net 𝑁1. By construction, (𝑁1, 𝑀1) is a live and bounded (hence strongly connected) free-choice net with less rivalry. And 𝑢 is in 𝑁1 (as well as the whole chain from 𝑡 to 𝑢).
Continuing the construction, we shall eventually construct a bounded and live (i.e., well-formed), strongly connected choice-free net containing 𝑢. From Proposition 5.46 it is also a T-net, and since it was built by dropping transitions and keeping connected components, it is a T-component of the original net, which completes the proof since 𝑢 was chosen arbitrarily. 5.42

10 We may for instance start from any transition with rivals, go to 𝑢, and keep the last visited transition with rivals.
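The graph-manipulation core of this construction can be sketched in a few lines of Python (our own illustration, not the book's; all names are made up). It shows the two operations the proof iterates — deleting the rivals of a chosen transition and restricting to the strongly connected component containing 𝑢 — while the liveness bookkeeping (finding a marking under which the transitions of 𝑇𝑑 are dead, and removing 𝑇𝑑 itself) is deliberately left out.

```python
# Rough sketch of one elimination step of the T-component construction.
# A net is given by place set S, transition set T and arc set F (pairs).
def neighbours(F):
    succ, pred = {}, {}
    for (x, y) in F:
        succ.setdefault(x, set()).add(y)
        pred.setdefault(y, set()).add(x)
    return succ, pred

def reachable(start, adj):            # plain DFS over an adjacency map
    seen, stack = {start}, [start]
    while stack:
        x = stack.pop()
        for y in adj.get(x, ()):
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return seen

def delete_transitions(S, T, F, dead):
    T2 = T - dead
    F2 = {(x, y) for (x, y) in F if x not in dead and y not in dead}
    return S, T2, F2

def scc_of(u, S, T, F):               # keep only u's strongly connected component
    succ, pred = neighbours(F)
    nodes = reachable(u, succ) & reachable(u, pred)
    F2 = {(x, y) for (x, y) in F if x in nodes and y in nodes}
    return S & nodes, T & nodes, F2

def eliminate_rivals(S, T, F, t, u):
    # Drop all rivals of t (other transitions with the same preset), then
    # restrict to the strongly connected component containing u.
    _, pred = neighbours(F)
    rivals = {t2 for t2 in T if t2 != t and pred.get(t2, set()) == pred.get(t, set())}
    return scc_of(u, *delete_transitions(S, T, F, rivals))
```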


Figure 5.22 illustrates how the construction proceeds in the example which is shown in Figure 5.21.


Fig. 5.22: Finding a T-component covering transition 𝑏 as shown in Figure 5.21. The various steps are exhibited in blue on the left-hand side and explained below. An initial, live and bounded, marking 𝑀0 is shown (hollow gray tokens on 𝑠5 and 𝑠6). These tokens are moved during the construction of new live markings to places 𝑠4 and 𝑠1, respectively (gray tokens on these places). The end result is shown on the right-hand side. The live and bounded marking constructed during the proof is also shown (tokens on 𝑠1 and 𝑠4). Places 𝑠5 and 𝑠6 have been deleted in the last step, to keep only the strongly connected T-net containing 𝑏. There are ten steps:
1) 𝑢 = 𝑏 is picked. It has no rivals.
2) We may find a transition with rivals by backtracking from 𝑏. This could be either 𝑎 or 𝑐; we choose 𝑎.
3) We delete transition 𝑔 because it is a rival of 𝑎. Now 𝑓 is non-live and should be deleted next.
4) However, before doing so, a new live marking has to be found in which 𝑓 is dead. In this case, 𝑓 is fired once and a token is moved from 𝑠6 to 𝑠1.
5) Transition 𝑓 is now dead and can be deleted.
6) The search for transitions with rivals continues, and we find 𝑐 by backtracking from a transition without rivals (in this case, 𝑏).
7) Transition 𝑒 is now eliminated because it is a rival of 𝑐.
8) Now 𝑑 is non-live. A new marking has to be found in which 𝑑 is dead (in this case, moving a token from 𝑠5 to 𝑠4).
9) Transition 𝑑 is deleted. The construction now finishes because no further rivalry exists and, by Proposition 5.46, the result is a T-net.
10) Places 𝑠6 and 𝑠5 are deleted in order to keep only the T-component containing 𝑏.


5.9 An algorithm for the construction of S-components
In this section, we sketch an algorithm for the construction of an S-component covering a given place 𝑠 of a well-formed free-choice net. It uses the concept of a place cluster, which is symmetric to that of a transition cluster. A place cluster is defined as a maximal set of places {𝑠1, . . . , 𝑠𝑛} with 𝑠1• = . . . = 𝑠𝑛•.¹¹ In a free-choice net, two places belong to the same place cluster if their postsets are non-disjoint. Before one of the output transitions of a place cluster can be fired, all of the places in the place cluster need to have at least one token. The places of a place cluster may figuratively be referred to as “siblings”. Figure 5.21 shows some examples.
A net 𝑁 is join-free if ∀𝑡 ∈ 𝑇 : |•𝑡| ≤ 1. Clearly, plain join-free nets are also free-choice, and S-nets are join-free. For well-formed, plain, join-free nets, there is also a converse:
Proposition 5.47 Join-free nets and S-nets A well-formed, plain, join-free Petri net is an S-net.
Proof: Let 𝑁 = (𝑆, 𝑇, 𝐹) be a well-formed, plain, join-free net. Suppose there is a transition 𝑡 ∈ 𝑇 with two distinct output places 𝑠1, 𝑠2 ∈ 𝑡•. In any live marking, 𝑡 properly increases the number of tokens on the net, with no possibility to decrease it, due to plainness and join-freeness. Hence no live marking can be bounded, contradicting well-formedness. Thus no such transition exists, and the net is, in fact, an S-net. 5.47
Sketch of an algorithm: Given a well-formed free-choice net, we may proceed symmetrically to the previous algorithm, starting with a given place and eventually using Proposition 5.47. Places with more than one sibling are sought in forward direction, and one just keeps the visited places among them, dropping their siblings. During the process, places becoming unbounded are also dropped (rather than non-live transitions). We end up with an S-component containing the place initially selected. End of sketch
The procedure is illustrated in Figure 5.21. Suppose we wish to construct a strongly connected S-component covering 𝑠4. Since 𝑠4 has no sibling other than itself, we search forward, rather than backward, finding 𝑠3 with sibling 𝑠2. After dropping 𝑠2, 𝑠1 and 𝑠6 are unbounded. After dropping them, all places have only one sibling. Now, transitions 𝑎, 𝑓, 𝑔 are dropped. The S-component 𝑁5 shown in Figure 5.21 remains.

11 See also Definition 6.3 (PRESETS) in the next chapter.


5.10 Bibliographical remarks and further reading
The constructions exemplified in Figures 5.1 (turning a non-plain into a plain net) and 5.2 (loop splitting) have to be applied with care. For instance, the construction creating plain arcs out of weighted arcs does not preserve the property of self-stabilisation, as pointed out in [CHR95] (by Lucy Cherkasova, Rodney R. Howell, and Louis E. Rosier). Loop splitting plays a major role when place/transition Petri nets are translated into VAS (vector addition systems). VAS’s cannot express loops and are otherwise equivalent in descriptive power to place/transition nets, but they are often preferred in the study of complexity properties of Petri nets because of their closeness to Marvin Minsky’s [Min67] counter machines (see Section 10.6 for an example).
The free choice property was first investigated by Michel H.Th. Hack [Hac72] in cooperation with Fred G. Commoner [Com72]. The covering theorems can be found in [Hac72]. The material of this chapter is also presented in the first part of [DE95], by Jörg Desel and Javier Esparza, the definitive book on the structure theory of free-choice systems. This book contains all proofs in detail,¹² including a full proof of Theorem 5.42 (of which the sketch given in Section 5.7 is an abridged version), as well as more historical remarks. The proof given in Section 5.8 resembles, in its style, the original intentions of [Hac72]. No publication describing in full the algorithms of Sections 5.8 and 5.9 exists, but the authors are planning to issue a report after releasing this book. The construction in Section 5.5.4 is in [JLL77] (Neil D. Jones, Lawrence H. Landweber, and Y. Edmund Lien). S-systems, T-systems and marked graphs have been investigated – under various names – in [Hol68; HC70] (by Anatol W. Holt and Fred G. Commoner, “state transition diagrams”), [Fur76] (by Fred C. Furtek), [Com+71] (by Fred G. Commoner, Anatol W. Holt, Shimon Even, Amir Pnueli, “marked graphs”), and [GL73] (by Hartmann J. Genrich and Kurt Lautenbach, “synchronisation graphs”).
Of the restrictions mentioned in this chapter, Provision 5.1 is the most important. It is possible to generalise T-systems and FC-systems to non-plain Petri nets: compare [Ter+92] (“weighted T-systems”, by Enrique Teruel, Piotr Chrząstowski-Wachtel, José M. Colom and Manuel Silva) and [TS96] (by Enrique Teruel and Manuel S. Silva, “equal conflict systems”). For an application to Flexible Manufacturing Systems, compare also [Sau03] by Nathalie Sauer. The properties of these generalised net classes are by no means easily derived from the properties of their plain counterparts. For instance, Theorem 5.31 leads to a polynomial algorithm checking the liveness of a marked T-system. In fact, the elementary cycle enumerating algorithm in [SL76] (Jayme L. Szwarcfiter and Peter E. Lauer) was designed with this application in mind. A counterpart of Theorem 5.31 (a)⇔(c) is known for weighted T-systems, but the reproduction vector in “c)” may be exponentially long, rather than just the all-ones vector 1.
¹² For Theorem 5.34, the proof given there differs from the proof presented above.



Fig. 5.23: Two weighted T-nets. The challenges are: (i) to detect whether they are structurally live or not; (ii) if so, to find a minimal live marking; and (iii) to check whether all minimal live markings are reachable from each other.

At the date of publishing this book, no polynomial algorithm appears to be known to check the liveness of weighted T-systems. Figure 5.23 is included for readers wishing to try a finger exercise in weighted T-systems (even just in weighted cycles, also called ring nets, see [Wei15], by Jeremias Weihmann). The net on the right-hand side is from [CR95] (Piotr Chrząstowski-Wachtel and Marek Raczunas). There have been recent advances in tackling this problem in [Dev23] (by Raymond Devillers).

5.11 Exercises Exercise 5.1 Converting a net into a free-choice net To a given plain Petri net 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0 ), we can associate a free-choice Petri net 𝐹𝐶 (𝑁) as follows: first, if 𝑁 contains nodes of the form (𝑠, 𝑡) or (𝑡, 𝑠), we shall rename them; then 𝐹𝐶 (𝑁) = (𝑆 ′ , 𝑇 ′ , 𝐹 ′ , 𝑀0′ ) with 𝑆 ′ = 𝑆 ∪ {(𝑠, 𝑡)|𝑠 ∈ 𝑆, 𝑡 ∈ 𝑇, 𝐹 (𝑠, 𝑡) = 1}; 𝑇 ′ = 𝑇 ∪ {(𝑡, 𝑠)|𝑠 ∈ 𝑆, 𝑡 ∈ 𝑇, 𝐹 (𝑠, 𝑡) = 1}; ∀𝑡 ∈ 𝑇∀𝑠 ∈ 𝑆 : 𝐹 ′ (𝑡, 𝑠) = 𝐹 (𝑡, 𝑠) and, when 𝐹 (𝑠, 𝑡) = 1, 𝐹 ′ (𝑠, (𝑡, 𝑠)) = 𝐹 ′ ((𝑡, 𝑠), (𝑠, 𝑡)) = 𝐹 ′ ((𝑠, 𝑡), 𝑡) = 1 (and no further arcs); 𝑀0′ | 𝑆 = 𝑀0 ; and 𝑀0′ | 𝑆 ′ \𝑆 = (0, . . . , 0) T . a) Find a suitable example and explain the construction. Ideally, your example could serve as a basis for answering b). b) Show why the following algorithm “solving” the liveness problem is flawed. Upon inputting a plain Petri net, construct the free-choice Petri net according to a) and apply the liveness criterion for free-choice Petri nets; the answer of the latter is also conjectured to answer whether or not the original net is live. Exercise 5.2 Net analysis a) Give a positive S-invariant to the Petri net shown in Figure 5.24. b) Give a positive T-invariant.


c) Give all four traps 𝑄 ≠ ∅. d) Give all three siphons 𝐷 ≠ ∅. e) Use the Commoner-Hack Criterion in order to determine whether or not this net is live under the initial marking shown below. Justify your answer.


Fig. 5.24: The Petri net for Exercise 5.2.

Exercise 5.3 Number of reachable markings in circuits Let 𝑛 > 0 and let 𝑁 = (𝑆, 𝑇, 𝐹) be a Petri net with 𝑆 = {𝑠0 , . . . , 𝑠 𝑛−1 }, 𝑇 = {𝑡0 , . . . , 𝑡 𝑛−1 } as well as 𝐹 (𝑠𝑖 , 𝑡𝑖 ) = 1 and 𝐹 (𝑡𝑖 , 𝑠 (𝑖+1) mod 𝑛 ) = 1. Find a formula which allows one to calculate the number of reachable markings of the system (𝑁, (𝑛, 0, . . . , 0) T ). Exercise 5.4 S-systems and T-systems Give constructions which produce, for every 𝑛 ∈ N, a) a strongly connected S-System b) a strongly connected T-System such that the system is of size 𝑂 (𝑛) (make separate estimations for places, transitions, arcs and tokens of the initial marking), while the size of the reachability graph is at least exponential in 𝑛. c) Can you find safe (1-bounded) solutions?


Exercise 5.5 Liveness Let 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0 ) and 𝑁 ′ = (𝑆, 𝑇, 𝐹, 𝑀) with 𝑀 ≥ 𝑀0 . Prove or disprove (find plain counterexamples, if possible): a) If 𝑁 is weakly live, then so is 𝑁 ′ . b) If 𝑁 is live, then so is 𝑁 ′ . c) If 𝑁 is a live free-choice system, then so is 𝑁 ′ . Exercise 5.6 Liveness a) Characterise the set of live markings of the Petri net 𝑁 shown in Figure 5.25. Hint: Consider the two places 𝑟 1 and 𝑟 2 to be “resources” and check the siphons of this net. b) Find a live marking, as well as a weakly live marking which is not live. Try to minimise the number of tokens.


Fig. 5.25: The Petri net for Exercise 5.6.

Solution 5.1 (Exercise 5.1) For a), see Figure 5.26. It may be observed that, here, 𝑁 is already free-choice. Hence, this example may seem nonsensical. However, if we drop in 𝑁 the arc from 𝑠2 to 𝑡 1 , we get a net 𝑁1 which is asymmetric choice (and no longer free-choice).13 By contrast, the corresponding net 𝐹𝐶 (𝑁1 ), which is obtained from 𝐹𝐶 (𝑁) by dropping the nodes (𝑡1 , 𝑠2) and (𝑠2 , 𝑡1 ) (and the arcs around them) is still free-choice. 13 Asymmetric choice systems are a class of nets which are, in a certain sense, “slightly larger” than the class of free-choice nets. Their formal definition is relegated to Section 8.2.


It may also be observed that transitions in 𝑁 correspond to firing sequences in 𝐹𝐶(𝑁); for instance, from the initial marking (1 1 0)ᵀ, 𝑡2 is mimicked by (𝑡2, 𝑠1)(𝑡2, 𝑠2)𝑡2 (as well as by (𝑡2, 𝑠2)(𝑡2, 𝑠1)𝑡2). But the reverse is not always true; for instance, the sequence (𝑡1, 𝑠1)(𝑡2, 𝑠2) does not allow anything to be performed afterwards and does not correspond to any firing in 𝑁. This will be exploited in point b) below.
b): Add to the net 𝑁 on the left of Figure 5.26 a transition 𝑡3 from 𝑠3 to 𝑠1 and to 𝑠2. The resulting system 𝑁′′ is live under the marking (1 1 0)ᵀ. In the corresponding free-choice system 𝐹𝐶(𝑁′′), however, which is obtained from 𝐹𝐶(𝑁) by adding nodes (𝑠3, 𝑡3), (𝑡3, 𝑠3) and 𝑡3 (and adequate arcs), there is a deadlock after firing (𝑡1, 𝑠1) and (𝑡2, 𝑠2). If the algorithm yields “non-live” because of this, the answer is wrong for the original system. The correspondence between the behaviours of a net and the corresponding free-choice net is thus not obvious.
Note: In the original net, the siphon {𝑠2, 𝑠3} contains itself as a marked trap. In the free-choice net, {𝑠1, 𝑠2, (𝑠1, 𝑡2), (𝑠2, 𝑡1), 𝑠3} is a siphon containing no proper traps. This siphon has no correspondent in the original net.


Fig. 5.26: Solution of Exercise 5.1(a).
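The construction 𝐹𝐶(𝑁) of Exercise 5.1 is mechanical enough to be written down in a few lines. The following Python sketch (ours; the set-based net encoding is an assumption, and the example net is deliberately not the one of Figure 5.26) builds 𝐹𝐶(𝑁) and checks that the result is free-choice.

```python
# Sketch of the FC(N) construction from Exercise 5.1 (illustrative only).
def fc(S, T, F, M0):
    pt_arcs = {(x, y) for (x, y) in F if x in S}     # place -> transition arcs
    tp_arcs = F - pt_arcs                            # transition -> place arcs, kept as is
    S2 = set(S) | {(s, t) for (s, t) in pt_arcs}     # one new place (s, t) per arc s -> t
    T2 = set(T) | {(t, s) for (s, t) in pt_arcs}     # one new transition (t, s) per arc
    F2 = set(tp_arcs)
    for (s, t) in pt_arcs:                           # replace s -> t by s -> (t,s) -> (s,t) -> t
        F2 |= {(s, (t, s)), ((t, s), (s, t)), ((s, t), t)}
    M02 = dict(M0)                                   # original places keep their tokens,
    M02.update({p: 0 for p in S2 - set(S)})          # new places start unmarked
    return S2, T2, F2, M02

def is_free_choice(S, F):
    post = lambda s: {y for (x, y) in F if x == s}
    pre = lambda t: {x for (x, y) in F if y == t}
    return all(len(post(s)) <= 1 or all(pre(t) == {s} for t in post(s)) for s in S)

# Example use on a tiny cycle (not the net of the figure): s1 -> t1 -> s2 -> t2 -> s1.
S, T = {"s1", "s2"}, {"t1", "t2"}
F = {("s1", "t1"), ("t1", "s2"), ("s2", "t2"), ("t2", "s1")}
S2, T2, F2, M02 = fc(S, T, F, {"s1": 1, "s2": 0})
assert is_free_choice(S2, F2)
```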

Solution 5.2 (Exercise 5.2) a) (2 1 1 1 1 1) T b) (2 2 1 1 1 3 3 1 4) c) 𝑄 1 = {𝑠1 , 𝑠2 , 𝑠5 , 𝑠6 }, 𝑄 2 = {𝑠1 , 𝑠2 , 𝑠3 , 𝑠5 , 𝑠6 }, 𝑄 3 = {𝑠1 , 𝑠2 , 𝑠4 , 𝑠5 , 𝑠6 }, 𝑄 4 = {𝑠1 , 𝑠2 , 𝑠3 , 𝑠4 , 𝑠5 , 𝑠6 } d) 𝐷 1 = {𝑠1 , 𝑠2 , 𝑠3 , 𝑠4 , 𝑠6 }, 𝐷 2 = {𝑠1 , 𝑠2 , 𝑠3 , 𝑠4 , 𝑠5 }, 𝐷 3 = {𝑠1 , 𝑠2 , 𝑠3 , 𝑠4 , 𝑠5 , 𝑠6 } e) It is easy to check that this net is free-choice. The siphon 𝐷 1 contains no proper trap, since every trap contains 𝑠5 . In particular, it contains no trap which is marked under the initial marking. Hence, by the Commoner-Hack Criterion, the net is not live. Indeed, 𝑡 14 𝑡54 𝑡 64 empties 𝐷 1 and kills the net. Solution 5.3 (Exercise 5.3) If an 𝑛-cycle as specified in the exercise has 𝑚 tokens, then in all reachable markings there are 𝑚 tokens, and all distributions of these 𝑚 tokens onto the places are possible;


that is, every such distribution is a reachable marking, and vice versa. Calculating the number 𝐶(𝑛, 𝑚) of these distributions is a combinatorial problem known as distribution with repetitions, and the formula is¹⁴
  $C(n, m) = \binom{m+n-1}{m}$
Specialising this formula to 𝑚 = 𝑛 yields
  $C(n, n) = \binom{2n-1}{n}$
Thus, for instance, a 3-cycle with three places, three transitions, and three tokens, has 10 reachable markings.
Solution 5.4 (Exercise 5.4)
First the easy solutions.
b): The 𝑛-bit system shown in Figure 7.2 has 𝑛 transitions, 2·𝑛 places, 2·𝑛 arcs, and 𝑛 tokens in the initial marking. It can be made strongly connected by adding a transition collecting tokens from all output places and delivering tokens to all input places. This adds one more transition and 𝑂(𝑛) more arcs, still keeping size 𝑂(𝑛) for the system. Its reachability graph has size 2ⁿ.
c): If a strongly connected S-system is safe, it has exactly one token in the initial marking, as well as in any reachable marking. Hence the state space cannot be of exponential size. So, the answer is “no” for S-systems. The 𝑛-bit system of b) is safe. So, the answer is “yes” for T-systems.
a): If we put 𝑘 tokens on a simple cycle with 𝑛 places and 𝑛 transitions, we get $\binom{n+k-1}{k}$ reachable states (compare Exercise 5.3). The question is: can we choose 𝑘 in such a way that there are 𝑂(2ᵏ) reachable states? Choosing 𝑘 = 𝑛 is a good idea (it corresponds to putting 𝑛 tokens on our cycle). Then $\binom{n+k-1}{k}$ becomes
  $\binom{2n-1}{n}$    (5.7)

We now estimate how $\binom{2n-1}{n}$ relates to $2^n$. In fact, we can approximate (5.7) both from below and from above, making use of the Vandermonde identity:¹⁵
  $\binom{n+m}{k} = \sum_{j=0}^{k} \binom{n}{j}\binom{m}{k-j}$    (5.8)
¹⁴ Consult https://math.stackexchange.com/questions/208377/combination-with-repetitions.
¹⁵ See https://en.wikipedia.org/wiki/Vandermonde’s_identity


Approximation from below:
  $\binom{2n-1}{n} = \sum_{j=0}^{n} \binom{n}{j}\binom{n-1}{n-j}$   (by (5.8), setting $m = n-1$ and $k = n$)
  $\geq \sum_{j=0}^{n} \binom{n}{j} - (n-1) = 2^n - (n-1)$   (because the factor $\binom{n-1}{n-j}$ is $\geq 1$ for all $j \geq 1$)
So, with this choice of 𝑘, we have a little less than 2ⁿ, but still 𝑂(2ⁿ) many, reachable states. (Part a) of the exercise is already settled at this point.)
Approximation from above:
  $\binom{2n-1}{n} = \sum_{j=0}^{n} \binom{n}{j}\binom{n-1}{n-j}$
  $\leq \sum_{j=0}^{n} \binom{n}{j}\binom{n}{n-j}$   (since $\binom{n-1}{n-j} \leq \binom{n}{n-j}$)
  $= \sum_{j=0}^{n} \binom{n}{j}\binom{n}{j}$   (because in Pascal’s triangle, $\binom{n}{n-j} = \binom{n}{j}$)
  $\leq \bigl(\sum_{j=0}^{n} \binom{n}{j}\bigr)^2 = (2^n)^2 = 2^{2n}$   (extract the square)
Summarising, if $r = \binom{2n-1}{n}$ denotes the number of states in a cycle with 𝑛 places, 𝑛 transitions and 𝑛 tokens, we get $2^n - (n-1) \leq r \leq 2^{2n}$. In other words, and in complexity theory language, $r \in 2^{\Theta(n)}$.
Solution 5.5 (Exercise 5.5)
a) If 𝑁 is weakly live, then, by definition, there is a weakly live transition 𝑡, for which, by definition, there is an infinite firing sequence (starting from the initial marking) which contains 𝑡 infinitely often. By the monotonicity property of Petri nets, the same firing sequence is also firable in 𝑁′; hence 𝑁′ is weakly live as well.
b) The net on the left-hand side is live, while the net on the right-hand side is not live. It is possible to find plain counterexamples by applying the constructions to get rid of arc weights > 1 and side-loops given at the beginning of the chapter 5 on the graph-theoretical structure of Petri nets. (The two counterexample nets 𝑁 and 𝑁′, with transitions 𝑡1, 𝑡2 and arc weights 2 and 3, are depicted in the original figure.)
c) Assume that 𝑁 is a live free-choice system. Since 𝑁′ differs from 𝑁 only by its initial marking, it is also free-choice and has the same siphons and traps as 𝑁. Since 𝑁 is live, every proper siphon contains a marked trap (by the Commoner/Hack liveness criterion). Because of 𝑀 ≥ 𝑀0, the same is true for 𝑁′. By the liveness criterion again, 𝑁′ is live.


Both directions of the liveness criterion have been used. In summary, liveness is not a monotonic property for Petri nets, but it is for free-choice ones.
Solution 5.6 (Exercise 5.6)
The net consists of two cyclic processes which share two resources 𝑟1, 𝑟2 in an asymmetric way: the process on the left-hand side acquires 𝑟1 and then 𝑟2 before releasing both, while the other process acquires them in the opposite order (first 𝑟2, then 𝑟1).
Part a) of the exercise: Suppose first that the net deadlocks. Then there is necessarily a token-empty proper siphon in the deadlock marking. Hence we may first investigate which siphons are emptiable. There are five minimal proper siphons: {𝑠1, 𝑠2, 𝑠3, 𝑠4}, {𝑠5, 𝑠6, 𝑠7, 𝑠8}, {𝑟1, 𝑠2, 𝑠3, 𝑠7}, {𝑟2, 𝑠3, 𝑠6, 𝑠7} and {𝑟1, 𝑟2, 𝑠3, 𝑠7}. The first four of these are also the support of minimal semipositive S-invariants. This means that they remain marked if they are initially marked. Hence for liveness, we will only have to guarantee the following initial condition:
  (𝑠1 + 𝑠2 + 𝑠3 + 𝑠4 > 0) ∧ (𝑠5 + 𝑠6 + 𝑠7 + 𝑠8 > 0) ∧ (𝑟1 + 𝑠2 + 𝑠3 + 𝑠7 > 0) ∧ (𝑟2 + 𝑠3 + 𝑠6 + 𝑠7 > 0)    (5.9)

where 𝑠𝑖 denotes the number of tokens on 𝑠𝑖. By the basic S-invariant property, (5.9) is an inductive invariant.
The potential emptiability of the fifth siphon 𝐷 = {𝑟1, 𝑟2, 𝑠3, 𝑠7} is not so easy to analyse. By definition, it is empty in any marking in which 𝑟1, 𝑟2, 𝑠3, 𝑠7 carry zero tokens. In that case, none of the transitions 𝑡1, 𝑡2, 𝑡5, 𝑡6 can fire. But (5.9) may still be true (for instance, if all other places are marked). Let us thus categorise all transitions with respect to their effects on 𝐷:
– 𝑡1 and 𝑡5 decrease the number of tokens on 𝐷 by 1 token;
– 𝑡3 and 𝑡7 increase the number of tokens on 𝐷 by 1 token;
– 𝑡2, 𝑡4, 𝑡6 and 𝑡8 neither decrease nor increase the number of tokens on 𝐷.
We have to ensure that the emptying transitions (𝑡1, 𝑡5) cannot fire often enough (as many times as 𝐷 has tokens) without filling ones (𝑡3, 𝑡7) having to occur. Initially, 𝐷 has 𝑟1 + 𝑟2 + 𝑠3 + 𝑠7 tokens. The number of times 𝑡1 can occur without 𝑡3 having to occur is bounded by the sum of markings on 𝑠1 and 𝑠4, but also by the sum of tokens on 𝑟1 and 𝑠7, hence by the minimum of these two sums. Likewise, the number of firings of 𝑡5 without 𝑡7 having to occur is


limited by the minimum of 𝑟2+𝑠3 and 𝑠5+𝑠8. Thus we are led to consider (tentatively) the relationship
  𝑟1 + 𝑟2 + 𝑠3 + 𝑠7 > min(𝑟1+𝑠7, 𝑠1+𝑠4) + min(𝑟2+𝑠3, 𝑠5+𝑠8)    (5.10)

First of all, we may observe that this relationship is invariant with regard to firing. In other words, if it holds initially, then it holds in every successor marking. To see this, we may check the effects of all transitions on the left-hand side and the right-hand side of the inequality. Take, for instance, 𝑡1. By firing 𝑡1, the left-hand side of (5.10) decreases by 1, since one token is removed from 𝑟1 while no tokens are subtracted from 𝑟2, 𝑠3 and 𝑠7. On the right-hand side, the term min(𝑟1+𝑠7, 𝑠1+𝑠4) decreases by 1, while the other term min(𝑟2+𝑠3, 𝑠5+𝑠8) remains constant. In all, the inequality remains valid by the firing of 𝑡1. Other transitions can be checked similarly:
– 𝑡2: both sides of (5.10) remain constant;
– 𝑡3: the left-hand side of (5.10) increases by 1; so does the right-hand side;
– 𝑡4: both sides of (5.10) remain constant.
Symmetrically, the other transitions 𝑡5, 𝑡6, 𝑡7, 𝑡8 do not change the truth of (5.10), either. Hence (5.10) is another inductive invariant.
Finally, we claim that the net is live if and only if both (5.9) and (5.10) hold true in the initial marking.
First part of the proof: If (5.9)∧(5.10) does not hold, then the net is not live. This is clear if (5.9) does not hold. Hence let us assume that (5.9)∧¬(5.10). The latter means that the negation of (5.10) holds:
  𝑟1 + 𝑟2 + 𝑠3 + 𝑠7 ≤ min(𝑟1+𝑠7, 𝑠1+𝑠4) + min(𝑟2+𝑠3, 𝑠5+𝑠8)    (5.11)

Now we construct a deadlock. First consider the special case that zero tokens are on 𝑠3, 𝑠4, 𝑠7, and 𝑠8. Then (5.11) becomes
  𝑟1 + 𝑟2 ≤ min(𝑟1, 𝑠1) + min(𝑟2, 𝑠5)    (5.12)
and this is possible only if both 𝑠1 ≥ 𝑟1 and 𝑠5 ≥ 𝑟2. A deadlock occurs after firing 𝑡1 𝑟1 times and 𝑡5 𝑟2 times. The general case can be reduced to the special case by firing 𝑡3 𝑠3 times, firing 𝑡7 𝑠7 times, and after that, firing 𝑡4 𝑠3+𝑠4 times and 𝑡8 𝑠7+𝑠8 times. Since every firing changes both sides of (5.10) by the same amount (as checked above), the inequality (5.11) is preserved by firing as well, so it is still true in the marking reached after these firings. Thus, a deadlock is reached in all possible cases.
Second part of the proof: If (5.9) ∧ (5.10), then the net is live. We argue that if any transition of the net is dead, then there is some siphon that can be emptied of tokens. This is clear if one of the S-invariants is token-empty, hence we may assume in the following that all S-invariants carry at least one token (and can, of course, not be emptied of tokens). To deal with 𝐷, observe that the net is AC


(asymmetric choice) and that therefore, Theorem 8.2 (“dead transitions have some eventually dead places”) and Lemma 8.12 (constructing a token-empty siphon) can be applied. Hence, by non-liveness, we find an emptiable minimal siphon, which, by the above, can only be 𝐷.¹⁶ However, we now show that 𝐷 cannot be emptied of tokens, contradicting the assumption and finishing the proof.
To show that 𝐷 cannot be emptied, assume that 𝑀 [𝑡⟩ 𝑀′ such that 𝑀′(𝐷) = 0. Then 𝑡 can only be 𝑡1 or 𝑡5; w.l.o.g., assume 𝑡 = 𝑡1. Consider 𝑀; the numbers 𝑟1, 𝑟2, 𝑠1 etc. are now supposed to denote the tokens in 𝑀. Since (5.10) is an inductive invariant, it also holds in 𝑀. As 𝑡1 is enabled in 𝑀, we have 𝑠1 ≥ 1. Furthermore, 𝑟1 = 1 and 𝑟2 = 𝑠3 = 𝑠7 = 0 in 𝑀, otherwise 𝐷 would not be token-empty under 𝑀′. In this case, (5.10) reduces to
  1 + 0 + 0 + 0 > min(1, 𝑠1+𝑠4) + min(0, 𝑠5+𝑠8) ≥ 1 + 0 = 1,
a contradiction. Hence 𝐷 remains marked, and 𝑁 is live.
Part b) of the exercise: The marking {𝑠3, 𝑠7} is live and has two tokens. The marking {𝑠3} is weakly live but not live; it has only one token.

16 If you solved this exercise without knowing about the theorem and the lemma, you might have re-invented them in some way.

Chapter 6

More Structure Theory

Abstract There are some interesting connections between the structural liveness of a Petri net and the rank of its incidence matrix. They are presented in the first part of this chapter. In the second part, it is shown that these connections lead to a principle of duality with regard to the structure and the behaviour of free-choice nets. They also lead to a reduction algorithm characterising precisely the set of well-formed FC-nets, and to a structural characterisation of reversible FC-nets via traps.

Contents
6.1 Introduction 133
6.2 Conditions for structural liveness 134
6.3 Rank theorem for free-choice nets 141
6.4 Duality 145
6.5 Net reductions 147
6.6 Home states and reversibility 148
6.7 Bibliographical remarks and further reading 152
6.8 Exercises 153

6.1 Introduction In the previous chapter, an exact structural characterisation of structural liveness has been derived for a few Petri net classes. But for general Petri nets, such a result remains out of reach. Nevertheless, structural liveness can be “approximated”, so to speak, from logically different sides. Two separate structural conditions, a sufficient one and a necessary one, can be devised. As it turns out, the rank of the incidence


matrix of a net plays an important role. These two conditions, and the concomitant results, will be described in Section 6.2. For free-choice nets, these results can be combined and simplified, eventually leading to an exact characterisation of their well-formedness. This exact condition is called the rank theorem, and it will be presented in Section 6.3. The rank theorem has various pleasant consequences which are described in Sections 6.4 to 6.6. These results all go to show that well-formed free-choice nets are a very elegant and well-understood class of Petri nets. Provision 6.1 Weak connectedness and plainness In the remaining part of this chapter, 𝑁 is assumed to be weakly connected as well as plain, that is, the function 𝐹 does not yield values > 1. Plainness has been identified as a significant restriction in the last chapter. Weak connectedness is added because the focus is not on systems that can be seen as consisting of many mutually disjoint components.

6.2 Conditions for structural liveness
The aim, in this section, is to describe two conditions for the structural liveness¹ of a Petri net 𝑁. The first condition is a sufficient one, the second is a necessary one. Applied to free-choice nets, they yield a connection between the well-formedness of an FC-net and the rank of its incidence matrix 𝐶.
The column rank (row rank) of 𝐶 is defined as the maximal number of linearly independent column vectors (row vectors, respectively) in 𝐶. Since, as is known from classical courses on Linear Algebra, the column rank and the row rank of any matrix 𝐶 are identical, the rank of 𝐶 is simply defined as one of them. The rank of a Petri net 𝑁 is defined as the rank of its incidence matrix. If 𝑁 is covered by a positive T-invariant, and more generally if 𝑁 has a non-null T-invariant, the rank of 𝐶 is less than |𝑇|, the number of transitions. This is because 𝐶 · 𝑦 = 0 and 𝑦 ≠ 0 means that some non-trivial linear combination of the columns of 𝐶 equals 0, which implies that its columns are linearly dependent and the rank of 𝐶 cannot exceed |𝑇| − 1. We may combine this observation with Proposition 4.14,² to see that any well-formed net has rank at most |𝑇| − 1.
Example 6.2 T-invariants in Figure 6.1 Consider the simple directed cycle shown in 𝑁1 of Figure 6.1. The characteristic function which assigns the number 1 to every transition of the cycle is a minimal positive T-invariant, and the column rank of the net actually equals |𝑇| − 1. Suppose now that this cycle is slightly modified by introducing more conflicts, as depicted in 𝑁2 of Figure 6.1. In order to reproduce a marking, either cycle can be chosen. We get two semi-positive, minimal T-invariants, the sum of which is a positive, non-minimal, T-invariant covering the net. 𝑁2 has column rank |𝑇| − 2.
¹ 𝑁 is structurally live if there exists a live marking. See Definition 4.1.
² Stating that every well-formed Petri net has a positive T-invariant.
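For the cycle 𝑁1 of Example 6.2 this is easy to check numerically. The small sketch below is our own (it assumes the cycle runs 𝑠1 → 𝑡1 → 𝑠2 → 𝑡2 → 𝑠3 → 𝑡3 → 𝑠1, and the use of numpy is our choice); it verifies that the all-ones vector is a T-invariant and that the rank is |𝑇| − 1 = 2.

```python
import numpy as np

# Incidence matrix of the three-cycle N1 (rows: s1, s2, s3; columns: t1, t2, t3).
C = np.array([[-1,  0,  1],
              [ 1, -1,  0],
              [ 0,  1, -1]])

y = np.ones(3)                          # the all-ones transition vector
assert np.all(C @ y == 0)               # ... is a T-invariant (C · y = 0)
assert np.linalg.matrix_rank(C) == 2    # hence the rank is at most |T| - 1, here exactly 2
```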


Fig. 6.1: A simple cycle (l.h.s.) and a modification (r.h.s.). 𝑁1 has three conflict clusters {𝑡1 }, {𝑡 2 }, {𝑡3 }, and PRESETS 𝑁1 = {{𝑠1 }, {𝑠2 }, {𝑠3 }}. The rank of 𝑁1 is 2. 𝑁1 has a single minimal positive T-invariant. 𝑁2 has two conflict clusters {𝑡1 },{𝑡2 , 𝑡3 }, and PRESETS 𝑁2 = {{𝑠1 }, {𝑠2 }}. The rank of 𝑁2 is 1. 𝑁2 has two different minimal semipositive T-invariants. Making a wild guess from this example, there seem to be some connections between the T-invariants of a net and its rank. The next definition is designed to untangle these connections. Definition 6.3 Conflict clusters, and the set of presets Let 𝑁 = (𝑆, 𝑇, 𝐹) be a plain Petri net. For 𝑡, 𝑡 ′ ∈ 𝑇, let 𝑡 ∼0 𝑡 ′ if • 𝑡 ∩ • 𝑡 ′ ≠ ∅ (i.e., if there is a potential conflict between 𝑡 and 𝑡 ′ ). Let ∼ ⊆ 𝑇 × 𝑇 be the reflexive and transitive closure of ∼0 . A conflict cluster of 𝑁 is defined as an equivalence class of the equivalence relation ∼ . The set of all conflict clusters of 𝑁 is denoted by CC 𝑁 . The set of all non-empty presets of 𝑁 is defined as PRESETS 𝑁 = {• 𝑡 | 𝑡 ∈ 𝑇 ∧ • 𝑡 ≠ ∅}. A conflict cluster is a maximal set of transitions which are connected by arcs from places to transitions (but not using arcs from transitions to places). The set of presets simply collects all presets of all transitions. Example 6.4 Conflict clusters and presets in Figures 6.1 and 6.2 In the cycle on the left-hand side of Figure 6.1, the relation ∼0 (hence also ∼) is the identity relation, and we have three conflict clusters. Also, 𝑁1 has three presets and its rank is 2 = |𝑇 |−1. On the right-hand side of Figure 6.1, ∼0 is not the identity since we have 𝑡 2 ∼0 𝑡3 . 𝑁2 has two conflict clusters, as well as two presets. Its rank is 1 = |𝑇 |−2. The number of conflict clusters may differ from the number of presets, as shown in Figure 6.2.
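Definition 6.3 can be turned into code directly. The following Python sketch (our own illustration, not the book's) computes the conflict clusters as the connected components of the “presets intersect” relation and collects the distinct non-empty presets; only the place→transition arcs matter here, and the example encodes the input arcs of the net of Figure 6.2.

```python
# Illustrative sketch of conflict clusters and PRESETS (Definition 6.3).
def pre(t, F):
    return frozenset(x for (x, y) in F if y == t)

def conflict_clusters(T, F):
    clusters = [{t} for t in T]          # start from singletons and merge overlapping presets
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                if any(pre(a, F) & pre(b, F) for a in clusters[i] for b in clusters[j]):
                    clusters[i] |= clusters.pop(j)
                    merged = True
                    break
            if merged:
                break
    return clusters

def presets(T, F):
    return {pre(t, F) for t in T if pre(t, F)}

# Input arcs of the net of Figure 6.2: two conflict clusters but four presets.
T = {"t1", "t2", "t3", "t4", "t5", "t6"}
F = {("s1", "t1"), ("s1", "t2"),
     ("s2", "t3"), ("s3", "t3"), ("s2", "t4"), ("s3", "t4"),
     ("s3", "t5"), ("s4", "t5"), ("s4", "t6")}
assert len(conflict_clusters(T, F)) == 2 and len(presets(T, F)) == 4
```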


Fig. 6.2: A (non-free-choice) net sporting two conflict clusters {𝑡1 , 𝑡2 } and {𝑡 3 , 𝑡4 , 𝑡5 , 𝑡6 }, and four presets {𝑠1 }=• 𝑡 1 =• 𝑡2 , {𝑠2 , 𝑠3 }=• 𝑡3 =• 𝑡 4 , {𝑠3 , 𝑠4 }=• 𝑡 5 , {𝑠4 }=• 𝑡 6 .

It so happens that in a free-choice net, exactly as many conflict clusters as presets are encountered:
Lemma 6.5 FC-nets have as many presets as clusters In a free-choice net 𝑁 which is transition-regular, i.e., in which all transitions 𝑡 satisfy •𝑡 ≠ ∅, the conflict clusters CC𝑁 and the sets PRESETS𝑁 are in 1-1 correspondence with each other.
Proof: Let 𝑁 be free-choice. For a conflict cluster 𝑈 ∈ CC𝑁, the set •𝑈 is well-defined because any two transitions in 𝑈 have the same presets by (5.2), and if it is nonempty, then it is a preset in PRESETS𝑁. Conversely, for any preset 𝑅 ∈ PRESETS𝑁, the set 𝑅• is well-defined by (5.4), and it is a conflict cluster. Moreover, 𝑈 = (•𝑈)• and 𝑅 = •(𝑅•) for clusters 𝑈 with •𝑈 ≠ ∅ and for presets 𝑅, which means that the correspondence is indeed one-to-one (unless there are transitions 𝑡 with •𝑡 = ∅, which is excluded by the premise). 6.5
Figure 6.2 demonstrates that the FC premise is necessary. Figure 6.3 shows a positive example. If we allowed that, for some 𝑡 ∈ 𝑇, •𝑡 = ∅, there could, in general, be more conflict clusters than presets. For strongly connected free-choice nets, conflict clusters and PRESETS have been called transition clusters and place clusters, in Sections 5.8 and 5.9, respectively.
Theorem 6.6 Sufficient condition for structural liveness Assume that 𝑁 is a weakly connected, plain net covered by a positive S-invariant and a positive T-invariant. If the rank of 𝐶 is strictly less than |CC𝑁|, then there exists a live marking of 𝑁.



Fig. 6.3: An FC-system having two conflict clusters (shown as blue rectangles) and two presets {𝑠1 }=• 𝑡 1 and {𝑠2 , 𝑠3 }=• 𝑡2 =• 𝑡3 . The rank of 𝐶 is 1: 𝐶 (., 𝑡1 ) = −𝐶 (., 𝑡2 ) = −𝐶 (., 𝑡3 ). By Theorem 6.6, a live marking exists.

Proof: (Sketch.) The proof may be done by contraposition. Supposing that no live marking of 𝑁 exists, the column rank of 𝐶 is shown to be at least |CC 𝑁 |. We start with a suitably chosen non-live marking 𝑀1 and aim at constructing exactly |CC 𝑁 | linearly independent column vectors contained in 𝐶. The way to do this is to pick exactly one transition from every conflict cluster, in such a way that (the column vectors of 𝐶 corresponding to) these transitions are linearly independent. The proof is divided into three consecutive steps, (1), (2) and (3). Step (1): Let 𝑀1 be such that all places are marked with a single token. By assumption, 𝑀1 is not live. Using this, and also the strong connectedness which is guaranteed by 𝜎 Corollary 4.21, it may be shown that there is a firing sequence 𝑀1 −→ 𝑀2 , such that in 𝑀2 , every conflict cluster contains a transition with an unmarked input place. The basic idea is to remove tokens from conflict clusters in a systematic way, as long as there are still some conflict clusters not containing an unmarked place. Since every place is marked by 𝑀1 , this ensures that every conflict cluster contains a transition from 𝜎, so that 𝜎 is at least as long as the number of conflict clusters. Step (2): Suppose that there are 𝑘 conflict clusters 𝐾1 , . . . , 𝐾 𝑘 . We are going to argue that there are transitions 𝑡1 , . . . , 𝑡 𝑘 and an S-vector 𝑥 : 𝑆 → N such that 𝑡 𝑗 ∈ 𝐾 𝑗 and 𝑥 T · 𝐶 (., 𝑡 𝑗 ) < 0 for all 1 ≤ 𝑗 ≤ 𝑘, where 𝐶 is the incidence matrix of 𝑁 and 𝐶 (., 𝑡 𝑗 ) is the column of 𝐶 associated with 𝑡 𝑗 . Suppose that 𝑅 denotes the set of places that are token-empty at 𝑀2 . The 𝑡 𝑗 are constructed by scanning 𝜎 backwards and choosing 𝑘 transitions from 𝑅 • , one for every 𝐾 𝑗 . Scanning backwards ensures that the chosen transitions have input places in 𝑅. Since we need only one representative transition 𝑡 𝑗 from each 𝐾 𝑗 , all other


transitions from 𝐾𝑗 occurring before 𝑡𝑗 in 𝜎 can be ignored. The nonnegative place vector 𝑥 is initialised and adjusted after this process, making sure that the desired negative value 𝑥ᵀ · 𝐶(., 𝑡𝑗) < 0 is obtained. This may necessitate adding large values because a representative transition may have put some tokens before on 𝑅, and 𝜎 may be long. The details are omitted, but an example of the construction is given after the proof.
Step (3): We claim that the set of vectors {𝐶(., 𝑡1), . . . , 𝐶(., 𝑡𝑘)} is linearly independent. This can be deduced from the properties of 𝑥 as follows. Because 𝑥ᵀ · 𝐶(., 𝑡𝑗) < 0 for all 1 ≤ 𝑗 ≤ 𝑘, the vector 𝑥 solves the left-hand side of a variation of Farkas’ lemma, namely the third line of (4.2) in Section 4.4, where 𝐴 is the matrix
  𝐴 = [ 𝐶(., 𝑡1) · · · 𝐶(., 𝑡𝑘) ]
Therefore, there is no vector 𝑦 ≫ 0 of dimension 𝑘 with 𝐴 · 𝑦 = 0, which means that the 𝑘 columns of 𝐴 are linearly independent. Hence the rank of 𝐶 is at least 𝑘, ending the proof. 6.6
Example 6.7 Figure 6.4 The Petri net shown on the left-hand side of Figure 6.4 is plain, and it satisfies both (PS) and (PT). It has 5 conflict clusters, also shown (in blue) on the left-hand side of the figure. There exists no live marking for this net. The theorem claims that the rank of 𝐶 should be at least 5, and indeed, it actually equals 5. To trace the constructions in the proof, consider the firing sequence
  𝑀1 −(𝑡1 𝑡2 𝑡2 𝑡6 𝑡1 𝑡2 𝑡7 𝑡1 𝑡2 𝑡5 𝑡5 𝑡5 𝑡5)→ 𝑀2 = {𝑠3, 𝑠3, 𝑠3, 𝑠3, 𝑠6, 𝑠6, 𝑠6, 𝑠6}

where 𝑀1 is the marking shown in the figure. In 𝑀2 , places 𝑠3 and 𝑠6 carry four tokens each, and 𝑅 = {𝑠1 , 𝑠2 , 𝑠4 , 𝑠5 , 𝑠7 } is the set of unmarked places.a In 𝑀2 , every transition in every conflict cluster has at least one unmarked input place. In the final step of the proof, the following transitions are recorded, in this order: 𝑡 5 (for 𝐾5 ), then 𝑡2 (for 𝐾4 ), and then, similarly, 𝑡 1 , 𝑡7 , 𝑡6 . The corresponding entries in 𝐶 are linearly independent. The construction of 𝑥, the nonnegative vector which has a negative scalar product with all representative transitions, is also shown and explained in the figure. a

𝑀2 is actually a deadlock, but this does not have to be the case in general.

One half of the promises made for this section have, so far, been honoured. For a sufficient condition for structural liveness, the conflict clusters have been important. For the other half, viz. a necessary condition for structural liveness, the presets become relevant instead.


Fig. 6.4: A non-structurally-live net, its conflict clusters 𝐾1 , . . . , 𝐾5 , and its incidence matrix 𝐶. The representative transitions shown in bold blue have been constructed in the order 𝑡5 , 𝑡2 , 𝑡1 , 𝑡7 , 𝑡6 , backwards from 𝜎, and the conflict clusters are numbered accordingly. 𝑅 = {𝑠1 , 𝑠2 , 𝑠4 , 𝑠5 , 𝑠7 } is the set of empty places at 𝑀2 . 𝑥 is constructed in the order of cluster numbering, first making sure that the scalar product with 𝐶 (., 𝑡6 ) gets negative (by choosing place 𝑠5 ∈ 𝑅 ∩ • 𝑡6 and starting with 𝑥(𝑠5 ) = 1 and 𝑥(.) = 0 elsewhere), then making sure that the scalar product with 𝐶 (., 𝑡7 ) gets negative (by choosing place 𝑠4 ∈ 𝑅 ∩ • 𝑡7 ), while maintaining the negativity of 𝑥 T ·𝐶 (., 𝑡6 ) by multiplying 𝑥(𝑠5 ) with 6, and so on. The factor 6 is chosen because of 6 = |𝑅| + 1 (a transition can put at most 5 tokens on 𝑅, by plainness). Columns 𝑡1 , 𝑡2 , 𝑡5 , 𝑡6 , 𝑡7 of 𝐶 are linearly independent.

Theorem 6.8 Necessary condition for structural liveness Assume that 𝑁 is a weakly connected, plain net covered by a positive S-invariant. If there exists a live marking of 𝑁, then the rank of 𝐶 is strictly less than |PRESETS𝑁|.
Proof: (Sketch.) Let 𝑁 be a net which has a live marking and is covered by a positive S-invariant. Thus, 𝑁 is well-formed, and by Proposition 4.14, it also satisfies (PT). Let 𝑚 = |PRESETS𝑁| be the number of presets of 𝑁. If 𝑚 = 0, then no transition has any input place. Since there are at least one transition and one place and the net is weakly connected, there is some transition without any input place but with some output place. Such a net cannot satisfy (PS). Hence 1 ≤ 𝑚 ≤ |𝑇|, where 𝑚 = |𝑇| in case no two transitions have a common preset. The theorem can be proved by induction on |𝑇| − 𝑚 ≥ 0. The base is 𝑚 = |𝑇|, in which case the rank of 𝐶 is less than |𝑇| = |PRESETS𝑁|, since 𝑁 satisfies (PT).


Assuming 𝑚 < |𝑇| for the inductive step, the trick is to consider any set 𝑈 of transitions with the same preset and to connect them by adding a “regulation circuit” which goes through all transitions of 𝑈, getting a new net 𝑁[𝑈] with the same set of transitions but at least two more places, and with more presets. The new net satisfies (PS). It is also structurally live, since given a live marking of 𝑁, a live marking of 𝑁[𝑈] can be obtained by putting sufficiently many tokens on the new places. Also,
  |PRESETS_{𝑁[𝑈]}| = |PRESETS_𝑁| + |𝑈| − 1 > |PRESETS_𝑁|    (6.1)

where the inequality follows from |𝑈| ≥ 2, and the equality follows from the following consideration. 𝑁[𝑈] and 𝑁 have the same transitions. Transitions not in 𝑈 have the same presets in 𝑁 and in 𝑁[𝑈]. Transitions in 𝑈 have the same preset in 𝑁 but different presets in 𝑁[𝑈]. Because of (6.1), the induction hypothesis can be applied to 𝑁[𝑈], entailing
  (rank of 𝑁[𝑈]) ≤ |PRESETS_𝑁| + |𝑈| − 2.    (6.2)

It can moreover be shown that (rank of 𝑁) + |𝑈| − 1 ≤ (rank of 𝑁 [𝑈]), which can be combined with (6.2), yielding (rank of 𝑁) ≤ |PRESETS 𝑁 | − 1 and ending the inductive proof. 6.8


Fig. 6.5: An FC-net 𝑁 satisfying (PS), and its incidence matrix.

Example 6.9 Figures 6.5 and 6.6 The net 𝑁 shown in Figure 6.5 is plain and satisfies (PS), and it is structurally live. A live and bounded (even safe, but not reproducible) marking can be obtained by putting a token each on places 𝑠2 and 𝑠5 . The theorem claims that the number of presets should be larger than the rank of the incidence matrix. Indeed, the number of presets is 6, while the rank of the incidence matrix is 5.


To trace the inductive proof, we need to consider the set 𝑈 = {𝑡1, 𝑡2}, since •𝑡1 = •𝑡2. Then 𝑁[𝑈] is shown on the left-hand side of Figure 6.6, and its incidence matrix on the right-hand side has rank 6. The inequalities claimed in the proof can thus be verified.


Fig. 6.6: The net of Figure 6.5 with a regulation circuit 𝑡 1 𝑟 1 𝑡2 𝑟 2 𝑡1 . To see that weak connectedness is required as a precondition of this theorem, consider the net consisting of an isolated place and an isolated transition. This net has a positive S-invariant and is live under any marking, but the rank of its incidence matrix is 0 and the number of presets is also 0; so the inequality claimed in the theorem fails to hold. If (PS) holds for an FC-net, or if it is well-formed, transitions 𝑡 with • 𝑡 = ∅ are absent, and then the pleasant property |CC 𝑁 | = |PRESETS 𝑁 | (Lemma 6.5) is valid. Theorems 6.6 and 6.8 can therefore be combined for FC-nets as follows: Corollary 6.10 Structural liveness of FC-nets Let 𝑁 be a weakly connected FC-net satisfying (PS) and (PT). Then 𝑁 has a live marking if and only if its rank is at most |CC 𝑁 | − 1.

6.3 Rank theorem for free-choice nets Corollary 6.10 actually characterises structural liveness for the class of Petri nets in question (free-choice nets satisfying (PS) and (PT)). The results obtained thus far can


be put into a blender, and what comes out in the end is a structural characterisation of the well-formedness of (weakly connected) free-choice Petri nets. Theorem 6.11 The rank theorem for free-choice Petri nets Let 𝑁 be a weakly connected FC-net. 𝑁 is well-formed if and only if it satisfies (PS)∧(PT) and its rank equals |CC 𝑁 | − 1. Proof: (Sketch). (⇒:) (WF) ⇒ (PS)∧(PT): From the cover theorems 5.34 and 5.42 (previous Chapter). (WF) ⇒ rank of 𝑁 ≤ |CC 𝑁 | − 1: From (WF) ⇒ (PS)∧(PT) and Corollary 6.10(⇒). (WF) ⇒ rank of 𝑁 ≥ |CC 𝑁 | − 1: This can be proved by showing that it is possible to define a cycle-free allocation of size 𝑟 ≥ |CC 𝑁 | − 1, where the size of an allocation is the number of transitions it allocates, and that the transitions allocated by a cycle-free allocation are linearly independent. (⇐:) (PS)∧(PT)∧(rank of 𝑁=|CC 𝑁 |−1) ⇒ (WF): From Corollary 6.10(⇐). 6.11 The next example explains the part (WF) ⇒ rank of 𝑁 ≥ |CC 𝑁 | − 1 of this proof. Example 6.12 Cycle-free allocations Consider Figure 6.7. In 𝑁1 , a cycle-free allocation of {𝑠1 , 𝑠3 } is shown in bold blue. Adding the allocation (𝑠2 , 𝑡2 ) would create a cycle; in fact, the resulting subnet would be a – the only – T-component. The rank of 𝐶 (𝑁1 ) is 2. Indeed, the columns corresponding to 𝑡 1 and 𝑡 3 in 𝐶 (𝑁1 ) (coming from 𝐾1 and 𝐾3 ) are linearly independent. In 𝑁2 , adding any one of the allocations (𝑠2 , 𝑡2 ) or (𝑠2 , 𝑡3 ) would create a cycle (one of two existing T-components). The rank of 𝐶 (𝑁2 ) is 1. In Figure 6.8, the allocated transitions {𝑡3 , 𝑡4 , 𝑡5 , 𝑡6 , 𝑡7 } correspond to linearly independent columns in 𝐶, while 𝑡1 is the negative sum of 𝑡 3 , 𝑡4 and 𝑡7 , and 𝑡2 is the negative sum of 𝑡5 , 𝑡 6 and 𝑡 7 . In case the reader recognises two T-components at this point (just like in 𝑁2 of Figure 6.7), she or he is on the right track. The rank theorem has various consequences. One is that the well-formedness problem for FC-nets is decidable in polynomial time. Decision Problem 6.13 Well-formedness problem for FC-nets Input: A free-choice Petri net 𝑁. To decide: Is 𝑁 well-formed?

Theorem 6.14 Deciding well-formedness Problem 6.13 is decidable in polynomial time.


Fig. 6.7: 𝑁1 is well-formed and has 3 clusters 𝐾1, 𝐾2, 𝐾3; a cycle-free allocation of size 2 is shown. 𝑁2 is well-formed and has 2 clusters 𝐾1, 𝐾2; a cycle-free allocation of size 1 is shown. Larger allocations are cyclic.


Fig. 6.8: A well-formed FC-net with 6 clusters. A cycle-free allocation of size 5 is shown. Note that the size of an allocation is the number of allocated transitions, rather than the number of arcs.

Proof: (PS) can be decided by solving the inequality system 𝑥ᵀ · 𝐶 = 0 and 𝑥ᵀ ≥ (1, . . . , 1) in the rational numbers (since the existence of a positive rational solution implies the existence of a positive integer solution). (PT) can be decided similarly. The rank of the incidence matrix and the number of conflict clusters can also be computed in polynomial time. 6.14
Decision Problem 6.15 Liveness and boundedness problem for FC-nets Input: A free-choice Petri net 𝑁 and a marking 𝑀0. To decide: Is (𝑁, 𝑀0) live and bounded?
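Before turning to Problem 6.15, the checks used in the proof of Theorem 6.14 can be illustrated as follows (a rough sketch of our own, not the book's code): (PS) and (PT) are decided by linear programming over the reals/rationals, and the rank is computed numerically; the use of numpy/scipy is our choice, and the example matrix is the incidence matrix of the FC-net of Figure 6.3. Together with the conflict-cluster count (see the sketch after Definition 6.3), this gives the polynomial well-formedness test suggested by the rank theorem.

```python
import numpy as np
from scipy.optimize import linprog

def covered_by_positive_S_invariant(C):      # (PS): exists x >= 1 with x^T C = 0
    m, n = C.shape
    res = linprog(np.zeros(m), A_eq=C.T, b_eq=np.zeros(n), bounds=[(1, None)] * m)
    return res.status == 0                   # status 0 = a feasible point was found

def covered_by_positive_T_invariant(C):      # (PT): exists y >= 1 with C y = 0
    m, n = C.shape
    res = linprog(np.zeros(n), A_eq=C, b_eq=np.zeros(m), bounds=[(1, None)] * n)
    return res.status == 0

# Incidence matrix of the FC-net of Figure 6.3 (rows s1, s2, s3; columns t1, t2, t3).
C = np.array([[-1, 1, 1],
              [ 1, -1, -1],
              [ 1, -1, -1]])
assert covered_by_positive_S_invariant(C)
assert covered_by_positive_T_invariant(C)
assert np.linalg.matrix_rank(C) == 1         # = |CC_N| - 1 (two clusters), as the rank theorem requires
```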


Problem 6.15 is also polynomially decidable. The following intermediate results will be useful for the algorithm.
Lemma 6.16 Minimal proper siphons Let 𝑁 be a well-formed free-choice net and let 𝐷 be a minimal proper siphon in 𝑁. Then 𝐷 is also a trap.
Proof: Without using well-formedness, we first observe that for every transition 𝑡, |•𝑡 ∩ 𝐷| ≤ 1. For suppose •𝑡 ∩ 𝐷 contains two distinct places 𝑠, 𝑠′, then 𝐷\{𝑠} is also a siphon, contradicting 𝐷’s minimality. To see this, calculate •(𝐷\{𝑠}) ⊆ •𝐷 (by 𝐷\{𝑠} ⊆ 𝐷); •𝐷 ⊆ 𝐷• (since 𝐷 is a siphon); and 𝐷• = (𝐷\{𝑠})• (by 𝑠′ ∈ 𝐷\{𝑠} and, because 𝑁 is free-choice, 𝑠′• = 𝑠•).
Now let 𝑄 be the maximal trap contained in 𝐷; we shall show 𝑄 = 𝐷 in five steps.
(i) 𝑄 is a proper trap. This follows from well-formedness, since for any live marking, 𝐷 must contain a marked proper trap and 𝑄 is maximal amongst all traps in 𝐷.
(ii) ∀𝑡 ∈ 𝑄• : |•𝑡 ∩ 𝑄| = 1. For let 𝑡 ∈ 𝑄•. Then |•𝑡 ∩ 𝑄| ≥ 1; also, by the above observation, |•𝑡 ∩ 𝐷| ≤ 1, so also |•𝑡 ∩ 𝑄| ≤ 1 by 𝑄 ⊆ 𝐷. Hence |•𝑡 ∩ 𝑄| = 1.
(iii) ∀𝑡 ∈ 𝑄• : |𝑡• ∩ 𝑄| ≥ 1. For let 𝑡 ∈ 𝑄•. Then by 𝑄• ⊆ •𝑄, 𝑡 ∈ •𝑄, i.e. |𝑡• ∩ 𝑄| ≥ 1.
(iv) ∀𝑡 ∈ •𝑄 : |•𝑡 ∩ 𝑄| ≥ |𝑡• ∩ 𝑄|. Assume there is a transition 𝑢 with |•𝑢 ∩ 𝑄| < |𝑢• ∩ 𝑄|. Then 𝑀 −𝑢→ 𝐿 implies 𝐿(𝑄) > 𝑀(𝑄). By (ii) and (iii), 𝑀 −𝑡→ 𝐿 implies 𝐿(𝑄) ≥ 𝑀(𝑄) for every transition 𝑡. Hence 𝑢 increases the number of tokens on 𝑄 while no other transition decreases it. By well-formedness, this cannot occur.
(v) 𝑄 = 𝐷. Let 𝑡 ∈ •𝑄. Then |𝑡• ∩ 𝑄| ≥ 1, and by (iv), |•𝑡 ∩ 𝑄| ≥ 1. So, also 𝑡 ∈ 𝑄•. Thus 𝑄 is a siphon, and by the minimality of 𝐷, 𝑄 = 𝐷. 6.16
Lemma 6.17 Characterisation of liveness and boundedness An FC-system Σ = (𝑆, 𝑇, 𝐹, 𝑀0) is live and bounded iff (a)∧(b). (a) 𝑁 = (𝑆, 𝑇, 𝐹) is well-formed. (b) In 𝑀0, every proper siphon carries at least one token.
Proof: (⇒(a)) follows from the definition of well-formedness. (⇒(b)) follows from the liveness criterion, Theorem 5.17. For the other direction, (⇐), note that, by definition, (a) means that there exists a live and bounded marking. By Theorem 5.34, 𝑁 is covered by strongly connected S-components. Hence every marking, in particular 𝑀0, is bounded. To prove the liveness of 𝑀0, let 𝑋 ⊆ 𝑆 be some minimal proper siphon of 𝑁. By (b), 𝑀0(𝑋) > 0. By Lemma 6.16, 𝑋 is also a trap. Let 𝐷 be any proper siphon of 𝑁. Then 𝐷 contains a minimal proper siphon, and hence also a marked trap 𝑋 with 𝑀0(𝑋) > 0. By Theorem 5.17, 𝑀0 is a live marking of 𝑁. 6.17
Theorem 6.18 Deciding liveness and boundedness Problem 6.15 is decidable in polynomial time.


Proof: Here is an algorithm deciding Problem 6.15 using Lemma 6.17:
(a) Decide as in Theorem 6.14.
(b) Let 𝑅 = {𝑠 ∈ 𝑆 | 𝑀0(𝑠) = 0}. Every nontrivial siphon has at least one token at 𝑀0 if and only if the only siphon in 𝑅 is ∅, if and only if the maximal siphon in 𝑅 is ∅. Hence Problem 6.15 may be decided by computing the maximal siphon in 𝑅 and checking whether it is empty or not.
Algorithm 6.19 Computing the maximal siphon in a place set 𝑋 ⊆ 𝑆
Input: 𝑁 = (𝑆, 𝑇, 𝐹), 𝑋 ⊆ 𝑆; Output: maximal siphon 𝐷 ⊆ 𝑋;
  𝐷 := 𝑋;
  while ∃𝑠 ∈ 𝐷, 𝑡 ∈ •𝑠 : 𝑡 ∉ 𝐷• →
    choose such an 𝑠; 𝐷 := 𝐷 \ {𝑠}
  endwhile
After termination, ∀𝑠 ∈ 𝐷 : •𝑠 ⊆ 𝐷•, which means that 𝐷 is a siphon. Moreover, every siphon 𝐷′ ⊆ 𝑋 satisfies 𝐷′ ⊆ 𝐷, since places of 𝐷′ never enter the while loop. The loop is executed at most |𝑋| ≤ |𝑆| times, because a place is taken away every time. In order to find a place satisfying the loop’s guard, at most |𝑋| ≤ |𝑆| places have to be tested. The test •𝑠 ⊈ 𝐷• can be done in time 𝑂(|•𝑠| · |𝐷•|) ≤ 𝑂(|𝑇| · |𝑇|). The time taken by the algorithm is therefore in 𝑂(|𝑆|² · |𝑇|²). Thus, the algorithm is polynomial, ending the proof. 6.18
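A direct transcription of Algorithm 6.19 into Python might look as follows (an illustrative sketch; the arc-set and marking encodings are our own). The second function performs check (b) of the proof: every proper siphon is initially marked iff the maximal siphon inside the set 𝑅 of unmarked places is empty.

```python
def maximal_siphon(X, F):
    """Algorithm 6.19: the greatest siphon contained in the place set X (F is the arc set)."""
    D = set(X)
    changed = True
    while changed:
        changed = False
        d_post = {t for (s, t) in F if s in D}        # D's output transitions D•
        for s in list(D):
            s_pre = {t for (t, y) in F if y == s}     # input transitions •s
            if not s_pre <= d_post:                   # some producer of s lies outside D•
                D.discard(s)
                changed = True
    return D

def every_proper_siphon_marked(S, F, M0):
    """Check (b): the maximal siphon among the unmarked places must be empty."""
    R = {s for s in S if M0.get(s, 0) == 0}
    return not maximal_siphon(R, F)

# Tiny example: with arcs t0 -> s -> t1 and s unmarked, {s} is not a siphon, so the check succeeds.
F = {("t0", "s"), ("s", "t1")}
assert every_proper_siphon_marked({"s"}, F, {"s": 0})
```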

6.4 Duality The rank theorem directly leads to the following duality theorem. Let the reverse of a net 𝑁 be obtained by changing the directions of all arcs, the dual by exchanging places and transitions, and the reverse-dual by changing directions of all arcs as well as exchanging places and transitions. Example 6.20 Figures 6.9 and 6.10 The two nets shown in Figure 6.9 are (up to isomorphisms) duals and reverses of each other, and both are self-reverse-dual. Figure 6.10 depicts two nets which are reverse-duals of each other but are not self-reverse-dual. Note that the S/T-components of one of them are T/S-components of the other.


Fig. 6.9: Two FC-nets which are duals and reverses of each other, as well as being self-reverse-dual.

Fig. 6.10: Two FC-nets which are reverse-duals of each other. However, they are not self-reverse-dual.

Theorem 6.21 Duality theorem for FC-nets A plain, weakly connected net is a well-formed FC-net if and only if its reverse-dual is a well-formed FC-net.
Proof: The FC property and conditions (PS) and (PT) are invariant with respect to reverse-duality. Moreover, if 𝐶 is the incidence matrix of some net, then 𝐶ᵀ is the incidence matrix of its reverse-dual, and the ranks of 𝐶 and 𝐶ᵀ are equal. Finally, the number of clusters is the same in a free-choice net and in its reverse-dual. The claim follows with Theorem 6.11. 6.21
With an independent proof, the duality theorem makes the two covering theorems of Section 5.7 derivable from each other. However, both covering theorems have been used in the proof of the rank theorem. The dual or the reverse of a well-formed FC-net is not necessarily also well-formed (even if it is also an FC-net). Also, the number of conflict clusters may differ.


Example 6.22 Figures 6.9 and 6.10 The net on the left-hand side of Figure 6.9 is well-formed, but the net on the right-hand side is not well-formed. The former has 6 clusters while the latter has 5 clusters. Both nets shown in Figure 6.10 are well-formed.

6.5 Net reductions
A largely graph-oriented, “algorithmic”, way of characterising well-formed FC-nets can be achieved by net reductions. Only three rules are needed. Suppose in the following that (𝑆, 𝑇, 𝐹) is a plain and weakly connected net.
ST-reduction: Suppose 𝑠 ∈ 𝑆 and 𝑡 ∈ 𝑇 such that •𝑠 ≠ ∅, 𝑡• ≠ ∅, 𝑠• = {𝑡}, •𝑡 = {𝑠}, and (•𝑠 × 𝑡•) ∩ 𝐹 = ∅. Then omit 𝑠 and 𝑡 and all arrows around 𝑠 and 𝑡, while introducing a new arrow from every 𝑢 ∈ •𝑠 to every 𝑟 ∈ 𝑡•.
S-reduction: Suppose a place 𝑠 is nonnegatively linearly dependent on a set of other places. Then omit 𝑠, along with all arrows around it.
T-reduction: Suppose a transition 𝑡 is nonnegatively linearly dependent on a set of other transitions. Then omit 𝑡, along with all arrows around it.
Of course, the last two rules are not really pure and simple graph theory, because they refer to linear dependencies between places and between transitions.
Example 6.23 Figure 6.11 In Figure 6.11, rules are applied as follows:
(a) to (b): S-reduction and T-reduction. The vector for place 𝑠2 is 1× the vector for place 𝑠1, hence 𝑠2 depends linearly and nonnegatively on 𝑠1. Similarly, transition 𝑐 is 1× transition 𝑏. (In this special case, the two places and the two transitions actually duplicate each other.)
(b) to (c): ST-reduction with 𝑠 and 𝑏.
(c) to (d): ST-reduction with 𝑠1 and 𝑡.
Note that at the end (in Figure 6.11(d)), a plain loop consisting of a single place and a single transition is obtained. Call this net the loop net.
Theorem 6.24 Reduction theorem for FC-nets A plain, weakly connected FC-net is well-formed if and only if it can be reduced to the loop net by the three reduction rules defined above.
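The ST-reduction rule is purely local and easy to mechanise. The following Python sketch (ours, purely illustrative) applies one ST-reduction step after checking its side conditions; the S- and T-reductions additionally require tests for nonnegative linear dependence and are not sketched here.

```python
def st_reduce(S, T, F, s, t):
    """Apply one ST-reduction step to the plain net (S, T, F); F is a set of arcs."""
    pre_s  = {x for (x, y) in F if y == s}
    post_s = {y for (x, y) in F if x == s}
    pre_t  = {x for (x, y) in F if y == t}
    post_t = {y for (x, y) in F if x == t}
    assert pre_s and post_t and post_s == {t} and pre_t == {s}      # non-branching pair
    assert not any((u, r) in F for u in pre_s for r in post_t)      # (•s x t•) ∩ F = ∅
    F2 = {(x, y) for (x, y) in F if s not in (x, y) and t not in (x, y)}
    F2 |= {(u, r) for u in pre_s for r in post_t}                   # wire •s directly to t•
    return S - {s}, T - {t}, F2

# Example: the chain a -> s -> t -> r collapses to the single arc a -> r.
S, T = {"s", "r"}, {"a", "t"}
F = {("a", "s"), ("s", "t"), ("t", "r")}
assert st_reduce(S, T, F, "s", "t") == ({"r"}, {"a"}, {("a", "r")})
```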

148

6 More Structure Theory 𝑎

𝑎 𝑠1

𝑎

𝑠1

𝑠2

𝑐

𝑏 𝑠3

𝑠3

𝑡

𝑠

𝑠

(d)

(c)

(b)

(a)

𝑎

𝑠1

𝑡

𝑡

𝑏

𝑠3

𝑠3

Fig. 6.11: A sample Petri net reduction.

6.6 Home states and reversibility So far, we have been concerned overwhelmingly with the properties of boundedness and liveness (and well-formedness) of a Petri net, with a brief interlude to weak liveness (the existence of infinite executions) in Section 4.5. Other interesting properties of a (marked) net, such as reversibility, have all reasons to be slightly jealous. To redeem this, we now take a closer look at reversibility in the context of free-choice nets. If a Petri net is reversible, then this means that all reachable states can be visited and re-visited as often as one likes. This is wrong for 𝑁1 in Figure 6.12. Once 𝑡 3 or 𝑡6 have occurred, there is no way of returning back to the initial state {𝑠2 , 𝑠5 }. It appears that a free-choice net can be live and safe and still not reversible. Let us take a closer look at what one can say up front about the relationship between boundedness, liveness and reversibility. Liveness and safeness do not imply reversibility. But reversibility implies liveness of singly live transitions. This is immediately clear because reversibility allows one to return to the initial state as often as desired, and from there, any transition can eventually be executed. On the other hand, reversibility does not imply boundedness, not even for live marked graphs. For is reversible but unbounded. instance, The property of a state being reachable from arbitrary reachable markings is called the home state property. In 𝑁2 of Figure 6.12, the initial marking is a home state. But after firing 𝑡1 once from the initial state, a lot of other transitions have to be put into action in order to bring back the initial state. Definition 6.25 Home state Let 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0 ) be a marked net. A marking 𝑀 ∈ [𝑀0⟩ is called home state or home marking if ∀𝑀 ′ ∈ [𝑀0⟩ : 𝑀 ∈ [𝑀 ′⟩.

6.6 Home states and reversibility

149

𝑡1

𝑠1

𝑁1

𝑀0

𝛼 𝑡1

𝑡4 𝑡3 𝑀0 −→ g 𝑀0

𝑁2

𝑡2

𝑀0

𝛼

𝑡4 𝑡3 𝑡1 𝑡2 𝑡3 𝑡1 𝑡2 𝑡5

−−−−−−→ −−

𝑡1 𝑡4 𝑡3 𝑡2 𝑡3 𝑡1 𝑡2 𝑡5

−−−−−−→ −−

𝑀0 𝑀0

𝑡2 𝑠2

𝑠3

𝑠4

𝑠5

𝑡3

𝑡4

𝑡5

𝑡6

𝑡4 𝛼

𝑠6

𝑠7 𝑡7

𝛼 𝑡3

𝑡5

𝑁𝑡1

𝑁init

Fig. 6.12: Not a home state (l.h.s.): one of the two T-components is indicated by bold arcs; neither T-component is activated; there is also a token-empty trap 𝑄 = {𝑠1 , 𝑠3 , 𝑠4 , 𝑠6 , 𝑠7 }. Home state (r.h.s.): the allocation 𝛼 is shown in blue; Tcomponent 𝑁𝑡 is not activated while T-component 𝑁init is; after firing 𝑡1 , the firing sequence 𝑡 4 𝑡 3 𝑡 2 𝑡3 𝑡1 𝑡2 𝑡5 brings back 𝑀0 .

Incidentally, this allows various equivalent (re-)formulations of reversibility: a marked net is reversible iff the initial state is a home state iff all reachable states are home states iff the reachability graph is strongly connected. Observe that in a strongly connected T-system, every live marking is a home marking. This follows quite directly from Theorem 5.31(a)⇔(c) which states that liveness is equivalent to the realisability of the all-ones Parikh vector 1. But let us now have a closer look at FC-systems. There are various ways of convincing oneself that 𝑀0 , shown on the left-hand side of Figure 6.12, is not a home state. One possibility is to construct the reachability graph (which has actually previously been depicted in Figure 4.3). This graph has a unique “last” strongly connected component, but 𝑀0 is not contained in it. In fact, all reachable markings except 𝑀0 are home states. Another, much more structural, possibility is to observe that the place set 𝑄 = {𝑠1 , 𝑠3 , 𝑠4 , 𝑠6 , 𝑠7 } is a trap which is initially token-free. Both 𝑡 3 and 𝑡 6 put a token on 𝑄, and by the trap property (Lemma 5.16), 𝑄 can never again become empty of tokens. Hence 𝑀0 cannot possibly be a home state. For live and bounded FC-nets, a converse is also true: Theorem 6.26 Trap theorem for FC-nets Let 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0 ) be a live and bounded FC-system. 𝑀0 is a home state ⇐⇒ each proper trap is initially marked.

150

6 More Structure Theory

Proof: (Sketch.) (⇒): By contraposition. Suppose 𝑄 ≠ ∅ is a trap satisfying 𝑀0 (𝑄) = 0. By liveness, is is possible to find a marking 𝑀 ∈ [𝑀0 ⟩ such that 𝑀 (𝑄) > 0. Thus (as in the example) 𝑀0 is not a home state. (⇐): Call a strongly connected T-component of 𝑁 activated if has no token-free cycle. Taken in isolation, an activated T-component is a live T-system, to which Theorem 5.31 applies. Since every marking of a (strongly connected, live) T-system is a home state, if 𝑡 lies inside an activated T-component in a live FC-system, and if 𝑡 𝑀 −→ 𝑀 ′ , then 𝑀 ∈ [𝑀 ′ ⟩; that is, the firing of 𝑡 can be reversed. This argument 𝑡1 𝑡2 extends inductively to firing sequences. If, in a firing sequence 𝑀0 −→ 𝑀1 −→ 𝑡𝑛 𝑀2 . . . 𝑀𝑛−1 −→ 𝑀𝑛 , every transition 𝑡𝑖 is inside some activated T-component, then 𝑀0 ∈ [𝑀𝑛⟩. 𝑡

Now suppose that 𝑀0 −→ 𝑀. We want to prove that 𝑀0 ∈ [𝑀 ⟩. This suffices because the argument can be extended inductively. If 𝑡 is inside some strongly connected T-component which is activated at 𝑀0 , then by the argument just given, 𝑀0 ∈ [𝑀⟩. However, there might not be any activated T-components containing 𝑡. It can be shown show that nevertheless, 𝑀0 can be reached from 𝑀: • Start with some initially activated strongly connected T-component 𝑁init . From the fact that all nonempty traps are marked initially, the existence of 𝑁init can be proved. • Using the covering theorem, pick any T-component 𝑁𝑡 containing 𝑡 and construct an allocation 𝛼 which “points to 𝑁𝑡 ”. This can be done by first allocating to places of 𝑁1 their unique output transitions in 𝑁𝑡 , and then progressing backwards. If 𝑠1 already has been allocated a transition and 𝑠2 ∈ • ( • 𝑠1 ) has not, then choose an arbitrary transition in 𝑠2• ∩ • 𝑠1 for 𝑠2 . • Now execute activated T-component(s) as much as possible by a suitable firing 𝜏 f0 , such that neither 𝑡 nor any other transition in its conflict sequence 𝑀0 −→ 𝑀 cluster occur in 𝜏 (so that 𝑡 remains enabled throughout 𝜏). Do this in accordance with 𝛼, so that tokens are “moved towards” 𝑁𝑡 (with the aim, of course, of 𝜏 f0 with a sequence 𝜏 satisfying this property. 𝑁𝑡 is activating it). Say, 𝑀0 −→ 𝑀 𝜏𝑡 𝑡𝜏 f0 , while 𝑡 is still enabled. Thus both 𝑀0 −→ e and 𝑀0 −→ e activated at 𝑀 𝑀 𝑀. f • Because 𝑁𝑡 is activated at 𝑀0 and 𝑡 is in 𝑁𝑡 , and because only activated T′

𝜏𝑡 𝜏 e −→ components are fired (whose firings can be “undone”), 𝑀0 −→ 𝑀 𝑀0 with some sequence 𝜏 ′ , and because 𝜏 and the subsequent firing of 𝑡 can be exchanged, 𝑡 𝜏 𝜏′ e −→ also 𝑀0 −→ 𝑀 −→ 𝑀 𝑀0 . Thus 𝑀0 ∈[𝑀⟩. 6.26

6.6 Home states and reversibility

151

Example 6.27 Figure 6.12 continued An example explaining this proof is given by 𝑁2 shown in Figure 6.12. 𝑡1 Suppose 𝑀0 −→ 𝑀. We want to show that 𝑀0 can be reached from 𝑀. Transition 𝑡1 is inside the T-component 𝑁𝑡1 shown in the figure, but 𝑁𝑡1 is not activated. However, there is another, initially activated, T-component 𝑁init whose transitions can be executed in a reversible way. A maximal sequence activating 𝑁𝑡1 and not containing 𝑡 1 is 𝜏 = 𝑡 4 𝑡 3 which can be fired from 𝑡4 𝑡3 𝑡4 𝑡3 f0 and from 𝑀: 𝑀 −→ e (note that by 𝛼, 𝑡 3 was chosen 𝑀0 : 𝑀0 −→ 𝑀 𝑀 ′ e f0 , since its rather than 𝑡5 ). From 𝑀, the sequence 𝜏 = 𝑡 2 𝑡 3 leads back to 𝑀 f0 , transition firings are done in the (now) activated T-component 𝑁𝑡 . From 𝑀 the sequence 𝑡 1 𝑡 2 𝑡5 , which “undoes” 𝑡 1 , leads back to 𝑀0 ; here, 𝑡1 𝑡2 “undoes” 𝑡3 and 𝑡 5 “undoes” 𝑡 4 . Summarising:

𝑀0

𝜏 = 𝑡4 𝑡3 f0 𝑀

𝑡1

𝑡1 𝜏 ′ = 𝑡2 𝑡3

𝑡4 𝑡3 𝑀

e 𝑀

𝑡1 𝑡2 𝑡5 f0 𝑀

𝑀0

The free-choice property is important for Theorem 6.26. If it is omitted, counterexamples exist even for safe asymmetric choice systems.3 Corollary 6.28 Existence of home states Let 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0 ) be a live and bounded FC-system. There exists a home state 𝑀 ∈ [𝑀0⟩. This follows from the trap theorem, because every trap 𝑄 ≠ ∅ has at least one incoming transition and because by liveness, every transition can be fired at least once. Corollary 6.29 Confluence Let 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0 ) be a live and bounded FC-system and let 𝑀1 , 𝑀2 ∈ [𝑀0⟩. Then [𝑀1⟩ ∩ [𝑀2⟩ ≠ ∅. This is because any home state is in the intersection [𝑀1⟩ ∩ [𝑀2⟩. Corollary 6.30 Deciding the home state property polynomially Let 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0 ) be a live and bounded FC-system and let 𝑀 ∈ [𝑀0⟩. It can be decided in polynomial time whether or not 𝑀 is a home state.

3 See Section 8.2 for asymmetric choice systems.

152

6 More Structure Theory

Proof: The naive method does not work since the number of traps can grow exponentially in the size of the net. Algorithm 6.19 can be modified (but keeping its polynomiality) for traps: Algorithm 6.31 Computing the maximal trap in a place set 𝑋 ⊆ 𝑆 Input : 𝑁 = (𝑆, 𝑇, 𝐹), 𝑋 ⊆ 𝑆; Output : maximal trap 𝑄 ⊆ 𝑋; 𝑄 := 𝑋; while ∃𝑠 ∈ 𝑄, 𝑡 ∈ 𝑠• : 𝑡 ∉ • 𝑄 → choose such an 𝑠; 𝑄 := 𝑄 \ {𝑠} endwhile Finally, in order to decide whether 𝑀 ∈ [𝑀0 ⟩ is a home state, let 𝑋 𝑀 be the set of places not marked by 𝑀. By the trap theorem, 𝑀 is a home state if and only if Algorithm 6.31 returns the empty set 𝑄 = ∅ when started with 𝑋 𝑀 . 6.30

6.7 Bibliographical remarks and further reading The rank theorem was discovered and developed in the early 1990s by a number of groups including [CCS91] (Javier Campos, Giovanni Chiola, Manuel Silva), and [Des92a] (Jörg Desel). The duality principle is due to Michel Hack [Hac72]. Net reductions are described in [DE95] (Jörg Desel, Javier Esparza). The trap theorem stems from [BDE92] (Eike Best, Jörg Desel, Javier Esparza). A safe asymmetric choice net whose initial state is not a home state in described in [BV84] (Eike Best and Klaus Voss). [DE95] contains a different example serving the same purpose. This chapter closely follows Jörg Desel’s presentation in [Des92b], and the proofs are mostly also in [DE95] and in various pieces of work by the research group around Manuel Silva; see, e.g., [RTS98; TS96] (Laura Recalde, Enrique Teruel, and Manuel Silva) where many generalisations and extensions are also described. The term “regulation circuit” was introduced by Kurt Lautenbach in his PhD Thesis [Lau73]. Lemma 6.16 covers parts of Theorem 4.30 and Proposition 5.4 from [DE95]. Structural results in a similar vein have been discovered, e.g.: [Esp98] (Javier Esparza; NP-completeness of reachability in live and safe FC-systems); a blocking theorem [GHM03; Weh10b] (Bruno Gaujal, Stefan Haar, Jean Mairesse, and Joachim Wehler), stating that for any non-conflicting transition of an FC-system, there exists a unique reachable marking in which no transition is enabled except the selected one; general reachability criteria due to Jeng S. Huang and Tadao Murata [HM98], with related results in [LR94] (Kurt Lautenbach and Hanno Ridder) and [YY03]

6.8 Exercises

153

(Hsu-Chu Yen and Lien-Po Yu); and the proof, by Joachim Wehler [Weh10a], of a conjecture on the liveness and safeness of a “coloured” version of free-choice systems by Hartmann J. Genrich and P.S. Thiagarajan [GT84], using the notion of frozen tokens [BM84] (Eike Best and Agathe Merceron).

6.8 Exercises Exercise 6.1 The rank theorem Find a (plain and pure) Petri net which is not free-choice and for which the equivalence stated in the rank theorem (Theorem 6.11) fails to hold. Exercise 6.2 Liveness and safeness Prove that a well-formed FC-net 𝑁 has a live and safe marking. Exercise 6.3 Reduction of free-choice systems Consider Figure 6.9. a) Show that the net on the left-hand side (which is well-formed) can be reduced to a self-loop by the net reduction rules. b) Argue that the net on the right-hand side (which is not well-formed) cannot be reduced to a self-loop by the net reduction rules. Exercise 6.4 Structure theory Consider the unmarked Petri net 𝑁 in Figure 6.13. (a) Show that 𝑁 is an FC-net. Is it an fc-net? (b) Using the linear-algebraic characterisation of well-formedness, prove that 𝑁 is well-formed. (c) Find some live and safe marking of 𝑁. (d) Using the reduction rules, prove that 𝑁 is well-formed. Exercise 6.5 ST-reduction Why can the premise • 𝑠 ≠ ∅ in the ST-reduction rule (Section 6.5) not be omitted? Exercise 6.6 More structure theory Consider the fc-net 𝑁 shown in Figure 6.14. a) Show by reduction that 𝑁 is well-formed. b) Find all S-components and all T-components of this net. Also, give a positive S-invariant and a positive T-invariant.

154

6 More Structure Theory 𝑠0

𝑒

𝑎

𝑠1

𝑐

𝑠3

𝑠2

𝑓

𝑠4

𝑑

𝑏

Fig. 6.13: An unmarked Petri net 𝑁 to Exercise 6.4. 𝑠4 𝑠2

𝑒

𝑑 𝑓 𝑐

𝑏

𝑠6 𝑠1

𝑔

𝑎

𝑠5

𝑠3

Fig. 6.14: An unmarked free-choice Petri net 𝑁 to Exercise 6.6.

Solution 6.1 (Exercise 6.1) The Petri net net system shown in Figure 6.15 is live and bounded. Hence the net is structurally live. The rank of its incidence matrix is 2, since the columns belonging to 𝑡 1 and 𝑡2 are linearly dependent, and so are the columns belonging to 𝑡3 and 𝑡 4 . There are two conflict clusters, {𝑡 1 , 𝑡2 , 𝑡3 } and {𝑡4 }. The net is not FC.4 Solution 6.2 (Exercise 6.2) Let 𝑁 be a well-formed Petri net. A live and safe marking can be constructed in three steps: (i) Let 𝑀1 be a live and bounded marking of 𝑁. Such a marking exists by wellformedness. (ii) Check whether there is a reachable marking 𝑀 ′ ∈ [𝑀1⟩ (a finite set) and a place 𝑠 with 𝑀 ′ (𝑠) > 1. If so, define a new marking 𝑀2 by taking away 𝑀 ′ (𝑠) − 1 tokens from 𝑠 and by 𝑀2 (𝑠) = 𝑀 ′ (𝑠) for 𝑠′ ≠ 𝑠. 𝑀2 is again live by the 4 However, the net is “asymmetric choice” (see Section 8.2). Asymmetric choice nets are the “closest superclass” of FC nets that are considered in this book (see Figure 12.4).

6.8 Exercises

155 𝑡1

𝑡2

𝑡3

𝑡4

Fig. 6.15: A live and bounded non-free-choice system.

Commoner/Hack criterion. It is also bounded, since by the rank theorem, there is a positive S-invariant 𝑥, and hence, every marking is bounded. (iii) Repeat Step (ii), getting markings 𝑀1 , 𝑀2 , 𝑀3 , 𝑀4 , . . . until it is no longer applicable. This procedure terminates at some index 𝑖 ≥ 1 because the value 𝑥 T · 𝑀𝑖 decreases at every step. The result is a safe marking 𝑀 = 𝑀𝑖 . Solution 6.3 (Exercise 6.3) a): The ST-rule can be applied four times to the transitions in the middle of the net. Then the S-rule and the T-rule can by applied once. Finally, the ST-rule yields a self-loop. b): None of the rules can be applied to this net. Solution 6.4 (Exercise 6.4) (a): For example, 𝑐 and 𝑒 share an input place 𝑠0 . But they also share all their input places: • 𝑐 = • 𝑒 = {𝑠0 , 𝑠1 , 𝑠4 }. The net is not fc, since, for instance, 𝑐 and 𝑒 share an input place, but this is not their only shared input place. (b): The incidence matrix is 0 0 −1 1 −1 1 © ª ­ 0 0 −1 1 −1 1 ® ­ ® 𝐶 = ­−1 1 0 0 1 −1® ­ ® ­ 1 −1 1 −1 0 0 ® « 0 0 −1 1 −1 1 ¬ There is a positive S-invariant 𝑥 = (1 1 3 3 1) T and a positive T-invariant 𝑦 = (1 1 1 1 1 1) T , so both (PS) and (PT) are satisfied. The rank of 𝐶 equals 2, and there are three conflict clusters, CC 𝑁 = {{𝑐, 𝑒}, {𝑎, 𝑓 }, {𝑏, 𝑑}} (and their respective input places). The rank of 𝐶 thus equals |CC 𝑁 | − 1. Moreover, the net

156

6 More Structure Theory

is connected. Hence Part (⇐) of the rank theorem yields the well-formedness of 𝑁. (c): One token on 𝑠2 and no token elsewhere is a live and safe marking. (d): By S-reduction, we can erase places 𝑠1 and 𝑠4 (and their surrounding arrows) since both are linear duplicates of 𝑠0 . Observe that the third column of (the remaining part of) 𝐶, which corresponds to transition 𝑐, is the sum of the first and fifth columns, corresponding to 𝑎 and 𝑒, respectively. Hence 𝑐 is nonnegatively linearly dependent on 𝑎 and 𝑒 and can be omitted by T-reduction. Intuitively, the effect of 𝑐 can be simulated by 𝑎 and 𝑒.5 Similarly, 𝑓 and 𝑏 can be omitted by two T-reductions. After these steps, a ring with three transitions and three places is left. This ring can be contracted to a loop by applying the ST-reduction rule twice. Solution 6.5 (Exercise 6.5) Because, for instance, is non-well-formed but can be reduced to a single loop by an ST-reduction if • 𝑠 ≠ ∅ is omitted as a premise. Solution 6.6 (Exercise 6.6) a) We have successively: ST-reduction contracts 𝑠5 and 𝑒. ST-reduction contracts 𝑠1 and 𝑏. T-reduction removes 𝑔 (since 𝑔 is the sum of 𝑎 and 𝑑 and thus has the same effect as 𝑎𝑑). T-reduction removes 𝑐 (since 𝑐 has has the same effect as dfa; namely, no effect). ST-reduction contracts 𝑠2 and 𝑑. S-reduction removes 𝑠4 (which is now redundant because of 𝑠3 ). ST-reduction contracts 𝑠3 and 𝑓 . What remains is a loop between 𝑎 and 𝑠6 . b) There are two S-components, spanned by {𝑠1 , 𝑠2 , 𝑠4 , 𝑠5 , 𝑠6 } and by {𝑠3 , 𝑠6 }, respectively. There are three T-components, spanned by {𝑏, 𝑐}, by {𝑎, 𝑏, 𝑑, 𝑓 }, and by {𝑒, 𝑓 , 𝑔}, respectively. Positive S- and T-invariants can be obtained by adding the respective place and transition vectors. (1 1 1 1 1 2 ) T is a positive S-invariant. (1 2 1 1 1 2 1 ) T is a positive T-invariant

5 More formally, the effect of a transition sequence 𝜎 is defined as 𝐶 · P ( 𝜎) (compare Definition 11.9 below). In the present exercise, we have 𝐶 · P (𝑐) = 𝐶 · P (𝑎𝑒).

Chapter 7

Program Verification Using Traps

Abstract In this chapter, a verification method based both on linear algebra and on traps in the sense of the previous chapters is presented. It is fast, useful for a sizeable class of systems, and applicable to computer programs via a translation into Petri nets. Similar to testing, the technique is semi-exact. In contrast to testing, it is based on overapproximating, rather than underapproximating, the state space of a system.

Contents 7.1 7.2 7.3 7.4 7.5 7.6 7.7 7.8

Introduction . . . . . . . . . . . . . . . . . . . . Properties of systems . . . . . . . . . . . . . . . Underapproximations and overapproximations . . Verification using the marking equation and traps Peterson’s mutual exclusion algorithm . . . . . . Limits of marking equation and traps . . . . . . Bibliographical remarks and further reading . . . Exercises . . . . . . . . . . . . . . . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

157 158 160 164 167 171 172 173

7.1 Introduction Siphons and traps have played a major role in the previous two chapters. Both are objects of a “once so, always so” type. Siphons continue to be token-empty, once they are empty of tokens; and once a trap is marked, it remains marked forever. Properties of a similar kind can be found in fields quite afar from Petri nets. Pandora’s box, the movie Groundhog Day, or Newton’s Third Law of Motion being, perhaps, slightly outlandish examples, there is also an analogy closer to this text’s topic: program invariants. Invariants play an important role in program verification. They © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 E. Best and R. Devillers, Petri Net Primer, Computer Science Foundations and Applied Logic, https://doi.org/10.1007/978-3-031-48278-6_7

157

158

7 Program Verification Using Traps

are assertions characterised by the pattern “once true, always true”. That is, once a program invariant holds true, it continues to be true; it never turns from true to false during the program’s execution. A trap 𝑄 that is initially marked in a Petri net with an initial marking 𝑀0 (i.e., for which 𝑀0 (𝑄) > 0 holds) remains marked throughout all reachable states. In that sense, a trap behaves like a system invariant. It can also be viewed as a necessary condition for reachable states: if 𝑀 is reachable from 𝑀0 and 𝑀0 (𝑄) > 0 then 𝑀 (𝑄) > 0. Being a necessary condition, the trap property 𝑀 (𝑄) > 0 resembles the marking equation. This chapter demonstrates how the marking equation and traps can be exploited in order to prove (rather than to test) system properties. The reader may have noticed that in the previous two chapters, no reachability graph was to be seen at all; rather, we were able to deduce behavioural properties from structural ones. A similar kind of structural reasoning will be applied in the present chapter to general systems, rather than to particular classes such as free-choice nets. Of course, as we shall see, there is a price to be paid for this generality.

7.2 Properties of systems Abstractly, semantically, and in simple cases, one may consider a system property P to be a function from the set of states of a system to the set {false, true} of truth values. If P(𝑠) = true for a state 𝑠, then we say that P holds in 𝑠, and if P(𝑠) = false, then property P does not hold in 𝑠. Definition 7.1 Properties of a Petri net Let 𝑁 = (𝑆, 𝑇, 𝐹) be a Petri net and let N𝑆 be its state set.a A property of 𝑁 is a function P : N𝑆 → {false, true}. A property P holds at a marking 𝑀 ∈ N𝑆 if P(𝑀) = true. P is an inductive invariant of 𝑁 if 𝑡

∀𝑀, 𝑀 ′ ∈ N𝑆 ∀𝑡 ∈ 𝑇 : ( P(𝑀) = true ∧ 𝑀 −→ 𝑀 ′ ) ⇒ P(𝑀 ′ ) = true Let P be a property of 𝑁 and let 𝑀0 be an initial marking of 𝑁. P holds generally in (𝑁, 𝑀0 ) if ∀𝑀 ∈ [𝑀0⟩ : P(𝑀) = true. P is invariant in (𝑁, 𝑀0 ) if 𝑡

∀𝑀, 𝑀 ′ ∈ [𝑀0⟩ ∀𝑡 ∈ 𝑇 : ( P(𝑀) = true ∧ 𝑀 −→ 𝑀 ′ ) ⇒ P(𝑀 ′ ) = true a

For the set of all states of an unmarked net, see Definition 1.3 in Chapter 1.

This definition highlights some of the – often subtly different – articulations of “property” and of “invariance” that can be found in the literature. There are different ways of expressing a property P syntactically. A popular method is the specification of P by means some logical formula – classically, by a (first-order)

7.2 Properties of systems

159

predicate. Every property divides the state space into two disjoint portions: the set of states in which it holds, and the rest in which it does not hold. Conversely, every such division can be interpreted as a property, even though it may not be easy to find a predicate describing this property syntactically. In this chapter, we shall encounter a linear-algebraic (rather than predicate-based) technique for dividing state spaces and expressing and analysing properties. Note that P being a property of 𝑁 does not depend on any particular initial marking of 𝑁. The same is true for the notion of an inductive invariant. If an initial marking is taken into account, P being an invariant of (𝑁, 𝑀0 ) only requires inductive invariance over reachable states. There is also a difference between a property P holding in 𝑀0 , which means just its truth in the single state 𝑀0 , and P holding (generally) in (𝑁, 𝑀0 ), which means that P holds true in every reachable marking. In both cases, it does not matter which values are assumed by P for non-reachable markings. Note also that P may hold for some (𝑁, 𝑀0 ) and may fail to hold for (𝑁, 𝑀0′ ), for the same net 𝑁 but different initial markings. Of course, we have the following: Lemma 7.2 Invariance Let 𝑁 be a Petri net with initial marking 𝑀0 . Let P be a property of 𝑁. Then P holds (generally) for (𝑁, 𝑀0 ) if and only if P holds for 𝑀0 and is invariant in (𝑁, 𝑀0 ). Proof: (⇒) is trivially valid, since if P holds generally, then P(𝑀) equals true for every reachable marking 𝑀 in [𝑀0 ⟩, including 𝑀0 . (⇐) can be proved easily by induction on the length of a firing sequence by which 𝑀 ∈ [𝑀0 ⟩ can be reached from 𝑀0 . 7.2 Example 7.3 Figure 7.1 Consider the two Petri nets shown in Figure 7.1 and property Pndead : N𝑆 → {false, true} 𝑡

defined by Pndead (𝑀) = true ⇔ ∃𝑡 ∈ 𝑇 : 𝑀 −→ 𝑁1 has exactly one reachable marking 𝑀 which is its initial marking and which enables transition 𝑡1 . Therefore, Pndead holds both in the initial marking 𝑀 of 𝑁1 and also generally in (𝑁1 , 𝑀). By contrast, 𝑁2 has two reachable 𝑡2 markings {𝑀0 , 𝑀1 } with 𝑀0 −→ 𝑀1 and 𝑀1 being a deadlock marking. Consequently, Pndead holds in the initial marking 𝑀0 of 𝑁2 but not in 𝑀1 and therefore also not generally in (𝑁2 , 𝑀0 ).

160

7 Program Verification Using Traps

( 𝑁1 , 𝑀 )

𝑡1

( 𝑁2 , 𝑀0 )

𝑡2

Fig. 7.1: Pndead holds in 𝑀 and (generally) in (𝑁1 , 𝑀). Pndead holds in 𝑀0 , but not, generally, in (𝑁2 , 𝑀0 ).

7.3 Underapproximations and overapproximations The state space of a system, i.e., the set of reachable states, is of paramount interest, because all properties of the system, good or bad, are manifested in it, be it directly or indirectly. However, this state space is in general very unwieldy, both in terms of its size and in terms of its shape. The size of a state space can be enormous, even when finite. To claim that it is “of exponential size in the size of the system” may be true but borders on an understatement. Consider, for instance, 𝑛 simple binary variables (also known as bits) which can flip their values independently of each other. The state space of such a system already has 2𝑛 elements. A Petri net variant in which variables can only “flip” once is shown in Figure 7.2; still there are 2𝑛 reachable states.

• • •

(𝑛 times)

• • •

Fig. 7.2: An 𝑛-bit system with 2𝑛 reachable states. It can flip, but not flop. If 𝑛 is only 10000, then the state space of the 𝑛-bit system already exceeds the estimated number of atoms in the known universe. But there are even larger – still finite – state spaces, say of Ackermann size. We shall, in fact, encounter an example of such a very large state space in a later chapter of this book.1 The shape of a state space can be truly bizarre. The reachability graph of the 𝑛-bit example in Figure 7.2 has a very regular shape.2 But systems are typically much more complicated, and there is no known systematic (let alone automatic) way of characterising their state spaces. The blue part of Figure 7.3 is a feeble attempt to adumbrate a “real”, very complex, state space by means of a simplistic drawing.3 Suppose that you wish to convince yourself of the correctness of some program that you have written. A common way of doing this is by “testing”. Theoretically, 1 See Chapter 10. 2 In fact, its state space is an 𝑛-dimensional cube. 3 This drawing is derived from a picture designed by Till Tantau, the creator of TikZ, the drawing program used in this book. See https://texample.net/tikz/examples/maze/.

7.3 Underapproximations and overapproximations

𝑠0

161

𝑠0 ¬P

Fig. 7.3: A fairly complex state space coloured blue. The initial state is 𝑠0 . On the left-hand side, the state space is completely unexplored. On the right-hand side, a single execution has been traced in it, but the bad state satisfying ¬P on the bottom right is not on its way and has not been found by it.

testing consists of checking the behaviour of the program on some well-chosen set of test cases. For each of these test cases, the program is started at the initial state 𝑠0 , and it is checked whether, along some execution, some state satisfying ¬P can be reached, where P is the desired property which is meant to hold generally for all states reachable from 𝑠0 . If an undesirable state satisfying ¬P is encountered during this process, you know for sure that the program is not correct. But if not, you know nothing.4 Testing is thus a method of underapproximation; that is, starting with the initial state, the state space is explored stepwise “from within”. Testing theory – not pursued any further in the present notes – aims at performing such an exploration cleverly and systematically, in order to detect situations of non-correctness. However, a famous dictum by Edsger W. Dijkstra5 rightly states that Program testing can be a very effective way to show the presence of bugs, but is hopelessly inadequate for showing their absence.

The method we are about to describe in this chapter is based on the symmetric principle of overapproximation. It could be paraphrased by the opposite dictum: State space overapproximation can be a very effective way to show the absence of bugs, but is hopelessly inadequate for showing their presence.

4 Unless, of course, you are sure that all reachable states have been encountered, which is very unlikely in practice. 5 See https://en.wikiquote.org/wiki/Edsger_W._Dijkstra#1970s.

162

7 Program Verification Using Traps

The idea is to “circumscribe” the set of all reachable states of a system by a set which may be larger but is more regular and lends itself better to analysis. For instance, in a net with place set 𝑆 and initial marking 𝑀0 , the set of all states, N𝑆 , is certainly larger (and much more manageable) than the reachability set [𝑀0 ⟩. However, it is not very useful. In order to be more useful, a superset of [𝑀0⟩ should be “as close as possible” to [𝑀0⟩, in much the same way as regular polygons circumscribing a cycle can provide better approximations with increasing numbers of vertices. Indeed, linear algebra may allow [𝑀0 ⟩ to be enclosed within some form of polyhedric structure. Such polyhedra are manageable, but they may contain states which are not reachable from [𝑀0⟩. Therefore, as opposed to testing, we shall be able to detect some situations of correctness without being able, in general, to ascertain the existence of incorrect states. For example, consider Figure 7.4. The unwieldy blue state space [𝑠0 ⟩ has been enclosed by three polygons, red, orange and green, where the last one is the best upper approximation. Suppose further that a desirable property P has been proved to hold for all states inside the green polygon. Then it can be concluded for sure that P also holds inside the blue area, that is, for all reachable states. The snag is, of course, that no matter how cleverly our polygons are designed, and even if we find a suitable one, they may still contain non-reachable states.6 The presence of errors cannot be proved by this method: not being able to prove P throughout one of the enclosing polygons does not mean that the system is incorrect. Remark 7.4 A word of caution: abstract versus concrete properties A “property” has been defined above abstractly as a set of states. In practice, typically, a property might be given by a formula of first-order logic. For example, consider a program variable var 𝑥 : N and the entire state space: 𝑥 = 𝑛 with 𝑛 ∈ N the “green” subset: 𝑥 = 𝑛 with 𝑛 ∈ N ∧ 3|𝑛 (3 divides 𝑛) the “blue” subset: 𝑥 = 𝑛 with 𝑛 ∈ N ∧ 6|𝑛 It might, at first sight, seem as though the first-order formula P:

∃𝑛(8 ≤ 𝑛 ≤ 10) ∧ (3|𝑛) ∧ (𝑥 = 𝑛)

is true in the green field but not in the blue field. However, in the abstract view, the property is identified with its truth domain, and the truth domain of this particular formula P is the single state 𝑥 = 9, which is not generally true in the green field (nor in the blue field). Under- and overapproximations lead to two different semidecision algorithms shown in Figures 7.5 and 7.6. They are phrased in terms of a Petri net 𝑁 with initial marking 𝑀0 , but the idea is transferrable to programs or any other systems where state spaces play a role. 6 Unless, of course, the state space is polyhedral and has been overapproximated precisely.

7.3 Underapproximations and overapproximations

163

P

𝑠0

Fig. 7.4: Overapproximating the blue state space shown in Figure 7.3 by polyhedra. Property P has just been proved to hold in all states inside the green polyhedron. Thus, P holds generally in the blue state space. input: A system Σ = ( 𝑁 , 𝑀0 ) and a property P

Let V ⊆ [ 𝑀0 ⟩ be some set of visited (tested) reachable states; does one of the states in V satisfy ¬P ?

yes P does not hold for Σ

no

inconclusive outcome

Fig. 7.5: A semidecision algorithm for the non-correctness of P.

164

7 Program Verification Using Traps input: A system Σ = ( 𝑁 , 𝑀0 ) and a property P

Let W ⊇ [ 𝑀0 ⟩ be some set of states enclosing [ 𝑀0 ⟩; does one of the states in W satisfy ¬P ? no

yes

P holds for Σ

inconclusive outcome

Fig. 7.6: A semidecision algorithm for the correctness of P.

We now focus on overapproximations. It was already – more or less tacitly – assumed that the areas enclosing [𝑀0 ⟩ should be polygons, because they are not too hard to handle. Polygons can be represented by linear (in)equalities, say by some set E of them. Suppose now that the negation ¬P can also be represented linearly. In that case, we may check the solvability of the conjunction E ∧ ¬P

(7.1)

If the system (7.1) of linear inequalities has no solution, then P holds for all states inside the polygon and hence also for all states in [𝑀0 ⟩. For this reason, we are interested in properties P which are co-linear in the following sense: Definition 7.5 Co-linear properties A property P is called co-linear if there are a matrix 𝐴 and a vector 𝑏 such that ∀𝑀 ∈ W : ¬P(𝑀) ⇐⇒ 𝐴 · 𝑀 ≥ 𝑏 that is, the negation of P can be characterised by a linear-algebraic inequality. In this definition, 𝑀 is viewed as a column vector with index set 𝑆, while 𝑏 is a vector with (say) 𝑘 rows, and 𝐴 is a matrix with 𝑘 rows and |𝑆| columns; the number 𝑘 being chosen so as to suit P. In the next two sections, it will be shown on some examples how these ideas work in practice.

7.4 Verification using the marking equation and traps The marking equation 𝑀 = 𝑀0 +𝐶·P (𝜏) is valid whenever 𝑀 can be reached from 𝑀0 by the sequence 𝜏 in a system Σ. This determines a linear necessary condition for reachability (the set of potentially reachable states), say E. Now E can be used

7.4 Verification using the marking equation and traps

165

as a first, albeit very coarse, “polygon” enclosing the state space. In order to use it for the verification of a given co-linear property P in Σ, the semidecision algorithm shown in Figure 7.6 requires to check the solvability of E ∧ ¬P. If this is unsolvable, P is true within the area delineated by E, and hence also within [𝑀0⟩. Let us investigate this on the simple example shown in Figure 7.7. We discuss this example in four steps, (A), . . . , (D).

𝑠1

𝑠2

𝑡1

𝑡2

𝑠3

𝑡3

𝑠4

𝑡5

𝑡4 𝑠5

𝑠6

Fig. 7.7: Illustrating the algorithm specified in Figure 7.6. (A): Let us first consider the property P1 : “𝑠1 and 𝑠5 cannot be marked simultaneously” P1 is co-linear, since ¬P1 is equivalent to (𝑀 (𝑠1 ) ≥ 1) ∧ (𝑀 (𝑠5 ) ≥ 1) and (𝑀 (𝑠1 ) ≥ 1)∧(𝑀 (𝑠5 ) ≥ 1) can be represented in the form required in Definition 7.5 with 𝑘 = 2 and 𝐴, 𝑏 as follows: 𝑀 (𝑠1 ) © ª ­   ­ 𝑀 (𝑠2 ) ®®   100000 1 ­ 𝑀 (𝑠3 ) ® · ­ ® ≥ 000010 1 ­ 𝑀 (𝑠4 ) ® | {z } ­­ 𝑀 (𝑠5 ) ®® |{z} 𝐴 𝑏 « 𝑀 (𝑠6 ) ¬ The following system of inequalities:

166

7 Program Verification Using Traps

𝑀 (𝑠1 ) 𝑀 (𝑠2 ) 𝑀 (𝑠3 ) 𝑀 (𝑠4 ) 𝑀 (𝑠5 ) 𝑀 (𝑠6 ) 𝑋 (𝑡 𝑖 ) 𝑀 (𝑠 𝑗 ) ¬P1 : 𝑀 (𝑠1 ) 𝑀 (𝑠5 ) E:

= = = = = = ≥ ≥ ≥ ≥

1 − 𝑋 (𝑡1 ) 1 − 𝑋 (𝑡2 ) 𝑋 (𝑡 1 ) + 𝑋 (𝑡 3 ) − 𝑋 (𝑡 4 ) 𝑋 (𝑡 2 ) + 𝑋 (𝑡 5 ) − 𝑋 (𝑡 4 ) 𝑋 (𝑡 4 ) − 𝑋 (𝑡 3 ) 𝑋 (𝑡 4 ) − 𝑋 (𝑡 5 ) 0 (𝑖 = 1, . . . , 5) 0 ( 𝑗 = 1, . . . , 6) 1 1

(7.2)

is not solvable. Here, 𝑋 stands for the Parikh vector of some (unknown) firing sequence leading up to some marking violating P1 . The first six lines of (7.2) encode how markings change through the firing of the transitions. It also encodes the initial marking but, of course, no others. The last two lines of E in (7.2) simply encode the requirement that both Parikh vectors and markings are semipositive.7 The unsolvability of (7.2) can be checked quickly, albeit non-systematically: 𝑀 (𝑠1 ) ≥ 1 ⇒ ( Line 1 of E ) 1 − 𝑋 (𝑡1 ) ≥ 1 ⇒ ( 𝑋 (𝑡1 ) ≥ 0, from Line 7 of E ) 𝑋 (𝑡1 ) = 0 ⇒ ( Lines 3 and 5 of E ) 𝑀 (𝑠3 ) = 𝑋 (𝑡 3 ) − 𝑋 (𝑡4 ) = −𝑀 (𝑠5 ) ⇒ ( Line 8 of E ) 𝑀 (𝑠3 ) = 𝑀 (𝑠5 ) = 0 ⇒ contradiction to 𝑀 (𝑠5 ) ≥ 1 This contradiction implies that P1 is a valid property of the system. (B): Next, let us consider the property P2 : “𝑠1 and 𝑠6 cannot be marked simultaneously” P2 is again co-linear, since ¬P2 is equivalent to (𝑀 (𝑠1 ) ≥ 1) ∧ (𝑀 (𝑠6 ) ≥ 1) However, plugging ¬P2 instead of ¬P1 into (7.2), i.e., considering the inequality system (E) + (¬P2 ) 7 It makes sense to include these inequalities explicitly, because linear inequality solvers usually operate on the set of integers, rather than on the set of natural numbers.

7.5 Peterson’s mutual exclusion algorithm

167

yields the solution vectors 𝑋 = (01110) T and 𝑀 = (100001) T . In this case, no definite conclusion is possible as to the validity (or not) of P2 . We may try to add more linear constraints in order to make solvability less likely. For instance, if it is known, for some reason, that the system is safe, then 𝑀 (𝑠 𝑗 ) ≥ 0 in (E) can be strengthened to S:

∀ 𝑗, 1 ≤ 𝑗 ≤ 6 : 0 ≤ 𝑀 (𝑠 𝑗 ) ≤ 1

However, there is still a solution to (E) + (S) + (¬P2 ) (C): There are other ways of adding linear constraints. If 𝑄 is a trap, then the trap property can be expressed linearly, and if 𝑄 is initially marked, then there is a chance that adding its linear inequality expresses a true further constraint. For instance, let us add the initially marked trap 𝑄 1 = {𝑠1 , 𝑠3 , 𝑠4 , 𝑠6 } and consider the following system: (E) + (S) + (¬P2 ) + ( 𝑄 1 : 𝑀 (𝑠1 ) + 𝑀 (𝑠3 ) + 𝑀 (𝑠4 ) + 𝑀 (𝑠6 ) ≥ 1 ) As this system is still solvable, there is still no conclusive information about the validity of P2 . (D): We shall add a second trap, namely 𝑄 2 = {𝑠2 , 𝑠3 , 𝑠4 , 𝑠5 } (note that 𝑄 2 is also marked initially): (E) + (S) + (¬P2 ) + (𝑄 1 ) + ( 𝑄 2 : 𝑀 (𝑠2 ) + 𝑀 (𝑠3 ) + 𝑀 (𝑠4 ) + 𝑀 (𝑠5 ) ≥ 1 ) It so happens that this inequality system is no longer solvable. Now, we know that P2 is valid. In other words, there does not exist any reachable safe marking in which both 𝑠1 and 𝑠6 carry a token. Importantly, the reachability graph has played no role at all in this analysis. In general, the trap method can be exponential, since there may exist exponentially many traps in the size of the net. In practice, it may help to give the user a means of influencing the choice of which trap to include (or not).

7.5 Peterson’s mutual exclusion algorithm Mutual exclusion ensures that a piece of code is not executed by two different processes at the same time, or that a critical resource is allocated only to one single user at a time. Traditionally, semaphore operations 𝑃 (acquire of a resource, say) and 𝑉 (release of the resource) are used for this purpose. 𝑃 is a “test-and-set” operation of the form “if the resource is free then acquire it” consisting of a freeness test followed

168

7 Program Verification Using Traps

by an acquire operation. The testing and the acquiring have to be executed together, i.e., atomically as a single action, lest some other process is “faster” and intervenes. Peterson’s algorithm shows that test-and-set operations are not strictly necessary in order to implement mutual exclusion. Instead, just by atomic tests which might involve waiting until the outcome is “true”, combined with unconditional atomic variable updates, the same effect can be achieved: Algorithm 7.6 Peterson’s mutual exclusion algorithm var in1 , in2 : Boolean (init false); var hold : {1, 2} (init don’t care; e.g., 1); repeat ⟨in1 := true⟩; repeat ⟨in2 := true⟩; ⟨hold := 1⟩; ⟨hold := 2⟩; if ⟨¬in2 → skip⟩ if ⟨¬in1 → skip⟩ ⟨hold = 1 → skip⟩ ⟨hold = 2 → skip⟩ fi; fi; “critical section 1” “critical section 2” ⟨in1 := false⟩ ⟨in2 := false⟩ end repeat end repeat Peterson’s algorithm is a parallel shared-variable program with two processes which are composed by means of a “parallel” operator ∥. The brackets ⟨. . .⟩ enclose atomic actions or atomic tests. Process 𝑖 declares its willingness to enter a critical section by setting variable in𝑖 to true, while hold is a variable by which a process gives the other process an “after you” signal. The critical section can only be entered by process 𝑖, either if the other process 𝑗 has not declared its interest in entering the critical section, or if the other process has sent an “after you” signal. In this way, hold arbitrates between the processes in case both want to enter their critical sections at the same time. Figure 7.8 shows a Petri net representing this algorithm. It can be understood at an intuitive level, but it is also the essential core of a more systematic translation of Peterson’s program according to the rules given later in Chapter 20. Such nets are automatically safe. The places and transitions of this net are linked back to the program by the following naming conventions: double arrows f t 𝑡2 𝑡3 , 𝑡4 𝑡5 𝑡6 𝑡1 𝑠4

correspond to two opposite arrows abbreviates false abbreviates true corresponds to ⟨in1 := true⟩ correspond to ⟨hold := 1⟩ corresponds to ⟨¬in2 → skip⟩ corresponds to ⟨hold = 2 → skip⟩ corresponds to ⟨in1 := false⟩ corresponds to the first critical section

7.5 Peterson’s mutual exclusion algorithm

169

Analogously, transitions 𝑡 7 etc. correspond to the actions of the second process, and 𝑞 4 models the second critical section.

in1 =f

𝑠1

𝑡1

in2 =f

𝑞1

𝑡8

𝑡7

𝑡2

𝑞2

𝑠2 in1 =t 𝑡3

in2 =t 𝑡9

𝑡4

𝑡10

hold=2 𝑞3

𝑠3

𝑡5

𝑡6

hold=1

𝑡11

𝑡12 𝑞4

𝑠4

Fig. 7.8: A Petri net representing Peterson’s algorithm. In the remainder of this section, we shall state and explain three co-linear properties and prove one of them by means of traps. • The mutual exclusion property is co-linear. A marking 𝑀 violating this property (i.e. satisfying the negation of mutual exclusion) satisfies a linear relationship of the following form:8 𝑀 (𝑠4 ) + 𝑀 (𝑞 4 ) ≥ 2 (7.3) since 𝑀 (𝑠4 ) ≤ 1 and 𝑀 (𝑞 4 ) ≤ 1 by safeness. (7.3) is an instance of Definition 7.5 with 𝑘 = 1 and the following matrix 𝐴 and vector 𝑏:   𝑠1 𝑠2 𝑠3 𝑠4 hold= hold=2 𝑞 1 𝑞 2 𝑞 3 𝑞 4 𝐴= and 𝑏 = (2) 0 0 0 1 0 0 0 0 0 1 𝐴 is a matrix with a single line.9 • The property that some reachable marking activates transition 𝑡4 (i.e. that 𝑡4 is singly live) is also co-linear, since its negation is characterised by 𝑀 (𝑠2 ) + 𝑀 (hold = 2) ≤ 1 In this case, definition 7.5 is instantiated by 𝑘 = 1 and 8 It is assumed that the safeness of the net is known. 9 The place names to which 𝐴’s entries belong were given explicitly, for reasons of clarity.

170

7 Program Verification Using Traps



𝑠 𝑠 𝑠 𝑠 hold=1 hold=2 𝑞 1 𝑞 2 𝑞 3 𝑞 4 𝐴= 1 2 3 4 0 −1 0 0 0 −1 0 0 0 0

 and 𝑏 = (−1)

• In a safe net, deadlock-freeness can be expressed co-linearly in the following way. The linear-algebraic relationship ∑︁ ∑︁ 𝑀 (𝑠) ≤ | • 𝑡| − 1 (or, equivalently, − 𝑀 (𝑠) ≥ 1 − | • 𝑡|) 𝑠∈ • 𝑡

𝑠∈ • 𝑡

expresses the negation of a transition 𝑡 being activated at 𝑀. Hence the negation of deadlock-freeness corresponds to the conjunction ∑︁ ∀𝑡 ∈ 𝑇 : (− 𝑀 (𝑠) ≥ 1 − | • 𝑡|) 𝑠∈ • 𝑡

This conjunction can be brought into a form stipulated by Definition 7.5.10 In order to prove mutual exclusion, it turns out that two traps are significant: 𝑄 1 = { in2 =f, 𝑞 2 , hold=2, 𝑠3 } 𝑄 2 = { in1 =f, 𝑠2 , hold=1, 𝑞 3 } Using them, it is possible to show that the places 𝑠4 and 𝑞 4 never carry a token simultaneously, implying mutual exclusion. Note that traps are necessary, since if all side-place arcs are omitted in figure 7.8, we get the same incidence matrix but lose the mutual exclusion property. Since the proof is very similar to the one under (D) above, it is omitted here. Instead, we describe how this proof is related to a traditional proof by program invariants, not using Petri nets. To see this, define two invariants of Peterson’s algorithm, as follows. 𝐼1 = ( ¬in2 ∨ hold=2 ∨ process 2 is at 𝑞 2 ∨ process 1 is at 𝑠3 ) where the control parts (“process 2 is at control point 𝑞 2 ” etc.) can be described, for instance, by auxiliary variables. 𝐼1 is true initially because of ¬in2 (due to the initial value of in2 ). Moreover, it is easy11 to see that the truth of 𝐼1 is preserved over every atomic action of the program. (𝐼1 is thus an inductive invariant.) After defining a symmetric invariant, 𝐼2 = ( ¬in1 ∨ hold=1 ∨ process 1 is at 𝑠2 ∨ process 2 is at 𝑞 3 ) we get

10 Which is not shown explicitly at this point. 11 But nice; and, perhaps, instructive for the reader not used to verifying program invariants.

7.6 Limits of marking equation and traps

171

process 1 is at 𝑠4 ∧ process 2 is at 𝑞 4 ⇒ ( 𝐼1 ∧ 𝐼2 and every process is at exactly one location ) (¬in2 ∨ hold=2) ∧ (¬in1 ∨ hold=1) ⇒ ( at location 𝑠4 , in1 is true, and at 𝑞 4 , in2 is true ) (hold=2) ∧ (hold=1) ⇒ contradiction This contradiction proves the mutual exclusion property. Petri nets may be really helpful at this point, because the assertion “a process is at some control location” does not normally belong to the standard toolkit of program verification. Such assertions are indeed superfluous for purely sequential programs. It was only with the advent of parallelism that verification was augmented with so-called “auxiliary variables”. The purpose of these auxiliary variables is precisely to be able to express control locations explicitly and formally, and to use them in program proofs. In the meantime, many other methods of achieving the same goal have been invented and used.

7.6 Limits of marking equation and traps This section serves to pour quite a bit of water into the wine served in this chapter thus far, unfortunately. The inequality system (E), even when accompanied by safeness inequalities (S), is fairly weak. This has been illustrated by the example shown in the previous section (Peterson’s algorithm). The mutual exclusion property becomes wrong if the side conditions of the net are erased, but nothing changes in the incidence matrix. Hence there is also no change in the system (E+S). There is also a principal limit of the trap technique, since we may find examples of properties which cannot be proved, even if all traps are taken into account. We give an example of this kind in Figure 7.9.

𝑠1

𝑡3 𝑠5 𝑠4

𝑡1

𝑠2

𝑡4 𝑠6 𝑡2

𝑠3

Fig. 7.9: Illustrating the limits of the trap technique.

172

7 Program Verification Using Traps

The valid property P:

“the marking (110110) is not reachable”

is co-linear, but (E) + (¬P) + (all initially marked traps) has a solution. Hence the semidecision algorithm yields “don’t know” as a result, which means that it is not capable of proving the validity of P. What did we expect? Proving a program correct is much more complicated than checking a few traps, even if there are exponentially many of them. Still, it seems advantageous to have a good tool in one’s hand in the moment it can be useful.

7.7 Bibliographical remarks and further reading The idea of program proving by means of under- and overapproximations is so pervasive in computer science that it is difficult to pinpoint origins. Underapproximation in the form of testing is just about as old as computer programming itself. Overapproximation has diverse incarnations and names, “abstract interpretation” by Patrick and Radhja Cousot [CC77] being an early one. The need for program verification has become apparent in the 1950s and 1960s. Plenty of textbooks are concerned with methods of program verification, e.g. [ABO09] (Krzysztof R. Apt, Frank S. de Boer, Ernst-Rüdiger Olderog). Overapproximation has also been used for verification via model checking, [ERW22] (by Javier Esparza, Mikhail A. Raskin and Christoph Welzel) being a fairly novel approach. The terminology about system invariants introduced in Definition 7.1 is not exactly uniform in the literature, but is used, in one form or another, in many contexts. Invariant-driven program design has been advocated by Edsger W. Dijkstra in [Dij76]. In particular, a technique of weakening a specification such that it becomes a program’s invariant has frequently been used in this approach. The contents of this section are significantly based on [ES01] by Javier Esparza and Claus Schröter, on Stephan Melzer’s PhD Thesis [Mel98] and on the paper [EM00] (Javier Esparza, Stephan Melzer). The trap technique suggests a stepwise, user-guided procedure. Stepwise verification techniques have often been used and refined in various contexts, and been applied to various case studies (as a recent example, see [ERW21], by Javier Esparza, Mikhail A. Raskin, and Christoph Welzel). Stepwise, user-guided verification has also been advocated in the CEGAR approach (couterexample-guided abstraction refinement) developed in [Cla+00] (by Edmund M. Clarke, Orna Grumberg, Somesh Jha, Yuan Lu, and Helmut Veith) and pursued in many different directions, e.g. [HJM03] (by Thomas A. Henzinger, Ranjit Jhala, and Rupak Majumdar).

7.8 Exercises

173

Semaphores have been introduced by Edsger W. Dijkstra in [Dij68b]. Gary L. Peterson’s algorithm [Pet81] is described in Michel Raynal’s book [Ray86] and in Wikipedia.12 The trap proof is also in [Bes96] (Eike Best). Auxiliary variables date back to Susan S. Owicki and David Gries [OG76]. Like traps, siphons also have a slight “once so, always so” charisma about them. Hence they may, one day, also be useful for verification. However, so far, they seem to have attracted very little interest in this capacity. Linear inequality systems are the subject of many books, see e.g. [Sch99] (Alexander Schrijver). There is also a free integer linear programming solver called lpsolve that can be used for the linear inequalities described in this chapter.13 The problem of having to deal with huge, irregular, state spaces has sometimes been dubbed “the state space explosion problem”. Program verification, over- and underapproximations, and counterexample-guided approaches are not the only ways of coming to grips with enormous state spaces. There have also been many efforts to develop means of reducing the size of the state space without losing the ability to prove properties one wants to prove. Such techniques have frequently been referred to as state space reductions, or as partial order reductions (see [Pel98], by Doron Peled, and [GW93], by Patrice Godefroid and Pierre Wolper). Well-known amongst them are stubborn sets [VH11] (Antti Valmari) and ample sets [Pel93] (Doron Peled). These methods are often based on partial order semantics, often in the guise of firing sequence permutations and the “diamond property” which comes from the independence of activities, or, more generally, from persistence. Partial oder semantics will not only be described in various upcoming parts of this book (e.g., in Chapters 8, 9, and 12), but also put to algorithmic use (in Chapter 16).

7.8 Exercises Exercise 7.1 Verification by traps Consider the Petri net system shown in Figure 7.10. a) Does this system satisfy the property P : “on 𝑠1 , there are at least as many tokens as on 𝑠4 ” in all reachable markings? Justify your answer without using the following parts of the exercise. b) Express the property ¬P as a linear constraint, i.e., determine the matrix 𝐴 and the vector 𝑏 for which ¬P (𝑀) ⇐⇒ 𝐴 · 𝑀 ≥ 𝑏, for all markings 𝑀. c) Specify the inequality system E + ¬P. d) Check whether or not that E + ¬P has a solution in N. What can be deduced about the validity of P? e) Determine all marked traps of the system.

12 See https://en.wikipedia.org/wiki/Peterson’s_algorithm. 13 See https://sourceforge.net/projects/lpsolve/.

174

7 Program Verification Using Traps

f) Add the corresponding inequalities to E + ¬P and check again whether or not there is a solution. What can be deduced about the validity of P?

𝑠1

𝑠2

𝑡1

𝑠5 𝑡4

𝑠4

𝑡2

𝑡3

𝑠3

Fig. 7.10: Petri net to Exercise 7.1.

Exercise 7.2 Alternative overapproximation ideas Explore the idea of using siphons or S-invariants instead of traps for verification. The idea is to identify a siphon (or an S-invariant) and to add a suitable equation or inequality to an existing system E + ¬P. Explain how these new methods work and discuss whether or not they make sense, that is, whether or not there are properties which can only be proved by these new methods but not by means of the existing system E + ¬P. Solution 7.1 (Exercise 7.1) a) It does. 𝑠1 +𝑠2 +𝑠3 +𝑠4 is an S-invariant, with initially three tokens, implying that the only markings invalidating P can be (0, 𝑥, 𝑦, 1, 𝑧), (0, 𝑥, 𝑦, 2, 𝑧), and (1, 0, 0, 2, 𝑧). Once there is a token on 𝑠4 , only one token is on 𝑠5 . Without removing the token from 𝑠4 , one can move a token from 𝑠1 to 𝑠2 , after which 𝑠5 is token-empty. In this way, (1, 1, 0, 1, 0) can be reached, but none of the above token distributions. b) 𝐴 = (−1 0 0 1 0), 𝑏 = (1) c) 0 ≤ 𝑀 (𝑠1 ) = 3 − 𝑋 (𝑡 1 ) + 𝑋 (𝑡 4 ) 0 ≤ 𝑀 (𝑠2 ) = 𝑋 (𝑡 1 ) − 𝑋 (𝑡 2 ) 0 ≤ 𝑀 (𝑠3 ) = 𝑋 (𝑡 2 ) − 𝑋 (𝑡 3 ) 0 ≤ 𝑀 (𝑠4 ) = 𝑋 (𝑡 3 ) − 𝑋 (𝑡 4 ) 0 ≤ 𝑀 (𝑠5 ) = 2 + 𝑋 (𝑡 3 ) + 𝑋 (𝑡4 ) − 𝑋 (𝑡 1 ) − 𝑋 (𝑡 2 ) 𝑋 (𝑡 1 ) ≥ 0 𝑋 (𝑡 2 ) ≥ 0 𝑋 (𝑡 3 ) ≥ 0 𝑋 (𝑡 4 ) ≥ 0 𝑀 (𝑠4 ) − 𝑀 (𝑠1 ) ≥ 1 d) We have 𝑋 (𝑡 1 ) ≥ 𝑋 (𝑡 2 ) ≥ 𝑋 (𝑡3 ) ≥ 𝑋 (𝑡4 ) ≥ 𝑋 (𝑡1 ) − 3 (from lines 1–4), and 𝑋 (𝑡 3 ) − 2𝑋 (𝑡4 ) + 𝑋 (𝑡1 ) ≥ 4 (from line 1, line 4, and the last line). It is not hard to

7.8 Exercises

175

find a solution; e.g., 𝑋 (𝑡1 ) = 𝑋 (𝑡 2 ) = 𝑋 (𝑡3 ) = 3, and 𝑋 (𝑡 4 ) = 1. The corresponding marking is 𝑀 = (1 0 0 2 0). Since the inequality system is solvable, no statement can be derived about the validity of P. e) 𝑄 1 = {𝑠2 , 𝑠3 , 𝑠5 }, 𝑄 2 = {𝑠1 , 𝑠2 , 𝑠3 , 𝑠4 }, 𝑄 3 = {𝑠1 , 𝑠2 , 𝑠3 , 𝑠4 , 𝑠5 }, 𝑄 4 = {𝑠1 , 𝑠2 , 𝑠3 , 𝑠5 }, 𝑄 5 = {𝑠2 , 𝑠3 , 𝑠4 , 𝑠5 } f) 𝑀 (𝑠2 ) + 𝑀 (𝑠3 ) + 𝑀 (𝑠5 ) ≥ 1 𝑀 (𝑠1 ) + 𝑀 (𝑠2 ) + 𝑀 (𝑠3 ) + 𝑀 (𝑠4 ) ≥ 1 𝑀 (𝑠1 ) + 𝑀 (𝑠2 ) + 𝑀 (𝑠3 ) + 𝑀 (𝑠4 ) + 𝑀 (𝑠5 ) ≥ 1 𝑀 (𝑠1 ) + 𝑀 (𝑠2 ) + 𝑀 (𝑠3 ) + 𝑀 (𝑠5 ) ≥ 1 𝑀 (𝑠2 ) + 𝑀 (𝑠3 ) + 𝑀 (𝑠4 ) + 𝑀 (𝑠5 ) ≥ 1 𝑄 2 , . . . , 𝑄 5 have no impact on the solvability of the system (it remains solvable), since the marking 𝑀 = (1 0 0 2 0) from d) satisfies all corresponding inequalities. However, because of 𝑀 (𝑠2 ) + 𝑀 (𝑠3 ) + 𝑀 (𝑠5 ) = 2+ 𝑋 (𝑡4 ) − 𝑋 (𝑡 2 ) = 2+1−3 = 0 ̸ ≥ 1, the 𝑄 1 inequality contradicts solvability by 𝑀 = (1 0 0 2 0). Rewriting the last inequality yields 𝑋 (𝑡4 ) ≥ 𝑋 (𝑡 2 ) − 1, and with 𝑋 (𝑡2 ) ≥ 𝑋 (𝑡3 ) ≥ 𝑋 (𝑡 4 ) from d), we get 𝑋 (𝑡 2 ) = 𝑋 (𝑡 4 ) or 𝑋 (𝑡 2 ) = 𝑋 (𝑡4 )+1. By 𝑋 (𝑡 2 ) ≥ 𝑋 (𝑡 3 ) ≥ 𝑋 (𝑡4 ), we have 𝑋 (𝑡3 ) = 𝑋 (𝑡 4 ) or 𝑋 (𝑡 3 ) = 𝑋 (𝑡 4 )+1. In the first case, 𝑀 (𝑠4 ) = 𝑋 (𝑡 3 )−𝑋 (𝑡 4 ) = 0, and hence, the inequality for ¬P, 𝑀 (𝑠4 ) − 𝑀 (𝑠1 ) ≥ 1, is not satisfiable. In the second case, we may deduce 𝑀 (𝑠4 ) = 1 and hence, from ¬P, 𝑀 (𝑠1 ) = 0, that is, 𝑋 (𝑡 1 ) = 3 + 𝑋 (𝑡 4 ). A solution must therefore be of the form 𝑋 = (𝑎 + 3 𝑎 + 1 𝑎 + 1 𝑎). But then we can calculate 𝑀 (𝑠5 ) = 2 + (𝑎 + 1) + 𝑎 − (𝑎 + 3) − (𝑎 + 1) = −1 < 0 contradicting 𝑀 (𝑠5 ) ≥ 0. Hence the inequality system (including 𝑄 1 ) has no solution. As a consequence, P is valid in the system. Solution 7.2 (Exercise 7.2) Í The S-invariant method considers an S-invariant 𝐽 = 𝑠∈𝑆 𝑗 𝑠 · 𝑠 (with 𝑗 𝑠 being the entry for the place 𝑠 in 𝐽) and adds the equality ∑︁ ∑︁ 𝑗 𝑠 · 𝑀 (𝑠) = 𝑗 𝑠 · 𝑀0 (𝑠) 𝑠∈𝑆

𝑠∈𝑆

to E + ¬P. The right-hand side of this equation can be evaluated to an integer number. The same equation is obtained as a linear combination of the equations in E corresponding to 𝑀 (𝑠) in E, since these equations are built from the marking equation 𝑀 = 𝑀0 + 𝐶 · 𝑋. Hence the new equation can be derived from the old ones, which means that no change in solvability can be obtained. Thus, the S-invariant method does not add any new capabilities, given that we already have E as defined in this chapter. The siphon method consists of identifying initially token-empty (proper) siphons, since for marked siphons, no general statement about their future behaviour is possible. The basic property we know is that a siphon remains empty if it is empty to start with. So, suppose that 𝐷 is a token-empty siphon. We get a new equality of the Í form 𝑠∈𝐷 𝑀 (𝑠) = 0. In order to violate this equation (and thus make the system unsolvable, and as a consequence, show the validity of P), there would have to be a

176

7 Program Verification Using Traps

transition sequence increasing the number of tokens on 𝐷. An example is a net with a place 𝑠, a transition 𝑡, 𝐹 (𝑠, 𝑡) = 1, 𝐹 (𝑡, 𝑠) = 2, and 𝑀0 (𝑠) = 0. Suppose that we investigate the property P: “there cannot be more than 5 tokens on 𝑠”. The system E + ¬P yields a solution vector 𝑀 (𝑠) = 6 and 𝑋 (𝑡) = 6. However, the additional siphon {𝑠} and the equation 𝑀 (𝑠) = 0 excludes this solution, showing that P is true for the given initial marking. This reasoning shows that the siphon method has some potential to be useful. However, its use is prone to limitations, because systems are unlikely to contain tokenempty siphons to start with.

Chapter 8

Fairness, Simulations, and Inhibitor Arcs

Abstract An execution of a parallel system is fair if no activity that is capable of taking place is denied this capability for a very long time, or even forever. Depending on how this somewhat diffuse notion is formalised, different types of fairness can be distiguished, ranging from very strong notions of fairness to really weak ones. Several types of fairness properties will be examined. A Petri net class will be identified for which some notions of fairness collapse to a single one. Even more distinctions can be made in a Turing-powerful system model.

Contents 8.1 8.2 8.3 8.4 8.5 8.6 8.7 8.8

Introduction . . . . . . . . . . . . . . . . . Asymmetric choice nets . . . . . . . . . . Simulations and net transformations . . . . Strong fairness and very strong fairness . . Weak fairness and finite delay . . . . . . . Inhibitor arcs and zero tests . . . . . . . . Bibliographical remarks and further reading Exercises . . . . . . . . . . . . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

177 178 181 186 191 194 197 199

8.1 Introduction It was proved in the previous chapter that Peterson’s algorithm satisfies the mutual exclusion property. The case that both processes are in their critical sections at the same time was deemed “bad”, and the proof by invariants showed that this bad case cannot happen. In parallel program terminology, a property which states that

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 E. Best and R. Devillers, Petri Net Primer, Computer Science Foundations and Applied Logic, https://doi.org/10.1007/978-3-031-48278-6_8

177

178

8 Fairness, Simulations, and Inhibitor Arcs

“something bad cannot happen” is called a safety property, and program invariants are a standard way of proving such properties. There is also a different type of property called a liveness property.1 Liveness properties are ones which state that “something good must happen”. For sequential programs, this might, for instance, be the property that a given loop, or a given recursion, terminates under all circumstances. A standard way of proving termination is by considering functions into a well-founded set such as N whose values decrease with each execution of a loop. In a parallel program, there are other liveness properties one would possibly like to prove. For example, in Peterson’s algorithm, one might want to prove that a process who wants to enter its critical section (the “good thing”) will eventually be able to do so. This is fairly tricky, because the relative speeds of the two processes might accidentally be synchronised in such a malignant way that whenever process 2 is just about to execute its atomic action ⟨in2 := true⟩, process 1 tests the shared variable in2 through its atomic test action ⟨¬in2 → skip⟩, thus preventing process 2 from proceeding, perhaps at a lower “machine level”. If this happens continually, process 2 cannot proceed at all, and “the good thing” never happens. Whether or not process 1 behaves like this on purpose, or even whether or not the lower level implementor has made a mistake, an innocent bystander would probably say that such a behaviour is grossly unfair for process 2. This is, indeed, the terminology that has been established for such situations. In general, there are not just one, but many possible causes of unfair behaviour. In what follows, some (but not all) of them will be investigated in the context of Petri nets. In general, there is even an infinity of such notions. Most of these collapse to a single notion in a subclass of Petri nets called asymmetric choice nets. We shall define and discuss this class in the next two sections. Throughout this chapter, we continue to assume that all Petri nets are weakly connected and plain.

8.2 Asymmetric choice nets We start by defining the class of asymmetric choice nets (AC-nets) as an extension of free-choice (FC) nets. Recall from Section 5.4 that the free choice property of a Petri net (𝑆, 𝑇, 𝐹) can equivalently be expressed by one of the following conditions: ∀𝑡 1 , 𝑡2 ∈ 𝑇 : • 𝑡 1 ∩ • 𝑡2 ≠ ∅ ⇒ • 𝑡 1 = • 𝑡2 ∀𝑠1 , 𝑠2 ∈ 𝑆 : 𝑠1• ∩ 𝑠2• ≠ ∅ ⇒ 𝑠1• = 𝑠2•

(8.1)

1 What we have here is an unfortunate clash of terminologies. This can hardly be avoided, because the use of the words “liveness” and “safety” (or “safeness”) is well established, with inconsistent meanings, in two different areas, parallel program verification and Petri nets. We will deviate from the terminology used throughout the previous chapters as little as possible.

8.2 Asymmetric choice nets

179

We shall be interested in a generalisation of the second condition2, as follows. Definition 8.1 AC-nets A net (𝑆, 𝑇, 𝐹) is called asymmetric choice (or an AC-net) if ∀𝑠1 , 𝑠2 ∈ 𝑆 : 𝑠1• ∩ 𝑠2• ≠ ∅ ⇒ (𝑠1• ⊆ 𝑠2• ∨ 𝑠2• ⊆ 𝑠1• ) This implies that every FC-net is also an AC-net, because 𝑠1• = 𝑠2• logically implies 𝑠1• ⊆ 𝑠2• ∨ 𝑠2• ⊆ 𝑠1• . The converse is wrong, however. Figure 8.1(l.h.s.) exhibits a typical AC structure which is not free-choice. On the right-hand side of the figure, a typical non-AC (and a fortiori, non-FC) structure is shown. This structure is also known as an M-structure.

AC but not FC

M-structure: not AC

Fig. 8.1: Explanation of the asymmetric choice property.

An M-structure occurs, for instance, in the non-AC system shown in Figure 8.2. Recall that this system contains a transition which is dead but all of whose input places are live, in the sense that it is always possible to put tokens on them (but never simultaneously). We have already argued in Section 5.5.1 that such a situation cannot occur in free-choice systems. The next theorem shows that it cannot even occur in AC systems.

𝑡1 𝑠1

𝑠2 𝑡2

𝑡3

Fig. 8.2: A non-live system which is not asymmetric choice.

2 It is possible to generalise the first condition in a similar way, but this leads to a class of nets usually considered as less interesting.

180

8 Fairness, Simulations, and Inhibitor Arcs

Theorem 8.2 Dead transitions have eventually dead input places Let 𝑡 be a transition of an AC system and suppose that 𝑡 is never activated during an infinite firing sequence 𝜎

𝑡1

𝑡2

𝑡3

𝑀0 −→ 𝑀1 −→ 𝑀2 −→ . . .

= 𝑡

that is, ∀𝑖 ≥ 0 : ¬(𝑀𝑖 −→ ). Then there is a place 𝑝 ∈ • 𝑡 such that in some suffix of 𝜎, 𝑝 is token-empty, i.e., ∃𝑘 ≥ 0 ∀ 𝑗 ≥ 𝑘 : 𝑀 𝑗 ( 𝑝) = 0.

Example 8.3 The AC premise is necessary for Theorem 8.2 to hold The system shown in Figure 8.2 violates this property with 𝑡=𝑡 3 and 𝜎=(𝑡1 𝑡2 ) ∞ . Proof: Let 𝑡 be a transition and 𝜎 a sequence satisfying the premises stated in the theorem. In particular, 𝑡 is not enabled in any of the markings 𝑀𝑖 (with 𝑖 ≥ 0). Consider the set {𝑠1 , . . . , 𝑠 𝑛 } = • 𝑡 of input places of 𝑡. By the AC property, they can be ordered in such a way that 𝑠1• ⊆ 𝑠2• ⊆ 𝑠3• ⊆ . . . ⊆ 𝑠•𝑛

(8.2)

Consider first 𝑠1 . If there is an infinite suffix of 𝜎 in which 𝑠1 is permanently unmarked, then we are done. Otherwise, there is some index in 𝜎, say 𝑘 1 , such that 𝑀𝑘1 (𝑠1 ) > 0, i.e., such that there is a token on 𝑠1 at marking 𝑀𝑘1 . By the special choice of 𝑠1 (being minimal with respect to its output transitions), it follows that 𝑀 𝑗 (𝑠1 ) > 0 for all 𝑗 ≥ 𝑘 1 . Otherwise there is a transition in 𝑠1• , other than 𝑡, whose incoming places are a superset of those of 𝑡 – by the choice of 𝑠1 and by the AC property – and which is activated, contradicting the fact that 𝑡 is not activated. Now consider a suffix of 𝜎 in which 𝑠1 carries at least one token. Repeat the argument for 𝑠2 (and after that, for 𝑠3 , and so on). In this way, we find a suffix in which all 𝑠𝑙 ∈ • 𝑡 carry at least one token, but this contradicts 𝑡 being disabled along 𝜎. Hence for at least one of the places so considered, say for 𝑠ℓ (1 ≤ ℓ ≤ 𝑛), we find some index in 𝜎 after which this place is permanently non-marked. Since 𝑠ℓ ∈ • 𝑡, the claim is proved. 8.2 • • • In an AC net, we may have two transitions 𝑡 1 , 𝑡2 with 𝑡1 ∩ 𝑡2 ≠ ∅ but both 𝑡1 ⊈ • 𝑡 2 and • 𝑡2 ⊈ • 𝑡1 . In other words, an analogue of (8.2) does not hold for transitions. Using the first line of (8.1) in the definition of a net class generalising FC nets yields a rather different net class.

8.3 Simulations and net transformations

181

8.3 Simulations and net transformations The definition of the asymmetric choice property given in the previous section is fairly liberal, in the sense that, in the literature, one finds more restrictive definitions as well. Occasionally, nets of one such class can “simulate” nets of another class. We have already seen examples of this type in Section 5.2 where arc weights > 1 could be “simulated” by arc weights at most 1, and transitions in loops could be “simulated” by splitting them into a beginning and an end. In this section, we take a closer look at such simulations and at alternative definitions of the asymmetric choice property. In order to motivate the definition of simulation, we discuss Figure 8.3. The idea is that a net 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0 ) is simulated by another net having, in addition to the transitions of 𝑁, some more transitions which one could think of as “silent”.3 Silent transitions are characterised by a letter 𝜏, and there could be several of them.

𝑠

𝑠 𝑡1 𝑡1

𝑠

𝑠 𝑡2

𝜏

𝜏

𝜏

𝜏

𝜏

𝜏

𝑡2 𝜏

𝑠1

𝑠2 𝑁

𝜏

𝑠1

𝑠2 𝑁1

𝑡1

𝑡2

𝑡1

𝑡2

𝑠1

𝑠2

𝑠1

𝑠2

𝑁2

𝑁3

Fig. 8.3: Motivation of the definition of simulation.

𝑁1 will be accepted as a simulation of 𝑁, but 𝑁2 will not. The reason is that a transformation such as hinted at by 𝑁 { 𝑁2 could create, in the context of some larger environment, non-liveness and/or deadlocks.4 However, we do not mind how the silent transitions behave “amongst themselves”, so to speak, as long as the behaviour of 𝑁 is properly reflected in the simulating net. This is why 𝑁1 and 𝑁3 will indeed by accepted as simulations of 𝑁 in Figure 8.3. 𝑁3 contains so-called (𝜏-)divergences (i.e., two infinite “silent” loops, in the sense of infinite behaviour) which 𝑁 does not have; but its “visible”, non-𝜏, behaviour is just like that of 𝑁.

3 If the nets are thought of as being labelled, silent transitions do not contribute to the generated language. 4 For instance, transition 𝑡1 could depend on some part of the environment which involves firing 𝑡2 . Committing oneself to 𝑡1 , as is possible in 𝑁2 , would then be premature.

182

8 Fairness, Simulations, and Inhibitor Arcs

Definition 8.4 Simulation Let 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0 ) and 𝑁 ′ = (𝑆 ′ , 𝑇 ′ , 𝐹 ′ , 𝑀0′ ) be two Petri nets. Let 𝑓 : 𝑇 → 𝑇 ′ be an injective function.a We say that 𝑁 ′ simulates 𝑁 via 𝑓 if there is a surjective function 𝛽 : [𝑀0′⟩ → [𝑀0⟩ such thatb (i) 𝑀0 = 𝛽(𝑀0′ ) (ii) if 𝑀1 = 𝛽(𝑀1′ ) with 𝑀1 ∈ [𝑀0⟩ and 𝑀1′ ∈ [𝑀0′⟩ then 𝑡

(a) whenever 𝑀1 −→ 𝑀2 with 𝑡 ∈ 𝑇 𝛼 then ∃𝑀2′ ∈ 𝛽 −1 (𝑀2 ) ∃𝛼 ∈ (𝑇 ′ ) ∗ : 𝑀1′ −→ 𝑀2′ ∧ 𝑓 −1 (𝛼) = 𝑡 𝛼 (b) whenever 𝑀1′ −→ 𝑀2′ with 𝛼 ∈ (𝑇 ′ ) ∗

c

𝑓 −1 ( 𝛼)

then 𝑀1 −→ 𝛽(𝑀2′ ) The idea is that in 𝑁 ′ , the transitions 𝑓 (𝑇 ) “duplicate” the transitions 𝑇, while the transitions in 𝑇 ′ \ 𝑓 (𝑇 ) are “silent”. b The idea is that all reachable states of 𝑁 are simulated by some, but possibly more than one, reachable state(s) of 𝑁 ′ . c 𝑓 −1 ( 𝛼) is the string obtained by erasing all letters in 𝑇 ′ \ 𝑓 (𝑇 ) from 𝛼 and applying 𝑓 −1 to each of the remaining letters (this is possible since 𝑓 is injective). a

Let us see why, in Figure 8.3, 𝑁2 does not simulate 𝑁 according to this definition (via the function 𝑓 which is already given by the transition labellings). By (iib), the surjection 𝛽 must map the initial state, say 𝑀0′′ , of 𝑁2 and both states, say 𝑀1′′ and 𝑀2′′ , reached after firing one of the 𝜏 transitions to the initial state of 𝑁. Hence 𝛽 −1 (𝑀0 ) = {𝑀0′′ , 𝑀1′′ , 𝑀2′′ }. But 𝑀0 enables both 𝑡 1 and 𝑡2 while 𝑀1′′ and 𝑀2′′ enable only one of these transitions, even taking into account possible 𝜏 transitions. This contradicts (iia). This phenomenon is absent in 𝑁3 , since 𝜏 firings between two visible transitions can always be “undone” by other 𝜏 firings. Simulation preserves liveness in the following sense. However, it does not preserve boundedness, nor the absence of infinite firing sequences, nor concurrency. Proposition 8.5 Simulation preserves liveness Let 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0 ) and 𝑁 ′ = (𝑆 ′ , 𝑇 ′ , 𝐹 ′ , 𝑀0′ ). Let 𝑓 : 𝑇 → 𝑇 ′ be an injection and suppose that 𝑁 ′ simulates 𝑁 via 𝑓 . Then 𝑁 is live iff for all 𝑡 ′ ∈ 𝑓 (𝑇), 𝑡 ′ is live in 𝑁 ′ . Proof: Let 𝛽 be the surjection which exists by Definition 8.4. 𝛼

(⇒:) Assume that 𝑁 is live. Let 𝑀0′ −→ 𝑀1′ with 𝛼 ∈ (𝑇 ′ ) ∗ and let 𝑡 ′ ∈ 𝑓 (𝑇), i.e., 𝑓 −1 ( 𝛼)

𝑡 ′ = 𝑓 (𝑡) with 𝑡 ∈ 𝑇. By (i) and (iib), 𝑀0 −→ 𝑀1 where 𝑀1 = 𝛽(𝑀1′ ). Because 𝛾

𝑁 is live, ∃𝛾 ∈ 𝑇 ∗ ∃𝑀2 ∈ [𝑀0⟩ : 𝑀1 −→ 𝑀2 and 𝑡 occurs in 𝛾. By (iia) and because 𝛽 is a surjection, there are a sequence 𝛾 ′ ∈ (𝑇 ′ ) ∗ and a marking 𝑀2′ ∈ [𝑀0′ ⟩ such 𝛾′

that 𝑀1′ −→ 𝑀2′ and 𝑡 ′ occurs in 𝛾 ′ . Hence all 𝑡 ′ ∈ 𝑓 (𝑇) are live in 𝑁 ′ .

8.3 Simulations and net transformations

183 𝛾

(⇐:) Assume that all 𝑡 ′ ∈ 𝑓 (𝑇) are live in 𝑁 ′ . Let 𝑀0 −→ 𝑀1 with 𝛾 = 𝑡1 . . . 𝑡 𝑚 ∈ 𝛼1 ... 𝛼𝑚 𝑇 ∗ and let 𝑡 ∈ 𝑇. By (i) and (iia), ∃𝛼1 , . . . , 𝛼𝑚 : 𝑀0′ −→ −−− 𝑀1′ and 𝑀1 = 𝛽(𝑀1′ ). ′ ′ ′ Since all 𝑡 ∈ 𝑓 (𝑇) are live in 𝑁 , there are a sequence 𝛼 ∈ (𝑇 ′ ) ∗ and a marking 𝛼′

𝑓 −1 ( 𝛼′ )

𝑀2′ ∈ [𝑀0′ ⟩ such that 𝑀1′ −→ 𝑀2′ and 𝑓 (𝑡) occurs in 𝛼′ . By (iib), 𝑀1 −→ −−− 𝑀2 for 𝑀2 = 𝛽(𝑀2′ ) and 𝑡 occurs in 𝑓 −1 (𝛼′ ) by the definition of 𝑓 −1 . Hence 𝑁 is live. 8.5 ′ This proposition states that, while 𝑁 may contain non-live silent transitions, the liveness or otherwise of the visible transitions 𝑓 (𝑇) in 𝑁 ′ coincides with the liveness or otherwise of transitions 𝑇 in 𝑁. The next aim is to show that various definitions of the asymmetric choice property are “essentially equivalent” because the corresponding nets can simulate each other. The definitions given below are increasingly strict, that is to say, every RAC net is also SPL, every SPL net is also NQSPL, and every NQSPL net is also AC. We are going to claim that RAC nets can simulate AC nets (and hence also all other classes).5 The class of SPL nets has been defined for historical reasons, and the class of NQSPL nets appears in the proof of this claim. Definition 8.6 Alternative definitions of “asymmetric choice” A net 𝑁 = (𝑆, 𝑇, 𝐹) is called not quite simple (NQSPL) if ∀𝑠1 , 𝑠2 ∈ 𝑆 : 𝑠1• ∩ 𝑠2• ≠ ∅ ⇒ ( |𝑠1• | = 1 ∨ 𝑠1• = 𝑠2• ∨ |𝑠2• | = 1 ) 𝑁 is called simple (SPL) if ∀𝑠1 , 𝑠2 ∈ 𝑆 : 𝑠1• ∩ 𝑠2• ≠ ∅ ⇒ ( |𝑠1• | = 1 ∨ |𝑠2• | = 1 ) 𝑁 is called reduced asymmetric choice (RAC) if ∀𝑠1 , 𝑠2 ∈ 𝑆 : 𝑠1• ∩ 𝑠2• ≠ ∅ ⇒ ( |𝑠1• | = 1 ∧ |𝑠2• | ≤ 2 ∧ • (𝑠2• ) = {𝑠1 , 𝑠2 } ) ∨ ( |𝑠2• | = 1 ∧ |𝑠1• | ≤ 2 ∧ • (𝑠1• ) = {𝑠1 , 𝑠2 } ) Claim AC can be simulated by RAC For every AC net 𝑁, there is an RAC net which simulates 𝑁. Argument: (Sketch.) RAC is a “bare-bones” formulation of AC property. The net 𝑁 shown in Figure 8.4 is already not RAC. However, inserting a divergent 𝜏-loop into the arc from 𝑠 to 𝑡 1 suffices to get a simulating RAC net 𝑁 ′ . The general construction extends this idea, but it is a little more complex. The key idea are two equivalence relations 𝜎, 𝜌 on places 𝑝, 𝑞 ∈ 𝑆 (where 𝑁 = (𝑆, 𝑇, 𝐹) is a plain AC net) as follows:

5 Simulation is transitive (Exercise 8.3).

184

8 Fairness, Simulations, and Inhibitor Arcs 𝑠1 𝑠1

𝑠2

𝑠

𝜏

𝑡1

𝑠2

𝑠 𝜏

𝑡1

𝑡2

𝑡2 𝑁′

𝑁

Fig. 8.4: Transforming an SPL net 𝑁 into an RAC net 𝑁 ′ . 𝑝 𝜎 𝑞 iff ∃𝑠 ∈ 𝑆 : 𝑝 • ∩ 𝑠• ≠ ∅ ∧ 𝑞 • ∩ 𝑠• ≠ ∅ 𝑝 𝜌 𝑞 iff 𝑝 • = 𝑞 • For the verification that 𝜎 is an equivalence, the AC property is crucial.6 Note that 𝜌 is “almost finer” than 𝜎, that is to say, if 𝑝 𝜌 𝑞 and 𝑝 • ≠ ∅, then also 𝑝 𝜎 𝑞. The 𝜌-equivalence classes 𝑅 can be partially ordered by 𝑅1 ⊑ 𝑅2 iff 𝑅1• ⊆ 𝑅2• From the AC property, it follows that every 𝜎-equivalence class contains a unique maximal (with respect to ⊑) 𝜌-equivalence class, say 𝑅max . Thus, the Hasse diagram7 of ⊑ (within a 𝜎-equivalence class) is a tree with root 𝑅max . Amongst the minimal 𝜌-equivalence classes 𝑅, those with |𝑅 • | = 1 can be distinguished. Let them be called “atoms”, for the moment. Now consider a Hasse tree of 𝜌 inside one 𝜎-class and pick an edge 𝑅max 𝑅 for which 𝑅 is not an atom. If such an edge does not exist, call the tree “flat”. The procedure described next results in all trees being flat. Having picked such an edge in the Hasse diagram, we introduce two new 𝜏-labelled transitions 𝑎 and 𝑏 and a new (unmarked) place 𝑟: 𝑎 𝜏

𝑟

𝑏 𝜏

Further, we erase all arrows from 𝑅 ∪ 𝑅max to 𝑅 • and replace them by arrows from 𝑟 to each transition in 𝑅 • ; also, we draw an arrow from every place of 𝑅 ∪ 𝑅max to 𝑎, and an arrow from 𝑏 back to every place in 𝑅 ∪ 𝑅max . 6 In the terminology of Section 6.2, 𝜎 just singles out the conflict clusters of 𝑁 . 7 This is just like ⊑, except that transitively generated arrows are omitted.

8.3 Simulations and net transformations

185

The procedure is repeated with a possibly changed relation 𝜎, until all Hasse trees are flat. Figure 8.5 exemplifies the transformation on an example. The resulting net is NQSPL (in general) or even (in this case) SPL. The final steps are (a) to get rid of places 𝑝, 𝑞 with 𝑝 • = 𝑞 • and | 𝑝 • | ≥ 2, in order to transform an NQSPL net into an SPL net, and (b) to get an RAC net from an SPL net. Transformation (a) can be done as follows. Suppose that 𝑝 • = 𝑞 • . Then a divergent 𝜏-loop is introduced as in Figure 8.6 (shown for | 𝑝 • | = |𝑞 • | = 2). Transformation (b) can be done by generalising the transformation shown in Figure 8.4. 8.3 𝑠1

𝑠3

𝑠2

𝑠5

𝑠4

𝑠1• 𝑠2• 𝑠3• 𝑠4• 𝑠5•

𝑡1 𝑠1

𝑡2

𝑡3 𝑠3

𝑠2

𝑅max = {𝑠4 } = = = = =

{𝑡1 } {𝑡1 } {𝑡2 , 𝑡4 } {𝑡1 , 𝑡2 , 𝑡3 , 𝑡4 } {𝑡2 , 𝑡3 , 𝑡4 }

{𝑠5 }

{𝑠1 , 𝑠2 }

{𝑠3 }

𝑡4 𝑠4

𝑠5

𝜏

𝜏

𝑅max,1 = {𝑠4 }

𝑠

{𝑠1 , 𝑠2 }

𝑠1

𝑅max,2 = {𝑠}

{𝑠5 }

{𝑠3 }

𝑠3

𝑠2

𝑠4

𝑠5

𝜏

𝜏

𝑡1 𝑡2

𝑡3

𝑡4

𝑠 𝜏

𝜏

Result: 𝑡1 𝑠′

𝑡2

𝑡3

𝑡4

Fig. 8.5: Transforming an AC net 𝑁 into an NQSPL net 𝑁 ′ . Equivalence classes of 𝜌 are marked light gray. Atoms are underlined. In the first step, the Hasse tree edge from 𝑅max ={𝑠4 } to {𝑠5 } is chosen. Two new trees arise after this step. The tree of 𝑅max,1 is already flat. In the second step, the edge from 𝑅max,2 ={𝑠} to {𝑠3 } is chosen. The result is not just NQSPL but even SPL. In effect, single firings have been transformed into tricklings down a (reversible) 𝜏-ladder.

186

8 Fairness, Simulations, and Inhibitor Arcs 𝑝

𝑞

𝑝

{ 𝑎

𝑞

𝜏

𝜏

𝑎

𝑏

𝑏

Fig. 8.6: Transforming an NQSPL net into an SPL net.

8.4 Strong fairness and very strong fairness The notion of fairness originates from systems in which scarce resources have to be administered. For instance, just like a busy bakery with its customers, a time-sharing operating system has to distribute computing time of a few real processors fairly amongst many competing processes (the “customers”). 𝑎1 𝑠1

𝑎3

Σ 𝑠3

𝑠2

Σ0′

Σ0 𝑠4 𝑠0

𝑡0

𝑡0′

𝑡

𝑠0′

𝑠0′′

𝑡′

𝑡 𝑎2

𝑎4

Fig. 8.7: Three examples Σ, Σ0 , Σ0′ illustrating (un)fairness. Figure 8.7 epitomises some classic examples of resource conflicts. The system Σ depicted on the left-hand side consists of two sequential processes with transitions 𝑎 1 , 𝑎 2 and 𝑎 3 , 𝑎 4 . In each of them, a resource (exemplified by the token) is passed around cyclically. There is also a central transition 𝑡 which needs both resources and can use them only if they reside simultaneously on places 𝑠2 and 𝑠3 . In case 𝑡 occurs, it processes the two resources (not shown in the picture) and hands them back simultaneously. In Σ0 , a single resource is continually needed by two activities 𝑡0 and 𝑡. In Σ0′ , two resources are needed, one by 𝑡 0′ and the other by 𝑡 ′ .8 Arguably, there is no resource conflict at all in Σ0′ . In Σ, for instance, denoting the initial marking by 𝑀0 = {𝑠1 , 𝑠4 }, we may consider an infinite sequence (𝑎 1 𝑎 3 𝑡𝑎 2 𝑎 4 ) ∞ as follows: 𝑎1

𝑎3

𝑡

𝑎2

𝑎4

𝑎1

𝑎3

𝜎 = 𝑀0 → {𝑠2 , 𝑠4 } → {𝑠2 , 𝑠3 } → {𝑠2 , 𝑠3 } → {𝑠1 , 𝑠3 } → 𝑀0 → {𝑠2 , 𝑠4 } → . . . 8 The latter also devours its resource, but this is not the important point to be made.

8.4 Strong fairness and very strong fairness

187

In this sequence, both processes are executed infinitely often in such a way that the marking in which both 𝑠2 and 𝑠3 carry a token occurs infinitely often. Thus, 𝑡 can (and does) occur infinitely often as well. We shall call 𝜎 0-fair (or just fair) towards all transitions of the net, including 𝑡. Still in Σ, consider the infinite sequence 𝑎 1 (𝑎 3 𝑎 4 ) ∞ , or, more precisely: 𝑎1

𝑎3

𝑎3

𝑎4

𝑎3

𝑎4

𝜎0 = 𝑀0 → {𝑠2 , 𝑠4 } → {𝑠2 , 𝑠3 } → {𝑠2 , 𝑠4 } → {𝑠2 , 𝑠3 } → {𝑠2 , 𝑠4 } → . . . On place 𝑠3 , a token continually appears, dis-appears, and re-appears again. Therefore, 𝑡 is enabled infinitely often along 𝜎0 , but it never occurs as a part of 𝜎0 . In such a case, we shall call 𝜎0 unfair towards 𝑡 (though fair towards 𝑎 3 and 𝑎 4 ). We may also consider the infinite firing sequence (𝑎 1 𝑎 2 𝑎 3 𝑎 4 ) ∞ , i.e., more fully: 𝑎1

𝑎2

𝑎3

𝑎4

𝑎1

𝑎2

𝑎3

𝜎1 = 𝑀0 → {𝑠2 , 𝑠4 } → 𝑀0 → {𝑠1 , 𝑠3 } → 𝑀0 → {𝑠2 , 𝑠4 } → 𝑀0 → . . . Along this sequence, 𝑡 is never even activated, but it is not “far away” from being activated. It is infinitely often the case that after only a single intermediate occurrence of one of the transitions 𝑎 1 or 𝑎 3 , it could become activated. Loosely speaking, one could say that 𝑡 is “almost activated” infinitely often.9 In this case, we shall call 𝜎1 1-unfair (though at the same time fair) towards 𝑡. In order to capture this formally and more generally, we introduce a parameter 𝑘 and define the notion of 𝑘-fairness. Before doing this, let us consider the system Σ0 depicted in the middle of Figure 8.7 and the infinite firing sequence (𝑡0 ) ∞ . More fully: 𝑡0

𝑡0

𝑡0

𝜏0 = {𝑠0 } −→ {𝑠0 } −→ {𝑠0 } −→

...

𝜏0 is unfair towards 𝑡 in almost the same sense as 𝜎0 was towards the other 𝑡 before. A difference is that, in this case, 𝑡 is always enabled during 𝜏0 , not just infinitely often as in 𝜎0 . Intuitively, in 𝜏0 , 𝑠0 contains a single token which is continually “snatched away” by 𝑡0 , to the disadvantage of 𝑡 which could also grab this token but never does. The notion of “always enabled” can be tricky, as shown by Σ0′ on the right-hand side of Figure 8.7. The infinite sequence (𝑡0′ ) ∞ , i.e. 𝑡0′

𝑡0′

𝑡0′

𝜏0′ = {𝑠0′ , 𝑠0′′ } −→ {𝑠0′ , 𝑠0′′ } −→ {𝑠0′ , 𝑠0′′ } −→

...

permanently enables 𝑡 ′ but does not contain 𝑡 ′ . Formally, it is also unfair towards 𝑡 ′ . Nevertheless, there is a difference to the previous case. Note that the token on 𝑠0′′ which serves to enable 𝑡 ′ permanently resides on this place, without ever being used, or moved away, for the occurrence of 𝑡 0′ . Intuitively, 𝑡 ′ is simply “too lazy by itself” to use the token, rather than being hindered by another transition’s occurrences. We shall say that 𝜏0′ (but not 𝜏0 ) violates what is sometimes called the finite delay property or the progress property. 9 Even more colloquially, the behaviour of the two processes is called a “conspiracy” against 𝑡, so as to make sure that 𝑡 is never enabled.

188

8 Fairness, Simulations, and Inhibitor Arcs

The next definition formalises the notion of 𝑘-(un)fairness. In it, the following modified quantification symbols are used: ∃𝑖∞ , meaning: ∀∞ 𝑖 , meaning:

“there are infinitely many 𝑖 satisfying . . .” “for all but finitely many 𝑖 we have . . .”

These quantifiers can be used as parts of Boolean formulae, and an analogue of de Morgan’s law is valid.10 The set N of natural numbers is extended by a “number” ∞ denoting “infinity”. This number obeys the same algebraic laws as did 𝜔 in Section 3.5. In particular, 𝑖 ≤ ∞ for every 𝑖 ∈ N ∪ {∞}. The symbols differ for historical reasons: ∞ is traditionally used for infinite sequences and 𝜔 is traditionally used for coverability graphs.11 Definition 8.7 A hierarchy of strong (un-)fairness notions Let 𝑘 ∈ N ∪ {∞} and let (𝑆, 𝑇, 𝐹) be a net with initial marking 𝑀0 . • A marking 𝑀 𝑘-activates 𝑡 ∈ 𝑇 if there is a sequence of transitions 𝑥1 ...𝑥𝑖 𝑥 1 , . . . , 𝑥 𝑖 with 𝑖 ≤ 𝑘 and 𝑀 −→ 𝐾, such that 𝐾 activates 𝑡.a 𝑡3 𝑡1 𝑡2 𝑡4 • A firing sequence 𝜎 = 𝑀0 −→ 𝑀1 −→ 𝑀2 −→ 𝑀3 −→ . . . is called 𝑘-fair towards 𝑡 if ∞ ( ∀∞ 𝑖 : ¬(𝑀𝑖 𝑘-activates 𝑡) ) ∨ ( ∃𝑖 : (𝑡=𝑡 𝑖 ) ) 𝑡1

𝑡2

𝑡3

𝑡4

• An infinite firing sequence 𝑀0 −→ 𝑀1 −→ 𝑀2 −→ 𝑀3 −→ . . . is called 𝑘-fair if it is 𝑘-fair towards every transition in 𝑇. A sequence is called (𝑘-)unfair (towards 𝑡) if it is not (𝑘-)fair (towards 𝑡, respectively). 0-fairness is called strong fairness. a

Note that 𝑖 ≤ 𝑘 is always the case if 𝑘 = ∞.

The central condition can be read, logically equivalently, as follows: if 𝑡 is 𝑘-activated infinitely often, then it occurs infinitely often in 𝜎. This implies that all finite firing sequences are called fair. Also, observe that the notions of “𝑀 0-activates 𝑡”, “𝑀 activates 𝑡”, and “𝑡 is enabled at 𝑀” coincide. Example 8.8 Figure 8.7 For the examples shown in Figure 8.7, we have the following: • 𝜎 is 𝑘-fair, for every 𝑘 ∈ N ∪ {∞}. • 𝜎0 is 𝑘-unfair towards 𝑡, for every 𝑘. • 𝜎1 is 0-fair towards 𝑡, and 𝑘-unfair towards 𝑡, for every 𝑘 ≠ 0. • 𝜏0 is 𝑘-fair towards 𝑡 0 and 𝑘-unfair towards 𝑡, for every 𝑘. • 𝜏0′ is 𝑘-fair towards 𝑡 0′ and 𝑘-unfair towards 𝑡 ′ , for every 𝑘. 10 More precisely, ¬(∃𝑖∞ 𝐹 ) ⇐⇒ ∀𝑖∞ ¬𝐹 and ¬(∀𝑖∞ 𝐹 ) ⇐⇒ ∃𝑖∞ ¬𝐹 for a Boolean formula 𝐹. 11 Very loosely speaking, ∞ has a flavour of “definitely infinite” while 𝜔 is very vaguely associated with “finite or infinite”.

8.4 Strong fairness and very strong fairness

189

So far, 𝜏0 and 𝜏0′ are not distinguished. Definition 8.7 describes an infinite spectrum of fairness notions. We now investigate some relationships between them and return to the finite delay property in the next section. The next two propositions follow almost immediately from the definition. Proposition 8.9 ∞-fairness is the strongest fairness notion Let 𝜎 be ∞-fair (towards 𝑡). For every 𝑘 ∈ N, 𝜎 is 𝑘-fair (towards 𝑡).

Proposition 8.10 A hierarchy of fairness notions Let 𝜎 be a firing sequence and let 𝑘 ∈ N. If 𝜎 is (𝑘+1)-fair (towards 𝑡), then 𝜎 is also 𝑘-fair (towards 𝑡). Both propositions have converses in some contexts. The next result holds for Petri nets in general, but not for a Turing-powerful model such as parallel programs, as shall be seen below in Section 8.6. Proposition 8.11 A converse of Proposition 8.9 for Petri nets Let 𝜎 be a firing sequence of a Petri net, and suppose that, for every 𝑘 ∈ N, 𝜎 is 𝑘-fair. Then 𝜎 is ∞-fair. Proof: By contraposition. We consider an infinite firing sequence 𝑡1

𝑡2

𝑡3

𝜎 = 𝑀0 −→ 𝑀1 −→ 𝑀2 −→

...

which is not ∞-fair, and we show that there exists a number 𝑘 ∈ N such that 𝜎 is not 𝑘-fair. By ∞-unfairness, there is a transition 𝑡 ∈ 𝑇 which occurs only finitely often in 𝜎 but is infinitely often ∞-activated along 𝜎 (in fact, therefore, by every marking occurring in 𝜎). By Dickson’s lemma (Lemma 3.20), 𝜎 contains an infinite increasing chain of markings 𝑀𝑘0 ≤ 𝑀𝑘1 ≤ 𝑀𝑘2 ≤ . . .

(∀ 𝑗 ∈ N : 𝑘 𝑗 < 𝑘 𝑗+1 )

(8.3)

Since 𝜎 ∞-activates 𝑡 infinitely often, 𝑀𝑘0 also ∞-activates 𝑡. Consider some sequence 𝜏, starting at 𝑀𝑘0 , after which 𝑡 is activated. Then the same sequence 𝜏 is enabled at every 𝑀𝑘 𝑗 (for 𝑗 ≥ 0), because of the monotonicity of activation (Lemma 1.14) and because of (8.3). Let 𝑘 be the length of 𝜏. We just found that 𝑡 is 𝑘-activated infinitely often along 𝜎, but since 𝑡 occurs only finitely often in 𝜎, the latter is 𝑘-unfair towards 𝑡. This ends the proof. 8.11 For AC-systems, we even have the following:

190

8 Fairness, Simulations, and Inhibitor Arcs

Proposition 8.12 A converse of Proposition 8.10 for AC-systems Let 𝜎 be a firing sequence of an AC-system and let 𝑘 ∈ N. If 𝜎 is 𝑘-fair, then 𝜎 is also 𝑘+1-fair. Proof: By contradiction. We assume that 𝑘 ∈ N and that 𝑡1

𝑡2

𝑡3

𝜎 = 𝑀0 −→ 𝑀1 −→ 𝑀2 −→

...

is an infinite firing sequence which is 𝑘-fair but not (𝑘+1)-fair. From this assumption, a contradiction will be derived. By the non-(𝑘+1)-fairness of 𝜎, we find a transition 𝑡0 which is infinitely often (𝑘+1)-activated in 𝜎 but occurs there only finitely many times. Hence, and by the 𝑘-fairness of 𝜎, we find a suffix 𝜎0 of 𝜎 along which 𝑡 is never 𝑘-activated (and, in particular, never activated). By Theorem 8.2, we find a place 𝑠0 ∈ • 𝑡 0 which is never marked in some suffix b 𝜎0 of 𝜎0 . The place 𝑠0 can be used in a backtracking argument in order to construct a siphon which can be emptied of tokens. To this end, we consider any transition 𝑡1 ∈ • 𝑠0 . Since 𝑠0 is token-free in b 𝜎0 , 𝑡1 does not occur in b 𝜎0 . Again by 𝑘-fairness, there exists a suffix 𝜎1 of b 𝜎0 along which 𝑡1 is not activated. The argument can now be repeated, and it can be done so for every transition in • 𝑠0 . The suitable newly constructed suffixes may be far from the ones already constructed earlier, but nevertheless, they exist. b = 𝑀𝑗 Thus, there exists a siphon 𝐷 ≠ ∅ which is token-empty in some marking 𝑀 b occurring in 𝜎 (by choosing 𝑗 large enough); i.e., 𝑀 (𝐷) = 0. Moreover, 𝑡 0 is an outgoing transition of 𝐷, that is, 𝑡 0 ∈ 𝐷 • . By the basic property of siphons (Lemma 5.16), this implies 𝑀𝑙 (𝐷) = 0 for all 𝑙 ≥ 𝑗. Hence 𝑡0 cannot be fired (and not even ∞-activated) from all of these 𝑀𝑙 , contradicting the fact that 𝑡 0 is (𝑘+1)-activated infinitely often in 𝜎. 8.12 Observe that we may have a firing sequence which is, at the same time, (𝑘+1)-unfair and 𝑘-fair towards a given transition 𝑡. However, under the premises of Proposition 8.12, the proof implies that such a sequence is necessarily 𝑘-unfair towards some other transition 𝑢 ≠ 𝑡. Example 8.13 Figure 8.7 continued In the system Σ shown on the left-hand side of Figure 8.7, 0-fairness and 1-fairness differ from each other. By Proposition 8.12, Σ cannot be an ACsystem. Indeed, it displays the typical M-structure, since 𝑠2• ∩ 𝑠3• = {𝑡} ≠ ∅, but neither 𝑠2• ⊆ 𝑠3• nor 𝑠3• ⊆ 𝑠2• . Taken together, Propositions 8.11 and 8.12 imply that for AC-systems, the entire 𝑘-fairness hierarchy collapses to a single fairness notion, say that of 0-fairness.12 12 Which is by far the most widely used concept out of this hierarchy.

8.5 Weak fairness and finite delay

191

8.5 Weak fairness and finite delay Weak fairness arises from strong fairness by turning the universal quantifier in the definition of strong fairness (Definition 8.7) into an existential one: Definition 8.14 Weak fairness Let (𝑆, 𝑇, 𝐹) be a net with initial marking 𝑀0 . 𝑡3 𝑡1 𝑡2 𝑡4 A firing sequence 𝑀0 −→ 𝑀1 −→ 𝑀2 −→ 𝑀3 −→ . . . is weakly fair towards 𝑡 if (∃𝑖∞ : ¬(𝑀𝑖 activates 𝑡)) ∨ (∃𝑖∞ : (𝑡=𝑡𝑖 )) The defining property of weak fairness can be paraphrased as follows: if an infinite sequence has a suffix during which a transition 𝑡 is always enabled, then 𝑡 occurs infinitely often in the sequence. Weak fairness is weaker than strong fairness: Proposition 8.15 Strong fairness implies weak fairness A 0-fair (i.e., strongly fair) sequence is also weakly fair. Proof: The existence of a suffix during which 𝑡 is always enabled implies that 𝑡 is infinitely often enabled during that suffix. 8.15 In order to see the difference to strong fairness, reconsider the sequence 𝑎3

𝑎1

𝑎3

𝑎4

𝑎4

𝑎3

𝜎0 = 𝑀0 → {𝑠2 , 𝑠4 } → {𝑠2 , 𝑠3 } → {𝑠2 , 𝑠4 } → {𝑠2 , 𝑠3 } → {𝑠2 , 𝑠4 } → . . . in Σ (Figure 8.7). It is strongly unfair towards 𝑡. But it is weakly fair towards 𝑡 because it does not have a suffix during which 𝑡 is always enabled. However, 𝜎0 is weakly unfair towards 𝑎 2 . We finally turn to fairness notions related to the permanent enabling of transitions in an infinite firing sequence. We wish to capture the distinction between the sequences 𝑡0

𝑡0

𝑡0

𝑡0′

𝑡0′

𝑡0′

Σ0 :

𝜏0 = 𝑀0 −→ 𝑀0 −→ 𝑀0 −→

...

Σ0′ :

𝜏0′ = 𝑀0′ −→ 𝑀0′ −→ 𝑀0′ −→

...

in Figure 8.8 (which duplicates Σ0 and Σ0′ from Figure 8.7). In 𝜏0′ , transition 𝑡 ′ is permanently enabled, while in 𝜏0 , transition 𝑡 is always enabled, but not really permanently since in every marking, there is a conflict between 𝑡 and 𝑡0 . If 𝑡0 is considered to “take a tiny amount of time”, then there is always a tiny interval during which 𝑡 is not enabled. In 𝜏0′ , there is no such interval, even if 𝑡0′ is considered to “take a lot of time”. Apparently, we need to define some kind of maximality property for infinite sequences, by which 𝜏0 can be called “maximal” wheras 𝜏0′ is not “maximal”. The next definitions are designed to make this notion precise.

192

8 Fairness, Simulations, and Inhibitor Arcs 𝑡0

Σ0 :

𝑠0

𝑡

Σ0′ :

𝑡0′

𝑠0′

𝑠0′′

𝑡′

Fig. 8.8: Two examples illustrating the finite delay property.

Definition 8.16 Weak and strong transpositions Let 𝑁 be a plain net, 𝑀 a marking of 𝑁 and 𝑡, 𝑢 two transitions of 𝑁. Suppose that 𝑢𝑡 𝜎 = 𝑀 −→ 𝜌

=

𝑡𝑢

𝑀 −→

are two firing sequences starting at a marking 𝑀. Then 𝜌 and 𝜎 are called {𝑢,𝑡 }

weak transpositions of each other, starting from 𝑀. If 𝑀 −→ (i.e., 𝑀 even concurrently enables {𝑢, 𝑡}), then 𝜌 and 𝜎 are called strong transpositions of each other.a a

For the notion of concurrent enabling, recall Definition 1.19.

Example 8.17 Difference between weak and strong transpositions 𝑡0 𝑡 𝑡𝑡0 In Figure 8.8 (left-hand side), 𝑀0 −→ and 𝑀0 −→ are weak transpositions of each other. But they are not strong transpositions, since 𝑀0 satisfies {𝑡0′ ,𝑡 ′ }

{𝑡0 ,𝑡 }

𝑡0′ 𝑡 ′

¬(𝑀0 −→ ). On the right-hand side we have 𝑀0′ −→ , hence 𝑀0′ −→ and 𝑡 ′ 𝑡0′

𝑀0′ −→ are both weak and strong transpositions of each other. Definition 8.18 Permutation equivalence ≡ and comparability ⊑ Let 𝑁 be a plain net, 𝑀0 be a marking of 𝑁 and 𝑥1 , 𝑥2 , . . . , 𝑦 1 , 𝑦 2 , . . . be transitions of 𝑁. Suppose that 𝜎1 𝜎2

= =

𝑥1

𝑥2

𝑥3

𝑀0 −→ 𝑀1 −→ 𝑀2 −→ . . . 𝑦1 𝑦2 𝑦3 𝑀0 −→ 𝑀1 −→ 𝑀2 −→ . . .

are firing sequences. We define 𝜎1 ≡′ 𝜎2 if there is a finite number of strong transpositions from 𝜎1 to 𝜎2 , and 𝜎1 ≡ 𝜎2 (𝜎1 and 𝜎2 are permutation equivalent) if either 𝜎1 ≡′ 𝜎2 (if both sequences are finite), or (if both sequences are infinite) for every 𝑛 ≥ 0 there are 𝜎1′ , 𝜎2′ such that 𝜎1 ≡′ 𝜎1′ and 𝜎1′ and 𝜎2 agree on the prefix of length 𝑛, as well as 𝜎2 ≡′ 𝜎2′ and 𝜎2′ and 𝜎1 agree on the prefix of length 𝑛. Further, we define 𝜌 ⊑ 𝜎 if for every 𝑛 ≥ 0 there is 𝜎 ′ such that 𝜎 ≡′ 𝜎 ′ and 𝜌 and 𝜎 ′ agree on the prefix of length 𝑛.a a

Here, 𝑛∈N∪{∞} is assumed to be less than or equal to the number of transitions in 𝜌.

8.5 Weak fairness and finite delay

193

Example 8.19 Illustration of ≡ and ⊑ for finite sequences For instance, on the right-hand side of Figure 8.8, 𝑡0′

𝑡0′

𝑡′

𝑡′

𝑡0′

𝑡′

𝑡′

𝑡0′

𝑀0′ → 𝑀0′ → 𝑀0′ → 𝑀0′ → 𝑀0′ ≡ 𝑀0′ → 𝑀0′ → 𝑀0′ → 𝑀0′ → 𝑀0′ because there is the following sequence of strong transpositions: 𝑡0′

𝑡′

𝑡0′

𝑡′

𝑡′

𝑡0′

𝑡0′

𝑡′

𝑡′

𝑡0′

𝑡′

𝑡0′

𝑀0′ −→ 𝑀0′ −→ 𝑀0′ −→ 𝑀0′ −→ 𝑀0′ ≡ 𝑀0′ −→ 𝑀0′ −→ 𝑀0′ −→ 𝑀0′ −→ 𝑀0′ ≡ 𝑀0′ −→ 𝑀0′ −→ 𝑀0′ −→ 𝑀0′ −→ 𝑀0′ Also, 𝑡′

𝑡0′

𝑡′

𝑡0′

𝑡′

𝑀0′ −→ 𝑀0′ ⊑ 𝑀0′ −→ 𝑀0′ −→ 𝑀0′ but

𝑀0′ −→ 𝑀0′ @ 𝑀0′ −→ 𝑀0′ 𝜌

A possible definition of the maximality of a finite firing sequence 𝑀0 −→ is to require that it cannot be (strongly) permuted into a prefix of a longer firing sequence 𝜎 𝑀0 −→ . The following definition transports this idea to infinite firing sequences (which are also captured by Definition 8.18). Definition 8.20 Maximality of infinite firing sequences Let 𝑁 be a plain net, 𝑀0 be a marking of 𝑁 and 𝑥1 , 𝑥2 , . . . be transitions of 𝑁. An infinite firing sequence 𝜌

=

𝑥1

𝑥2

𝑥3

𝑀0 −→ 𝑀1 −→ 𝑀2 −→ . . .

is called maximal if for all infinite firing sequences 𝜎

=

𝑦1

𝑦2

𝑦3

𝑀0 −→ 𝑀1 −→ 𝑀2 −→ . . .

with transitions 𝑦 1 , 𝑦 2 , . . . of 𝑁: if 𝜌 ⊑ 𝜎 then 𝜌 ≡ 𝜎. Finally, the finite delay property is defined as follows. Definition 8.21 The finite delay (or progress) property Let (𝑆, 𝑇, 𝐹) be a net with initial marking 𝑀0 . 𝑡3 𝑡1 𝑡2 𝑡4 An infinite firing sequence 𝑀0 −→ 𝑀1 −→ 𝑀2 −→ 𝑀3 −→ . . . satisfies the finite delay property, or the progress property, if it is maximal.

194

8 Fairness, Simulations, and Inhibitor Arcs

Example 8.22 Application of Definition 8.21 to the running example For instance, reconsider 𝑡0

𝑡0

𝑡0

𝑡0′

𝑡0′

𝑡0′

𝜏0 = 𝑀0 −→ 𝑀0 −→ 𝑀0 −→

...

and 𝜏0′ = 𝑀0′ −→ 𝑀0′ −→ 𝑀0′ −→

...

in Figure 8.8. 𝜏0′ does not satisfy the finite delay property because there are “larger” infinite firing sequences 𝜎 with 𝜏0′ ⊑ 𝜎 and yet 𝜏0′ . 𝜎, for instance 𝑡′

𝑡0′

𝑡0′

𝜎 = 𝑀0′ −→ 𝑀0′ −→ 𝑀0′ −→

...

To see that 𝜏0′ ⊑ 𝜎, observe that the 𝑡 ′ in 𝜎 can be shifted arbitrarily far to the right by suitable strong transpositions. To see that 𝜏0′ . 𝜎, observe that 𝑡 ′ occurs in 𝜎 but not in 𝜏0′ . No amount of strong transpositions could turn 𝜏0′ into 𝜎 or vice versa. By contrast, 𝜏0 satisfies the finite delay property because no firing sequence 𝜎 of this kind can be found. If 𝑡 ever occurred in such a 𝜎, it could not be strongly transposed with any of its neighbours.

8.6 Inhibitor arcs and zero tests In this section, we describe a Turing-powerful extension of the Petri net model of computation. We will now allow a new kind of arc from a place to a transition, implying that such a transition is enabled (only) if the place has no tokens. Such an arc describes a test for zero (namely, a test whether the token count on the place equals 0). We shall show that in such an extended model, the conclusion of Proposition 8.11 is wrong, meaning that ∞-fairness discriminates more than the conjunction of all 𝑘-fairness notions for 𝑘 ∈ N. Definition 8.23 I-nets: Petri nets with inhibitor arcs An I-net is a 4-tuple (𝑆, 𝑇, 𝐹, 𝐺), such that (𝑆, 𝑇, 𝐹) is a plain Petri net and 𝐺 ⊆ 𝑆 × 𝑇 are the inhibitor arcs (we shall assume that, if (𝑠, 𝑡) ∈ 𝐺, then 𝐹 (𝑠, 𝑡) = 0). The transition rule is modified as follows. A transition 𝑡 ∈ 𝑇 is enabled (or activated) at a marking 𝑀 if and only if: ∀𝑠 ∈ 𝐹 −1 (𝑡) : 𝑀 (𝑠) ≥ 𝐹 (𝑠, 𝑡) ∧ ∀𝑠 ∈ 𝐺 −1 (𝑡) : 𝑀 (𝑠) = 0 Firing of an enabled transition is defined as before.

(as before) (new)

8.6 Inhibitor arcs and zero tests

195

𝑠

𝑡

An inhibitor arc (𝑠, 𝑡) ∈ 𝐺 is represented graphically as follows: I-nets are Turing-powerful in the following sense: Theorem 8.24 Inhibitor nets are Turing-powerful Every prefix-closed Turing-computable language 𝐿 ⊆ Σ∗ can be generated by an initially marked, finite inhibitor net. The restriction to prefix-closed languages is not essential. If final states are permitted, then this restriction can be omitted. Proof: (Idea.) It is essential to allow potentially unbounded I-nets, since we can then simulate machines using unbounded counters, and such machines are known to be as powerful as Turing machines. Counter machines use increment and decrement operations (i.e., addition and subtraction of 1), as well as zero tests. The increment and decrement operations can easily be simulated by ordinary Petri net structures,13 while the test for zero has just been allowed in I-nets. 8.24 For bounded I-nets, we have the following: Proposition 8.25 Simulating bounded I-nets Bounded I-nets can be simulated by ordinary Petri nets without inhibitor arcs. Proof: (Sketch.) We use boundedness in order to insert complementary places for all places (unless already present). The complementary place of a place 𝑝 with bound 𝐵 is defined as a place 𝑝 such that 𝑀0 ( 𝑝) + 𝑀0 ( 𝑝) = 𝐵 and the arrow between a transition 𝑡 and 𝑝 are exactly opposite to the arc between 𝑡 and 𝑝. The test for 0 with respect to 𝑝 is replaced by a test for 𝐵 with regard to 𝑝. 8.25 The net created by this construction may be non-plain even if the original I-net is plain. Example 8.26 Figure 8.9 Let us assume that a bound for 𝑝 on the left-hand side of Figure 8.9 is 𝐵 = 2, that is, 2 is the smallest number 𝐵 for which 𝑝 is 𝐵-bounded. The net shown in the middle of Figure 8.9 arises by adding the complementary place 𝑝 (ignoring the inhibitor arc). Note that 𝑀0 ( 𝑝) + 𝑀0 ( 𝑝) = 𝐵 = 2. Observe that, by construction, 𝑝 and 𝑝 form an S-invariant and that the sum of tokens on this S-invariant is, and remains through any firing, equal to 2. In the net shown on the right-hand side of Figure 8.9, the zero test, by 𝑡, for 𝑝 has been replaced by a 2-test for 𝑝. This 2-test is realised by a loop around 𝑡 and 𝑝 with arc weights 2.

13 See also Chapter 10 below.

196

8 Fairness, Simulations, and Inhibitor Arcs

This construction is easily generalised. For example, if the exact bound of 𝑝 were 3, then we would put two tokens on 𝑝 and use a 3-cycle.

𝑝 {

𝑝

𝑝 {

𝑝

𝑝

2 2

𝑡

𝑡

Fig. 8.9: A bounded I-system, and its transformation into an ordinary Petri net. The tight bound of 𝑝 is assumed to be 2. Finally, an example is shown in Figure 8.10 in order to demonstrate that the statement of Proposition 8.11 is not valid for I-nets. First of all, we stress that Definition 8.7 of 𝑘-fairness is the same for I-nets, since it depends only on the notion of a transition being (𝑘-)enabled, and this can be defined for I-nets in the same way as for ordinary Petri nets. 𝑥

3 𝑦

𝑡 𝑝

Fig. 8.10: Counterexample for the statement of Proposition 8.11 in terms of I-nets. In Figure 8.10, the firing sequence (𝑥𝑦) ∞ , that is, more precisely, 𝑥

𝑦

𝑥

𝑦

𝑥

𝑦

𝜏 = ( 𝑝=1) → ( 𝑝=3) → ( 𝑝=2) → ( 𝑝=4) → ( 𝑝=3) → ( 𝑝=5) → . . . is 𝑘-fair towards 𝑡, for every arbitrary 𝑘 ∈ N. This is because the token count on 𝑝 eventually exceeds 𝑘, and from that point onwards, 𝑡 is not 𝑘-enabled any more. It is also fair towards 𝑥 and 𝑦, since both occur infinitely often in it. However, 𝜏 is not ∞-fair towards 𝑡, because no matter how many tokens have been assembled on 𝑝, transition 𝑡 can always be re-enabled by firing 𝑦 sufficiently often. Such an example cannot be realised by ordinary Petri nets without inhibitor arcs and zero tests. It can, of course, be realised by programs where variables may have infinite value sets.

8.7 Bibliographical remarks and further reading

197

8.7 Bibliographical remarks and further reading The distinction between safety properties and liveness properties in the context of parallel program verification goes back to Leslie Lamport [Lam77]. Implicitly, one finds the distinction also in [Dij76] (by Edsger W. Dijkstra) where invariants, to prove correctness (“bad things” cannot happen), were contrasted with so-called variants, that is, functions into the set N whose values decrease with every loop iteration, to prove termination (a “good thing” that is supposed to happen eventually). Asymmetric choice nets originate from work by Carl Adam Petri (who has called the M-structure symmetric confusion [Pet67]) and Michel H.T. Hack (who defined “simple nets” – corresponding to the SPL-nets defined in Definition 8.6 – in [Hac72]). Other related classes and some discussion of them can be found in [BS83] (Eike Best and Michael W. Shields) and, more recently, in [GGS09] (Rob J. van Glabbeek, Ursula Goltz and Jens-Wolfhard Schicke-Uffmann). Symmetric confusion is discussed in connection with the distributability of activities in [GGS13] (R.J. van Glabbeek, U. Goltz and J.-W. Schicke-Uffmann). A Hasse diagram is a partial order where the order relations stemming from transitivity are not shown.14 The simulation relation in Section 8.3 has been taken from [Bes86] (Eike Best). It is related to the notion of (weak) (bi-)simulation defined for labelled transition systems. For bisimulation, compare [Mil89] by Robin Milner.15 For (weak) simulation, see, e.g., [Rot02] (Jan Rothe).16 In the context of labelled transition systems with explicit “silent transitions” 𝜏 (the letter “𝜏” to denote silent transitions was actually introduced by Robin Milner), weak simulation relates to simulation the same way as weak bisimulation relates to bisimulation. The simulation relation defined in Section 8.3 differs, however, in the sense that 𝜏 is implicit and 𝛽 is required to be a surjective function, rather than merely a relation. It is thus a fairly strong notion of simulation. This is underlined by the fact that if two nets simulate each other, then their reachability graphs are isomorphic.17 Exercises 11.4 to 11.6 in Chapter 11 can be consulted in order to sort out these various notions. Issues of fairness, divergence, and the proper distribution of resources, are pretty much ubiquitous in the realm of distributed communicating systems. They are referred to under a variety of different names. For instance, “starvation freedom” (meaning something similar to “fairness”) denotes the fact that there is no such action like 𝑡, situated between two processes as in Figure 8.7, such that 𝑡 could forever 14 See https://en.wikipedia.org/wiki/Hasse_diagram. 15 Also: https://en.wikipedia.org/wiki/Bisimulation. 16 Also: https://en.wikipedia.org/wiki/Simulation_(computer_science). 17 To see this, one needs the axiom of choice (see https://en.wikipedia.org/wiki/Axiom_ of_choice) and the Cantor-Dedekind-Schröder-Bernstein (CDSB) Theorem (see https://en. wikipedia.org/wiki/Schroeder-Bernstein_theorem). The axiom of choice is needed in order to construct an injection from 𝐵 to 𝐴 if there is a surjection from 𝐴 to 𝐵, and the CDSB theorem is needed in order to construct a bijection between 𝐴 and 𝐵 if there are two injections, one from 𝐴 to 𝐵 and one from 𝐵 to 𝐴.

198

8 Fairness, Simulations, and Inhibitor Arcs

be denied the resources it needs.18 A divergent, internal, i.e., “silent”, loop, as in 𝑁3 in Figure 8.3, is also called a “livelock”, or a “busy wait” (if it occurs in connection with some activity which cannot momentarily be taken), or “polling” (if it occurs when messages are expected), or a “spinlock” (when used, for instance, for acquiring data base locks).19 A seminal paper on fairness is [LPS81] (by Daniel Lehmann, Amir Pnueli, Jonathan Stavi) in which three notions of fairness are defined (impartiality, justice and fairness) which correspond to weak fairness, the finite delay property, and strong fairness. The notions of 𝑘-fairness are from [Bes84] (Eike Best). ∞-fairness is also known as superfairness [Fle92] (Hans Fleischhack) or hyper-fairness [Lam00] (Leslie Lamport). The finite delay property has been called “progress assumption” in [Rei96]. Fairness is studied in depth in the monograph [Fra86] by Nissim Francez. There is also a more recent survey paper [GH19] by Rob van Glabbeek and Peter Höfner, introducing other related notions and classifying the various forms of fairness in terms of strength and applicability. Early references to inhibitor arcs are [Age74] (Tilak Agerwala), [Hac75b] (Michel H.T. Hack), and [Pet77] (James L. Peterson) where it is shown – using counter machines, for which see, e.g., [Min67] (Marvin Minsky) – that inhibitor nets are Turing-powerful (Theorem 8.24). See also [JK95] (Ryszard Janicki and Maciej Koutny). Definition 8.21 (finite delay) is actually a firing sequence realisation of an idea due to Wolfgang Reisig. In [Rei96], he observed that partial order semantics allows an elegant, intuitive definition of the progress property (compare also [KW97], by Ekkart Kindler and Rolf Walter). To see this, reconsider Σ0′ in Figure 8.11. The acyclic net 𝜋1 describes a partially ordered evolution of Σ0′ where the token on 𝑠0′′ is left in place while the token on 𝑠0′ is continually moved by firing 𝑡0′ . 𝜋1 is not maximal because it can be extended by an occurrence of 𝑡 ′ . Doing this (independently of the upper line in 𝜋1 ) yields 𝜋2 , which is as maximal as can possibly be. No linearisation of 𝜋1 satisfies the finite delay property, while every linearisation of 𝜋2 does.20 Every (i.e., the only) linearisation of 𝜋1 is ⊑ (but not ≡) every linearisation of 𝜋2 . Definitions 8.16 to 8.21 express this idea in terms of firing sequences, using the machinery and especially the relation ≡ taken from from [BD87] (Eike Best and Raymond Devillers). For safe Petri nets, it is proved in [BD87] that two firing sequences (whether finite or infinite) are ≡-related if and only if they are linearisations of a (unique) partial order.

18 Compare https://en.wikipedia.org/wiki/Starvation_(computer_science). 19 Compare https://en.wikipedia.org/wiki/Deadlock, https://en.wikipedia.org/ wiki/Busy_waiting and and https://en.wikipedia.org/wiki/Spinlock. 20 A linearisation is a firing sequence whose transition ordering conforms to the partial ordering. The only linearisation of 𝜋1 is 𝑡0′ 𝑡0′ 𝑡0′ . . . while there are many linearisations of 𝜋2 , such as 𝑡 ′ 𝑡0′ 𝑡0′ 𝑡0′ . . . or 𝑡0′ 𝑡 ′ 𝑡0′ 𝑡0′ . . .; in the terminology of Chapter 2, 𝑡 ′ can be shuffled arbitrarily with 𝑡0′ 𝑡0′ 𝑡0′ . . .

8.8 Exercises

Σ0′ :

𝑠0′

199 𝑡0′

𝜋1 :

𝑡′

𝑠0′′

𝑠0′

𝑡0′

𝑠0′

𝑡0′ ···

𝜋2 :

𝑠0′

𝑠0′

𝑡0′ ···

𝑠0′′

𝑠0′′

𝑡0′

𝑡′

Fig. 8.11: Partial order semantics can explain finite delay by maximality.

8.8 Exercises Exercise 8.1 Fair sequences Construct a marked Petri net 𝑁 in which all firing sequences are 0-fair, but not all firing sequences are 1-fair. Hint: the net does not have to be live. Exercise 8.2 Inhibitor arcs, and fairness Consider the Petri nets 𝑁1 and 𝑁2 shown in Figure 8.12. Note that 𝑁2 has an inhibitor arc. Answer the following questions and substantiate your answers. In this exercise, we consider only infinite firing sequences. a) For which transitions 𝑡 (in 𝑁1 and in 𝑁2 ) do there exist firing sequences which are 0-unfair towards 𝑡? If such sequences exist, specify them. b) Are there any firing sequences (in 𝑁1 and in 𝑁2 ) which are 0-fair towards 𝑡 3 and 1-unfair towards 𝑡 3 ? c) Are there any firing sequences (in 𝑁1 and in 𝑁2 ) which are 1-fair towards 𝑡 3 and 2-unfair towards 𝑡 3 ? d) In 𝑁2 , is there a firing sequence which contains 𝑡 3 only finitely often, but is still ∞-fair?

𝑠1 𝑠1

𝑠2

𝑠3

𝑡3

𝑡4

𝑠2

𝑠3

𝑡3

𝑡4

𝑡1

𝑁2 :

𝑡1

𝑁1 :

𝑡2

𝑡2

𝑠4

Fig. 8.12: Petri nets to Exercise 8.2.

Exercise 8.3 Transitivity of simulation Show that simulation is transitive.

𝑠4

200

8 Fairness, Simulations, and Inhibitor Arcs

Solution 8.1 (Exercise 8.1) See Figure 8.13. The right-hand side shows a simple solution. On the left-hand side, the idea is to construct a net with two cycles (𝑠1 /𝑠5 and 𝑠4 /𝑠7 ), subject to two constraints. First constraint: If two tokens are simultaneously on 𝑠5 and 𝑠7 , then the net may be made dead by a non-1-fair transition 𝑡. If 𝑡 is activated, the firing sequence cannot be extended; hence no 0-unfair sequences are produced. Non-1-fairness towards 𝑡 materialises if it happens continually that a token is on 𝑠5 (but not 𝑠7 ) or on 𝑠7 (but not 𝑠5 ), and this is made possible by the two cycles. Second constraint: The cycles have to be synchronised such as to prevent one of them to be executed indefinitely without the other one being executed, since this would create a 0-unfair firing sequence. This motivates the role played by the places 𝑠2 and 𝑠3 and, in particular, the asymmetric distribution of only one token on them. Initially, we may fire 𝑡3 and 𝑡6 , and then 𝑡 5 , by which the two outer cycles reproduce their initial states, and by which the inner token has migrated from 𝑠2 to 𝑠3 . After that, symmetrically, 𝑡1 and 𝑡4 can be fired, and then 𝑡 2 , reproducing the initial marking precisely, and so on, in lockstep fashion. We do not have to use the inner token. Alternatively, 𝑡 1 and 𝑡6 can be fired, enabling 𝑡. There are no other firing sequences. In other words, there is only one infinite firing sequence which is 0-fair, but not 1-fair, towards 𝑡 (and, of course, 0-fair towards all other transitions).

𝑡

𝑠5

𝑠6

𝑠7

𝑡 2

𝑡1

𝑡3

𝑡2

𝑡5

𝑡4

𝑡6 𝑡1

𝑡2 2

𝑠1

𝑠2

𝑠3

𝑠4

Fig. 8.13: Two solutions of Exercise 8.1.

8.8 Exercises

201

Solution 8.2 (Exercise 8.2) a) In 𝑁1 : 𝑡2 𝑡1 (𝑡 4 𝑡 2 ) ∞ is 0-unfair with respect to 𝑡 1 and 𝑡 3 , and (𝑡 2 𝑡 1 𝑡 3 ) ∞ is 0-unfair with respect to 𝑡 4 . No firing sequence is 0-unfair with respect to 𝑡 2 , since the removal of 𝑡2 leads to a cycle-free net in which all transitions have one or more input places; hence there are no infinite firing sequences without 𝑡 2 . In 𝑁2 : 𝑡2 𝑡1 (𝑡 4 𝑡 2 ) ∞ is 0-unfair with respect to both 𝑡1 and 𝑡 3 (even though, compared to the above, 𝑡3 can fire in the other “half” of states). There are no firing sequences which are 0-unfair with respect to 𝑡 2 or 𝑡4 . For 𝑡 2 , the same reasoning as above is applicable. Hence 𝑡2 must occur infinitely often in a 0-unfair sequence with respect to 𝑡 4 . After the last occurrence of 𝑡4 and the first subsequent 𝑡 2 , place 𝑠3 can never again be emptied. Hence 𝑡 3 is no longer firable and no new tokens appear on 𝑠4 , contradicting the infinity of 𝑡 2 -occurrences. b) In 𝑁1 : Yes: (𝑡2 𝑡 4 ) ∞ . After each occurrence of 𝑡 2 , a token is on 𝑠3 , so that the firing of 𝑡 1 would activate 𝑡 3 . The sequence is 0-fair, but not 1-fair. In 𝑁2 : Yes: (𝑡2 𝑡4 ) ∞ . In this case, after each occurrence of 𝑡 4 , no token is on 𝑠3 , so that the firing of 𝑡1 would activate 𝑡3 . c) In 𝑁1 : No. The only true cycle without 𝑡3 arises from 𝑡2 and 𝑡4 . But the firing of (𝑡2 𝑡 4 ) ∞ keeps putting tokens on 𝑠3 . Hence a token on 𝑠2 suffices to activate 𝑡3 ; such a token can be obtained by firing 𝑡1 . Hence a 1-fair sequence towards 𝑡 3 has to contain 𝑡3 infinitely often. This implies that the sequence is not 2-unfair. In 𝑁2 : Yes: 𝑡2 𝑡 1 𝑡 4 𝑡 3 𝑡 2 (𝑡2 𝑡4 ) ∞ . From the bracket “(” onwards, there is always at least one token on 𝑠3 , whereas 𝑠2 is empty. Hence (at least) both 𝑡1 and 𝑡 4 have to fire – after some 𝑡 4 – in order to activate 𝑡3 . But neither 𝑡1 nor 𝑡4 alone suffice to activate 𝑡3 . This means that the sequence is 1-fair and 2-unfair. d) No. If the sequence contains 𝑡 3 finitely often, it has to contain 𝑡 2 and 𝑡4 infinitely often. This implies that after the last 𝑡 3 and the subsequent 𝑡 2 , 𝑠1 and 𝑠2 can never be emptied together. Hence at that point, it suffices to fire 𝑡 1 at most once and 𝑡4 as many times as there are tokens on 𝑠3 (𝑡4 needs only tokens from 𝑠3 ). After that, 𝑡 3 is activated. This means that 𝑡3 is ∞-activated in every reachable state. Hence the sequence cannot be ∞-fair. Solution 8.3 (Exercise 8.3) Suppose that 𝑁 ′′ = (𝑆 ′′ , 𝑇 ′′ , 𝐹 ′′ , 𝑀0′′ ) simulates 𝑁 ′ = (𝑆 ′ , 𝑇 ′ , 𝐹 ′ , 𝑀0′ ) via 𝑓 ′ and 𝑁 ′ simulates 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0 ) via 𝑓 . Define 𝑔 = ( 𝑓 ◦ 𝑓 ′ ) (relational composition). To be shown: 𝑁 ′′ simulates 𝑁 via 𝑔. Two times Definition 8.4 means that there is a suitable surjective function 𝛽 : [𝑀0′ ⟩ → [𝑀0 ⟩ and a suitable surjective function 𝛽′ : [𝑀0′′⟩ → [𝑀0′⟩. Define 𝛾 = (𝛽′ ◦ 𝛽). (i) 𝑀0 = 𝛾(𝑀0′′ ), since, by 𝑀0 = 𝛽(𝑀0′ ) and 𝑀0′ = 𝛽′ (𝑀0′′ ), 𝑀0 = 𝛽(𝛽′ (𝑀0′′ )) = (𝛽′ ◦ 𝛽)(𝑀0′′ )

202

8 Fairness, Simulations, and Inhibitor Arcs

(ii) Suppose 𝑀1 = 𝛾(𝑀1′′ ) with 𝑀1 ∈ [𝑀0⟩ and 𝑀1′′ ∈ [𝑀0′′⟩. 𝑀1 = 𝛾(𝑀1′′ ) implies that 𝑀1 = 𝛽(𝑀1′ ) with 𝑀1′ = 𝛽′ (𝑀1′′ ). 𝑡

𝛼

(a): 𝑀1 −→ 𝑀2 implies that ∃𝑀2′ ∈ 𝛽 −1 (𝑀2 ) ∃𝛼 ∈ (𝑇 ′ ) ∗ : 𝑀1′ −→ 𝑀2′ such that 𝛼 = 𝑥1 . . . 𝑥 𝑘 𝑢𝑥 𝑘+1 . . . 𝑥 𝑚 , with all 𝑥 𝑗 being silent in 𝑁 ′ and 𝑢 = 𝑓 (𝑡). Hence there is also a suitable 𝛼′ = 𝛼1 , . . . , 𝛼 𝑘 , 𝜂, 𝛼 𝑘+1 , . . . , 𝛼𝑚 ∈ (𝑇 ′′ ) ∗ , leading from 𝑀1′′ to 𝑀2′′ ∈ 𝛽′−1 (𝑀2′ ), such that all 𝛼 𝑗 are silent in 𝑁 ′′ , except for 𝑓 ′ (𝑥 𝑗 ), and 𝜂 = 𝜂1 𝑢 ′ 𝜂2 with 𝑢 ′ = 𝑓 ′ (𝑢) and 𝜂1 , 𝜂2 silent in 𝑁 ′′ . By 𝑓 ′−1 , every 𝛼 𝑗 and 𝜂𝑖 is reduced to 𝑥 𝑗 and 𝜀, respectively, in 𝑁 ′ , and by 𝑓 −1 , both 𝑥1 . . . 𝑥 𝑘 and 𝑥 𝑘+1 . . . 𝑥 𝑚 are reduced to 𝜀 in 𝑁. By 𝑀2′′ ∈ 𝛽′−1 (𝑀2′ ) and 𝑀2′ ∈ 𝛽 −1 (𝑀2 ), 𝑀2′′ ∈ 𝛾 −1 (𝑀2 ). By 𝑢 ′ = 𝑓 ′ ( 𝑓 (𝑡)), the claim follows, as far as (a) is concerned. 𝑓 ′−1 ( 𝛼′ )

𝛼′

(b): 𝑀1′′ −→ 𝑀2′′ with 𝛼′ ∈ (𝑇 ′′ ) ∗ implies that 𝑀1′ −−−−−→ −−− 𝛽′ (𝑀2′′ ) and then 𝑓 −1 ( 𝑓 ′−1 ( 𝛼′ ) )

also that 𝑀1

−−−−−→ −−−

𝑔 −1 ( 𝛼′ )

𝛽(𝛽′ (𝑀2′′ )), hence 𝑀1 −−−−−→ −−− 𝛾(𝑀2′′ ).

Chapter 9

Unfoldings and Reachability Checking

Abstract An unfolding of a Petri net describes the net’s behaviour in a way that differs from its reachability graph. While a reachability graph has firing sequences as paths and describes reachable markings as nodes, and may be cyclic, an unfolding is always acyclic and describes firing sequences as linearisations of, and reachable markings as cuts through, a partial order. Causal dependencies can be detected in an unfolding more explicitly than in a reachability graph. Unfoldings allow polynomialtime reachability checks and are conducive to the application of various partial order specific verification techniques.

Contents 9.1 9.2 9.3 9.4 9.5 9.6 9.7 9.8 9.9 9.10

Introduction . . . . . . . . . . . . . . . . . . An example . . . . . . . . . . . . . . . . . . Occurrence nets, and the maximal unfolding Finite and canonical prefixes . . . . . . . . . Verification methods based on the prefix . . . A linear-algebraic method . . . . . . . . . . A logic-oriented method . . . . . . . . . . . An “on-the-fly” method . . . . . . . . . . . Bibliographical remarks and further reading . Exercises . . . . . . . . . . . . . . . . . . .

. . . . . . . . . .

. . . . . . . . . .

. . . . . . . . . .

. . . . . . . . . .

. . . . . . . . . .

. . . . . . . . . .

. . . . . . . . . .

. . . . . . . . . .

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 E. Best and R. Devillers, Petri Net Primer, Computer Science Foundations and Applied Logic, https://doi.org/10.1007/978-3-031-48278-6_9

. . . . . . . . . .

. . . . . . . . . .

. . . . . . . . . .

. . . . . . . . . .

204 204 205 209 215 217 220 221 222 224

203

204

9 Unfoldings and Reachability Checking

9.1 Introduction In the previous chapter, the absence of a partial order semantics was regretfully felt.1 This chapter redresses the balance to some extent. An unfolding of a marked Petri net 𝑁 is a single, partially ordered, object describing the full behaviour of 𝑁. In an unfolding, (tokens on) places are treated as local states, and transition occurrences as local events, as opposed to the reachability graph where states are global and transitions modify those global states. In that respect, unfoldings are quite close to Petri’s original ideas about the locality of states and transitions that have been spelt out at the very beginning of this text (Section 1.1). Formally, an unfolding of 𝑁 is a separate, labelled, acyclic Petri net 𝑂 whose labelling refers back to 𝑁. In order to distinguish the places of 𝑁 from the places of 𝑂, the latter are called conditions.2 To distinguish the transitions of 𝑁 from the transitions of 𝑂, the latter are called events. This chapter is divided into two parts. In the first part, unfoldings are defined and examined. In the second part, unfoldings are put to use for the verification of reachability properties.

9.2 An example

𝑁:

𝑠1

𝛽1 : 𝑠1

𝑠2

𝑠2

𝑡1

𝑡2

𝑡3

𝑡2 𝑠3

𝑠5

𝑠4

𝑡3

𝑡1 𝑠4

𝑡5

𝑡4

𝑠3

𝑠5 𝑠6

𝑠7

𝑠6

𝑠7

𝑡6

𝑡7

𝑡6

𝑡7

𝑠1

𝑠2

𝑠1

𝑠2

𝑡5 𝑡4

𝑡6 𝑠6

𝑡7 𝑠7

Fig. 9.1: A safe Petri net 𝑁 and a small unfolding 𝛽1 .

1 See the brief discussion about the progress property in Section 8.7. 2 This is surely, again, a bit of a misnomer. But this terminology is somewhat established. We will make sure not to mix this up with other types of conditions.

9.3 Occurrence nets, and the maximal unfolding

205

Figure 9.1 depicts an unfolding 𝛽1 of the marked Petri net 𝑁: it essentially describes a finite (but we shall also allow infinite unfoldings), “unrolled” behaviour of the net, starting with its initial marking and creating a new condition whenever a token enters (or returns to) a place. For instance, there are duplicate conditions labelled 𝑠6 and 𝑠7 . The pair on the left-hand side arises from firing 𝑡4 , the other pair from firing 𝑡5 . Similarly, a new event is created for every transition occurrence of the original net. For instance, there are duplicate events labelled 𝑡6 and 𝑡 7 . They stem from different histories in the unfolding, depending on the choice between 𝑡 1 and 𝑡2 made earlier. Example 9.1 Absence of loops in unfoldings After firing 𝑡1 𝑡 3 𝑡 4 𝑡 6 in Figure 9.1, a new token is created on place 𝑠1 . The unfolding describes this by a new condition labelled 𝑠1 , rather than by a new token. In a larger unfolding, this condition could give rise to a new event labelled 𝑡1 , and/or to a new event labelled 𝑡 2 . In this way, cycles never occur in an unfolding. Choices in the net are treated in such a “disjoint” way that in the unfolding, there is never a condition with more than one incoming arc. Formally, this particular kind of unrolling is described by a special type of Petri net called occurrence net, and by a function from the occurrence net to an underlying net by means of an inscription function. In this chapter, the focus is on Petri nets which are transition-regular, that is, by definition, every transition has at least one preplace (and also one postplace, although this is not so important). Figure 9.2 depicts a larger – in fact, a maximal – unfolding of the net shown in 9.1. The dots indicate that this unfolding is infinite. Example 9.2 Figure 8.11 of Section 8.7 revisited 𝜋1 and 𝜋2 describe unfoldings of the Petri net Σ0′ shown in Figure 8.11. Intuitively speaking, 𝜋2 is maximal while 𝜋1 is not. Below, the notion of maximality will be defined formally. Both 𝜋1 and 𝜋2 are infinite.

9.3 Occurrence nets, and the maximal unfolding So far, we have often been concerned with Petri nets which are strongly connected as graphs. In such nets, it is not easy to ascribe any special significance to directed paths, since paths may lead from everywhere to anywhere. Even if, say, two paths depart at a place and come together at a transition, not very much can be said about the net’s behaviour in general. It could, for instance, still be perfectly live and safe. In an acyclic Petri net, by contrast, the following relations are more meaningful.

206

9 Unfoldings and Reachability Checking 𝑠1

𝛽2 :

𝑠2

𝑡1 𝑒1

𝑡3 𝑒3

𝑒2 𝑡2 𝑠3

𝑠5

𝑠4 𝑡5 𝑒5

𝑡4 𝑒4 𝑠6

𝑠7

𝑡6 𝑒6

𝑡7 𝑒7

𝑠1

𝑠2

𝑡1 𝑒10

𝑒11 𝑡2

𝑠3

𝑠6

.. .

𝑒8 𝑡6

𝑒9 𝑡7

𝑠1

𝑠2

𝑠5

𝑒15 𝑡3

𝑠4

𝑠5

𝑡5 𝑒19

𝑠6

𝑠7 .. .

𝑒14 𝑡2

𝑠3 𝑡4 𝑒18

𝑡5 𝑒17 𝑠6

𝑠7 .. .

𝑠7

𝑡1 𝑒13

𝑒12 𝑡3

𝑠4

𝑡4 𝑒16

𝑠6

.. .

.. .

𝑠6

𝑠7 .. .

.. .

𝑠7 .. .

Fig. 9.2: The maximal unfolding 𝛽2 = 𝛽max of the net 𝑁 shown in Figure 9.1. The colours serve to illustrate the notions of configurations, cuts, ≈mark , and ◁card , which will be defined formally later. The local configuration [𝑒 18 ] of 𝑒 18 is shown in blue. It can be serialised, e.g., by 𝑒 2 𝑒 3 𝑒 5 𝑒 8 𝑒 9 𝑒 13 𝑒 15 𝑒 18 . A random cut is shown in red. It can be reached, e.g., after 𝑒 1 𝑒 3 𝑒 4 𝑒 6 𝑒 7 𝑒 11 . 𝐶𝑢𝑡 ([𝑒 18 ]) is shown in green. Intuitively, ≈mark identifies configurations after which the same marking is reached, and ◁card orders configurations by cardinality. For example, [𝑒 5 ] ≈mark [𝑒 18 ] and [𝑒 16 ] ≈mark [𝑒 18 ]. Also, [𝑒 11 ] ◁card [𝑒 18 ]; but not [𝑒 16 ] ◁card [𝑒 18 ] (since they have the same size and ◁card is irreflexive).

Definition 9.3 Causality, conflict, and parallelism relations Let (𝑆, 𝑇, 𝐹) be a (possibly infinite) plain net and let 𝑥1 , 𝑥2 ∈ 𝑆 ∪ 𝑇. • Causality: 𝑥1 ≺𝑥2 if there is a nonempty directed path from 𝑥1 to 𝑥2 . • Conflict: 𝑥1 #𝑥 2 , if there are a place 𝑠 and two directed paths 𝑠 𝑡1 . . . 𝑥1 and 𝑠 𝑡2 . . . 𝑥2 with 𝑡1 ≠ 𝑡2 . • Parallelism: 𝑥1 𝑐𝑜 𝑥 2 if neither 𝑥1 ≺ 𝑥 2 nor 𝑥2 ≺ 𝑥1 nor 𝑥1 #𝑥2 . A net is acyclic if ≺ is a partial order.

9.3 Occurrence nets, and the maximal unfolding

Example 9.4 Examples of ≺, 𝑐𝑜, and # relationships in Figure 9.2 In Figure 9.2, the events have been given names. Here is a sample of relationships according to Definition 9.3: 𝑒 1 𝑐𝑜 𝑒 3 , 𝑒 2 𝑐𝑜 𝑒 3 , 𝑒 1 # 𝑒 2 , 𝑒 6 𝑐𝑜 𝑒 7 , 𝑒 6 𝑐𝑜 𝑒 12 , 𝑒 7 ≺ 𝑒 12 , 𝑒 10 𝑐𝑜 𝑒 12 , 𝑒 10 # 𝑒 17 , 𝑒 10 # 𝑒 2 , 𝑒 3 ≺ 𝑒 10 , 𝑒 9 # 𝑒 10 , 𝑒 19 # 𝑒 10 , etc. The two initial conditions in Figure 9.2 are in relation 𝑐𝑜 with each other. The condition labelled 𝑠6 after 𝑒 4 is in relation # with event 𝑒 18 . Remark 9.5 Properties of the relations defined in Definition 9.3 These relations have properties that can be expected for them: ⪯=≺ ∪id is transitive (i.e., 𝑥 ⪯ 𝑦 and 𝑦 ⪯ 𝑧 implies 𝑥 ⪯ 𝑧); # is symmetric (i.e., 𝑥 # 𝑦 implies 𝑦 #𝑥 ); 𝑐𝑜 is symmetric (i.e., 𝑥 𝑐𝑜 𝑦 implies 𝑦 𝑐𝑜 𝑥).

Definition 9.6 Occurrence nets An occurrence net (O-net, for short) is a – possibly infinite – plain net 𝑂 = (𝐵, 𝐸, 𝐺) with the following properties: • 𝑂 is acyclic. • 𝑂 is finitely preceded, i.e., by definition: for every 𝑥 ∈ 𝐵 ∪ 𝐸, the set {𝑦 ∈ 𝐵 ∪ 𝐸 | 𝑦 ≺ 𝑥} is finite. • Events have finite direct causes and effects, i.e., by definition, for all 𝑒 ∈ 𝐸, • 𝑒 and 𝑒 • are finite sets. • 𝑂 is conflict-consistent, i.e., there is no 𝑥 ∈ 𝐵 ∪ 𝐸 such that 𝑥#𝑥. • 𝑂 is merge-free i.e., all 𝑏 ∈ 𝐵 satisfy | • 𝑏| ≤ 1. Places in 𝐵 are called conditions while transitions in 𝐸 are called events. Example 9.7 Figures 9.1(right-hand side) and 9.2 Both the net on the right-hand side of Figure 9.1 and the net shown in Figure 9.2 are O-nets. Note that, since each transition both has pre-places and postplaces, their ≺-minimal and ≺-maximal elements are conditions. The net 𝑁 shown on the left-hand side of Figure 9.1 is not an O-net. In fact, all O-net properties except finite precededness are violated.

Remark 9.8 More properties of the causality relations In an acyclic net, ≺ is antisymmetric (i.e., 𝑥 ≺ 𝑦 implies 𝑦 ⊀ 𝑥). Even more strongly, in an O-net, the relations are exhaustive: for any 𝑥, 𝑦, either 𝑥 ≺ 𝑦 or 𝑦 ≺ 𝑥 or 𝑥 # 𝑦 or 𝑥 𝑐𝑜 𝑦, but not more than one of them.

207

208

9 Unfoldings and Reachability Checking

Definition 9.9 Unfoldings of a net An unfolding (also called a branching process) of an initially marked net 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0 ) is a tuple 𝛽 = (𝑂, 𝑝) where 𝑂 = (𝐵, 𝐸, 𝐺) is an O-net and 𝑝 : 𝐵 ∪ 𝐸 → 𝑆 ∪ 𝑇 is an inscription satisfying the following properties: • 𝑝 respects node types: 𝑝(𝐵) ⊆ 𝑆 and 𝑝(𝐸) ⊆ 𝑇. • 𝑝 maps the start of 𝑂 to the initial marking 𝑀0 of 𝑁: 𝑝| min(𝑂) is a bijection between min(𝑂) and 𝑀0 . • 𝑝 respects the environments of transitions: for every 𝑒 ∈ 𝐸, 𝑝| • 𝑒 is a bijection between • 𝑒 (in 𝑂) and • 𝑝(𝑒) (in 𝑁), and 𝑝| 𝑒• is a bijection between 𝑒 • (in 𝑂) and 𝑝(𝑒) • (in 𝑁). • 𝛽 does not duplicate transitions: for all 𝑒 1 , 𝑒 2 ∈ 𝐸, if • 𝑒 1 = • 𝑒 2 and 𝑝(𝑒 1 ) = 𝑝(𝑒 2 ) then 𝑒 1 = 𝑒 2 . A net can have many unfoldings. However, there is always a maximal one with respect to the prefix relation defined as follows. Definition 9.10 Prefix relation on unfoldings Let 𝛽′ = (𝑂 ′ , 𝑝 ′ ) and 𝛽 = (𝑂, 𝑝) be two unfoldings of a marked net. 𝛽′ is called a prefix of 𝛽 if 𝑂 ′ is a subnet of 𝑂 having the following properties: • min(𝑂) = min(𝑂 ′ ). • If a condition 𝑏 ∉ min(𝑂) belongs to 𝑂 ′ , its unique predecessor event 𝑒 ∈ • 𝑏 in 𝑂 also belongs to 𝑂 ′ . • If an event 𝑒 belongs to 𝑂 ′ , its predecessor conditions • 𝑒 and its successor conditions 𝑒 • in 𝑂 also belong to 𝑂 ′ . • 𝑝 ′ is the restriction of 𝑝 onto 𝑂 ′ .

Theorem 9.11 Existence and uniqueness of the maximal prefix Every marked net 𝑁 has a unique (up to isomorphism) maximal unfolding with respect to the prefix ordering. The maximal unfolding, which exists by Theorem 9.11, is called the maximal unfolding 𝛽max of 𝑁. Its canonical initial marking places one token on every condition of min(𝛽) (i.e., min(𝑂)) and zero tokens on all other conditions. Example 9.12 Figures 9.1 and 9.2 The unfolding 𝛽1 shown in Figure 9.1 is a prefix of the unfolding 𝛽2 shown in Figure 9.2. In fact, 𝛽2 is the maximal unfolding 𝛽max of the net 𝑁 shown in Figure 9.1. The reachable markings of 𝑁 and of 𝛽max are related as follows. • The reachable markings of 𝑁 are the set of images 𝑝(𝑀), where 𝑀 is a reachable marking of 𝛽max .

9.4 Finite and canonical prefixes

209

b and • For any reachable marking 𝑀 of 𝛽max and for any reachable marking 𝑀 ′ transition 𝑡 of 𝑁, there are a reachable marking 𝑀 and an event 𝑒 in 𝛽max with 𝑒 𝑡 b 𝑝(𝑒) = 𝑡 and 𝑀 −→ b 𝑝(𝑀 ′ ) = 𝑀, 𝑀 ′ in 𝛽max , if and only if 𝑝(𝑀) −→ 𝑀 in 𝑁. See Figure 9.3.

𝛽max

𝑁 𝑝 𝑝(𝑀)

𝑀 𝑝 𝑒

𝑡 𝑝 c 𝑀

𝑀′

Fig. 9.3: Relationship between a net 𝑁 and its unfolding 𝛽max .

9.4 Finite and canonical prefixes The maximal unfolding 𝛽max is not too interesting for algorithmic verifications, because it is usually infinite. However, 𝛽max has certain finite prefixes which are definitely interesting. Configurations, defined next, allow them to be defined. Intuitively speaking, a configuration corresponds to a set of events which can be “fired” – in some suitable order – from the initial cut min(𝑂).

9.4.1 Configurations, cutting contexts, and canonical prefixes A configuration is a partially ordered, conflict-free, subset of events. Definition 9.13 Configurations and cuts A configuration of an unfolding is a set 𝐶 ⊆ 𝐸 of events having the following properties: a) 𝐶 is backward closed, i.e., by definition, ∀𝑒 ∈ 𝐶∀𝑒 ′ ≺ 𝑒 : 𝑒 ′ ∈ 𝐶. b) 𝐶 is conflict-free, i.e., ∀𝑒, 𝑒 ′ ∈ 𝐶 : ¬(𝑒#𝑒 ′ ). The local configuration [𝑒] of an event 𝑒 ∈ 𝐸 is defined as the set of its predecessors (including itself), i.e., as {𝑒 ′ ∈ 𝐸 | 𝑒 ′ ⪯ 𝑒}.a A co-set is a set of elements of an unfolding which are pairwise in relation 𝑐𝑜. A cut is a maximal co-set with respect to ⊆. a

It is easy to prove that [𝑒] enjoys a) and b) from above, justifying the nomenclature.

210

9 Unfoldings and Reachability Checking

All local configurations are finite. This follows from König’s lemma 3.22, together with the finite precededness of O-nets and from the fact that events have finite causes. The set of finite configurations and of the set of local configurations of an unfolding 𝛽 will be denoted by Cfin (𝛽), and by Clocal (𝛽), respectively. There is a close relationship between finite configurations and cuts. Let 𝐶 be a finite configuration of an unfolding 𝛽 = (𝑂, 𝑝) of 𝑁. Then Cut(𝐶) = (min(𝑂) ∪ 𝐶 • ) \ • 𝐶 is a cut consisting only of conditions. It represents the marking 𝑝(Cut(𝐶)) which is denoted by Mark(𝐶). Intuitively, Mark(𝐶) is the marking of 𝑁 reached after firing the labels of events of 𝐶 in some order. In particular, if 𝐶 = ∅, then Cut(𝐶) corresponds to the initial marking min(𝑂). Let 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0 ) be a marked net and let 𝛽max = (𝑂, 𝑝) = (𝐵, 𝐸, 𝐺) be its maximal unfolding. The construction of a suitable prefix starts with the initial configuration min(𝛽max ) and proceeds by successively extending it with new events. Adding events shall be denoted by ⊕. More formally, suppose 𝐶 is a configuration and 𝐸 ′ is a set of events. Then 𝐶 ⊕ 𝐸 ′ is the new configuration 𝐶 ′ = 𝐶 ∪ 𝐸 ′ , provided that 𝐶 ∩ 𝐸 ′ = ∅ and 𝐶 ∪ 𝐸 ′ is indeed a configuration. 𝐸 ′ is called an extension set of 𝐶, and 𝐶 ′ is called an extension of 𝐶. “Suitable prefix” may mean different things to different people. This is taken into account here by parametrising extensions by a cutting context. A cutting context determines the point at which the extension will come to a stop. The resulting (as it turns out, finite) prefix is called canonical prefix, given the cutting context under consideration. Definition 9.14 Cutting context, adequate order A cutting context is a pair Θ = (≈, ◁ ) where ≈ an equivalence relation and ◁ is a strict, finitely preceded partial order, both on finite configurations Cfin (𝛽max ), such that the following holds: • ◁ refines the strict subset relation ⊂, i.e., 𝐶1 ⊂ 𝐶2 implies 𝐶1 ◁ 𝐶2 . • Finite extensions preserve both ≈ and ◁, i.e., for every pair of configurations 𝐶1 ≈ 𝐶2 and for every finite extension set 𝐸 1 of 𝐶1 , there is a finite extension set 𝐸 2 of 𝐶2 with a) 𝐶1 ⊕ 𝐸 1 ≈ 𝐶2 ⊕ 𝐸 2 , and b) if 𝐶2 ◁ 𝐶1 , then 𝐶2 ⊕ 𝐸 2 ◁ 𝐶1 ⊕ 𝐸 1 . A partial order ◁ is also known as an adequate order, if, together with some equivalence ≈, it provides a cutting context. In the literature, the following relation is often considered: 𝐶1 ≈mark 𝐶2 iff Mark(𝐶1 ) = Mark(𝐶2 )

9.4 Finite and canonical prefixes

211

That is, two configurations are ≈mark -equivalent if they represent the same marking of the underlying net. This relation is interesting because one does not desire the reachable markings of the underlying net to be repeated too often in a prefix. Historically, the first ◁ partial order considered for unfoldings is 𝐶1 ◁card 𝐶2 iff |𝐶1 | < |𝐶2 | which allows configurations to be ordered according to their sizes. Both ≈mark and ◁card are illustrated in Figure 9.2. It may be helpful to check that (≈mark , ◁card ) is indeed a cutting context. The first condition holds because every proper subset of a finite set has less elements than the set itself. Part a) of the second condition holds because 𝐶1 ∩ 𝐸 1 = ∅ implies that the transitions of 𝑝(𝐸 1 ) are firable from Mark(𝐶1 ), and hence, because of 𝐶1 ≈mark 𝐶2 , also from Mark(𝐶2 ). Therefore, 𝛽max contains an event set 𝐸 2 which corresponds (from 𝐶2 ) to the firing of 𝐶1 plus 𝐸 1 . Because firing sequences uniquely determine the successor marking, the same marking is reached via 𝐶1 ⊕ 𝐸 1 ≈ 𝐶2 ⊕ 𝐸 2 . Part b) follows because 𝐸 2 has exactly as many elements as 𝐸 1 . Now assume that some cutting context Θ = (≈, ◁) is fixed. In the next two definitions, the (unique) canonical prefix corresponding to Θ is defined. First, two event sets are defined: cut-off events CutEΘ and relevant events RelevEΘ . The latter are events none of whose predecessors is a cut-off event. The next definition looks circular, but we shall argue that this is not exactly the case. Definition 9.15 Cut-off events and relevant events An event 𝑒 belongs to RelevEΘ if ([𝑒] \ {𝑒}) ∩ CutEΘ = ∅. An event 𝑒 belongs to CutEΘ if 𝑒 belongs to RelevEΘ and if there is a configuration 𝐶 ∈ Clocal (𝛽max ) having the properties 𝐶 ≈ [𝑒], 𝐶 ⊆ RelevEΘ \ CutEΘ , and 𝐶 ◁ [𝑒] Intuitively speaking, 𝐶 • “behaves similarly as [𝑒]” (𝐶 ≈ [𝑒]); • “is relevant” (𝐶 ⊆ RelevEΘ \ CutEΘ ); • lies before [𝑒] in the adequate order (𝐶 ◁ [𝑒]). If this is the case, it turns out that 𝑒 and all of its “future” in 𝛽max is redundant. The apparent circularity of Definition 9.15 is not a problem because, due to finite precededness and due to ∀𝑒 1 , 𝑒 2 ∈ 𝐸 : 𝑒 1 ≺ 𝑒 2 ⇒ [𝑒 1 ] ◁ [𝑒 2 ]

212

9 Unfoldings and Reachability Checking

when considering an event 𝑒, it can be assumed that for every 𝑒 ′ ∈ [𝑒] \ {𝑒} it is already known whether 𝑒 ′ is in CutEΘ or in RelevEΘ . Moreover, because of ∀𝑒 1 , 𝑒 2 ∈ 𝐸 : 𝑒 1 ∈ 𝐶 ∧ 𝐶 ◁ [𝑒 2 ] ⇒ [𝑒 1 ] ◁ [𝑒 2 ] the same is true for all events 𝑒 ′ in all configurations 𝐶 with 𝐶 ◁ [𝑒]. Hence there is a unique solution to the circular definition. Proposition 9.16 Properties of CutEΘ and RelevEΘ Let 𝑒 be an event in 𝛽max . a) 𝑒 ∈ RelevEΘ ⇐⇒ ([𝑒] \{𝑒}) ⊆ RelevEΘ \CutEΘ . b) 𝑒 ∈ CutEΘ ⇒ 𝑒 ∈ RelevEΘ . Proposition 9.16(a⇒) implies that the set RelevEΘ is backward closed, i.e., if 𝑒 ∈ RelevEΘ then also 𝑒 ′ ∈ RelevEΘ for all 𝑒 ′ ≺ 𝑒. Definition 9.17 Canonical prefix relative to a cutting context Θ The canonical prefix 𝛽Θ of Θ in 𝛽max is the prefix induced by RelevEΘ .

Proposition 9.18 Properties of the canonical prefix 𝛽Θ a) Let 𝐶 ∈ Cfin (𝛽max ). Then there is a configuration 𝐶 ′ ∈ Cfin (𝛽Θ ) with 𝐶 ′ ∩ CutEΘ = ∅ and 𝐶 ′ ≈ 𝐶. b) Let 𝐶 ∈ Cfin (𝛽Θ ) with 𝐶 ∩ CutEΘ = ∅ and let 𝑒 be an event of 𝛽max with 𝐶 ⊕ {𝑒} ∈ Cfin (𝛽max ). Then 𝐶 ⊕ {𝑒} ∈ Cfin (𝛽Θ ). c) 𝛽Θ is a finite unfolding of 𝑁 if and only if 𝑁 is bounded. Parts a) and b) of this proposition indicate that “nothing is lost” if one considers the canonical instead of the maximal prefix. Part c) means that for finite-state systems, the canonical prefix is finite as well, which is relevant for verification. More precisely, a) means that for every finite configuration of the maximal unfolding, an ≈-equivalent configuration within the canonical prefix can be found (which even does not contain any cut-off events). As a consequence, all reachable markings of the underlying net 𝑁 are represented (up to ≈) in the canonical prefix. Part b) means that the canonical prefix “preserves firings”. More precisely, every event 𝑒 which can be fired from some configuration without cut-off events, is either a cut-off event or leads to another configuration of the prefix. Consequently, all transitions of the reachability graph of 𝑁 are also represented in the canonical prefix. Example 9.19 Figures 9.1 and 9.2 For the equivalence ≈mark (and, probably, for any other reasonable equivalence relation), the prefix shown on the right-hand side of Figure 9.1 is “too small” for the net shown on the left-hand side, independently of what ◁ looks like.

9.4 Finite and canonical prefixes

213

Indeed, the marking {𝑠3 , 𝑠7 } is reachable but not represented in the prefix (all the conditions labelled 𝑠7 are in conflict with the only condition labelled 𝑠3 ) This prefix is not canonical for any cutting context involving ≈mark , and Proposition 9.18a) does not apply. Consider, on the other hand, the prefix shown in Figure 9.2, neglecting the final dots. This prefix is finite and represents all reachable markings. However, in some sense, it is “too big”. Indeed, imagine {𝑒 5 , 𝑒 16 , 𝑒 17 } to be a set of cutoff events. Then events 𝑒 8 , 𝑒 9 , 𝑒 13 , 𝑒 14 , 𝑒 15 , 𝑒 18 , 𝑒 19 are not in the prefix, but still, both a) and b) of Proposition 9.18 are valid and all reachable markings remain represented. Symmetrically, a similar case can be made for cut-off events {𝑒 4 , 𝑒 18 , 𝑒 19 }. As a result of this, we shall be looking for an alternative definition of ◁ in such a way that one of these two alternatives are chosen in order to get a smaller prefix.

9.4.2 An algorithm producing the canonical prefix So far, the canonical prefix of a net 𝑁 has been defined to be a certain prefix of the maximal unfolding. However, it can also be defined iteratively. An unfolding 𝛽 of 𝑁 will be represented as a set {𝑘 1 , . . . , 𝑘 𝑛 } of nodes. A node is either a condition or an event. • A condition 𝑏 is coded by a pair (𝑥, 𝑠) where 𝑥 is a pointer to 𝑏’s incoming event if 𝑏 ∉ min(𝛽), or nil if 𝑏 ∈ min(𝛽), and 𝑠 is a place of 𝑁 (𝑏’s label). • An event 𝑒 is coded by a pair (𝑋, 𝑡) where 𝑋 is a list of pointers to its incoming conditions, and 𝑡 is a transition of 𝑁 (𝑒’s label). By this code, the labels and the structure of 𝛽 are represented at the same time by the set {𝑘 1 , . . . , 𝑘 𝑛 }. Let 𝑡 be a transition of 𝑁 with 𝑡 • = {𝑠1 , . . . , 𝑠 𝑚 } The pair 𝑒 = (𝑋, 𝑡), with 𝑋 being a suitable set of pointers, is called a potential extension of an unfolding {𝑘 1 , . . . , 𝑘 𝑛 } if the set {𝑘 1 , . . . , 𝑘 𝑛 , (𝑋, 𝑡), (𝑒, 𝑠1 ), . . . , (𝑒, 𝑠 𝑚 )} is also an unfolding. The set of potential extensions of 𝛽 is denoted by 𝑃𝐸 (𝛽). Using these notions, the following algorithm constructs the (finite) canonical prefix, given a (bounded) Petri net and a cutting context Θ=(≈, ◁):

214

9 Unfoldings and Reachability Checking

Algorithm 9.20 Prefix construction Input: a bounded marked net 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0 ) and a cutting context Θ=(≈, ◁); Output: the canonical prefix 𝛽 = 𝛽Θ . begin 𝛽 := the initial unfolding, i.e., conditions for 𝑀0 ; 𝑃𝐸 := 𝑃𝐸 (𝛽); CutOff := ∅; while 𝑃𝐸 ≠ ∅ do choose an event 𝑒 ∈ 𝑃𝐸 such that [𝑒] is minimal with respect to ◁; if [𝑒] ∩ CutOff = ∅ then add 𝑒 and new conditions representing 𝑝(𝑒) • to 𝛽; 𝑃𝐸 := 𝑃𝐸 (𝛽); if 𝑒 is a cut-off event of 𝛽 then CutOff := CutOff ∪ {𝑒} else 𝑃𝐸 := 𝑃𝐸 \ {𝑒} end “𝑒 is a cut-off event of 𝛽” means: there is a configuration 𝐶 ∈ Clocal (𝛽) with 𝐶 ≈ [𝑒] and 𝐶 ◁ [𝑒], such that the events of 𝐶 are non-cut-off events in 𝛽 The event 𝑒 in the last if command is chosen such that [𝑒] is ◁-minimal. This guarantees that none of the events that could be chosen are missed. Example 9.21 Figures 9.1 and 9.2, using Θ1 Choose Θ1 = (≈mark , ◁ card ). Proceeding in the order 𝑒 1 , 𝑒 2 , . . . as in Figure 9.2, the algorithm yields the prefix shown in Figure 9.2 without the final dots. The cut-off set is {𝑒 16 , 𝑒 17 , 𝑒 18 , 𝑒 19 }. Note that 𝑒 5 is not a cut-off event, even though there is a configuration produced earlier (namely, [𝑒 4 ]) which corresponds to the same marking as [𝑒 5 ]. However, it does not have strictly less elements than [𝑒 5 ]. Similarly, the newly produced local configurations of events 𝑒 6 to 𝑒 15 are not larger than already existing (≈mark -equivalent) ones, so that none of these events is a cut-off event. By contrast, the local configuration of 𝑒 16 does have an ◁-smaller one which has already been produced, namely [𝑒 4 ] (or also [𝑒 5 ]). The algorithm terminates after producing 𝑒 19 , yielding 𝛽Θ1 , because the set of potential extensions becomes empty. This example shows that the prefix produced by the algorithm may be “too large”. In fact, we already noticed that, and it seems not to be too grave a problem, judging from this simple example. However, there are very simple examples demonstrating the problem really impressively; see Figure 9.4. The canonical prefix of the net shown there is exponential in the net’s size; in fact, it is even exponential in the size of its reachability graph. This is hardly acceptable, which is why alternative ◁-orderings have been investigated.

9.5 Verification methods based on the prefix

𝑠0

215

𝑠𝑛

𝑠1 ···

𝑛 + 1 places

···

Fig. 9.4: 𝑛 choice places in sequence.

A better ◁ can be obtained as follows. Let the transitions of 𝑁 be ordered linearly (for instance, lexicographically) by ⊏. For an event set 𝐸 ′ , let 𝜙(𝐸 ′ ) denote the transition sequence in which transitions are ordered by ⊏ and in which every transition 𝑡 occurs |𝐸 ′ ∩ 𝑝 −1 (𝑡)| times. For example, assuming that 𝑡1 ⊏ 𝑡2 ⊏ . . . ⊏ 𝑡 7 , the set of events 𝐸 ′ = {𝑒 1 , 𝑒 3 , 𝑒 4 , 𝑒 6 , 𝑒 7 , 𝑒 10 , 𝑒 12 , 𝑒 16 } in Figure 9.2 yields 𝜙(𝐸 ′ ) = 𝑡1 𝑡1 𝑡3 𝑡3 𝑡 4 𝑡 4 𝑡 6 𝑡 7 . Define 𝐶1 ◁lex 𝐶2 iff (|𝐶1 | < |𝐶2 |) or |𝐶1 | = |𝐶2 | and 𝜙(𝐶1 ) lies lexicographically before 𝜙(𝐶2 ) ◁lex turns out to be adequate as well (see Exercise 9.3) and to suffer less from exponential explosion than ◁card . Example 9.22 Some applications of the algorithm, using Θ2 If Algorithm 9.20 is applied in the cutting context Θ2 = (≈mark , ◁lex ) on Figures 9.1 and 9.2, then after constructing the partial prefix [{𝑒 1 , 𝑒 2 , 𝑒 3 }] there is a choice between extending it by 𝑒 4 or by 𝑒 5 . The choose command will choose 𝑒 4 , since the sequence 𝜙([𝑒 4 ]) = 𝑡 1 𝑡3 𝑡4 lexicographically precedes 𝜙([𝑒 5 ]) = 𝑡2 𝑡3 𝑡5 . When it is 𝑒 5 ’s turn, the configuration {𝑒 1 , 𝑒 3 , 𝑒 4 } will be detected as a ◁lex -preceding one, so that 𝑒 5 is designated as a cut-off event straight away. The construction proceeds after 𝑒 4 , until events 𝑒 16 and 𝑒 17 are eventually determined as cut-off, terminating the construction. Thus, 𝛽Θ2 is actually smaller than 𝛽Θ1 . Similarly, Θ2 generates a linearly large prefix for Figure 9.4, instead of an exponentially large one.

9.5 Verification methods based on the prefix While a prefix may be quite a bit larger than the system whose behaviour it describes, it has the advantage of being acyclic. This allows various verification algorithms to be applied. First of all, acyclicity implies that the marking equation can be turned into a sufficient (as well as necessary) condition for reachability, as follows.

216

9 Unfoldings and Reachability Checking

In Chapter 4, it was proved that, with 𝜏 ∈ 𝑇 ∗ and 𝐶 being the incidence matrix of a net, 𝜏 𝑀0 −→ 𝑀 ⇒ 𝑀 = 𝑀0 + 𝐶 · P (𝜏) (9.1) with Parikh vector P (𝜏) : 𝑇 → N. Implication (9.1) simply states that in 𝑀, every place 𝑠 contains as many tokens as in 𝑀0 , plus the number of tokens added by 𝜏 minus the number of tokens taken away by 𝜏. The bad news is that the implication (9.1) cannot be reverted in general. The good news, however, is that it can, for acyclic (considered as graphs) nets. Lemma 9.23 Inverse of the marking equation Let (𝑆, 𝑇, 𝐹, 𝑀0 ) be an acyclic, initially marked net. Suppose there is a semipositive T-vector 𝑋 : 𝑇 → N with 𝑀0 + 𝐶 · 𝑋 ≥ 0 𝜏

Then there is a firing sequence 𝜏 with 𝑀0 −→ (𝑀0 + 𝐶 · 𝑋) and P (𝜏) = 𝑋. Proof: By induction over the sum of 𝑋’s entries. If 𝑋 is the null vector, choose 𝜏 = 𝜀. Suppose that 𝑋 contains at least one positive entry. Consider the set of transitions whose entries in 𝑋 are > 0. This set is nonempty. In this set, choose a transition, say 𝑡, which is ≺-minimal (i.e., 𝐹 + -minimal). This is possible because ≺ is acyclic. Consider any 𝑠 ∈ • 𝑡. The incidence matrix entry 𝐶 (𝑠, 𝑡) is negative (since otherwise, there would be a cycle between 𝑠 and 𝑡). It follows that (𝐶 · 𝑋)(𝑠) is negative, because there is no compensating positive entry 𝐶 (𝑠, 𝑡 ′ ) since all transitions 𝑡 ′ ∈ • 𝑠 satisfy 𝑡 ′ ≠ 𝑡 (by acyclicity) and 𝑋 (𝑡 ′ ) = 0 (by the ≺-minimality of 𝑡). Hence 𝑀0 (𝑠) ≥ 𝐹 (𝑠, 𝑡), by the semipositivity of (𝑀0 + 𝐶 · 𝑋)(𝑠). As this is true for every 𝑠 ∈ • 𝑡, 𝑡 is enabled by 𝑀0 . Let 𝑡 fire once, update 𝑋 by subtracting 1 from entry 𝑡, and apply the induction hypothesis. 9.23 In the remainder of this chapter, three verification methods using unfoldings are illustrated on a running example. Suppose that a partial state of some system is given and that it has to be verified that a state encompassing this partial state can be reached. Several verification problems3 can be formulated according to this pattern. For a safe Petri net, the following problem will be considered. Decision Problem 9.24 Partial marking reachability Input: A safe Petri net 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0 ) and a partial marking (𝑃0 , 𝑃1 ) with 𝑃0 , 𝑃1 ⊆ 𝑆. To decide: Is there a marking 𝑀 ∈ [𝑀0 ⟩ such that ∀𝑠 ∈ 𝑃0 : 𝑀 (𝑠) = 0 and ∀𝑠 ∈ 𝑃1 : 𝑀 (𝑠) = 1?

3 For instance, the problem of verifying a critical section property.

9.6 A linear-algebraic method

217

That is, we are looking for a reachable marking which puts zero tokens on places of 𝑃0 and one token on places in 𝑃1 , while places outside 𝑃0 ∪ 𝑃1 have either 0 or 1 tokens. All methods will be explained on the example shown in Figures 9.5 and 9.6. Figure 9.5 depicts a safe net 𝑁. Figure 9.6 shows its finite prefix 𝛽, using any of the cutting contexts Θ1 or Θ2 defined in the previous section.

𝑠3

𝑠1 𝑡2

𝑡4 𝑡3

𝑡1 𝑠2

𝑠5

𝑠4

𝑡5

𝑡6 𝑠6

𝑡new

Fig. 9.5: An initially marked Petri net 𝑁. The dashed transition 𝑡 𝑛𝑒𝑤 and its incoming arrows can be ignored until Section 9.8.

9.6 A linear-algebraic method The acyclicity of the prefix is exploited directly, using Lemma 9.23. Let 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0 ) be a net and let (𝑃0 , 𝑃1 ) be an instance of the partial marking decision problem. Let 𝛽 = (𝐵, 𝐸, 𝐺, 𝑝) be the canonical prefix with respect to a cutting context Θ = (≈mark , ◁), where ◁ is an arbitrary adequate order. Problem 9.24 can be solved linear-algebraically on 𝛽 as follows: Seek vectors 𝑀 ′ : 𝐵 → Z and 𝑋 : 𝐸 → Z with 𝑀 ′ ≥ 0 𝑋 ≥ 0 𝑀 ′ = 𝑀0′ + 𝐶 · 𝑋 (where 𝐶 is the incidence matrix of (𝐵, 𝐸, 𝐺) and 𝑀0′ is the marking corresponding to min(𝛽)) ′ 𝑀 (𝑏) = 0 for all conditions 𝑏 ∈ 𝐵 with 𝑝(𝑏) ∈ 𝑃0 Í ′ for all places 𝑠 ∈ 𝑃1 𝑏∈ 𝑝 −1 (𝑠) 𝑀 (𝑏) = 1 𝑋 (𝑒) = 0 for all events 𝑒 ∈ CutEΘ .

(1) (2) (3) (4) (5) (6)

(1) to (3) imply (together with Lemma 9.23) that 𝑀 ′ is reachable in 𝛽 by firing a sequence with Parikh vector 𝑋, hence that 𝑝(𝑀 ′ ) is reachable in 𝑁. (4) and (5) encode the decision problem (𝑃0 , 𝑃1 ), since (4) indicates that in 𝑝(𝑀 ′ ), 𝑃0 has no

218

9 Unfoldings and Reachability Checking

𝑠5

𝑏3

𝑠3

𝑏2

𝑠1

𝑏1

𝑒1 𝑡6

𝑏4

𝑠6

𝑒3 𝑡5

𝑒2 𝑡4

𝑏6

𝑠5

𝑏5

𝑠4

𝑒5 𝑡3

𝑒6 𝑡6

𝑏9

𝑠6

𝑒4 𝑡2

𝑏8

𝑠3

𝑒9 𝑡4

𝑒8 𝑡5

𝑏10

𝑠4

𝑒10 𝑡3

𝑏7

𝑠2

𝑒7 𝑡1

𝑏11

𝑠5

𝑒11 𝑡6

𝑏12

𝑠6

𝑒12 𝑡5

Fig. 9.6: A canonical prefix 𝛽 of 𝑁 corresponding to Figure 9.5. The dashed events are cut-off events and do not belong to the prefix. The dashed horizontal line can be ignored until Section 9.8.

9.6 A linear-algebraic method

219

tokens, and (5) means that 𝛽 contains exactly one marked condition for every place 𝑠 in 𝑃1 .4 Prefix construction guarantees that all markings can be reached without cut-off events (dashed in Figure 9.6) having to occur; hence (6). Example 9.25 Figures 9.5 and 9.6 Let us check whether a marking can be reached which satisfies (𝑃0 , 𝑃1 ) = ({𝑠1 , 𝑠3 , 𝑠5 }, {𝑠2 , 𝑠4 , 𝑠6 }). This leads to the following linear programming problem. The maximal set of events which possibly need to be fired is RelevEΘ less all cut-off events, which is {𝑒 1 , 𝑒 2 , 𝑒 4 , 𝑒 6 , 𝑒 9 , 𝑒 11 }. 𝑋 will be zero on {𝑒 3 , 𝑒 5 , 𝑒 7 , 𝑒 8 , 𝑒 10 , 𝑒 12 }. Hence we are searching for vectors 𝑀 ′ : {𝑏 1 , . . . , 𝑏 12 } → Z and 𝑋 : {𝑒 1 , . . . , 𝑒 12 } → Z satisfying (1): 𝑀 ′ ≥ 0 (2): 𝑋 ≥ 0 (3): 𝑀 ′ (𝑏 1 ) = 1−𝑋 (𝑒 4 ) 𝑀 ′ (𝑏 2 ) = 1−𝑋 (𝑒 2 ) ′ 𝑀 (𝑏 3 ) = 1−𝑋 (𝑒 1 ) 𝑀 ′ (𝑏 4 ) = 𝑋 (𝑒 1 )−𝑋 (𝑒 2 )−𝑋 (𝑒 3 ) ′ 𝑀 (𝑏 5 ) = 𝑋 (𝑒 2 )−𝑋 (𝑒 4 )−𝑋 (𝑒 5 ) 𝑀 ′ (𝑏 6 ) = 𝑋 (𝑒 2 )−𝑋 (𝑒 6 ) 𝑀 ′ (𝑏 7 ) = 𝑋 (𝑒 4 )−𝑋 (𝑒 7 ) 𝑀 ′ (𝑏 8 ) = 𝑋 (𝑒 4 )−𝑋 (𝑒 9 ) ′ 𝑀 (𝑏 9 ) = 𝑋 (𝑒 6 )−𝑋 (𝑒 8 )−𝑋 (𝑒 9 ) 𝑀 ′ (𝑏 10 ) = 𝑋 (𝑒 9 )−𝑋 (𝑒 10 ) 𝑀 ′ (𝑏 11 ) = 𝑋 (𝑒 9 )−𝑋 (𝑒 11 ) 𝑀 ′ (𝑏 12 ) = 𝑋 (𝑒 11 )−𝑋 (𝑒 12 ) (4): 𝑀 ′ (𝑏 1 ) = 0 𝑀 ′ (𝑏 3 ) = 0 𝑀 ′ (𝑏 8 ) = 0

𝑀 ′ (𝑏 2 ) = 0 𝑀 ′ (𝑏 6 ) = 0 𝑀 ′ (𝑏 11 ) = 0

(5): 𝑀 ′ (𝑏 7 ) = 1 𝑀 ′ (𝑏 5 )+𝑀 ′ (𝑏 10 ) = 1 ′ ′ ′ 𝑀 (𝑏 4 )+𝑀 (𝑏 9 )+𝑀 (𝑏 12 ) = 1 (6): 𝑋 (𝑒 3 ) = 𝑋 (𝑒 5 ) = 𝑋 (𝑒 7 ) = 𝑋 (𝑒 8 ) = 𝑋 (𝑒 10 ) = 𝑋 (𝑒 12 ) = 0 The vectors T 𝑏 1 𝑏 2 𝑏 3 𝑏 4 𝑏 5 𝑏 6 𝑏 7 𝑏 8 𝑏 9 𝑏 10 𝑏 11 𝑏 12 0 0 0 0 0 0 1 0 0 1 0 1  T 𝑒 1 𝑒 2 𝑒 3 𝑒 4 𝑒 5 𝑒 6 𝑒 7 𝑒 8 𝑒 9 𝑒 10 𝑒 11 𝑒 12 and 𝑋 = 1 1 0 1 0 1 0 0 1 0 1 0 𝑀′ =



corresponding to the sets 𝑀 ′ = {𝑏 7 , 𝑏 10 , 𝑏 12 } and 𝑋 = {𝑒 1 , 𝑒 2 , 𝑒 4 , 𝑒 6 , 𝑒 9 , 𝑒 11 } solve this linear system.a By linearising 𝑋 with respect to ≺, a firing sequence 𝜏 = 𝑡 6 𝑡 4 𝑡 6 𝑡 2 𝑡4 𝑡6 can be found through 𝑝. 𝜏 leads, in 𝑁, from 𝑀0 to a marking {𝑠2 , 𝑠4 , 𝑠6 } which satisfies the required property. a

The symbol T means transposition.

4 Actually, for the present case of safe nets, the “=” sign in (5) could be replaced by “≥”, since any two different conditions corresponding to the same place are in relation ≺ ∪ ≺ −1 .

220

9 Unfoldings and Reachability Checking

9.7 A logic-oriented method Problem 9.24 can also be reduced to a satisfiability problem for propositional logic (SAT). Acyclicity is exploited in the following way. Every condition of a prefix can hold at most once in some behaviour, and every event can occur at most once. Hence propositional variables, from which a propositional formula can be built, can meaningfully be associated with them. Let 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0 ) be a marked net, let (𝑃0 , 𝑃1 ) be an instance of Problem (9.24), and let 𝛽 = (𝐵, 𝐸, 𝐺, 𝑝) be a canonical prefix based on ≈mark and some adequate order ◁. A propositional formula is constructed as follows. (1) Every condition 𝑏 is also a propositional variable 𝑏. (2) Every non-cut-off event 𝑒 is a propositional variable 𝑒. (3) Every condition has a clause expressing it being marked with a token. For instance, in Figure 9.6, the clause (𝑏 4 ↔ (𝑒 1 ∧ ¬𝑒 2 )) describes that 𝑏 4 carries a token, since this is the case iff 𝑒 1 but not 𝑒 2 have been fired.5 (4) Causality between events is described by clauses. For instance, 𝑒 2 → 𝑒 1 signifies that a firing of 𝑒 2 is preceded by a firing of 𝑒 1 . (5) Finally, (𝑃0 , 𝑃1 ) is also described by clauses. For instance, suppose that 𝑠6 ∈ 𝑃1 . Then one of the clauses is 𝑏 4 ∨ 𝑏 9 ∨ 𝑏 12 , since there is a token on 𝑠6 (in 𝑁) if and only if there is a token on one of these conditions (in 𝛽). By Lemma 9.23, the partial marking (𝑃0 , 𝑃1 ) is reachable if and only if the propositional formula so defined is satisfiable. Example 9.26 Figures 9.5 and 9.6 again Let us check whether (𝑃0 , 𝑃1 ) = ({𝑠1 , 𝑠3 , 𝑠5 }, {𝑠2 , 𝑠4 , 𝑠6 }) is reachable. The following formula is obtained (connecting all clauses in (3), (4) and (5) by a logical ∧): (1): 𝑏 1 , . . . , 𝑏 12 (2): 𝑒 1 , 𝑒 2 , 𝑒 4 , 𝑒 6 , 𝑒 9 , 𝑒 11 (3): 𝑏 1 𝑏4 𝑏7 𝑏 10

↔ ↔ ↔ ↔

¬𝑒 4 𝑒 1 ∧ ¬𝑒 2 𝑒4 𝑒9

𝑏2 𝑏5 𝑏8 𝑏 11

↔ ¬𝑒 2 ↔ 𝑒 2 ∧ ¬𝑒 4 ↔ 𝑒 4 ∧ ¬𝑒 9 ↔ 𝑒 9 ∧ ¬𝑒 11

𝑏3 𝑏6 𝑏9 𝑏 12

↔ ↔ ↔ ↔

¬𝑒 1 𝑒 2 ∧ ¬𝑒 6 𝑒 6 ∧ ¬𝑒 9 𝑒 11

(4): 𝑒 2 → 𝑒 1 𝑒9 → 𝑒4 ∧ 𝑒6

𝑒4 → 𝑒2 𝑒 11 → 𝑒 9

𝑒6 → 𝑒2

(5): ¬𝑏 1 𝑏7

¬(𝑏 2 ∨ 𝑏 8 ) 𝑏 5 ∨ 𝑏 10

¬(𝑏 3 ∨ 𝑏 6 ∨ 𝑏 11 ) 𝑏 4 ∨ 𝑏 9 ∨ 𝑏 12

5 The event 𝑒3 cannot take this token away since, being a cut-off, it is not contained in the prefix.

9.8 An “on-the-fly” method

221

This formula is satisfiable, for instance by assigning 𝑏 7 = 𝑏 10 = 𝑏 12 = 1; all other 𝑏 = 0; and 𝑒 1 = 1, 𝑒 2 = 1, 𝑒 4 = 1, 𝑒 6 = 1, 𝑒 9 = 1, 𝑒 11 = 1. Therefore, the partial marking ({𝑠1 , 𝑠3 , 𝑠5 }, {𝑠2 , 𝑠4 , 𝑠6 }) is reachable in 𝑁.

9.8 An “on-the-fly” method This is pretty much an ad hoc procedure, by which the safeness of the net shown in Figure 9.5 is exploited. Let 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0 ) and (𝑃0 , 𝑃1 ) be an instance of Problem (9.24). By adding complementary places, it can be transformed into another decision problem having an empty set instead of 𝑃0 . The complement place 𝑠 of a (safe) place 𝑠 is defined by • 𝑠 = 𝑠• , 𝑠 • = • 𝑠, and 𝑀0 (𝑠) = 1 − 𝑀0 (𝑠) Thus, {𝑠, 𝑠} is an S-invariant, and it is always the case that 𝑠 carries a token iff 𝑠 does not. Hence instead of (𝑃0 , 𝑃1 ), the equivalent problem (∅, 𝑃1 ∪ 𝑃0 ) may be considered. Example 9.27 Figure 9.5 𝑠1 = 𝑠2 , 𝑠3 = 𝑠4 , 𝑠5 = 𝑠6 , {𝑠1 , 𝑠3 , 𝑠5 } = {𝑠2 , 𝑠4 , 𝑠6 }. Hence the problem ({𝑠1 , 𝑠3 , 𝑠5 }, {𝑠2 , 𝑠4 , 𝑠6 }) is equivalent with (∅, {𝑠2 , 𝑠4 , 𝑠6 }). In order to solve the original problem, a transition 𝑡 𝑛𝑒𝑤 with preset 𝑃1 ∪ 𝑃0 may be added (shown by dashed lines in Figure 9.5). If an event labelled by 𝑡 𝑛𝑒𝑤 appears in the prefix 𝛽 (i.e., occurs in the set 𝑃𝐸 (𝛽) at some point of the algorithm which constructs 𝛽 based on ≈mark ), the answer is “yes”, since 𝑡 𝑛𝑒𝑤 can then be fired at some reachable marking; otherwise, the answer is “no”. Example 9.28 Figure 9.6 Whenever the prefix has been constructed up to the dashed horizontal line, the decision algorithm stops because an event with preset {𝑏 7 , 𝑏 8 , 𝑏 9 } and with 𝑝({𝑏 7 , 𝑏 8 , 𝑏 9 }) = {𝑠2 , 𝑠3 , 𝑠6 } can then be inserted. Observe that the “on-the-fly” algorithm does not necessarily construct the full prefix. This can in some cases be advantageous, for instance if some prior bound on the size of configurations to be checked is known, or if there is a strong suspicion that the decision problem has a positive answer. On the other hand, this algorithm may require the construction of the prefix individually for every problem. By contrast, the other methods are happy with constructing the prefix once and for all, after which arbitrarily many decision checks on this single prefix can be performed.

222

9 Unfoldings and Reachability Checking

9.9 Bibliographical remarks and further reading Sections 9.3 and 9.4 on unfoldings, cutting contexts, and prefixes, follow the presentation in [KKV03] (Victor Khomenko, Maciej Koutny, Walter Vogler), where proofs of Propositions 9.16 and 9.18 can be found. Sections 9.5 to 9.8 on unfolding-based verification methods have been guided by [ME96] (Stefan Melzer, Javier Esparza), [EM00] (Javier Esparza, Stefan Melzer), [ES01] (Javier Esparza, Claus Schröter), and [SE05] (Stefan Schwoon, Javier Esparza). Theorem 9.11 is from [Eng91] (Joost Engelfriet). Diverse verification techniques are based on unfoldings. For instance, [MKS09] (Roland Meyer, Victor Khomenko, Tim Strazny) report a dramatic improvement of the analysis of pi-calculus systems [MPW92] (Robin Milner, Joachim Parrow, David Walker), when compared with the mobility workbench.6 Being able to check reachability in polynomial time sounds good, but it should be kept in mind that the prefix on which such a check is performed can be very large. There have, therefore, also been various research efforts aiming to bring down the size of the prefix as much as possible; see, e.g., [Kho+05] (Victor Khomenko, Alex Kondratyev, Maciej Koutny, Walter Vogler). The seminal paper on unfoldings for verification is [McM92] (Ken L. McMillan) in which the cutting context (≈mark , ◁card ) has been put forward. [ERV96] (Javier Esparza, Stefan Römer, Walter Vogler) contains an example showing that without the ◁ device, a prefix which is too small can arise. This example is reproduced in Figure 9.7 below. The monograph on unfoldings [EH08], written by Javier Esparza and Keijo Heljanko, contains, amongst other things, proofs of the properties listed in Remarks 9.5 and 9.8, as well as more context about adequate (e.g., lexicographical) orderings. A couple of tools incorporating fast unfolding constructions are maintained by Stefan Schwoon7 [SSE03] (Claus Schröter, Stefan Schwoon, Javier Esparza) and Victor Khomenko.8 The logic-oriented verification method is attractive in combination with efficient SAT solvers.9 In theory, none of the methods described above is essentially faster than another one, and all of them tend to be slow if the prefix is very large. Unfoldings are not the only way of defining a partial order semantics of Petri nets. Another possibility is to abstract from the conditions between events and to concentrate on the (causality, concurrency, conflict) relations between events. This approach is pursued, for instance, in [HKT95] (by P.W. Hoogers, H.C.M. Kleijn, and P.S. Thiagarajan), and in [NPW79] (by Mogens Nielsen, Gordon D. Plotkin, and Glynn Winskel). 6 http://www.it.uu.se/research/group/mobility/mwb (Link verified September 2022). 7 http://www.lsv.fr/~schwoon/tools/mole/ (Link verified September 2022). 8 http://homepages.cs.ncl.ac.uk/victor.khomenko/home.formal/tools/tools. html (Link verified September 2022). 9 https://en.wikipedia.org/wiki/SAT_solver (Link verified September 2022).

9.9 Bibliographical remarks and further reading

223

In [Vog92] (by Walter Vogler), several early approaches to partial order semantics are compared. The recent monograph [Jan+22] (by Ryszard Janicki, Jetty Kleijn, Maciej Koutny, and Łukacs Mikulski) contains a thorough discussion of various forms of (partial order) semantics and their interrelationships with step semantics and firing sequence semantics. Historically, partial order semantics – not just relating to Petri nets – has many roots and goes by different names, such as traces (Antoni Mazurkiewicz, [Maz84]) and pomsets (partially ordered multisets) (Vaughan R. Pratt, [Pra86]). Another early model is that of event structures, by which a distributed execution is represented as a partially ordered set of events equipped with conflict and causality (as in Definition 9.3) satisfying constraints akin to Definition 9.6 (compare [Win86], by Glynn Winskel).

𝑠1

𝑠1

𝑡1

𝑡2

𝑡1

𝑠2

𝑠3

𝑠4

𝑠5

𝑡3

𝑡4

𝑡5

𝑡6

𝑠6

𝑠7

𝑠8

𝑠9

𝑡8

𝑡7 𝑠10

𝑠11

1

2

𝑠3

𝑠2

3

𝑠6

𝑡7 𝑠10

7

𝑡3

4

𝑠8

𝑠7

𝑡8 𝑠11

10

𝑡2

𝑠5

𝑠4

𝑡5

5

𝑠9

𝑡4

𝑠6

8

6

𝑠7

𝑡7 𝑠10

𝑡6

𝑠9

𝑠8

9

𝑡8 𝑠11

𝑡9

𝑠12

Fig. 9.7: A Petri net (left-hand side) and an unfolding based solely on the ≈mark equivalence (right-hand side). The marking {𝑠12 } can be reached in the net, but is not reflected in the unfolding. The unfolding algorithm 9.20 creates events in the inscribed order. If any adequate order relation ◁ is ignored, the greyed events 8 and 10 are designated as cut-offs, since the markings {𝑠10 , 𝑠7 , 𝑠9 } and {𝑠11 , 𝑠6 , 𝑠8 } created by them have already been created by events 7 and 9, respectively. However, if some adequate order ◁ is taken into account, the unfolding shown on the right-hand side is just a proper prefix of a canonical prefix. For example, with the order ◁card , 8 is not a cut-off event.

224

9 Unfoldings and Reachability Checking

9.10 Exercises Exercise 9.1 Unfoldings Consider the Petri net shown in Figure 9.8. a) Unfold this Petri net maximally. Why is your unfolding finite? b) Determine all local configurations [𝑒] and cuts 𝐶𝑢𝑡 ([𝑒]) of your unfolding.

𝑠1

𝑠3

𝑠2

𝑡1

𝑡2

𝑡3

𝑠6

𝑠5

𝑠4

𝑡4

𝑠7

Fig. 9.8: A Petri net to Exercise 9.1. Exercise 9.2 McMillan prefix Consider the Petri net shown in Figure 9.9. Using Algorithm 9.20, compute the canonical prefix of the maximal unfolding with respect to the cutting context Θ = (≈mark , ◁card ) In each step, give the set PE of potential extensions and the set CutOff of cut-off events. Also, indicate in which steps the new conditions and events are created.

𝑡3

𝑡1

𝑠1

𝑠2

𝑠4 𝑠3

𝑡2

𝑠5

𝑡4

Fig. 9.9: A Petri net to Exercise 9.2. Exercise 9.3 Adequate order Prove that ◁ lex is an adequate order with respect to the equivalence ≈mark .

9.10 Exercises

225

Solution 9.1 (Exercise 9.1) a): See Figure 9.10. The unfolding is finite because the net deadlocks (in more than one possible ways). b): Local configurations: Cuts: [𝑒 1 ] = {𝑒 1 } 𝐶𝑢𝑡 ([𝑒 1 ]) = {𝑏 3 , 𝑏 4 , 𝑏 5 , 𝑏 6 } [𝑒 2 ] = {𝑒 2 } 𝐶𝑢𝑡 ([𝑒 2 ]) = {𝑏 1 , 𝑏 2 , 𝑏 5 , 𝑏 7 } [𝑒 3 ] = {𝑒 3 } 𝐶𝑢𝑡 ([𝑒 3 ]) = {𝑏 1 , 𝑏 2 , 𝑏 4 , 𝑏 8 } [𝑒 4 ] = {𝑒 1 , 𝑒 2 , 𝑒 4 } 𝐶𝑢𝑡 ([𝑒 4 ]) = {𝑏 5 , 𝑏 9 , 𝑏 10 } [𝑒 5 ] = {𝑒 1 , 𝑒 3 , 𝑒 5 } 𝐶𝑢𝑡 ([𝑒 5 ]) = {𝑏 4 , 𝑏 11 , 𝑏 12 } [𝑒 6 ] = {𝑒 1 , 𝑒 2 , 𝑒 4 , 𝑒 6 } 𝐶𝑢𝑡 ([𝑒 6 ]) = {𝑏 9 , 𝑏 13 } [𝑒 7 ] = {𝑒 1 , 𝑒 3 , 𝑒 5 , 𝑒 7 } 𝐶𝑢𝑡 ([𝑒 7 ]) = {𝑏 11 , 𝑏 14 }

𝑠1

𝑠3

𝑠2

𝑏1

𝑡1 𝑒1

𝑏6

𝑏7

𝑒3 𝑡4

𝑠7

𝑏8

𝑠2

𝑠7

𝑒5 𝑡2

𝑡2 𝑒4

𝑏9

𝑏5

𝑏4

𝑡3 𝑒2

𝑠6

𝑠5

𝑠4

𝑏3

𝑏2

𝑏10

𝑠3

𝑏11

𝑠2

𝑠3

𝑏12

𝑒6 𝑡4

𝑠7

Fig. 9.10: Solution of Exercise 9.1(a).

𝑏13 𝑠7

𝑒7 𝑡3

𝑏14

226

9 Unfoldings and Reachability Checking

Solution 9.2 (Exercise 9.2) Step 0: Create three conditions 𝑏 1 , 𝑏 2 , 𝑏 3 with 𝑝(𝑏 1 ) = 𝑠1 , 𝑝(𝑏 2 ) = 𝑠3 , 𝑝(𝑏 3 ) = 𝑠5 . Step 1: PE = {({𝑏 1 , 𝑏 2 }, 𝑡1 ), ({𝑏 2 , 𝑏 3 }, 𝑡3 )} CutOff = ∅. Both events are minimal with respect to ◁card Choose 𝑒 1 = ({𝑏 1 , 𝑏 2 }, 𝑡1 ). Add 𝑏 4 = (𝑒 1 , 𝑠2 ). Step 2: PE = {({𝑏 4 }, 𝑡2 ), ({𝑏 2 , 𝑏 3 }, 𝑡3 )} CutOff = ∅. [({𝑏 2 , 𝑏 3 }, 𝑡3 )] = {({𝑏 2 , 𝑏 3 }, 𝑡3 )} ◁card [({𝑏 4 }, 𝑡2 )] = {𝑒 1 , ({𝑏 4 }, 𝑡2 )} Choose 𝑒 2 = ({𝑏 2 , 𝑏 3 }, 𝑡3 ). Add 𝑏 5 = (𝑒 2 , 𝑠4 ). Step 3: PE = {({𝑏 4 }, 𝑡2 ), ({𝑏 5 }, 𝑡4 )} CutOff = ∅. [({𝑏 4 }, 𝑡2 )] and [({𝑏 5 }, 𝑡4 )] have the same cardinality. Choose 𝑒 3 = ({𝑏 4 }, 𝑡2 ). Add 𝑏 6 = (𝑒 3 , 𝑠1 ) and 𝑏 7 = (𝑒 3 , 𝑠3 ). 𝑒 3 ∈ CutOff since [𝑒 3 ] ≈mark ∅ and [𝑒 3 ] ▷card ∅. Step 4: PE = {({𝑏 5 }, 𝑡4 ), ({𝑏 6 , 𝑏 7 }, 𝑡1 )} CutOff = {𝑒 3 }. [({𝑏 5 }, 𝑡4 )] = {𝑒 2 , ({𝑏 5 }, 𝑡4 )} ◁card [({𝑏 6 , 𝑏 7 }, 𝑡1 )] = {𝑒 1 , 𝑒 3 , ({𝑏 6 , 𝑏 7 }, 𝑡1 )} Choose 𝑒 4 = ({𝑏 5 }, 𝑡4 ). Add 𝑏 8 = (𝑒 4 , 𝑠3 ) and 𝑏 9 = (𝑒 4 , 𝑠5 ). 𝑒 4 ∈ CutOff since [𝑒 4 ] ≈mark ∅ and [𝑒 4 ] ▷card ∅. Step 5: PE = {({𝑏 6 , 𝑏 7 }, 𝑡1 ), ({𝑏 8 , 𝑏 9 }, 𝑡3 )} CutOff = {𝑒 3 , 𝑒 4 }. For both 𝑒 = ({𝑏 6 , 𝑏 7 }, 𝑡1 ) and 𝑒 = ({𝑏 8 , 𝑏 9 }, 𝑡3 ), [𝑒] ∩ CutOff ≠ ∅. Remove both from PE. Step 6: End of execution, since PE = ∅. See Figure 9.11 for the prefix created after these steps. The order of generating events is by increasing indices. All reachable markings (there are only three of them) are represented by cuts in the prefix. Solution 9.3 (Exercise 9.3) • ◁lex refines ⊂: If 𝐶1 ⊂ 𝐶2 then |𝐶1 | < |𝐶2 |, hence 𝐶1 ◁lex 𝐶2 . • Finite extensions preserve ≈mark and ◁lex : Let 𝐶1 ≈mark 𝐶2 and let 𝐸 1 be a finite extension of 𝐶1 . To be shown: there is a finite extension set 𝐸 2 of 𝐶2 such that (i) 𝐶1 ⊕ 𝐸 1 ≈mark 𝐶2 ⊕ 𝐸 2 (ii) 𝐶2 ◁lex 𝐶1 implies 𝐶2 ⊕ 𝐸 2 ◁lex 𝐶1 ⊕ 𝐸 1 . (i): Since 𝐶1 ∩ 𝐸 1 = ∅, 𝑝(𝐸 1 ) transitions are firable from Mark(𝐶1 ), and hence, by 𝐶1 ≈mark 𝐶2 , also from Mark(𝐶2 ). Hence there is a finite event set 𝐸 2 such that 𝐶2 ∩ 𝐸 2 = ∅ and 𝐶2 ∪ 𝐸 2 is a configuration. Moreover, 𝐸 2 can be chosen such that |𝐸 1 | = |𝐸 2 | and 𝑝(𝐸 1 ) = 𝑝(𝐸 2 ); this is possible by Mark(𝐶1 ) = Mark(𝐶2 ). By the determinism of firing in Petri nets, 𝐶2 ⊕ 𝐸 2 ≈mark 𝐶1 ⊕ 𝐸 1 .

9.10 Exercises

227 𝑏1

𝑠1

𝑏2

𝑠3

𝑏3

𝑡1 𝑒1

𝑏4

𝑒2 𝑡3

𝑠2

𝑏5

𝑡2 𝑒3

𝑏6

𝑠1

𝑠5

𝑠4

𝑒4 𝑡4

𝑏7

𝑠3

𝑏8

𝑠3

𝑏9

𝑠5

Fig. 9.11: Solution of Exercise 9.2.

(ii): Suppose 𝐶2 ◁lex 𝐶1 . If |𝐶2 | < |𝐶1 |, then |𝐶2 ∪ 𝐸 2 | = |𝐶2 | + |𝐸 2 | (by 𝐶2 ∩ 𝐸 2 = ∅) < |𝐶1 | + |𝐸 1 | (by |𝐸 1 | = |𝐸 2 | and |𝐶2 | < |𝐶1 |) = |𝐶1 ∪ 𝐸 1 | (by 𝐶1 ∩ 𝐸 1 = ∅) If |𝐶2 | = |𝐶1 | and 𝜙(𝐶2 ) lexicographically precedes 𝜙(𝐶1 ), then, analogously to the above, we get |𝐶2 ∪ 𝐸 2 | = |𝐶1 ∪ 𝐸 1 |. It remains to prove that 𝜙(𝐶2 ⊕ 𝐸 2 ) lexicographically precedes 𝜙(𝐶1 ⊕ 𝐸 1 ). This follows from setting 𝜎 = 𝜙(𝐶2 ), 𝜏 = 𝜙(𝐶1 ), and by applying the lemma below |𝐸 1 | = |𝐸 2 | times to the transitions in 𝑝(𝐸 1 ) = 𝑝(𝐸 2 ). Let Σ = {𝑎 1 , . . . , 𝑎 𝑛 } be an alphabet and let 𝑎 1 ⊏ . . . ⊏ 𝑎 𝑛 be a linear order on its letters. E.g., Σ = {𝑎, 𝑏, 𝑐} and 𝑎 ⊏ 𝑏 ⊏ 𝑐. A sequence is alphabetically ordered if the letters occur in the ⊑ sequence in it. E.g., 𝑎𝑎𝑏𝑏𝑐𝑐𝑐 is alphabetically ordered, but 𝑎𝑎𝑏𝑐𝑏𝑐𝑐 is not. This order can be extended to sequences. E.g., 𝑎𝑎𝑏𝑐𝑐 lexicographically precedes 𝑎𝑎𝑐𝑐𝑐, written as 𝑎𝑎𝑏𝑐𝑐 ⊏ 𝑎𝑎𝑐𝑐𝑐. Lemma: If 𝜎 ⊑ 𝜏 and if the same letter is inserted alphabetically into 𝜎 to yield 𝜎 ′ and into 𝜏 to yield 𝜏 ′ , then again, 𝜎 ′ ⊑ 𝜏 ′ . E.g.: 𝑎𝑎𝑏𝑐𝑐 ⊏ 𝑎𝑎𝑐𝑐𝑐; and then, 𝑎𝑎𝑎𝑏𝑐𝑐 ⊏ 𝑎𝑎𝑎𝑐𝑐𝑐 (𝑎 has been inserted alphabetically); 𝑎𝑎𝑏𝑏𝑐𝑐 ⊏ 𝑎𝑎𝑏𝑐𝑐𝑐 (𝑏 has been inserted alphabetically); 𝑎𝑎𝑏𝑐𝑐𝑐 ⊏ 𝑎𝑎𝑐𝑐𝑐𝑐 (𝑐 has been inserted alphabetically). Proof: By case distinction. If 𝑎 ⊏ 𝑏 are the two letters marking the first difference between 𝜎 and 𝜏, and if 𝑐 is the letter to be inserted, then distinguish the cases 𝑐 ⊑ 𝑎, 𝑎 ⊏ 𝑐 ⊏ 𝑏, and 𝑏 ⊑ 𝑐.

Chapter 10

Petri Net Computers

Abstract Weak and strong Petri net computers are introduced as two different means of calculating functions from vectors of natural numbers to natural numbers. Their capabilities as well as their similarities and differences are explored. It is shown that even the fast-growing Ackermann function is Petri net computable, leading to extremely large state spaces.

Contents 10.1 10.2 10.3 10.4 10.5 10.6 10.7 10.8

Introduction . . . . . . . . . . . . . . . . . . Strongly Petri net computable functions . . . Weakly Petri net computable functions . . . Strong versus weak PN-computability . . . . The Ackermann function . . . . . . . . . . . Sizes of reachability and coverability graphs . Bibliographical remarks and further reading . Exercises . . . . . . . . . . . . . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

229 230 234 238 239 243 244 245

10.1 Introduction Now that the chapter count is turning double-digit, it may be time to take a little bit of stock of the contents of this text, so far. They can, perhaps, be categorised concisely in terms of two thematic strands: • “Something” that Petri nets can be used for. • “Something” that can be used for Petri nets.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 E. Best and R. Devillers, Petri Net Primer, Computer Science Foundations and Applied Logic, https://doi.org/10.1007/978-3-031-48278-6_10

229

230

10 Petri Net Computers

In Chapter 2, Petri nets are used in order to generate languages. In Chapters 3 to 6, graph theory and linear algebra are used in order to analyse Petri nets. Chapter 7 describes the use of Petri nets for program verification. Petri nets are also used in Chapter 8 in order to capture and compare various fairness notions. In Chapter 9, partial orders are used in order to define a concurrency semantics for Petri nets, and the resulting nets are used for system verification. Occasionally, we also pointed out what Petri nets are not good at. They have no memory (e.g., in Section 2.3, it is shown that palindromes are not Petri net recognisable); and they have no test for the absence of tokens (leading to a loss of computing power, as discussed in Section 8.6). But they are very good at counting, more specifically, at setting up counters and at increasing and decreasing the values of such counters. So, because place/transition Petri nets allow the addition and the subtraction of tokens to and from places, it should be possible to let them calculate functions in some sense. Indeed, to some degree, one can use them in this capacity. There are some problems, however. Let us assume that a net calculates a function 𝑓 : N𝑟 → N. How can a net be given 𝑟 input parameters in N𝑟 ? How does the “user” know at which point the calculation has ended, and how can the resulting value in N be displayed? The first and the last question are easy to solve: 𝑟 places in1 , . . . , in𝑟 are designated as input places for 𝑟 parameters 𝑥1 , . . . , 𝑥𝑟 , and a special place out is designated as an output place for the output 𝑓 (𝑥1 , . . . , 𝑥𝑟 ). The question about when the calculation is finished is not so easy to solve. Indeed, there is no uniform solution. This chapter presents two possible solutions: strong computers and weak computers. Both of them have their advantages and disadvantages. In this section, we drop the requirement that Petri nets are plain.

10.2 Strongly Petri net computable functions Definition 10.1 Strong PN-computability A Petri net 𝑁 = (𝑆, 𝑇, 𝐹) computes a partial function 𝑓 : N𝑟 → N strongly if the following conditions are satisfied: • 𝑁 has special places start, halt, out, in1 , . . . , in𝑟 ; • the initial state is 𝑀0 = 𝑠 𝑥1 ,..., 𝑥𝑟 = (1, 0, 0, 𝑥1 , . . . , 𝑥𝑟 , 0) T , provided 𝑓 (𝑥1 , . . . , 𝑥𝑟 ) is to be computed; • ∀𝑥1 , . . . , 𝑥𝑟 ∈ N: – ∀𝑀 ∈ [𝑀0⟩: (𝑀 (halt) > 0 =⇒ 𝑁 is dead in 𝑀), – ∃𝑀 ∈ [𝑀0⟩: (𝑀 (halt) = 1 ∧ ∀𝑠 ∈ 𝑆 \ {halt, out}: 𝑀 (𝑠) = 0) ⇐⇒ 𝑓 (𝑥1 , . . . , 𝑥𝑟 ) is defined, – ∀𝑀 ∈ [𝑀0 ⟩: ((𝑀 (halt) = 1 ∧ ∀𝑠 ∈ 𝑆 \ {halt, out}: 𝑀 (𝑠) = 0) =⇒ 𝑀 (out) = 𝑓 (𝑥1 , . . . , 𝑥𝑟 )). If there is a Petri net 𝑁 computing 𝑓 strongly, then 𝑓 is said to be strongly PN-computable, and 𝑁 is called a strong PN-computer for 𝑓 .

10.2 Strongly Petri net computable functions

231

The idea is that the net is started in a well-defined initial state (hence the vector 0, specifying that initially all the other places than start, halt, out, in1 , . . . , in𝑟 are empty of tokens). When halt gets some token(s), the net cannot continue. If there is a (deadlocking) marking (0, 1, 𝑥, 0, . . . , 0), then 𝑓 (𝑥1 , . . . , 𝑥𝑟 ) = 𝑥 is defined (so that no two markings of this kind may be reachable), even if we never reach it (this marking is not necessarily a home state). If there is no reachable marking (0, 1, 𝑥, 0, . . . , 0), then 𝑓 (𝑥 1 , . . . , 𝑥𝑟 ) is not defined. Example 10.2 The Kronecker function Let 𝛿0 : N → N be defined by  1 if 𝑛 = 0 𝛿0 (𝑛) = 0 otherwise Figure 10.1 shows a Petri net computing 𝛿0 strongly. If the input is 0, then only 𝑡0 can fire, and the correct result is produced. If the input is greater than 0 (in1 contains 𝑛 > 0 tokens), we still may perform 𝑡0 which leads to a deadlock, but not of the adequate kind since in1 keeps tokens; if we perform 𝑡1 , 𝑡2 may then progressively empty in1 , but if 𝑡 3 is fired too early, we get again an inadequate deadlock; on the contrary, after the sequence 𝑡 1 (𝑡2 ) 𝑛−1 𝑡3 we get the expected deadlock with a single token in halt.

𝑡2

𝑡0

in1

out

𝑡1

𝑡3

start

halt

Fig. 10.1: A strong Petri net computer for 𝛿0 The function 𝛿0 is interesting for two reasons: (i) it is not monotonic; (ii) it represents a test for zero. Usually, Petri nets are monotonic, and it is impossible to test a place for zero tokens.1 The new capabilities are obtained by insisting that on halt, no tokens may be present except on halt and (possibly) on out. The following functions are also strongly PN-computable. Theorem 10.3 Some basic strongly PN-computable functions The following functions are strongly PN-computable: (c) 𝑐 : N0 → N with 𝑐() = 𝑐 for all 𝑐 ∈ N (constant) (a) + : N2 → N with +(𝑚, 𝑛) = 𝑚 + 𝑛 (addition) 1 Leaving inhibitor arcs aside.

232

10 Petri Net Computers



𝑚 − 𝑛, if 𝑚 ≥ 𝑛 ⊥ otherwise (modified subtraction, where ⊥ is a symbol Í for “undefined”) (d) 𝑖𝑛𝑡𝑟 : N𝑟 → N with 𝑖𝑛𝑡𝑟 (𝑥1 , . . . , 𝑥𝑟 ) := 𝑟𝑖=1 𝑥𝑖 2𝑖−1 (binary decoding) (e) ∗𝑐 : N → N with ∗𝑐(𝑥) := ⌊𝑐 ∗ 𝑥⌋ for all 𝑐 ∈ Q (multiplication by a rational constant). (b) −⊥ :

N2

→ N with −⊥ (𝑚, 𝑛) =

Proof: Figures 10.2, 10.3 and 10.4 exhibit Petri nets strongly computing the five functions under consideration. This is easily seen for addition and subtraction, keeping in mind that for a correct computation, all places except halt and (possibly) out must be token-empty. It is even more easily seen for the net which simply puts 𝑐 tokens onto out (Figure 10.3(l.h.s.)). For the multiplication by 𝑚/𝑛, with 𝑚, 𝑛 ∈ N, 𝑛 ≠ 0, suppose 𝑚/𝑛 is some rational constant. The net in Figure 10.3(r.h.s.) computes ⌊𝑐 ∗ 𝑥⌋ by first producing 𝑚 times the input and putting a token onto out for however many times 𝑛 is contained in the result. If some remainder < 𝑛 is present, this will be consumed along with putting a token onto halt. Finally, the net for 𝑖𝑛𝑡𝑟 duplicates tokens that migrate from place in𝑖 to place in𝑖−1 . There is an S-invariant 𝐼 satisfying 𝐼 (out) = 𝐼 (in1 ) = 1, 𝐼 (in𝑖+1 ) = 2 · 𝐼 (in𝑖 ) for 1 ≤ 𝑖 < 𝑟, and 0 for every other entry. If 𝑠 ∈ [𝑠 𝑥1 ,...,𝑥𝑟 ⟩ (𝑠 𝑥1 ,...,𝑥𝑟 being a starting state) and all input places are empty in 𝑠, then the following holds: 𝑠(out) = ( 𝐼 (out) = 1 ) 𝐼 (out) · 𝑠(out) = ( all input places are token-free ) 𝐼T · 𝑠 = ( property of an S-invariant ) 𝐼 T · 𝑠 𝑥1 ,..., 𝑥𝑟 = ( matrix calculation ) Í𝑟 𝑖=1 𝑥 𝑖 · 𝐼 (in𝑖 ) = ( definition of 𝐼 ) Í𝑟 𝑖−1 𝑖=1 𝑥 𝑖 · 2 = ( definition of 𝑖𝑛𝑡𝑟 ) 𝑖𝑛𝑡𝑟 (𝑥1 , . . . , 𝑥𝑟 ). Some simple functions are not strongly PN-computable.

10.3

Theorem 10.4 Functions that are not strongly computable Neither multiplication ∗ : N2 → N nor squaring ·2 : N → N are strongly PN-computable. Proof: (Sketch.) The following facts will be used, but not proved, at this point. • Suppose ℎ1 , . . . , ℎ𝑟 : N 𝑘 → N with 𝑘 ∈ N and 𝑔: N𝑟 → N. Simultaneous substitution 𝑔(ℎ1 , . . . , ℎ𝑟 ) is strongly PN-computable;

10.2 Strongly Petri net computable functions

233

𝑡2

𝑡2

in1

out

in1

out

𝑡3

𝑡3 (a)

in2

𝑡1

(b)

in2

𝑡4

𝑡1

start

halt

𝑡4

start

halt

𝑝1

𝑝1

Fig. 10.2: Strong PN computers for addition (a) and modified subtraction (b). 𝑚

𝑛

𝑖𝑛

(c)

out

𝑛−1 1

out

.. .

(e) 𝑐 𝑡1 start

halt start

halt

Fig. 10.3: Strong PN computers for 𝑐 (c) and multiplication with 𝑚/𝑛 (e). 𝑡2 in1

out

2

in2 2 .. . in𝑟 −1

.. .

(d)

2

in𝑟 𝑡3

𝑡1 start

halt 𝑝1

Fig. 10.4: Strong PN computer for binary decoding 𝑖𝑛𝑡𝑟 (d).

234

10 Petri Net Computers

• and the same is true for case distinction, i.e., if 𝑓 (𝑥 1 , . . . , 𝑥𝑟 ) = 0 then 𝑔(𝑥 1 , . . . , 𝑥𝑟 ) else ℎ(𝑥1 , . . . , 𝑥𝑟 ), • as well as for the partial existential quantor, which yields 1 if ∃𝑥𝑟+1 , . . . , 𝑥 𝑛 : 𝑓 (𝑥1 , . . ., 𝑥𝑟 , 𝑥𝑟+1 , . . ., 𝑥 𝑛 ) = 0, and is otherwise undefined. Using these results, it would be possible to generate all polynomials (and show their strong PN-computability) if multiplication was strongly PN-computable. However, it is known that diophantine predicates (in effect the zeroes of polynomials) are not fully recursive but only recursively enumerable. On the other hand, diophantine predicates can be reduced to the reachability problem for Petri nets, which is decidable (i.e., fully recursive). Because of this contradiction, multiplication cannot be strongly PN-computable. Since 𝑥 ∗ 𝑦 = 12 ((𝑥 + 𝑦) 2 −⊥ (𝑥 2 + 𝑦 2 )), and because +, −⊥ and ∗𝑐 (with 𝑐 = 12 ) are strongly PN-computable, the non-strong PN-computability of multiplication entails the non-strong PN-computability of squaring. 10.4 Despite this shortcoming, strong PN-computability is interesting. Using it, the reachability problem for Petri nets can be shown to have at least exponential space complexity, and it can also be shown that register machines which can calculate numbers 𝑛 up to 22 can be simulated by Petri nets of size 𝑂 (𝑛).

10.3 Weakly Petri net computable functions If the requirement that all places (except halt and out) have to be token-empty is felt to be too strong, there is an alternative notion of Petri net computability. It is then necessary to give up the idea that function values have to be exact. Weak computability requires that the function value – but also every smaller value – can be the result of a computation. Definition 10.5 Weak PN-computability A Petri net 𝑁 = (𝑆, 𝑇, 𝐹) weakly computes a function 𝑓 : N𝑟 → N if the following requirements are satisfied: • 𝑁 has special places start, halt, out, in1 , . . . , in𝑟 ; • the initial state is 𝑀0 = 𝑠 𝑥1 ,..., 𝑥𝑟 = (1, 0, 0, 𝑥1 , . . . , 𝑥𝑟 , 0) T , provided 𝑓 (𝑥1 , . . . , 𝑥𝑟 ) is to be computed; • ∀𝑥1 , . . . , 𝑥𝑟 , 𝑦 ∈ N: – ∃𝑀 ∈ [𝑀0⟩: (𝑀 (halt) = 1 ∧ 𝑀 (out) = 𝑦) ⇐⇒ 𝑦 ≤ 𝑓 (𝑥 1 , . . . , 𝑥𝑟 ), – ∀𝑀 ∈ [𝑀0⟩ ∃𝑀 ′ ∈ [𝑀⟩: 𝑀 ′ (halt) = 1, – ∀𝑀 ∈ [𝑀0⟩: (𝑀 (halt) > 0 =⇒ 𝑁 is dead in 𝑀), – ∀𝑀 ∈ [𝑀0⟩ ∀𝑠 ∈ 𝑆: 𝑀 (𝑠) ≤ max{1, 𝑥1 , . . . , 𝑥𝑟 , 𝑓 (𝑥 1 , . . . , 𝑥𝑟 )}. If there is a Petri net 𝑁 computing 𝑓 weakly, then 𝑓 is said to be weakly PN-computable, and 𝑁 is called a weak PN-computer for 𝑓 .

10.3 Weakly Petri net computable functions

235

The very last requirement (last line but two) is actually motivated by results outside the material covered in the present chapter. It is included nevertheless, since the definition is an established one. Weak PN-computers 𝑁 = (𝑆, 𝑇, 𝐹) enjoy a particular notion of liveness. Whatever the executions of 𝑁 (∀𝑀 ∈ [𝑀0 ⟩), it is always possible (∃𝑀 ′ ∈ [𝑚⟩) that a token reaches halt (𝑀 ′ (halt) = 1). But since a token on halt indicates termination of the computation, 𝑁 is required to be dead as soon as a token has reached halt. Thus, 𝑁 can be maneuvered into a deadlock only by a token reaching the halt place. Even then, it cannot be guaranteed that out contains precisely 𝑓 (𝑥 1 , . . . , 𝑥𝑟 ) tokens. Instead, it may contain 𝑦 tokens, for every 𝑦 with 0 ≤ 𝑦 ≤ 𝑓 (𝑥1 , . . . , 𝑥𝑟 ). But out can never contain more than 𝑓 (𝑥 1 , . . . , 𝑥𝑟 ) tokens. Lemma 10.6 Some weakly computable functions The constant function 𝑐 : N0 → N with 𝑐() := 𝑐, the addition, as well as the multiplication, are weakly PN-computable. Proof: Figure 10.5 exhibits a net calculating the constant 𝑐. Observe the difference to strong computability: an additional transition 𝑡 2 ensures that all values 𝑦 ≤ 𝑐 can be produced on out. There has to be a side condition because the net has to be in a deadlock as soon as there is a token on halt. The addition of two natural numbers 𝑥 and 𝑦 is presented in Figure 10.6. This net has two S-invariants 𝐼1 = (1, 1, 0, 0, 0, 1) and 𝐼2 = (0, 0, 1, 1, 1, 0).2 Therefore, for all reachable markings 𝑀 ∈ [(1, 0, 0, 𝑥, 𝑦, 0) T⟩: 𝐼1 · 𝑀 = 𝐼1 · (1, 0, 0, 𝑥, 𝑦, 0) T = 1



𝐼2 · 𝑀 = 𝐼2 · (1, 0, 0, 𝑥, 𝑦, 0) T = 𝑥 + 𝑦

Hence 𝑀 (out) ≤ 𝑓 (𝑥, 𝑦) = 𝑥 + 𝑦 for all 𝑀 ∈ [(1, 0, 0, 𝑥, 𝑦, 0) T⟩. Together with 𝑗

𝑡3 𝑡1𝑖 𝑡2 𝑡4

(1, 0, 0, 𝑥, 𝑦, 0) T −→ 𝑀 ′ = (0, 1, 𝑖 + 𝑗, 𝑥 − 𝑖, 𝑦 − 𝑗, 0) T this yields 0 ≤ 𝑀 ′ (out) = 𝑖 + 𝑗 ≤ 𝑥 + 𝑦 and 𝑀 ′ (halt) = 1 for 0 ≤ 𝑖 ≤ 𝑥 and 0 ≤ 𝑗 ≤ 𝑦. Because • 𝑡 ∩ {start, 𝑝 6 } ≠ ∅ for all 𝑡 ∈ 𝑇 and because of the invariant 𝐼1 , the net is dead in all reachable states 𝑀 ′′ with 𝑀 ′′ (halt) = 1. For multiplication 𝑥 ∗ 𝑦, consider the net shown in Figure 10.7. In the initial state with parameters 𝑥 and 𝑦, 𝑡 1 (𝑡2 𝜎𝑡 3 𝑡6 𝑡5∗ 𝑡4 ) ( 𝑥−1) 𝑡2 𝜎𝑡 3 𝑡7 𝑡8∗ 𝑡9 𝑦

𝑦

𝑦

𝑦

can be fired, where 𝜎 stands for firing sequences of the inner addition net (which has just been started 𝑥 times). In this way, up to 𝑥 ∗ 𝑦 tokens can be produced on

2 The ordering of places conforms to the convention introduced in the definition: start, halt, out, in1 , . . . , in𝑟 , and then the rest of the places. The naming “ 𝑝6 ” in Figure 10.6 indicates that 𝑝6 is the sixth place overall, after start, halt, out, in1 , in2 .

236

10 Petri Net Computers

out. Transition 𝑡0 covers the case that 𝑥 = 0, since no other transition can then be fired. 10.6

out 𝑐 𝑡1 𝑡2

start

𝑡3 halt

Fig. 10.5: A weak Petri net computer producing a constant 𝑐.

in1

out 𝑡1

𝑝6

in2 𝑡2

𝑡3 start

𝑡4 halt

Fig. 10.6: A weak Petri net computer computing the addition 𝑥 + 𝑦

As opposed to the nets for addition and for the constant, it is generally impossible to drain the multiplication net out of all tokens (except for tokens on halt and out). Unless 𝑡0 is chosen, a computation will start the inner addition net + by 𝑡 1 , moving a token from in1 to the start place of +. Once a token is on start of +, the input 𝑦 of in2 can be copied onto the internal input in2 of +, and to in2 . After adding, 𝑦 can be copied back from in2 to in2 . Then, if 𝑥 > 0, a token is subtracted from 𝑥, and the result of the addition is subtracted from out of + to in1 of +, repeating the addition process. If 𝑥 = 0, however, 𝑡7 and 𝑡8 can copy the tokens from out of + to the final out place, and then 𝑡 9 can stop the computation by putting a token on the final halt. If there are less than 𝑥 ∗ 𝑦 tokens on out, some are bound to be “stuck in the system”. At any rate, 𝑦 input tokens coming from in2 are still somewhere in the net, more precisely, somewhere on the places in2 and in2 . In other words, even when the computation has ended with 1 token on halt and 𝑥 ∗ 𝑦 tokens on out, the net is otherwise not token-free. The constant net and the addition and multiplication nets can be used as subroutines in order to construct arbitrary polynomials.

10.3 Weakly Petri net computable functions

237

𝑡6

𝑡4

𝑡5 in1 𝑡8

𝑡2 in1

out out

in2 in2

𝑡7

in2 start 𝑡3

start

+ 𝑡9

halt halt

𝑡1

𝑡0

Fig. 10.7: A weak Petri net computer calculating the multiplication 𝑥∗𝑦. The addition net shown in Figure 10.6 is used as a subroutine.

Example 10.7 A polynomial Figure 10.8 exhibits a weak PN-Computer for 𝑝(𝑥, 𝑦) = 2𝑥 2 𝑦 + 2𝑥𝑦 2 + 4𝑥𝑦. The weak computers for the constant 2, for addition and for multiplication defined previously are used as subroutines and are shown in the picture as “Grey Boxes”. Actually, a mechanism ensuring that the net is dead after a token reaches halt needs to be added. Such a mechanism would, however, clutter up the picture. It may conveniently be implemented by adding suitable side loops “killing” transitions that are no longer needed.

238

10 Petri Net Computers

𝑥𝑦

2𝑥 𝑦

2𝑥 2 𝑦

2𝑥 2 𝑦 + 2𝑥 𝑦 2





+

in1 ∗

in2 2𝑥 𝑦 2 out 𝑐=2





+ halt

start

2

4𝑥 𝑦

𝑝 ( 𝑥, 𝑦)

Fig. 10.8: A weak Petri net computer for 𝑝(𝑥, 𝑦) = 2𝑥 2 𝑦 + 2𝑥𝑦 2 + 4𝑥𝑦.

10.4 Strong versus weak PN-computability Multiplication is weakly but not strongly computable. There are also functions which are strongly, but not weakly, computable.3 Actually, there are quite a few of them, namely at least those functions which fail to be monotonic in the following sense. Lemma 10.8 Weakly computable functions are monotonic Let 𝑓 : N𝑟 → N be a weakly PN-computable function. Then 𝑓 is monotonic, i.e., by definition, ∀𝑚, 𝑛 ∈ N𝑟 : (𝑚 ≤ 𝑛 =⇒ 𝑓 (𝑚) ≤ 𝑓 (𝑛)). Proof: Assume that 𝑓 is not monotonic and that 𝑁 computes 𝑓 weakly. By the non-monotonicity of 𝑓 , there are vectors 𝑥, 𝑦 ∈ N𝑟 with 𝑥 ≤ 𝑦 and 𝑓 (𝑥) > 𝑓 (𝑦). By 𝑥 ≤ 𝑦 and by Definition 10.5, 𝑀 𝑥 ≤ 𝑀 𝑦 for the initial states of 𝑁. Then there exist 𝜎 a sequence 𝜎 ∈ 𝑇 ∗ and a marking 𝑀 ∈ [𝑀 𝑥 ⟩ with 𝑀 𝑥 −→ 𝑀 and 𝑀 (out) = 𝑓 (𝑥). ′ Let 𝑀 := 𝑀 𝑦 − 𝑀 𝑥 ≥ 0. By the monotonicity of Petri net firings, the same sequence 𝜎 leads from 𝑀 𝑥 + 𝑀 ′ to 𝑀 + 𝑀 ′ : 3 So that the nomenclature is slightly misleading.

10.5 The Ackermann function

239 𝜎

𝑀 𝑦 = (𝑀 𝑥 + 𝑀 ′ ) −→ (𝑀 + 𝑀 ′ ) b (out) ≤ 𝑓 (𝑦) for all 𝑀 b ∈ [𝑀 𝑦⟩. But (𝑀 + 𝑀 ′ )(out) ≥ 𝑓 (𝑥) > 𝑓 (𝑦), contradicting 𝑀 10.8 The modified subtraction −⊥ : N2 → N has already been identified as strongly PNcomputable. It can be combined with the constant function 𝑐 : N0 → N, which is also strongly PN-computable, to a function −⊥ 𝑐 : N → N. On input 𝑥, this function yields 𝑐 − 𝑥 if 𝑐 − 𝑥 ≥ 0, and is not defined otherwise. It is strongly PN-computable and not monotonic in the sense of Lemma 10.8. Hence: Proposition 10.9 Incomparability of the PN-computability models Strong PN-computability and weak PN-computability are incomparable, that is, there are functions which are strongly but not weakly PN-computable, and there are functions which are weakly but not strongly PN-computable. Proof: The required examples are −⊥ 𝑐 and ∗, respectively.

10.9

Proposition 10.10 PN-computability is not Turing powerful Petri nets are not Turing-powerful, independently of which of the two PNcomputability models is considered. Proof: Turing machines can multiply, and they can also subtract.

10.10

10.5 The Ackermann function Definition 10.11 Ackermann function Let ( 𝐴𝑖 : N → N)𝑖 ∈N be the family of functions defined by 𝐴0 (𝑥) := 𝑥 + 1, 𝐴𝑛+1 (0) := 𝐴𝑛 (1) and 𝐴𝑛+1 (𝑥 + 1) := 𝐴𝑛 ( 𝐴𝑛+1 (𝑥)). The Ackermann function 𝑎 : N → N is defined by 𝑎(𝑛) := 𝐴𝑛 (𝑛). The Ackermann function grows extremely fast. For n=3, its growth already matches that of an exponential function. It fact, it (or some close relative) is the prime example of a function which is not primitive recursive. Basically, primitive recursive functions are those that can be computed by a LOOP program, which are characterised by the property that upper bounds for loop repetitions are fixed before a loop is entered and values computed inside a loop do not influence these bounds. Standard functions, such as addition, multiplication, and exponentiation, are primitive recursive, but the Ackermann function is not. Nevertheless, despite their phenomenally fast growth, Ackermann functions 𝐴𝑛 are still PN-computable.

240

10 Petri Net Computers

Lemma 10.12 Weak PN-computability of 𝐴𝑛 (𝑥) For every 𝑛 ∈ N, the function 𝐴𝑛 (𝑥) is weakly PN-computable. Of course, there will be different weak PN-computers for different values 𝑛 ∈ N. That is, only 𝑥 is an input parameter while 𝑛 is considered fixed. These nets will be defined inductively. The net for 𝑛 = 0 is given in Figure 10.9 while the net for 𝑛 + 1 (inductively using all the nets for 𝑛, 𝑛 − 1, etc.) is given in Figure 10.10.

𝑡2 𝑖𝑛

𝑥

out 𝐴0 ( 𝑥 ) 𝑡3

𝑡1 start

halt

Fig. 10.9: A weak Petri net computer for 𝐴0 (𝑥) = 𝑥 + 1

𝐴𝑛+1 ( 𝑥 )

𝑡2

𝑡4

𝑡3

𝑡6 𝑥

out 𝑛

in𝑛

out 𝑛+1

in𝑛+1 𝐴𝑛 𝑡5

𝑡1 start 𝑛 start 𝑛+1

𝑡7

halt 𝑛 halt 𝑛+1

Fig. 10.10: A weak Petri net computer for 𝐴𝑛+1 (𝑥). The net for 𝐴𝑛 is used as a subroutine. Proof: The Petri net shown in Figure 10.9 computes 𝐴0 weakly. Assume that 𝐴𝑛 is weakly PN-computable. Then it is claimed that 𝐴𝑛+1 is weakly PN-computable as well, using the Petri net shown in Figure 10.10. To prove this, note that the definition of the Ackermann function can be rewritten equivalently as follows, for 𝑥 ∈ N:

10.5 The Ackermann function

𝐴0 (𝑥) = 𝑥 + 1

241

and

𝐴𝑛+1 (𝑥) = 𝐴𝑛 (. . . 𝐴𝑛 (1) . . .) | {z }

(10.1)

𝑥+1 times

Assume that the places are numbered in the following way: 𝑠1 = start 𝑛+1 , 𝑠2 = halt 𝑛+1 , 𝑠3 = out 𝑛+1 , and 𝑠4 = in𝑛+1 (i.e., by the same ordering convention as in Definition 10.5). When 𝐴𝑛+1 is started with parameter 𝑥, there is a firing sequence 𝑡1 𝜎 (where 𝜎 only contains transitions of the subroutine net 𝐴𝑛 ) and a marking 𝑀0 𝑡1 𝜎 with (1, 0, 0, 𝑥, 0) −→ 𝑀0 and 𝑀0 (𝑠) = 0 for 𝑠 ∉ {in𝑛+1 , out 𝑛 , halt 𝑛 }, 𝑀0 (out 𝑛 ) = 𝐴𝑛 (1) = 𝐴𝑛+1 (0), 𝑀0 (in𝑛+1 ) = 𝑥 and 𝑀0 (halt 𝑛 ) = 1. Example 10.13 Calculating 𝐴2 (0) This starts the process with 𝐴2 and zero tokens on place in2 . The only enabled transition is 𝑡1 , so subroutine 𝐴1 is started with input value 1 on in1 , eventually producing one token on halt1 and 2 = 𝐴1 (1) = 𝐴2 (0) tokens on out1 . Since 𝑡 4 is disabled, what can only happen now is that the two tokens are transferred to out2 by the shifting mechanism 𝑡 5 , 𝑡6 , 𝑡7 (of course, one might also transfer only one of them and halt, leaving behind an “orphaned” token on out2 ; in fact, this possiblity is required by the definition of weak PN-computability). It can also be seen that had there been more than 0 tokens, say 𝑥 = 7 tokens, on in2 , then the shifting mechanism 𝑡 4 , 𝑡3 , 𝑡2 would have been able to transfer no less than 7 tokens onto the start place start1 . This models the recursion expressed in (10.1), since together with the very first execution of 𝐴𝑛 , 8 = 𝑥 +1 executions of 𝐴𝑛 would have been possible, each with an input that equals the output of the previous one, again by the transfer mechanism 𝑡4 , 𝑡3 , 𝑡2 . Formally, suppose that 𝑀𝑖 is a marking in which all places are token-empty, except for 𝑀𝑖 (out 𝑛 ) = 𝐴𝑛+1 (𝑖), 𝑀𝑖 (in𝑛+1 ) = 𝑥 − 𝑖 > 0 and 𝑀𝑖 (halt 𝑛 ) = 1 Then there are a sequence 𝜎 and a marking 𝑀𝑖+1 so that 𝑀𝑖 (out 𝑛 ) 𝑡2 𝜎

𝑡4 𝑡3

𝑀𝑖 −→ 𝑀𝑖+1 𝑀𝑖+1 (out 𝑛 ) = 𝐴𝑛 (𝑀𝑖 (out 𝑛 )) = 𝐴𝑛 ( 𝐴𝑛+1 (𝑖)) = 𝐴𝑛+1 (𝑖 + 1) 𝑀𝑖+1 (in𝑛+1 ) = 𝑀𝑖 (in𝑛+1 ) − 1 = 𝑥 − (𝑖 + 1) 𝑀𝑖+1 (halt 𝑛 ) = 1 and all other places are token-empty. In the special case 𝑖 + 1 = 𝑥, one gets 𝑀 𝑥 (out 𝑛 ) = 𝐴𝑛+1 (𝑥) 𝑀 𝑥 (in𝑛+1 ) = 0 𝑀 𝑥 (halt 𝑛 ) = 1 and there are no other tokens. After that, firing

242

10 Petri Net Computers 𝐴𝑛+1 ( 𝑥)

𝑀𝑥

𝑡5 𝑡6

−−−−−→ −−−

𝑡7

𝑀

leads to the claimed calculation yielding 𝑀 (out 𝑛+1 )=𝐴𝑛+1 (𝑥) and 𝑀 (halt 𝑛+1 )=1. Moreover, the net is then dead. Less than 𝐴𝑛+1 (𝑥) tokens are produced if in 𝐴0 (the innermost Petri net) some tokens become stuck during the 𝑡 5 𝑡 6 𝑡7 process. However, it is not possible to retain more than 𝐴𝑛+1 (𝑥) tokens. They can only remain on the in𝑖 place or on the out𝑖 place of some 𝐴𝑖 . Because of 𝐴𝑖 (𝑥) + 1 ≤ 𝐴𝑖 (𝑥 + 1) (which can be obtained from the definition), an orphaned token contributes less to the overall sum of tokens as if it had been used fully during the calculation. This effect amplifies with every start of some 𝐴𝑖 net, since the 𝐴𝑖 increase strictly monotonically. 10.12 Instead of an arbitrary 𝑥, the net 𝐴𝑛 can also be given 𝑥 = 𝑛 as a special input parameter. In this case, it calculates the value (not the function!) 𝑎(𝑛) = 𝐴𝑛 (𝑛). Of course, for different values of 𝑛 there will be different nets. It is possible to estimate the size of such a net, depending on 𝑛, and compare this with the number the net produces. Theorem 10.14 Let 𝑛 ∈ N be given. There is a Petri net 𝑁 𝑛 of size 𝑂 (𝑛) which computes the number 𝑎(𝑛) = 𝐴𝑛 (𝑛) weakly. Proof: The net 𝐴𝑛 (𝑛) computes 𝑎(𝑛) weakly. Its size (that is, the number of places, transitions, arcs and tokens) can be calculated directly from Figures 10.10 and 10.9. Outside the subroutine, there are 6 places, 7 transitions, and 20 arcs, totalling 33 elements. The 𝑛 tokens on in𝑛+1 and the single token on the outermost start are not yet counted, so that, conforming to formula (10.1), the size so far is 33 ∗ 𝑛. The innermost 𝐴0 has 5 places, 3 transitions and 9 arcs, totalling 18 elements. So far, the size is 33 ∗ 𝑛 + 18. Add to that the size of a net producing the constant 𝑛 for 𝑥 (see Figure 10.5) which has 4 places, 3 transitions and 𝑛 + 7 (!) arcs, and also add a 1 for the single token on the outermost start place. The number 2 has to be subtracted, however, because the start and 𝑖𝑛 places of 𝐴𝑛 are respectively identified with the halt and out places of the net producing 𝑛. In total, this gives size 33 ∗ 𝑛 + 31, which is 𝑂 (𝑛). 10.14 Of course, this theorem does not mean that 𝑎(𝑛) is weakly PN-computable, because for different numbers 𝑛, there are different nets 𝐴𝑛 (𝑛). The theorem shows, however, that relatively small nets (of size 𝑂 (𝑛)) can produce fantastically large numbers (of size 𝑂 (𝑎(𝑛))).4 Alternatively, one could think of producing the number 𝑎(𝑛), for a fixed 𝑛, by a net of the form shown in Figure 10.5, with 𝑐 being 𝑎(𝑛). However, such a net is enormous to start with. It would be highly unsurprising (not to say totally trivial) that it can produce a number of about its own size.

4 Consult https://en.wikipedia.org/wiki/Ackermann_function in order to get a feeling of how large these numbers can become.

10.6 Sizes of reachability and coverability graphs

243

10.6 Sizes of reachability and coverability graphs The net 𝐴𝑛 (𝑛) is bounded, since, by the definition of weak PN-computability, no place can have more than 𝑎(𝑛) tokens. Hence its reachability graph is finite. Moreover, its coverability graph equals its reachability graph. However, since every value ≤ 𝑎(𝑛) can be computed, their sizes are in 𝑂 (𝑎(𝑛)). As the Ackermann function 𝑎(𝑛) is not primitive recursive, there is no way around the following insight: Theorem 10.15 Bound for reachability and coverability graphs The sizes of the reachability and coverability graphs of Petri nets, depending on a net’s size, are not bounded from above by any primitive recursive function. The Ackermann function is breathtakingly fast growing. While the definitions in Section 10.5 were moderately complex, there are much simpler Petri nets whose reachability graphs are finite, but of exponential size. Figure 10.11 depicts such an example.

𝑠3

𝑥

𝑝

𝑡3

(0, 0, 0)

𝑞

(0, 1, −1)

𝑠1

𝑠2

(0, −1, 2) (−1, 0, 0)

𝑡4 Start state: 𝑝 Counters: (𝑐1 , 𝑐2 , 𝑐3 ) 𝑠4

Corresponding places: (𝑠3 , 𝑠4 , 𝑠5 )

𝑡1

𝑡2 𝑠5

2

Fig. 10.11: Left-hand side: a Petri net exhibiting exponential counting. Initially, place 𝑠3 contains 𝑥 tokens. Eventually, place 𝑠5 contains 2 𝑥 tokens. Right-hand side: a corresponding VASS (vector addition system with states). The VASS shown on the right-hand side of this figure consists of a finite-state automaton with two states 𝑝, 𝑞. The automaton may start at 𝑝 with a nonnegative content of three counters 𝑐 1 , 𝑐 2 , 𝑐 3 and modify those counters by adding the vectors attached to the state transitions of the automaton (as long as the new values stay nonnegative). For instance, the following is a run of the automaton, starting with (𝑐 1 , 𝑐 2 , 𝑐 3 ) = (7, 0, 1):

244

𝑝(7, 0, 1)

10 Petri Net Computers

−→

𝑝(7, 1, 0)

−→

𝑞(7, 1, 0)

−→

𝑞(7, 0, 2)

−→

𝑝(6, 0, 2)

This has the effect of decreasing the value of 𝑐 1 by 1 while doubling the value of 𝑐 3 . Iterating and generalising this, we can get a run as follows: 𝑝(𝑥, 0, 1)

−→

...

−→

𝑝(0, 0, 2 𝑥 )

In the Petri net on the left-hand side, the places 𝑠3 , 𝑠4 , 𝑠5 correspond to the counters 𝑐 1 , 𝑐 2 , 𝑐 3 , and (𝑥, 0, 1) corresponds to the initial marking of these places. Moreover, the net’s transitions correspond to the transitions of the VASS. The above runs can thus be “simulated” by firing sequences; and vice versa. With a fixed initial number 𝑥 of tokens on 𝑠3 , no infinite number of states can be generated. Thus, this small example has an exponentially large, finite, state space in the size of the net (which is 𝑂 (𝑥)).

10.7 Bibliographical remarks and further reading The result about diophantine predicates mentioned in the text is the MRDP theorem by Martin Davis, Yuri Matiyasevich, Hilary Putnam and Julia Robinson.5 Weak computers have traditionally been used for undecidability results, starting with a proof of the undecidability of Petri net reachability set inclusion which is due to Michael O. Rabin (1967); see, e.g., Michel Hack’s dissertation [Hac75a]. More recently, Jérôme Leroux √︁ and Philippe Schnoebelen showed that unbounded sublinear functions, such as ⌊ (𝑛)⌋ or ⌊log(𝑛)⌋, cannot be weakly Petri net computed [LS14]. A proof that the reachability problem requires exponential space is due to Richard J. Lipton [Lip76]. In his proof, Lipton used strong (rather than weak) computers. Recently, both upper and lower bounds for the reachability problem have been improved (see [Cze+21], by Wojciech Czerwinski, Slawomir Lasota, Ranko Lazic, Jérôme Leroux, and Filip Mazowiecki, and [LS15], by Jérôme Leroux and Sylvain Schmitz). See also Sections 3.7 and 18.8. The exponential counting device of Figure 10.11 is a slightly modified version of a Petri net with a non-semilinear state space. The reader is referred to Section 17.8. The counting net can be generalised and modified in order to obtain extremely large finite state spaces. The paper [Cze22] (by Wojciech Czerwinski) nicely illustrates the use of these techniques, especially in their VASS versions, in order to derive tight complexity bounds for the reachability problem.

5 See https://en.wikipedia.org/wiki/Hilbert’s_tenth_problem.

10.8 Exercises

245

10.8 Exercises Exercise 10.1 Fibonacci The Fibonacci function fib : N → N is defined by fib(0) := 1, fib(1) := 1, and for 𝑥 ≥ 0, fib(𝑥 + 2) := fib(𝑥 + 1) + fib(𝑥). Construct a Petri net computing this function weakly. Exercise 10.2 Exponentiation Construct a Petri net which computes the function 𝑓 : N → N with 𝑓 (𝑥) = 2 𝑥 − 1 weakly. You may use the weak PN-computers for addition and multiplication as “black boxes”. However, it is perhaps easier to think of an independent “recursive” solution. Exercise 10.3 Determining the function of a PN-computer Determine the function computed by the weak Petri net computer shown in Figure 10.12. The “black box” called div2 computes the division by 2, neglecting the remainder, i.e., div2(𝑥) = ⌊ 2𝑥 ⌋. (where ⌊𝑥⌋ is defined as the largest integer less or equal to 𝑥).

in1

out

div2

start

halt

in2

Fig. 10.12: Weak Petri net computer to Exercise 10.3.

Solution 10.1 (Exercise 10.1) See Figure 10.13. The addition takes place successively in the inner cycle 𝑠2 , 𝑠4 , 𝑠5 . Basically, the idea is that the result of the last addition is eventually assembled on 𝑠5 , whereas the last but one addition is assembled on 𝑠2 (and then, whenever appropriate, transferred to 𝑠5 by sequences of 𝑐 and f ). The outer cycle 𝑠1 , 𝑠3 , 𝑠6 controls the token creation by transition 𝑑 appropriately, while removing input tokens at intermittent points in time. For instance, a systematic way of creating the value 5 for fib(4) is by the firing sequence

246

10 Petri Net Computers

𝑎𝑐 𝑏𝑔f 𝑒 𝑏 𝑑 𝑔f 𝑒𝑐 𝑏 𝑑 𝑔f f 𝑒𝑐 𝑏 𝑑 𝑑 𝑔f f f 𝑒𝑐𝑐 𝑏𝑔f f ℎ𝑖𝑖𝑖𝑖𝑖 𝑗 The transitions ℎ, 𝑖, 𝑗 just serve to transfer the final value on 𝑠5 to out in an orderly way. If ℎ is prematurely taken, prior values can also be generated.

𝑠3 𝑔

𝑏 𝑠5

𝑠2 start

𝑎

𝑐

f

𝑠1

𝑠4

out

𝑖

𝑑

𝑗

ℎ 𝑠6

halt

𝑠7

𝑒

in

Fig. 10.13: Solution of Exercise 10.1

Solution 10.2 (Exercise 10.2) See Figure 10.14. Exponentiation is performed in the outer cycle, and it is controlled by the inner cycle.

start

out

2

halt

in

Fig. 10.14: Solution of Exercise 10.2.

10.8 Exercises

247

Solution 10.3 (Exercise 10.3) If there is no token on in2 , then div2 can be executed once with input in1 . After that, the tokens can be moved (back) to in1 and (forward) to out. We can restart div2 only if there are tokens on in2 , and we can do so as often as there are tokens on in2 . However, for every restart, we can move one token less onto in1 and to out than were delivered by div2 (because the transition moving a token from in2 to start steals a token from −1 the inner out). Hence we first get 𝑓 (𝑥, 0) = ⌊ 2𝑥 ⌋, then 𝑓 (𝑥, 1) = 𝑓 (𝑥, 0) + ⌊ 𝑓 ( 𝑥,0) ⌋, 2 −1 𝑥 1 then 𝑓 (𝑥, 2) = 𝑓 (𝑥, 1) + ⌊ 𝑓 ( 𝑥,1) ⌋, and so on. Hence 𝑓 (𝑥, 1) = 𝑓 (𝑥, 0) + ⌊ − 2 4 2 ⌋, 𝑥 𝑓 (𝑥, 2) = 𝑓 (𝑥, 1) + ⌊ 8𝑥 − 34 ⌋, 𝑓 (𝑥, 3) = 𝑓 (𝑥, 2) + ⌊ 16 − 78 ⌋, etc. The function can be written summarily as follows: 𝑓 (𝑥, 𝑦) =

∑︁ 0≤𝑖 ≤ 𝑦



𝑥 2𝑖+1



2𝑖 − 1 ⌋ 2𝑖

Chapter 11

Synthesis of Petri Nets from Labelled Transition Systems

Abstract The focus now shifts from analysing the behavioural properties of a given Petri net to synthesising a Petri net from a given behavioural specification. Behavioural specifications are going to be captured formally as labelled transition systems. Such transition systems are very much akin to the reachability graph of a Petri net. Labelled transition systems are often used, quite independently of Petri nets, as specifications in many different contexts. We will now – and in the next chapters – consider a labelled transition system to be given, and we will investigate whether a Petri net can be synthesised from it, and if so, how this can be done algorithmically. The idea is that a labelled transition system specifies some desired behaviour which is then realised by a – hopefully much smaller and more concurrent – Petri net.

Contents 11.1 11.2 11.3 11.4 11.5 11.6 11.7 11.8 11.9

Introduction . . . . . . . . . . . . . . . . . . Labelled transition systems . . . . . . . . . . Regions . . . . . . . . . . . . . . . . . . . . Separation properties, and the region theorem Characterising state and event/state separation Solving the coffee machine . . . . . . . . . . Homomorphisms and overapproximations . . Bibliographical remarks and further reading . Exercises . . . . . . . . . . . . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 E. Best and R. Devillers, Petri Net Primer, Computer Science Foundations and Applied Logic, https://doi.org/10.1007/978-3-031-48278-6_11

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

250 251 255 257 262 266 269 270 272

249

250

11 Synthesis of Petri Nets from Labelled Transition Systems

11.1 Introduction Petri nets allow one to model and to analyse existing systems with complex, intertwined, mixtures of sequences, (exclusive) choices, and concurrency. In this capacity, they exhibit an interesting expressive power. However, Petri nets may also be viewed as static specifications of imaginary, non-existing, systems which are yet to be built, perhaps in terms of hardware, rather than as models of existing systems. In this capacity, they need a behavioural specification which clarifies what the nets are supposed to achieve. A (finite) labelled transition system may serve as such a behavioural specification, from which a (bounded) synthesised net system of some class may be derived. The latter may then serve as the static specification of some piece of hardware or software (perhaps constructed automatically from the net’s structure) which “runs” the net from its initial marking. When a reachability graph was associated with an initially marked Petri net in Chapter 3, the Petri net may have appeared as being of primary, essential, interest while its reachability graph was seen as an auxiliary construct in order to visualise the net’s behaviour in a convenient way. From this chapter onwards (until Chapter 15), however, we shall switch to the opposite point of view. The reachability graph of a Petri net is a special object within the much larger class of labelled transition systems (abbreviated lts). This class has an interesting life of its own, quite independently of Petri nets. We now consider one of these objects, that is to say, an arbitrary lts, as being of primary interest as a specification of some desired behaviour, and ask whether there is some Petri net realising, or conforming to, this behaviour. Why should one want to find a Petri net to a given lts? One reason is size. Just as the reachability graph is usually much larger than the Petri net it belongs to, one might hope that a Petri net found for a given lts is much smaller. In that case, there is hope that the behaviour specified by such an lts can, in some sense, be “implemented” (perhaps in hardware) by a more compact and more manageable Petri net. Another reason is concurrency. An lts specifies the behaviour of a system in a predominantly sequential way. If there is a Petri net exhibiting similar behaviour, there is hope that it can be implemented in a much more parallel way, say, for instance, on asynchronous hardware. Let us have a cursory first look at what could happen in such an approach. One of the things to consider is that the relationship is not injective: while the reachability graph of a Petri net is unique (up to isomorphism), there could be many Petri nets with isomorphic reachability graphs, possibly with very different structural characteristics. The relationship is also not necessarily total: while every initially marked Petri net has a reachability graph, there may be labelled transition systems for which no Petri net with an isomorphic reachability graph can be found at all. Before starting to chart this – so far in the text – unknown territory, we present a small toy example. Coffee machines have often been cited as prime examples for small (still realistic) behavioural specifications and implementations. An internet search of "coffee machine transition system" yielded Figure 11.1 as one of the first hits. This is actually

11.2 Labelled transition systems

251

not the simplest coffee machine around, since it can also deliver tea if desired, and it can handle 5𝑝 coins as well as 10𝑝 coins. In the initial state Idle, the machine waits for a coin to be inserted. If a 5𝑝 coin is inserted and the button tea is pressed, the state Make tea is entered and a cupOfTea is eventually issued. The other state transitions are, more or less, just as self-explanatory.

Fig. 11.1: The behaviour of a coffee machine (source: the internet). A Petri net with an isomorphic reachability graph is given at the end of this chapter. The challenge is to find a Petri net with isomorphic behaviour. One might object that this lts is really a finite automaton, and thus, a Petri net with equivalent behaviour can be constructed straight away as in Proposition 2.5. There is a catch, though. The machine has only one tea button to be pressed when tea is desired, not two “tea” buttons, one after inserting a 5𝑝 coin and another one after inserting a 10𝑝 coin. Similarly, there is only one coffee button. So what we really want is a Petri net with only one transition labelled tea, and also, with only one transition labelled coffee. A labelled net with an injective labelling can actually be identified with an unlabelled net, so the real challenge is to find an unlabelled net with an isomorphic reachability graph. The number of transitions should not match the number (11) of edges but the number (9) of edge labels.

11.2 Labelled transition systems A labelled transition system is a finite or infinite edge-labelled directed graph. Vertices are interpreted as states, and edges are interpreted as state changes. The edges can be labelled by the transitions of a Petri net (if one exists) which effect those state changes. This is why we now use the same letter, 𝑇, for the edge labels of a transition system and for the transitions of a Petri net.

252

11 Synthesis of Petri Nets from Labelled Transition Systems

Definition 11.1 lts, paths, enabledness, reachability A labelled transition systema with initial state, abbreviated lts, is a quadruple 𝑇 𝑆 = (𝑄, →, 𝑇, 𝑞 0 ) where 𝑄 is a set of states, 𝑇 is a set of labels, → ⊆ (𝑄 × 𝑇 × 𝑄) are the edges (labelled by elements of 𝑇), and 𝑞 0 ∈ 𝑄 is an initial state. For 𝑞, 𝑞 ′ ∈ 𝑄, a path 𝜎 = 𝑎 1 . . . 𝑎 𝑚 leads from 𝑞 to 𝑞 ′ (denoted 𝜎 by 𝑞 −→ 𝑞 ′ ) if ∃𝑟 0 , 𝑟 1 , . . . , 𝑟 𝑚 ∈𝑄 : 𝑞 = 𝑟 0 ∧ 𝑟 𝑚 = 𝑞 ′ ∧ ∀ 𝑗 ∈{1, . . . , 𝑚} : (𝑟 𝑗 −1 , 𝑎 𝑗 , 𝑟 𝑗 ) ∈→ 𝜎

A path 𝜎 ∈ 𝑇 ∗ is called enabled in a state 𝑞,b denoted by 𝑞 −→ , if there 𝜎 is some state 𝑞 ′ such that 𝑞 −→ 𝑞 ′ . A state 𝑞 ′ is reachable from state 𝑞 if 𝜎 ∃𝜎 ∈ 𝑇 ∗ : 𝑞 −→ 𝑞 ′ . By [𝑞⟩, the set of states reachable from 𝑞 is denoted. a

Since a labelled transition system is an edge-labelled directed graph, the terminology introduced here partially (but also consistently) overlaps with the terminology used in Section 3.2 for graphs. b Being inspired by Petri net concepts, this terminology also partially (and again, consistently) overlaps with the terminology introduced in Chapter 1 for Petri nets.

Remark 11.2 Paths and firing sequences In the lts model, a path corresponds to what has been called a firing sequence in the Petri net model. The paths of a labelled transition system 𝑇 𝑆 are also sometimes called the evolutions, or the behaviours, of 𝑇 𝑆. The reachability graph 𝑅𝐺 of a Petri net with initial marking 𝑀0 is a labelled transition system where the reachable markings [𝑀0 ⟩ are states, 𝑀0 is the initial state 𝑞 0 , and transitions 𝑇 are used as edge labels. This is true even if 𝑅𝐺 is infinite. The coverability graph and the coverability tree can also be viewed as labelled transition systems whose states are nodes labelled with generalised, rather than ordinary, markings. The converse direction is not so clear-cut. For a given lts, it is not always possible to identify a Petri net such that the latter’s firing sequences correspond to the former’s paths, let alone a unique net with this property.

Example 11.3 Edge labels and transition labels The labelled transition system 𝑇 𝑆1 = ({𝑞 0 , 𝑞 1 , 𝑞 2 , 𝑞 3 }, {(𝑞 0 , 𝑎, 𝑞 1 ), (𝑞 1 , 𝑐, 𝑞 2 ), (𝑞 2 , 𝑏, 𝑞 3 ), (𝑞 3 , 𝑐, 𝑞 0 ), }, {𝑎, 𝑏, 𝑐}, 𝑞0) is shown on the left-hand side of Figure 11.2. The Petri nets also shown in this figure are transition-labelled in the sense of Chapter 2. However,

11.2 Labelled transition systems

253

PN 1 and PN 1′ carry an injective labelling. If a net is labelled injectively, its transitions can – and will – be identified with their labels. In this sense, 𝑇 𝑆1 is (isomorphic to) the reachability graphs of both PN 1 and PN 1′ . By contrast, PN 1′′ is a non-injectively labelled Petri net: 𝑡1 and 𝑡 2 carry the same label 𝑐. Nevertheless, its reachability graph is isomorphic to 𝑇 𝑆1 , except for the edge labelling, and even fully isomorphic if the net’s transitions in the reachability graph are replaced by their labels.

Provision 11.4 Injective labelling of Petri net transitions In keeping with Section 11.1, non-injectively labelled Petri nets will not be considered anymore in this chapter. This rules out examples such as PN 1′′ in Figure 11.2. We will use small letters near the beginning of the alphabet (𝑎, 𝑏, etc.) in order to denote a Petri net’s transitions, i.e., the members of the set 𝑇, and also, to denote the labels of the edges of a labelled transition system. However, we will still allow non-plain nets. This allows PN 1′ as a Petri net whose firing sequences are precisely the paths of 𝑇 𝑆1 in Figure 11.2.

𝑞0

𝑐

PN 1′ :

PN 1 :

𝑇𝑆1 : 𝑞3

𝑐

𝑏 𝑎

𝑐

𝑏

𝑎

𝑏

𝑏 2

𝑞1

𝑐 𝑡2

2

2 𝑎

𝑎

PN 1′′ :

𝑐

2

𝑞2

𝑡1 𝑐

Fig. 11.2: A transition system 𝑇 𝑆1 and three Petri nets PN 1 , PN 1′ , PN 1′′ .

Definition 11.5 Parikh vectors, cycles, and label-disjointness A T-vector is a function Φ : 𝑇 → Z.a Its support is the set of indices corresponding to nonnull values, i.e. {𝑡 ∈ 𝑇 | Φ(𝑡) ≠ 0}. For a path 𝜎 ∈ 𝑇 ∗ , the Parikh vector of 𝜎 is a T-vector P (𝜎), defined inductively as follows: P (𝜀)

= 0 (the null vector)  (P (𝜎))(𝑡) + 1 if 𝑡 = 𝑎 ∈ 𝑇 (P (𝜎𝑎))(𝑡) = (P (𝜎))(𝑡) if 𝑡 ≠ 𝑎

Two finite sequences are Parikh-equivalent if they have the same Parikh 𝜎 vector. A path 𝑞 −→ 𝑞 ′ is called a cycle, or more precisely, a cycle at (or

254

11 Synthesis of Petri Nets from Labelled Transition Systems

around) state 𝑞, if 𝑞 = 𝑞 ′ . Two paths 𝜎, 𝜏 ∈ 𝑇 ∗ are called label-disjoint if their Parikh vectors have disjoint supports. a

As usual, a T-vector will be considered to be a column vector.

Example 11.6 Parikh vectors, paths, and cycles As an example, reconsider 𝑇 𝑆1 . The path 𝑎𝑐𝑏𝑐 emanating from state 𝑞 0 is written more precisely, exhibiting all intermediate states, as follows: 𝑎

𝑐

𝑏

𝑐

𝑞 0 −→ 𝑞 1 −→ 𝑞 2 −→ 𝑞 3 −→ 𝑞 0 This path has Parikh vector (1, 1, 2), meaning that it contains one 𝑎, one 𝑏, and two 𝑐’s. It also happens to be a simple, nontrivial cycle.

Definition 11.7 Properties, equivalences, and solvability 𝑇 𝑆 is finite if 𝑄 and 𝑇 (hence also →) are finite, and totally reachable if every state is reachable from 𝑞 0 . The language of 𝑇 𝑆 is the set 𝜎

𝐿(𝑇 𝑆) = {𝜎 ∈ 𝑇 ∗ | 𝑞 0 −→ } Two lts 𝑇 𝑆1 = (𝑄 1 , →1 , 𝑇, 𝑞 0 1 ) and 𝑇 𝑆2 = (𝑄 2 , →2 , 𝑇, 𝑞 0 2 ) are languageequivalent if 𝐿(𝑇 𝑆1 ) = 𝐿 (𝑇 𝑆2 ), and isomorphic (denoted by 𝑇 𝑆1 ≡ 𝑇 𝑆2 ) if there is a bijection 𝑓 : 𝑄 1 → 𝑄 2 with 𝑓 (𝑞 0 1 ) = 𝑞 0 2 and (𝑞, 𝑡, 𝑞 ′ ) ∈ →1 ⇐⇒ ( 𝑓 (𝑞), 𝑡, 𝑓 (𝑞 ′ )) ∈ →2 , for all 𝑞, 𝑞 ′ ∈ 𝑄 1 and 𝑡 ∈ 𝑇 A Petri net 𝑁 with transition set 𝑇 solves 𝑇 𝑆 if 𝑇 𝑆 is isomorphic to the reachability graph of 𝑁.

Example 11.8 Solvability and unsolvability In Figure 11.2, the nets PN 1 and PN 1′ solve 𝑇 𝑆1 . PN 1′′ does not solve 𝑇 𝑆1 , since it is non-injectively labelled and Definition 11.7 does not apply. Figure 11.3 depicts four labelled transition systems which are not solvable by any (unlabelled) Petri net. This can be seen for 𝑇 𝑆2 straight away by invoking Lemma 1.14 (long ago). The first part of this lemma states that firing is locally determined, that is, if 𝑎 is fired from some state, there cannot be two different states that could be reached by it. The unsolvability of 𝑇 𝑆3 , 𝑇 𝑆4 and 𝑇 𝑆5 is less obvious. The techniques developed in the next sections are designed to check solvability and to construct a Petri net solution if one exists. We limit ourselves to the finite case, which means that our transition systems will be finite and our Petri nets will be (finite and) bounded.

11.3 Regions

255

𝑞0 𝑇𝑆2 : 𝑎 𝑞1

𝑇𝑆3 :

𝑎 𝑞2

𝑞3 𝑎 𝑞0

𝑞4 𝑎

𝑎 𝑇𝑆4 : 𝑞0

𝑏 𝑞1 𝑏

𝑞2

𝑏

𝑞2 𝑎 𝑞1

𝑎 𝑇𝑆5 : 𝑞0

𝑎

𝑞2 𝑎 𝑞1

Fig. 11.3: Unsolvable transition systems.

11.3 Regions Throughout the remainder of this chapter, it is assumed that 𝑇 𝑆 = (𝑄, →, 𝑇, 𝑞 0 ) is a finite, totally reachable, labelled transition system. We are now embarking on the question whether there exists a bounded, singly live, Petri net whose reachability graph is isomorphic to 𝑇 𝑆. If some state of 𝑇 𝑆 is not reachable from its initial state 𝑞 0 , then there cannot be such a Petri net. If 𝑇 𝑆 is infinite, then there cannot be any bounded Petri net whose reachability graph is isomorphic to 𝑇 𝑆. This is why we require a priori that 𝑇 𝑆 is finite and totally reachable. It is completely clear what should be the transitions of a (singly live) Petri net whose reachability graph is isomorphic to 𝑇 𝑆. They should be (in one-to-one correspondence with) the edge labels of 𝑇 𝑆. But in a labelled transition system, there is no notion of a place. Hence we need to define structures which mimick the properties of Petri net places at the transition system level. Such structures are called regions. Regions are triples 𝜌 = (𝑅, 𝐵, 𝐹) with the following intended meaning: 𝑅 describes the number of tokens on the place 𝑝 𝜌 modelled by 𝜌 at some state of the given 𝑇 𝑆 (so that 𝑅(𝑞 0 ) will correspond to the initial marking 𝑀0 ( 𝑝 𝜌 )), while 𝐵 and 𝐹 correspond to B( 𝑝 𝜌 , .) and F( 𝑝 𝜌 , .) in a Petri net, assigning backward and forward weights to edge labels (and hence transitions) 𝑡 ∈ 𝑇 with regard to the place modelled by 𝜌. Definition 11.9 Regions of an lts, and the effect function Let 𝑇 𝑆 = (𝑄, →, 𝑇, 𝑞 0 ) be an lts. A triple 𝜌 = (𝑅, 𝐵, 𝐹)



(𝑄 → N, 𝑇 → N, 𝑇 → N)

is a region of 𝑇 𝑆 if it satisfies (R1) and (R2) below. 𝑡

(R1): for all 𝑟 ∈ 𝑄 and 𝑡 ∈ 𝑇, 𝑟 −→ implies 𝑅(𝑟) ≥ 𝐵(𝑡) 𝑡

(R2): for all 𝑟, 𝑟 ′ ∈ 𝑄 and 𝑡 ∈ 𝑇, 𝑟 −→ 𝑟 ′ implies 𝑅(𝑟 ′ ) = 𝑅(𝑟) − 𝐵(𝑡) + 𝐹 (𝑡) The derived function 𝐸 : 𝑇 → Z defined by 𝐸 (𝑡) = 𝐹 (𝑡) − 𝐵(𝑡) is called the effect of 𝑡 on the region 𝜌. It can be extended to the effect of a sequence 𝜎 ∈ 𝑇 ∗ by defining, inductively, 𝐸 (𝜀) = 0 and 𝐸 (𝜎𝑡) = 𝐸 (𝜎) + 𝐸 (𝑡). (R1) mimicks the enabling rule of Petri nets. (R2) corresponds to the firing rule of Petri nets. If a net 𝑁 with incidence matrix 𝐶 is a solution, then the effect 𝐸 (𝜎) of a sequence 𝜎 corresponds to what has previously been defined as 𝐶·P (𝜎).

256

11 Synthesis of Petri Nets from Labelled Transition Systems

The next lemma repeats Lemma 1.17 at the transition system level. Lemma 11.10 The effects of Parikh-equivalent sequences are equal Let 𝜎, 𝜎 ′ ∈ 𝑇 ∗ be two sequences. If P (𝜎) = P (𝜎 ′ ), then 𝐸 (𝜎) = 𝐸 (𝜎 ′ ). Proof: By induction on the length of 𝜎. If 𝜎 = 𝜀, then by P (𝜎) = P (𝜎 ′ ), also 𝜎 ′ = 𝜀, and thus, 𝐸 (𝜎) = 0 = 𝐸 (𝜎 ′ ). If 𝜎 = 𝜏𝑡, then by P (𝜎) = P (𝜎 ′ ), 𝑡 occurs in 𝜎 ′ . Thus, 𝜎 ′ = 𝜏1′ 𝑡𝜏2′ with P (𝜏) = P (𝜏1′ 𝜏2′ ) because of P (𝜎) = P (𝜎 ′ ). By the definition of the effect function, 𝐸 (𝜎) = 𝐸 (𝜏) + 𝐸 (𝑡) 𝐸 (𝜎 ′ ) = 𝐸 (𝜏1′ ) + 𝐸 (𝑡) + 𝐸 (𝜏2′ ) = 𝐸 (𝜏1′ 𝜏2′ ) + 𝐸 (𝑡) The claim follows by induction hypothesis, using P (𝜏) = P (𝜏1′ 𝜏2′ ).

11.10

Example 11.11 The transition system 𝑇 𝑆1 in Figures 11.2 and 11.4 Consider the triple 𝜌1 = (𝑅1 , 𝐵1 , 𝐹1 ) with  𝑞 0 ↦→ 1         𝑞 1 ↦→ 0   𝑎 ↦→ 1   𝑎 ↦→ 0  𝑅1 : 𝐵1 : 𝑏 ↦→ 0 𝐹1 : 𝑏 ↦→ 1 𝑞 2 ↦→ 0      𝑐 ↦→ 0  𝑐 ↦→ 0   𝑞 3 ↦→ 1    To see that 𝜌1 is a region of 𝑇 𝑆1 , we need to check the two region conditions for all edges; that is, eight conditions need to be verified. We give only two of these checks; all others can be done in the same way: 𝑎

(R1) for 𝑞 0 −→ :

𝑅1 (𝑞 0 ) ≥ 𝐵1 (𝑎) (since 1 ≥ 1)

𝑏

(R2) for 𝑞 2 −→ 𝑞 3 : 𝑅1 (𝑞 3 ) = 𝑅1 (𝑞 2 ) − 𝐵1 (𝑏) + 𝐹1 (𝑏) (since 1 = 0 − 0 + 1) Because the range of all three functions is the set N of natural numbers, it is possible to write the functions 𝑅, 𝐵, 𝐹 as multisets. This allows a much shorter representation of regions. Region 𝜌1 can thus be written as 𝜌1 = ({𝑞 0 , 𝑞 3 }, {𝑎}, {𝑏}). Informally, 𝜌1 corresponds to the topmost place of PN 1 in Figure 11.2. It may be helpful to visualise a region as a partitioning of the state set into states of equal 𝑅-values. Example 11.12 Understanding regions (Example 11.11 continued) For instance, see Figure 11.4. The states 𝑞 with 𝑅1 (𝑞)=1 are drawn in red, the states 𝑞 with 𝑅1 (𝑞)=0 are drawn in blue. Note that all 𝑎-arrows (there is only one) go from red to blue; all 𝑏-arrows (there is only one) go from blue to red; and all 𝑐-arrows (there are two) stay within red or blue. This is a general observation, which is due to Property (R2).

11.4 Separation properties, and the region theorem 𝑞0

𝑐

𝑎 𝑞1

𝑞3

257 𝑅1 = 1

𝑏 𝑐

𝑞2

𝑅1 = 0

Fig. 11.4: The transition system 𝑇 𝑆1 and a region 𝜌1 = ({𝑞 0 , 𝑞 3 }, {𝑎}, {𝑏}).

11.4 Separation properties, and the region theorem We wish to construct a Petri net 𝑁 from a transition system 𝑇 𝑆. The set of transitions of 𝑁, 𝑇, is already determined: it is the set of edge labels of 𝑇 𝑆. If no places are added, any sequence can be fired. So, we must add places in such a way that only the enabled sequences of 𝑇 𝑆 remain firable in 𝑁. The next definition identifies two circumstances in which it is necessary to add a place of 𝑁, and hence, to postulate the existence of a suitable region in 𝑇 𝑆 in order for it to be solvable. The main result will then show that no other cases arise. Definition 11.13 State and event/state separation properties An lts 𝑇 𝑆 = (𝑄, →, 𝑇, 𝑞 0 ) satisfies SSP (state separation property) iff ∀𝑞, 𝑞 ′ ∈ [𝑞 0⟩ : 𝑞 ≠ 𝑞 ′ ⇒ ∃ region 𝜌 = (𝑅, 𝐵, 𝐹) with 𝑅(𝑞) ≠ 𝑅(𝑞 ′ ) (11.1) meaning that if all regions agree on two states, then the latter are equal. 𝑇 𝑆 satisfies ESSP (event/state separation property) iff 𝑎

∀𝑞∈[𝑞 0⟩∀𝑎∈𝑇 : ¬(𝑞 −→ ) ⇒ ∃ region 𝜌 = (𝑅, 𝐵, 𝐹) with 𝑅(𝑞) ct-pn.apt java -jar apt.jar draw ct-pn.apt | dot -Tpdf > ct-apt.pdf The first command creates a Petri net file ct-pn.apt in apt format out of the input file ct.apt. The second command invokes GraphViz in order to create a drawing of this net in .pdf format. Since the Petri net looks terrible,67 it is not converted into tikz at this point, but included verbatim, so to speak, in Figure 11.6. It is more or less by chance that this particular coffee machine is solvable. Other lts examples may be culled from the literature which do not have solutions. Let us have a look at one of them (Figure 11.7). This lts is a high-level model of a receive/transmit protocol, and it has no Petri net solution in the sense of Definition 11.7. When given to apt as an input, apt complains that some SSP and ESSP problems are unsolvable. In fact, apt issues a full list of unsolvable problems, pointing out pairs of states that cannot be distinguished, as well as pairs state/event whose ESSP cannot be resolved. Looking at Figure 11.7, it is easy to detect a generalised cycle having two command.1 (or command.2) edges, leading from state 1 via state −1 to state 2 (or vice versa), so that apt is quite right in thinking that, in line with Theorem 11.30, states 1 and 2 cannot be distinguished. In such a case, region theory can still be useful. It is always possible to find an “overapproximation” of a given lts which has a Petri net implementation. This is realised in apt by a module called overapproximate_synthesize

which accepts any (finite) labelled transition system as an input, and which outputs the least upper approximation. The theory behind this will be explained in the next section. 3 Which can be obtained here: https://github.com/CvO-Theory/apt/blob/master/doc/ obtaining.md. 4 The naming abbreviates, but corresponds to, the naming used in Figure 11.1. 5 Possibly after editing away the first line of file ct-pn.apt between the two commands. Of course, java has to be installed. 6 It has 9 transitions as required, but also, a plethora of places. 7 However, recall, from Section 11.1, that one of the motivations of Petri net synthesis is to be able to “implement” a Petri net, say, as a piece of hardware or software, and to do so, preferably, automatically. For this purpose, aesthetic aspects play a secondary role.

268

11 Synthesis of Petri Nets from Labelled Transition Systems

p0 1 1 10 10

1 p1

1

1

0 1

1

cc

1

cc p2 1 1

1

1

1

5

1

1 1 1

5 1

1

1

1

1

1

p4 0

ct

1

1

1 ct

1

p7 1

1

1

1

1 1

1 1

1

1

p6 1

pt pt 11 1 p9 1 0 1 pc pc

rx

1

1 1

1

1

p3

rx

1

1 1

x x

1

p5 r5

1

0 11 r5 1

p8 1

Fig. 11.6: A Petri net of the coffee machine shown in Figure 11.1, straight from the horse’s mouth.

11.7 Homomorphisms and overapproximations

269

Fig. 11.7: A RX&TX (receive and transmit) specification. Arrows are meant to point clockwise. This lts violates both SSP and ESSP.

11.7 Homomorphisms and overapproximations The idea behind the next definition is to weaken the notion of lts isomorphism of Definition 11.7 by considering only “half” of it. This achieves language inclusion and synthesisability in a “tight” way (in the sense that no better approximation is possible while retaining synthesisability). Definition 11.38 Homomorphic embedding Let 𝑇 𝑆1 = (𝑄 1 , →1 , 𝑇, 𝑞 0 1 ) and 𝑇 𝑆2 = (𝑄 2 , →2 , 𝑇, 𝑞 0 2 ) be two labelled transition systems with the same set of labels 𝑇. A function 𝑓 : 𝑄 1 → 𝑄 2 with 𝑓 (𝑞 0 1 ) = 𝑞 0 2 and (𝑞, 𝑡, 𝑞 ′ ) ∈ →1 ⇒ ( 𝑓 (𝑞), 𝑡, 𝑓 (𝑞 ′ )) ∈ →2 , for all 𝑞, 𝑞 ′ ∈ 𝑄 1 and 𝑡 ∈ 𝑇 is called a homomorphism from 𝑇 𝑆1 to 𝑇 𝑆2 . If there is a homomorphism 𝑓 from 𝑇 𝑆1 to 𝑇 𝑆2 , this is denoted, for short, as 𝑇 𝑆1 ⊑ 𝑇 𝑆2 (via 𝑓 ). The relation ⊑ has some nice properties. First of all, of course, it is reflexive and transitive. Next, it is easy to prove that if 𝑇 𝑆1 ⊑ 𝑇 𝑆2 , then 𝐿 (𝑇 𝑆1 ) ⊆ 𝐿 (𝑇 𝑆2 ). Most important, in the present context, is that every lts has a unique minimal Petri net overapproximation. To explain this, call a Petri net 𝑁 an overapproximation of an lts 𝑇 𝑆 if 𝑇 𝑆 ⊑ 𝑅𝐺 (𝑁). Then 𝑁 is called a minimal overapproximation of 𝑇 𝑆 if 𝑅𝐺 (𝑁) ⊑ 𝑅𝐺 (𝑁 ′ ) for every other overapproximation 𝑁 ′ of 𝑇 𝑆. One of these

270

11 Synthesis of Petri Nets from Labelled Transition Systems

minimal overapproximations8 is computed by apt’s module of the corresponding name. For example, we may invoke apt’s overapproximate_synthesize module with the transition system shown in Figure 11.7 as an input. Out comes a text file describing a Petri net (having the minimal overapproximation as a reachability graph) which can, in turn, be given to apt’s coverability_graph module which computes the coverability (and in the bounded case, the reachability) graph of a marked net. This graph is finally shown in Figure 11.8 and should be compared with the lts in Figure 11.7. Theory tells us that the language of the original lts (Figure 11.7) is a subset of the language of the new lts so constructed (Figure 11.8). But in this special case, the two transition systems are even language-equivalent, the latter having one state less than the former. We might want to check the original publication as to whether one could live with this kind of optimisation. If so, Petri net synthesis has been helpful in finding a better (in terms of size) specification. command.1,command.2 0

12

command.1,command.2

−1

ack.1,ack.2,cancel.1,cancel.2

Fig. 11.8: A smaller RX&TX specification. This lts satisfies SSP and ESSP, and there is an unlabelled Petri net with an isomorphic reachability graph.

11.8 Bibliographical remarks and further reading Labelled transition systems are often attributed to Robert M. Keller [Kel76],9 while their unlabelled versions had already found uses in areas such as modal logics.10 They are also the subject of André Arnold’s monograph [Arn94]. Implementing Petri nets on asynchronous hardware is the topic of several cooperative research projects. Companies such as Dialog Semiconductor PLC are involved in this research. Compare, for instance, [Sok+20] (Danil Sokolov, Victor Khomenko, Andrey Mokhov, Vladimir Dubikhin, David Lloyd, and Alex Yakovlev). This group uses the Petri net semantics of concurrent programs, such as described in Chapter 20 below. The coffee machine of Section 11.1 is located in https://www.site.uottawa.ca/ ~bochmann/SEG-2106-2506/Notes/M1-2-StateMachines/example-coffeeMachine/index. 8 The Petri net is still not unique, but its reachability graph is. 9 See https://en.wikipedia.org/wiki/Transition_system. 10 Compare, for instance, https://en.wikipedia.org/wiki/Kripke_structure_(model_ checking).

11.8 Bibliographical remarks and further reading

271

html (by Gregor von Bochmann).The RX&TX specification in Figure 11.7 has been

culled from [MKG97] (Jeff Magee, Jeff Kramer, and Dimitra Giannakopoulou). Region theory, including Definition 11.13 and Theorem 11.18, is presented in [BD96; BBD15] (by Eric Badouel, Philippe Darondeau and Luca Bernardinello), and is originally due to Andrzej Ehrenfeucht and Grzegorz Rozenberg [ER90]. The theory of infinite synthesis (yielding unbounded Petri nets from infinite labelled transition systems) has been pursued by Philippe Darondeau [Dar03]. The theory of lts homomorphisms and (minimal) overapproximations has been explored in [SW17] (Uli Schlachter, Harro Wimmel) and in [Sch18a; Sch18b] (Uli Schlachter). What has been called an lts homomorphism in Definition 11.38 is a special case of an lts morphism as decribed in [NSW93] (Mogens Nielsen, Vladimiro Sassone, and Glynn Winskel). Exercises 11.4 to 11.6 serve to point out a few relationships between the existence of a morphism and various notions of simulation, such as bisimulation. These exercises seem to suggest that bisimulation is a weak notion of equivalence, but this is not necessarily true in the context of many other concepts for comparing the “power” of transition systems that can be found in the literature. Efficient and intricate versions of the algorithm described in the proof of Theorem 11.18 have been implemented and described by Uli Schlachter in [Sch16; Sch18a], and are publicly available in the tool apt (acronym for “analysis of Petri nets and transition systems”, an early description of which can be found in [BS15], by Eike Best and Uli Schlachter). The apt suite implements the basic linear-algebraic algorithm of [BBD15] and uses a fast SMT solving package [CHN13] (Jürgen Christ, Joachim Hoenicke, Alexander Nutz) in order to search for regions that solve the associated separation problems. The inequality systems dealt with by the algorithm can be very numerous as well as large. More precisely, 𝑂 (𝑛2 ) plus 𝑂 (𝑛 · 𝑚) systems are needed, where 𝑛 is the number of states and 𝑚 is the number of labels, the former for SSPs and the latter for ESSPs. In practice, 𝑛 can be expected to be much larger than 𝑚, and then, the algorithm is likely to be of the order of 𝑂 (𝑛7.5 ) in the worst case.11 The fact that adding non-homogeneous constraints and/or an optimisation goal may lead to NP-complete syntheses may be found in various papers like [BBD97] (Eric Badouel, Luca Bernardinello and Philippe Darondeau) or [Tre19; Tre21] (Ronny Tredup). The region-finding steps of the synthesis algorithm can not only be implemented by linear inequality solvers, but also by an approach constructing a region (if one exists) in a stepwise manner [Cor+97] (J. Cortadella, M. Kishinevsky, A. Kondratyev, L. Lavagno, and A. Yakovlev). This has given rise to the tool petrify.12 In presenting the characterisation of state and event/state separation problems in Section 11.5, we followed [Bes+20] (by Eike Best, Raymond Devillers, Evgeny 11 Karmarkar’s method [Kar84] (by Narendra Karmarkar) involves a time complexity of 𝑂 (𝑛3.5 ·𝐿 2 · log 𝐿· log log 𝐿) where 𝐿 = 𝑛·𝑚· log 𝑐 with 𝑐 being the largest coefficient. If we forget about the label set and the coefficients, this reduces to 𝑂 (𝑛5.5 · log 𝑛· log log 𝑛), so we might write 𝑂 (𝑛 (5.5+𝜖 ) ). This is for solving a single inequality system. Since we have up to ( (𝑛−1) ·𝑛)/2 SSPs to solve (plus a few ESSPs), we arrive at 𝑂 (𝑛 (7.5+𝜖 ) ). 12 See https://www.cs.upc.edu/~jordicf/petrify/.

272

11 Synthesis of Petri Nets from Labelled Transition Systems

Erofeev and Harro Wimmel). The proofs of the main theorems of that section can be found in [BDS17] (by Eike Best, Raymond Devillers and Uli Schlachter, for the SSP theorem 11.30) and in [SW18] (by Uli Schlachter and Harro Wimmel, for the ESSP theorem 11.34). These g-cycle-based theorems stipulate computing general cycles and convex hulls in transition systems, as opposed to the region-based algorithm which involves systems of linear inequalities. Computing general cycles can be helped by pre-computing a spanning tree and a finite cycle base (see [Sch18b] by Uli Schlachter). Computing convex hulls can be quite involved. By analogy to Euclidean geometry, the number of labels corresponds to the dimension of the space considered, while places correspond to the facets which belong to the border of a convex hull (see [Ero18] by Evgeny Erofeev). Hence, if the border of a convex hull is found, the places can be constructed from it directly. The complexity of computing a convex hull is Θ(𝑛 ⌊𝑑/2⌋ ) [Ber+08] (Mark de Berg, Otfried Cheong, Marc J. van Kreveld, and Mark H. Overmars), where 𝑛 is the number of elements and 𝑑 is the number of dimensions. Transferred to synthesis by analogy, for a single ESSP, 𝑛 would be the number of states in 𝑆 𝑎 , at which a given label 𝑎 is enabled, and 𝑑 = |𝑇 | would be the total number of labels.

11.9 Exercises Exercise 11.1 Regions, SSP, and ESSP Consider the labelled transition system 𝑇 𝑆 with state set 𝑄 shown in Figure 11.9. (i) Let the partial function 𝑅0 : 𝑄 → N be defined as 𝑅0 (𝑞 0 ) = 1, 𝑅0 (𝑞 1 ) = 0, 𝑅0 (𝑞 2 ) = 2, 𝑅0 (𝑞 3 ) = 0 Extend 𝑅0 to a function 𝑅 : 𝑄 → N and define two functions 𝐵, 𝐹 : 𝑇 → N such that that the triple (𝑅, 𝐵, 𝐹) is a region of 𝑇 𝑆. Hint: 𝑅 is uniquely determined by 𝑅0 , and so are the effects of the labels 𝑎, 𝑏, 𝑐. However, 𝐵 and 𝐹 are not unique;13 try to find minimal such functions. (ii) Do the same with 𝑅0 (𝑞 0 ) = 1, 𝑅0 (𝑞 1 ) = 2, 𝑅0 (𝑞 2 ) = 0, 𝑅0 (𝑞 3 ) = 1 (iii) ... and again the same with 𝑅0 (𝑞 0 ) = 1, 𝑅0 (𝑞 1 ) = 1, 𝑅0 (𝑞 2 ) = 0, 𝑅0 (𝑞 3 ) = 2 (iv) Call the three regions from (i)–(iii), respectively, 𝜌1 , 𝜌2 , and 𝜌3 . Show that 𝑇 𝑆 satisfies both SSP and ESSP with this set of regions.

13 A fact that is exploited in Lemmata 11.20 and 11.21.

11.9 Exercises

273 𝑞0 𝑎

𝑏 𝑐

𝑞1

𝑞2 𝑐

𝑞3

𝑎

𝑐 𝑏

𝑞4

𝑞5 𝑏

𝑎

𝑎

𝑞6

𝑏 𝑞7

𝑞8

𝑐 𝑞9 𝑎 𝑏 𝑞10

𝑞11

Fig. 11.9: A labelled transition system 𝑇 𝑆 with state set 𝑄 = {𝑞 0 , . . . , 𝑞 11 }.

Exercise 11.2 Characterisations of SSP and ESSP Reconsider the labelled transition system 𝑇 𝑆 with state set 𝑄 shown in Figure 11.9. (i) Determine a spanning tree of 𝑇 𝑆 and specify all Parikh vectors P (𝜋𝑞 ) of the states 𝑞. (ii) Using Theorem 11.30, show that the state separation problem SSP(𝑞 0 , 𝑞 1 ) is solvable. (iii) Using Theorem 11.34, show that the event/state separation problem ESSP(𝑞 1 , 𝑎) is solvable. Exercise 11.3 ESSP Consider the lts shown in Figure 11.10. Show in two ways that ESSP(𝑞 2 , 𝑎) is solvable: (i) by exhibiting a solving region (𝑅, 𝐵, 𝐹); (ii) by appealing to Theorem 11.34.

𝑏

𝑞0

𝑐

𝑞1

𝑞2 𝑎

𝑏

Fig. 11.10: A labelled transition system with state set 𝑄 = {𝑞 0 , 𝑞 1 , 𝑞 2 }.

Exercise 11.4 Homomorphisms and simulations The notion of simulation from Definition 8.4 and the notion of lts homomorphism from Definition 11.38 are somewhat similar in spirit,14 but not directly comparable. The former applies to pairs of Petri nets (and thus, only indirectly to transition 14 In the sense that the second object may have “more behaviour” than the first.

274

11 Synthesis of Petri Nets from Labelled Transition Systems

systems via reachability graphs) and does not presuppose exactly the same set of transitions (since there could be any number of additional “silent transitions” in one of the nets). By contrast, the latter applies to pairs of labelled transition systems directly and presupposes the same label set in both of them. (1) Give a definition transporting the simulation notion to pairs of lts and, at the same time, restricting it to transition systems without silent transitions having exactly the same set of labels. (2) Compare this modified and restricted notion of simulation with the notion of homomorphism. (a) Are there examples of a homomorphism which is not a simulation in the sense of (1)? (b) What about the existence of simulations in the sense of (1) which are not also homomorphisms? Exercise 11.5 Homomorphisms and bisimulation-related simulations Another related notion is that of simulation which stems from bisimulation, a concept that is widely used in concurrency theory and has only been mentioned in passing in Section 8.7. Bisimulation can be defined by means of the following definition of simulation:15 Let two labelled transition systems 𝑇 𝑆1 = (𝑄 1 , →1 , 𝑇, 𝑞 0 1 ) and 𝑇 𝑆2 = (𝑄 2 , →2 , 𝑇, 𝑞 0 2 ) with the same set of labels 𝑇 be given. 𝑇 𝑆1 simulates 𝑇 𝑆2 if there is a relation 𝜌 ⊆ (𝑄 1 × 𝑄 2 ) such that (i) (𝑞 0 1 , 𝑞 0 2 ) ∈ 𝜌 𝑎

(ii) if (𝑞 1 , 𝑞 2 ) ∈ 𝜌 with 𝑞 1 ∈ 𝑄 1 , 𝑞 2 ∈ 𝑄 2 and if 𝑞 1 −→ 𝑞 1′ for 𝑞 1′ ∈ 𝑄 1 𝑎 then ∃𝑞 2′ ∈ 𝑄 2 with 𝑞 2 −→ 𝑞 2′ and with (𝑞 1′ , 𝑞 2′ ) ∈ 𝜌 Show that this notion is strictly weaker than the existence of a homomorphism, i.e.: (a) Show that if there is a homomorphism from 𝑇 𝑆1 to 𝑇 𝑆2 , then 𝑇 𝑆1 simulates 𝑇 𝑆2 . (b) Exhibit an example of two transition systems 𝑇 𝑆1 , 𝑇 𝑆2 such that 𝑇 𝑆1 simulates 𝑇 𝑆2 but there is no homomorphism from 𝑇 𝑆1 to 𝑇 𝑆2 . Bisimulation is the stronger, symmetric version of this kind of simulation: 𝑇 𝑆1 and 𝑇 𝑆2 are bisimilar if there is a relation 𝜌 as above such that both 𝜌 and 𝜌 −1 are simulations. Exercise 11.6 Simulations and bisimulation-related simulations Finally, do the same for the notion of simulation defined in Solution 11.4(1) and the notion of simulation defined in Exercise 11.5.

15 See also https://en.wikipedia.org/wiki/Simulation_(computer_science).


Solution 11.1 (Exercise 11.1)
(i): Using 𝑅0, the effects of 𝑎, 𝑏, 𝑐 are seen to be, respectively, −1, −1, +1. This allows one to compute the remaining 𝑅-values as follows:
𝑅(𝑞4)=1, 𝑅(𝑞5)=1, 𝑅(𝑞6)=0, 𝑅(𝑞7)=0, 𝑅(𝑞8)=0, 𝑅(𝑞9)=1, 𝑅(𝑞10)=0, 𝑅(𝑞11)=0.
The functions
  𝐵: 𝑎 ↦→ 1, 𝑏 ↦→ 1, 𝑐 ↦→ 0   and   𝐹: 𝑎 ↦→ 0, 𝑏 ↦→ 0, 𝑐 ↦→ 1
complete the job: 𝜌1 = (𝑅, 𝐵, 𝐹) is a region of 𝑇𝑆.
(ii) and (iii): Analogous, yielding 𝜌2 and 𝜌3.
(iv): For instance, 𝑞0 ≠ 𝑞1; but 𝜌1 distinguishes these two states by having different values in them. Also, ¬(𝑞1 −𝑎→); again, 𝜌1 “witnesses” that 𝑎 is not enabled in state 𝑞1. The remaining state separation and event/state separation problems can be solved by a suitable one of the regions 𝜌1, 𝜌2, 𝜌3 in a similar way.16

16 Congratulations, you just synthesised the marked Petri net shown in the middle of Figure 4.2. The three regions 𝜌1, 𝜌2, 𝜌3 correspond to the places 𝑠1, 𝑠2, 𝑠3 shown there.

Solution 11.2 (Exercise 11.2)
(i): See Figure 11.11. We have
P(𝜋𝑞0) = (0 0 0)T, P(𝜋𝑞1) = (1 0 0)T, P(𝜋𝑞2) = (0 0 1)T, …, P(𝜋𝑞10) = (2 1 2)T, P(𝜋𝑞11) = (1 2 2)T.
(ii): First, we may observe that all g-cycles of 𝑇𝑆 have Parikh vector (0 0 0)T. By (i), the Parikh vectors of 𝑞0 and 𝑞1 are (0 0 0)T and (1 0 0)T, respectively. Their difference is (1 0 0)T (or (−1 0 0)T). Hence, no rational number 𝑝 ∈ Q can be found such that 𝑝·(1 0 0)T (or 𝑝·(−1 0 0)T) is the Parikh vector of a g-cycle. This implies, by definition, that ¬(𝑞0  𝑞1). Therefore, by Theorem 11.30, SSP(𝑞0, 𝑞1) is solvable. In fact, arguing like this for all states, we see that  = id.
(iii): The set of states enabling 𝑎 is 𝑆𝑎 = {𝑞0, 𝑞2, 𝑞4, 𝑞5, 𝑞9}. Therefore, any state (𝑥 𝑦 𝑧)T in the convex hull ch(𝑆𝑎) has to satisfy, by definition,

  (𝑥 𝑦 𝑧)T = 𝑘1·(0 0 0)T + 𝑘2·(0 0 1)T + 𝑘3·(1 0 1)T + 𝑘4·(0 1 1)T + 𝑘5·(1 1 2)T   with 𝑘1+𝑘2+𝑘3+𝑘4+𝑘5 = 1
and all 𝑘𝑖 are rational numbers with 0 ≤ 𝑘𝑖 ≤ 1. Of course, all states of 𝑆𝑎 are in ch(𝑆𝑎) (using the five unit vectors for the 𝑘𝑖). But what other states are in ch(𝑆𝑎)? For example, 𝑞1 is not in ch(𝑆𝑎), since 𝑥 = 1, 𝑦 = 0, 𝑧 = 0 yields
  (1 0 0)T = 𝑘1·(0 0 0)T + 𝑘2·(0 0 1)T + 𝑘3·(1 0 1)T + 𝑘4·(0 1 1)T + 𝑘5·(1 1 2)T   with 𝑘1+𝑘2+𝑘3+𝑘4+𝑘5 = 1
and this is not solvable by rational numbers between 0 and 1: the second row forces 𝑘4 + 𝑘5 = 0, hence 𝑘4 = 𝑘5 = 0; the first row then gives 𝑘3 = 1, which is incompatible with the third row 𝑘2 + 𝑘3 = 0 (as 𝑘2 ≥ 0). Also, 𝑞1 is not  to any state in ch(𝑆𝑎); we can either see this directly, or use Part (ii) which states that  = id. Using Theorem 11.34, it follows that ESSP(𝑞1, 𝑎) is solvable.
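The small feasibility argument above can also be checked mechanically. The following Python sketch is an illustration only (not part of the book's development): it assumes the five Parikh vectors of 𝑆𝑎 read off above and uses scipy's linear-programming routine to decide convex-hull membership.

```python
# Numerical cross-check of Solution 11.2(iii); the vertex list is the assumed
# set of Parikh vectors of S_a = {q0, q2, q4, q5, q9} computed above.
from scipy.optimize import linprog

S_a = [(0, 0, 0), (0, 0, 1), (1, 0, 1), (0, 1, 1), (1, 1, 2)]

def in_convex_hull(point, vertices):
    """Is `point` a convex combination of `vertices` (coefficients in [0,1] summing to 1)?"""
    n = len(vertices)
    # Equality constraints: sum_i k_i * v_i = point, and sum_i k_i = 1.
    A_eq = [[v[d] for v in vertices] for d in range(len(point))] + [[1.0] * n]
    b_eq = list(point) + [1.0]
    res = linprog(c=[0.0] * n, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0.0, 1.0)] * n, method="highs")
    return res.success

print(in_convex_hull((0, 0, 1), S_a))   # True: q2 itself lies in ch(S_a)
print(in_convex_hull((1, 0, 0), S_a))   # False: q1 is not in ch(S_a)
```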


Fig. 11.11: Solution of Exercise 11.2(i). A spanning tree of the transition system 𝑇 𝑆 shown in Figure 11.9, drawn in blue.

Solution 11.3 (Exercise 11.3)
(i) The region (𝑅, 𝐵, 𝐹) with 𝑅(𝑞0) = 1, 𝑅(𝑞1) = 2, 𝑅(𝑞2) = 0, 𝐵(𝑎) = 1, 𝐹(𝑎) = 2, 𝐵(𝑏) = 1, 𝐹(𝑏) = 0, 𝐵(𝑐) = 0, 𝐹(𝑐) = 1 solves ESSP(𝑞2, 𝑎).
(ii) 𝑆𝑎 = {𝑞0} and ch(𝑆𝑎) = {𝑞0}. No other state is  to 𝑞0 (in fact,  = id). In particular, 𝑞2  𝑞0 now entails that ESSP(𝑞2, 𝑎) is solvable, by Theorem 11.34.

Solution 11.4 (Exercise 11.4)
(1): Since the label sets are the same, the injection 𝑓 from Definition 8.4 is superfluous. Thus, we arrive at the following. Let 𝑇𝑆 = (𝑄, →, 𝑇, 𝑞0) and 𝑇𝑆′ = (𝑄′, →′, 𝑇, 𝑞0′) be two labelled transition systems with the same label set 𝑇. We say that 𝑇𝑆′ simulates 𝑇𝑆 if there is a surjective function 𝛽: 𝑄′ → 𝑄 such that17
(i) 𝑞0 = 𝛽(𝑞0′);
(ii) if 𝑞1 = 𝛽(𝑞1′) with 𝑞1 ∈ [𝑞0⟩ and 𝑞1′ ∈ [𝑞0′⟩, then
 (a) whenever 𝑞1 −𝑡→ 𝑞2 with 𝑡 ∈ 𝑇, then there exists 𝑞2′ ∈ 𝛽−1(𝑞2) with 𝑞1′ −𝑡→ 𝑞2′;
 (b) whenever 𝑞1′ −𝑡→ 𝑞2′ with 𝑡 ∈ 𝑇, then 𝑞1 −𝑡→ 𝛽(𝑞2′).
Note that this is an almost literal transcription of Definition 8.4, applied to labelled transition systems rather than to Petri nets, by means of replacing 𝑀 by 𝑞 and omitting all that has become superfluous by the identity of the label sets of 𝑇𝑆 and 𝑇𝑆′.
(2): See Figure 11.12. 𝑇𝑆1 can be embedded homomorphically into 𝑇𝑆1′, but 𝑇𝑆1′ does not simulate 𝑇𝑆1. There is no homomorphism from 𝑇𝑆2 to 𝑇𝑆2′, but 𝑇𝑆2′ simulates 𝑇𝑆2.18 Thus, homomorphic embedding and simulation (extended to lts’s) are two incomparable notions. Also, observe that neither of these two pairs allows an isomorphism, since the numbers of states of 𝑇𝑆𝑖 and 𝑇𝑆𝑖′ differ.

17 The idea is that all reachable states of 𝑁 are simulated by some, but possibly more than one, reachable state(s) of 𝑁′.


Fig. 11.12: Solution of Exercise 11.4(2). Left-hand side: an embedding homomorphism 𝑓 from 𝑇𝑆1 to 𝑇𝑆1′, drawn in green. There is no homomorphism from 𝑇𝑆1′ to 𝑇𝑆1. Right-hand side: a simulation surjection 𝛽, drawn in blue. Thus, 𝑇𝑆2′ simulates 𝑇𝑆2 in the sense of Solution 11.4(1). The converse is not true.

18 Note, however, that a situation such as exemplified by 𝑇𝑆2, 𝑇𝑆2′ cannot occur in the original definition, which applies only to Petri nets, since 𝑎 is not deterministic in 𝑇𝑆2′.

Solution 11.5 (Exercise 11.5)
(a): If there is a homomorphism 𝑓 from 𝑇𝑆1 to 𝑇𝑆2, then 𝜌 = 𝑓 (interpreted as a relation between states) already proves that 𝑇𝑆1 simulates 𝑇𝑆2.
(b): See Figure 11.13. There is no homomorphism from 𝑇𝑆1 to 𝑇𝑆2 because after mapping the state in the middle, either 𝑐 or 𝑑 is “missed out” (recall that a homomorphism is a function from 𝑄1 to 𝑄2, not a general relation between 𝑄1 and 𝑄2).

Fig. 11.13: Solution of Exercise 11.5(2). A simulation 𝜌 in the sense of this exercise is drawn in blue.

Solution 11.6 (Exercise 11.6)
To solve this exercise, it is advantageous to rewrite (and turn around) the definition in Exercise 11.4 into a form that is more directly comparable with the definition in Exercise 11.5: 𝑇𝑆1 = (𝑄1, →1, 𝑇, 𝑞01) simulates 𝑇𝑆2 = (𝑄2, →2, 𝑇, 𝑞02) if there is a surjection 𝛽: 𝑄1 → 𝑄2 such that
(i) 𝑞02 = 𝛽(𝑞01);
(ii) if 𝑞2 = 𝛽(𝑞1) with 𝑞1 ∈ 𝑄1 and 𝑞2 ∈ 𝑄2, then
 (a) if 𝑞2 −𝑎→ 𝑞2′ in 𝑇𝑆2, then there exists 𝑞1′ ∈ 𝛽−1(𝑞2′) with 𝑞1 −𝑎→ 𝑞1′ in 𝑇𝑆1;
 (b) if 𝑞1 −𝑎→ 𝑞1′ in 𝑇𝑆1, then 𝑞2 −𝑎→ 𝛽(𝑞1′) in 𝑇𝑆2.
With 𝜌 = 𝛽, it is immediate that simulation in the sense of Solution 11.4(1) implies simulation in the sense of Exercise 11.5. Conversely, 𝑇𝑆1 and 𝑇𝑆2 in Figure 11.14 exhibit a simulation in the sense of Exercise 11.5 which is not also a simulation in the sense of Solution 11.4(1). Summarising, it might be noted that simulation in the sense of Solution 11.4(1) and the existence of a homomorphism (both relying on functions) are relatively strong notions, while simulation in the sense of Exercise 11.5 and bisimulation (both relying on relations) are strictly weaker.


Fig. 11.14: Solution of Exercise 11.6. A simulation 𝜌 in the sense of Exercise 11.5 (in fact, a bisimulation) is drawn in blue. However, 𝑇 𝑆1 does not simulate 𝑇 𝑆2 in the sense of Solution 11.4(1); there is not even a surjection from 𝑄 1 to 𝑄 2 .

Chapter 12

Persistent Transition Systems and Choice-free Petri Nets

Abstract In a persistent labelled transition system, concurrency and conflict are not quite as intricately interrelated as might be the case in general. A large class of persistent transition systems allows a decomposition into small cycles with disjoint label sets. For those generated by a Petri net, this corresponds to a covering by mutually disjoint minimal T-invariants. A closely related Petri net class is called choice-free Petri nets. Persistent transition systems and choice-free nets are introduced in this chapter. In later chapters, it will be argued that Petri net synthesis can be performed fairly efficiently if attention is restricted to these classes.

Contents
12.1 Introductory remarks . . . . . . . . . . . . . . . . . . . . . . 279
12.2 Persistence, and some other lts properties . . . . . . . . . . . 280
12.3 Choice-free Petri nets . . . . . . . . . . . . . . . . . . . . . . 282
12.4 Keller’s theorem, and home states . . . . . . . . . . . . . . . 285
12.5 Small cycle decompositions . . . . . . . . . . . . . . . . . . . 290
12.6 Bibliographical remarks and further reading . . . . . . . . . . 293
12.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294

12.1 Introductory remarks

Even though the worst case complexity of the algorithm synthesising a Petri net from a given labelled transition system is only polynomial, rather than exponential, it is still forbidding if the number of states is very large. We shall start looking for ways of alleviating this burden. In the next chapters, two methods will be explored:



(i) A divide-and-conquer approach which can deal with the general synthesis algorithm (Chapter 13).
(ii) A more restrictive approach focusing on classes of transition systems and Petri nets (Chapters 14 and 15).

As a preparation for both, the present chapter introduces persistent transition systems and choice-free Petri nets. To start with, let us take a closer look at two properties that have been called “non-properties of firing” in Lemma 1.18, especially at the property of persistence, but also at the related property of confluence. Persistence of Petri nets means that once a transition is enabled, it cannot lose the property of being enabled, except through its own firing. Confluence means that any two reachable states have a common successor state. Both properties can easily be translated into the language of transition systems.

Persistent lts include the reachability graphs of marked graphs as well as the reachability graphs of (weighted) T-systems. In a T-system, be it plain or weighted, tokens cannot be removed from a place, except by its – unique, if any – output transition. More generally, we may consider the class of choice-free Petri nets, where every place has at most one outgoing transition but may have more than one incoming transition. All of the Petri nets in this class have the property just mentioned for T-systems, and their reachability graphs turn out to be automatically persistent. However, there are many more Petri nets whose reachability graphs are persistent but are not isomorphic to the reachability graphs of choice-free Petri nets.

In the next section, persistence and some other important properties of general labelled transition systems are defined, ignoring, at first, whether they have a generating Petri net or not.

12.2 Persistence, and some other lts properties

The following properties rely on the definition of labelled transition systems in Section 11.2.

Definition 12.1 Determinism, weak periodicity, persistence, reversibility
A labelled transition system (𝑄, →, 𝑇, 𝑞0) with states 𝑄 and labels 𝑇 is
• deterministic if ∀𝑞, 𝑞′, 𝑞′′ ∈ 𝑄 ∀𝑡 ∈ 𝑇: 𝑞 −𝑡→ 𝑞′ ∧ 𝑞 −𝑡→ 𝑞′′ ⇒ 𝑞′ = 𝑞′′ (i.e., from a state 𝑞, a label 𝑡 may not lead to two different states);
• backward deterministic if ∀𝑞′, 𝑞′′, 𝑞 ∈ 𝑄 ∀𝑡 ∈ 𝑇: 𝑞′ −𝑡→ 𝑞 ∧ 𝑞′′ −𝑡→ 𝑞 ⇒ 𝑞′ = 𝑞′′ (i.e., a label 𝑡 may not lead from two different states into a state 𝑞);a
• weakly periodic if for every 𝑞1 ∈ 𝑄, label sequence 𝜎 ∈ 𝑇∗, and infinite sequence 𝑞1 −𝜎→ 𝑞2 −𝜎→ 𝑞3 −𝜎→ 𝑞4 −𝜎→ …, either 𝑞𝑖 = 𝑞𝑗 for all 𝑖, 𝑗 ≥ 1, or 𝑞𝑖 ≠ 𝑞𝑗 for all 𝑖, 𝑗 ≥ 1 with 𝑖 ≠ 𝑗;
• persistent if for all states 𝑞 ∈ 𝑄 and labels 𝑡, 𝑡′ ∈ 𝑇, if 𝑞 −𝑡→ and 𝑞 −𝑡′→ with 𝑡 ≠ 𝑡′, there is some state 𝑟 ∈ 𝑄 such that both 𝑞 −𝑡𝑡′→ 𝑟 and 𝑞 −𝑡′𝑡→ 𝑟 (i.e., once two different labels are both enabled, neither can disable the other, and executing both, in any order, leads to the same state);
• backward persistent if for all states 𝑞, 𝑞′, 𝑞′′ and labels 𝑡, 𝑡′, if 𝑞′ −𝑡→ 𝑞 and 𝑞′′ −𝑡′→ 𝑞 and 𝑡 ≠ 𝑡′, there is some state 𝑟 ∈ 𝑄 such that both 𝑟 −𝑡′→ 𝑞′ and 𝑟 −𝑡→ 𝑞′′ (i.e., persistence in backward direction);
• reversible if ∀𝑞 ∈ [𝑞0⟩: 𝑞0 ∈ [𝑞⟩ (i.e., the initial state 𝑞0 always remains reachable).
A Petri net is called reversible (persistent or backward persistent) if its reachability graph is reversible (respectively, persistent or backward persistent).

a Backward determinism plays no role in the present chapter, but will do so later; the same is true for backward persistence.

Weak periodicity forbids cycles of the form 𝑞1 −𝜎→ 𝑞2 −𝜎→ 𝑞3 … −𝜎→ 𝑞𝑚 = 𝑞1, unless all 𝑞𝑗 are the same state. This property is violated, for instance, in the transition system 𝑇𝑆5 shown in Figure 11.3 of Chapter 11. Determinism and backward determinism are violated in 𝑇𝑆2 and in 𝑇𝑆4 of the same figure, respectively.

The first three properties of the previous definition are distinguished from the other three by the fact that they are always satisfied for Petri nets:

Proposition 12.2 Properties of Petri net reachability graphs
The reachability graph 𝑅𝐺(𝑁) of a Petri net 𝑁 is totally reachable (cf. Definition 11.7), deterministic, backward deterministic, and weakly periodic. 𝑁 is bounded iff 𝑅𝐺(𝑁) is finite.

Proof: Total reachability comes directly from the definition of a reachability graph. For determinism, see Lemma 1.14. Backward determinism has a symmetric proof. Using the results of Section 11.5 (Theorem 11.30), weak periodicity can be proved in two steps: (i)  = id implies that the Parikh vector of a cycle may not be a positive multiple of the Parikh vector of a non-cyclic path; and (ii) the latter implies weak periodicity. The result follows from Theorem 11.30. A bounded (finite) net has a finite reachability graph, since if every place 𝑠 has bound 𝑚𝑠, then there can be at most Π𝑠∈𝑆(𝑚𝑠 + 1) reachable markings. Conversely, if the reachability graph of a net is finite, then a finite bound for every place can be computed, as shown in Section 3.4. 12.2

In the remainder of this chapter, we concentrate on the other properties of Definition 12.1, chiefly on persistence and reversibility. Persistence allows us to complete every “branching” at some state 𝑞 (as shown on the left-hand side of Figure 12.1) to a “full diamond” (as depicted on the right-hand side of Figure 12.1). That is, whenever 𝑎 and 𝑏, with 𝑎 ≠ 𝑏, are activated at 𝑞, then also 𝑎𝑏 and 𝑏𝑎 are, and lead to the same state.1 Because of the shape of the right-hand side of this figure, persistence is sometimes called the “diamond property”.
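Checking persistence of a finite lts amounts to testing exactly this diamond completion at every state. The following Python sketch is an illustration only; the lts is given as an explicit edge dictionary, an encoding assumed for the example rather than anything fixed by the book.

```python
def is_persistent_lts(edges):
    """edges maps (state, label) to the unique successor state (deterministic lts).
    Check the diamond property of Figure 12.1 at every state."""
    states = {q for (q, _) in edges} | set(edges.values())
    for q in states:
        enabled = [t for (p, t) in edges if p == q]
        for i, a in enumerate(enabled):
            for b in enabled[i + 1:]:
                qa, qb = edges[(q, a)], edges[(q, b)]
                # both ab and ba must be activated at q and lead to the same state
                if (qa, b) not in edges or (qb, a) not in edges \
                        or edges[(qa, b)] != edges[(qb, a)]:
                    return False
    return True

# A completed diamond (persistent) versus a bare branching (not persistent):
diamond = {("q", "a"): "q1", ("q", "b"): "q2",
           ("q1", "b"): "r", ("q2", "a"): "r"}
branch  = {("q", "a"): "q1", ("q", "b"): "q2"}
print(is_persistent_lts(diamond))  # True
print(is_persistent_lts(branch))   # False
```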


Fig. 12.1: Illustration of persistence.

Example 12.3 A non-example and an example The left-hand side of Figure 12.2 exemplifies a violation of persistence. There is a state (namely, 𝑞 0 ) with two different outgoing arrows labelled 𝑎 and 𝑏, but neither 𝑎𝑏 nor 𝑏𝑎 is activated at 𝑞 0 . By contrast, Figure 12.3 depicts a persistent transition system. This example gives an idea for the decomposition theorem to be proved later in this chapter. Note that the transition system has four “small” cycles which are divided into 𝑎, 𝑐-cycles and 𝑏, 𝑑-cycles, with no label in common. The Petri net generating it has two independent T-components.


Fig. 12.2: A non-persistent transition system and a generating Petri net.

12.3 Choice-free Petri nets

The class of choice-free Petri nets excludes choice places, i.e., by definition, places with more than one outgoing transition:

1 The states shown in this figure need not be different.



Fig. 12.3: A persistent transition system and a generating Petri net.

Definition 12.4 Choice-free Petri nets
A Petri net 𝑁 is called choice-free (CF) if |𝑠•| ≤ 1 for all places 𝑠 ∈ 𝑆.

They are related to persistent transition systems by the following propositions.

Proposition 12.5 Persistence of choice-free net reachability graphs
The reachability graph 𝑅𝐺(𝑁) of a choice-free Petri net 𝑁 is persistent, whatever its initial marking.

Proof: Suppose that some marking 𝑀 enables two transitions 𝑡1 and 𝑡2 with 𝑡1 ≠ 𝑡2. By the choice-freeness property, •𝑡1 ∩ •𝑡2 = ∅. It follows by the transition rule that 𝑀 enables 𝑡1𝑡2 as well as 𝑡2𝑡1, and by Lemma 4.8 they lead to the same marking. 12.5

Proposition 12.6 Partial converse of Proposition 12.5
If a pure Petri net is not choice-free, there is an initial marking generating a non-persistent reachability graph.a

Recall that a net is pure if there are no self-loops, see Definition 1.2.

Proof: Let 𝑠 be a place in a pure net such that 𝑡1, 𝑡2 ∈ 𝑠• and 𝑡1 ≠ 𝑡2. Define 𝑀0 such that all transitions are enabled and
  max(𝐹(𝑠, 𝑡1), 𝐹(𝑠, 𝑡2)) ≤ 𝑀0(𝑠) < 𝐹(𝑠, 𝑡1) + 𝐹(𝑠, 𝑡2).
At 𝑀0, all the transitions are enabled, but after, say, 𝑡2 is fired, 𝑡1 is no longer enabled, since 𝑡2 removed some tokens from 𝑠 without replacing any. 12.6

Proposition 12.6 is only a partial converse of Proposition 12.5, in two ways. First, there exist non-pure nets which are not choice-free and yet all initial markings produce a persistent reachability graph (for instance, a net with a single place and two transitions forming two side loops around this place). Secondly, there exist marked non-choice-free nets with a persistent reachability graph, for instance one of the nets shown in Figure 12.5 below.
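The choice-freeness condition of Definition 12.4 is purely structural and easy to test, and the witness marking used in the proof of Proposition 12.6 is equally easy to construct. The following Python sketch is an illustration under an assumed encoding (a net is given simply as a dictionary of arc weights), not code from the book.

```python
# A net is represented by its arc-weight function F as a dictionary:
# F[(x, y)] = weight of the arc from x to y (places and transitions are strings).
def is_choice_free(places, transitions, F):
    """Definition 12.4: every place has at most one outgoing transition."""
    return all(sum(1 for t in transitions if F.get((s, t), 0) > 0) <= 1
               for s in places)

def non_persistence_witness(places, transitions, F):
    """For a pure, non-choice-free net, build M0 as in the proof of Proposition 12.6:
    enough tokens to enable every transition, but on some choice place s strictly
    fewer than F(s,t1)+F(s,t2) tokens."""
    M0 = {s: max((F.get((s, t), 0) for t in transitions), default=0) for s in places}
    for s in places:
        out = sorted((t for t in transitions if F.get((s, t), 0) > 0),
                     key=lambda t: F[(s, t)], reverse=True)
        if len(out) >= 2:
            t1, t2 = out[0], out[1]
            M0[s] = F[(s, t1)]          # = max(F(s,t1), F(s,t2)) < F(s,t1) + F(s,t2)
            return M0, s, (t1, t2)
    return None                          # the net is choice-free; no witness exists

# Example: one place with two output transitions.
places, transitions = ["s"], ["a", "b"]
F = {("s", "a"): 1, ("s", "b"): 1}
print(is_choice_free(places, transitions, F))            # False
print(non_persistence_witness(places, transitions, F))   # ({'s': 1}, 's', ('a', 'b'))
```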


Choice-free nets need not be plain. They encompass T-systems, and T-systems, in turn, encompass marked graphs. There is no immediate containment relationship between choice-free nets and free-choice nets. Some FC-nets are not CF, for instance the one shown on the left-hand side of Figure 12.2. Conversely, FC-nets are plain but some CF-nets are not plain, for instance some of those shown in Figure 12.5 below. However, plain choice-free nets are also free-choice.

This may be a good opportunity to insert a containment diagram, Figure 12.4, for most Petri net classes considered so far. This diagram compares only classes defined from the structure of their nets, not from the structure of their reachability graphs, like live nets or persistent nets.

Fig. 12.4: Containment between structurally defined Petri net classes.

Whether a net is choice-free or a marked graph depends only on its structure. These properties can be checked without constructing the reachability graph. The properties defined in Definition 12.1, on the other hand, can be detected only on the reachability graph. For instance, in order to check persistence, one may check the property indicated in Figure 12.1 for every reachable marking 𝑀 and for every pair of different transitions emanating from the state 𝑠 = 𝑀 in the reachability graph.

In general, persistent nets are considerably less well-behaved than marked graphs. For instance, even if they are strongly connected, there may be reproducing T-vectors which are different from multiples of 1. As an example, see Figure 12.5. A persistent labelled transition system is shown on the left-hand side of this figure. Three different Petri net solutions are also shown. We will see in Chapter 14 that no marked graph solution exists.2 We will study choice-free Petri nets in more detail later, in Chapter 15.

2 In fact, this is already clear by Theorem 5.31, because in a weakly connected marked graph, any minimal nontrivial reproducing firing sequence has Parikh vector (1, . . . , 1).
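As a companion to the remark above, the following Python sketch builds the reachability graph of a net by exhaustive firing and then checks persistence marking by marking, exactly as indicated in Figure 12.1. It is an illustration only, with an ad-hoc net encoding (pre/post weight dictionaries), and it assumes the given net is bounded so that the exploration terminates.

```python
from collections import deque

def reachability_graph(places, transitions, pre, post, M0):
    """Explore all reachable markings of a bounded net.
    pre[t][s] / post[t][s] are the arc weights s->t and t->s (0 if absent)."""
    start = tuple(M0[s] for s in places)
    edges, seen, todo = {}, {start}, deque([start])
    while todo:
        M = todo.popleft()
        for t in transitions:
            if all(M[i] >= pre[t].get(s, 0) for i, s in enumerate(places)):
                M2 = tuple(M[i] - pre[t].get(s, 0) + post[t].get(s, 0)
                           for i, s in enumerate(places))
                edges[(M, t)] = M2
                if M2 not in seen:
                    seen.add(M2)
                    todo.append(M2)
    return edges, seen

def is_persistent(edges, markings, transitions):
    """Diamond check of Figure 12.1 at every reachable marking."""
    for M in markings:
        enabled = [t for t in transitions if (M, t) in edges]
        for i, t in enumerate(enabled):
            for u in enabled[i + 1:]:
                Mt, Mu = edges[(M, t)], edges[(M, u)]
                if (Mt, u) not in edges or (Mu, t) not in edges \
                        or edges[(Mt, u)] != edges[(Mu, t)]:
                    return False
    return True

# Tiny example: two independent transitions a and b (persistent by construction).
places, transitions = ["s1", "s2", "s3", "s4"], ["a", "b"]
pre  = {"a": {"s1": 1}, "b": {"s2": 1}}
post = {"a": {"s3": 1}, "b": {"s4": 1}}
edges, markings = reachability_graph(places, transitions, pre, post,
                                     {"s1": 1, "s2": 1, "s3": 0, "s4": 0})
print(is_persistent(edges, markings, transitions))  # True
```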



Fig. 12.5: A labelled transition system; a plain, safe, reversible, and non-choice-free system; and two non-plain, non-safe, 2-bounded, reversible, and choice-free systems. The transition system is solved by all three Petri nets. The minimal nonempty reproducing firing sequence has Parikh vector (1, 1, 2).

12.4 Keller’s theorem, and home states

Keller’s theorem is a seminal result about persistent systems. It is a kind of extension of the diamond property from firing transitions to firing sequences. Keller’s theorem is formulated in terms of the residue of a sequence 𝜏 of transitions with respect to another sequence 𝜎, denoted by 𝜏−•𝜎. Intuitively, 𝜏−•𝜎 is what is left of 𝜏 after cancelling successively all symbols from 𝜎 (if possible), read from left to right.

Example 12.7 Residue example
For 𝜏 = 𝑎𝑏𝑐𝑎𝑏𝑎𝑑 and 𝜎 = 𝑑𝑑𝑎𝑎𝑑𝑏, we get 𝜏−•𝜎 = 𝑐𝑏𝑎 and 𝜎−•𝜏 = 𝑑𝑑.

Throughout the following, let (𝑄, →, 𝑇, 𝑞0) be a labelled transition system.

Definition 12.8 Residues
Let 𝜎, 𝜏 ∈ 𝑇∗ be two finite sequences of labels. The sequence 𝜏−•𝜎 is defined by induction on the length of 𝜎:
  𝜏−•𝜀 = 𝜏
  𝜏−•𝑡 = 𝜏, if there is no label 𝑡 in 𝜏; otherwise, the sequence obtained by erasing the leftmost 𝑡 in 𝜏
  𝜏−•(𝑡𝜎) = (𝜏−•𝑡)−•𝜎.

In other words, 𝜏−•𝜎 is simply 𝜏 where, for each 𝑡 ∈ 𝑇, one has wiped out the first min{P(𝜎)(𝑡), P(𝜏)(𝑡)} occurrences of 𝑡. It follows directly that 𝜏−•𝜎 and 𝜎−•𝜏 contain no common labels, and that for any 𝑡 ∈ 𝑇:
  P(𝜏(𝜎−•𝜏))(𝑡) = P(𝜏)(𝑡) + P(𝜎)(𝑡) − min{P(𝜎)(𝑡), P(𝜏)(𝑡)} = P(𝜎(𝜏−•𝜎))(𝑡)
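Definition 12.8 translates directly into a small program. The following Python sketch is an illustration only (sequences are plain strings over the label alphabet); it implements the residue operator and reproduces Example 12.7 as well as the Parikh identity just stated.

```python
def residue(tau, sigma):
    """Compute tau -. sigma: erase from tau, left to right, one occurrence of
    each symbol of sigma (if present), following Definition 12.8."""
    out = list(tau)
    for t in sigma:
        if t in out:
            out.remove(t)      # removes the leftmost occurrence of t
    return "".join(out)

tau, sigma = "abcabad", "ddaadb"
print(residue(tau, sigma))     # 'cba'  (= tau -. sigma, as in Example 12.7)
print(residue(sigma, tau))     # 'dd'   (= sigma -. tau)

# The Parikh vectors of tau(sigma -. tau) and sigma(tau -. sigma) coincide:
from collections import Counter
assert Counter(tau + residue(sigma, tau)) == Counter(sigma + residue(tau, sigma))
```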


Before using this device in order to state Keller’s theorem, we state and prove two small facts about permutations of firing sequences.

Definition 12.9 Transpositions and permutations
Two sequences 𝜎 ∈ 𝑇∗ and 𝜎′ ∈ 𝑇∗ are said to arise from each other by a transposition from a state 𝑞 ∈ 𝑄 if both are activated at 𝑞 and if they are the same, except for the order of an adjacent pair of labels, thus:
  𝑞 −𝜎→ and 𝑞 −𝜎′→ and 𝜎 = 𝑡1 … 𝑡𝑘 𝑡𝑡′ … 𝑡𝑛 and 𝜎′ = 𝑡1 … 𝑡𝑘 𝑡′𝑡 … 𝑡𝑛
Two sequences 𝜎 and 𝜎′ are said to be permutation equivalent, or, more precisely, permutations of each other from 𝑞 (written 𝜎 ≡𝑞 𝜎′) if they are both activated at 𝑞 and if they arise out of each other through a finite sequence of transpositions from 𝑞.

This definition almost duplicates Definitions 8.16 and 8.18 in Chapter 8. One difference is that we are here only interested in finite firing sequences. Another difference is that we do not care whether the permutations are weak or strong, whereas in Section 8.5, the emphasis was exclusively on strong permutations.

Lemma 12.10 A permutation lemma
Let (𝑄, →, 𝑇, 𝑞0) be a deterministic and persistent lts. Let 𝑞 ∈ [𝑞0⟩, 𝑡 ∈ 𝑇 and 𝜏 ∈ 𝑇∗ be arbitrary. If 𝑞 −𝑡→ and 𝑞 −𝜏→ and P(𝜏)(𝑡) = 0, then 𝑡𝜏 ≡𝑞 𝜏𝑡 and there is a unique state 𝑞′ with 𝑞 −𝑡𝜏→ 𝑞′ and 𝑞 −𝜏𝑡→ 𝑞′.

Proof: By induction on the length of 𝜏.
If 𝜏 = 𝜀, the existence of 𝑞′ follows from 𝑞 −𝑡→ and the uniqueness of 𝑞′ follows from determinism. Moreover, 𝑡𝜏 ≡𝑞 𝜏𝑡 by the empty sequence of transpositions.
Suppose 𝜏 = 𝑢𝜏′ with 𝑢 ∈ 𝑇. Since 𝑡 does not occur in 𝜏, 𝑡 ≠ 𝑢. By persistence, both 𝑞 −𝑡→ 𝑞1 −𝑢→ 𝑟 and 𝑞 −𝑢→ 𝑞2 −𝑡→ 𝑟, for states 𝑞1, 𝑞2 and 𝑟 (see Figure 12.6(l.h.s.)). By determinism, 𝑞2 is the unique state reached from 𝑞 after executing 𝑢. Hence, by 𝑞 −𝑢𝜏′→, 𝑞2 −𝜏′→. Since 𝑡 does not occur in 𝜏′, the induction hypothesis can be applied at state 𝑞2, yielding a unique state 𝑞′ such that 𝑞2 −𝑡𝜏′→ 𝑞′ and 𝑞2 −𝜏′𝑡→ 𝑞′. Since 𝑟 is the unique state reached from 𝑞2 after executing 𝑡, 𝑟 −𝜏′→ 𝑞′. As 𝑟 is reachable from 𝑞 through 𝑡𝑢, also 𝑞 −𝑡𝑢𝜏′→ 𝑞′ and hence 𝑞 −𝑡𝜏→ 𝑞′. Since 𝑞2 −𝜏′𝑡→ 𝑞′ and 𝑞2 is reachable from 𝑞 by executing 𝑢, also 𝑞 −𝑢𝜏′𝑡→ 𝑞′ and hence 𝑞 −𝜏𝑡→ 𝑞′. Uniqueness follows by determinism.
By the inductive hypothesis, 𝜏′𝑡 ≡𝑞2 𝑡𝜏′. Using the same sequence of transpositions, 𝑢𝜏′𝑡 ≡𝑞 𝑢𝑡𝜏′. Using one further transposition, 𝑢𝜏′𝑡 ≡𝑞 𝑡𝑢𝜏′ and hence 𝜏𝑡 ≡𝑞 𝑡𝜏. 12.10



Fig. 12.6: Illustrating Lemma 12.10 (l.h.s.) and Lemma 12.11 (r.h.s.).

Lemma 12.11 A lemma on Parikh vectors
Let (𝑄, →, 𝑇, 𝑞0) be a deterministic and persistent lts. Let 𝑞 ∈ [𝑞0⟩ and 𝜎, 𝜏 ∈ 𝑇∗ be arbitrary. If 𝑞 −𝜎→ 𝑞′ and 𝑞 −𝜏→ 𝑞′′ and P(𝜎) = P(𝜏), then 𝜎 ≡𝑞 𝜏 and 𝑞′ = 𝑞′′.

Proof: By induction on the length of 𝜎. Note that by P(𝜎) = P(𝜏), 𝜎 and 𝜏 have equal lengths.
If 𝜎 = 𝜀, then also 𝜏 = 𝜀, and 𝑞 = 𝑞′ = 𝑞′′. Clearly, also 𝜀 ≡𝑞 𝜀.
Suppose that 𝑞 −𝜎→ 𝑞′, 𝑞 −𝜏→ 𝑞′′, and that 𝜎 = 𝑡𝜎′ (see Figure 12.6(r.h.s.)). By P(𝜎) = P(𝜏), 𝜏 can be split as 𝜏 = 𝜏′𝑡𝜏′′ such that 𝑡 does not occur in 𝜏′. Let 𝑞1 be the state reached after 𝑞 by executing 𝑡, i.e. 𝑞 −𝑡→ 𝑞1. By Lemma 12.10, there is a state 𝑞2 with 𝑞 −𝑡𝜏′→ 𝑞2 and 𝑞 −𝜏′𝑡→ 𝑞2. So, 𝜏′′ is executable from 𝑞2. Therefore, from 𝑞1, both 𝜎′ and 𝜏′𝜏′′ are executable, the former leading to 𝑞′ and the latter to 𝑞′′. But P(𝜎′) = P(𝜏′𝜏′′), implying that the induction hypothesis is applicable at state 𝑞1, yielding 𝑞′ = 𝑞′′.
By the inductive hypothesis, 𝜎′ ≡𝑞1 𝜏′𝜏′′. Using the same sequence of transpositions, 𝑡𝜎′ ≡𝑞 𝑡𝜏′𝜏′′. By Lemma 12.10, 𝑡𝜏′ ≡𝑞 𝜏′𝑡. Using this, 𝑡𝜎′ ≡𝑞 𝑡𝜏′𝜏′′ ≡𝑞 𝜏′𝑡𝜏′′, and hence 𝜎 ≡𝑞 𝜏. 12.11

Theorem 12.12 Keller’s theorem
Let (𝑄, →, 𝑇, 𝑞0) be a deterministic and persistent lts. Let 𝜏 and 𝜎 be two label sequences activated at some reachable state 𝑞 ∈ [𝑞0⟩. Then 𝜏(𝜎−•𝜏) and 𝜎(𝜏−•𝜎) are also activated from 𝑞, and 𝜏(𝜎−•𝜏) ≡𝑞 𝜎(𝜏−•𝜎). Furthermore, the state reached after 𝜏(𝜎−•𝜏) equals the state reached after 𝜎(𝜏−•𝜎).

Part of this theorem is a confluence statement. That is to say, it states that if 𝑞0 −𝜎→ 𝑞 and 𝑞0 −𝜏→ 𝑞′, then [𝑞⟩ ∩ [𝑞′⟩ ≠ ∅. The other part of the theorem asserts that Parikh vectors of sequences leading to a common successor marking of two reachable markings can actually be computed explicitly, using residues.

Proof: Let (𝑄, →, 𝑇, 𝑞0) be deterministic and persistent and let 𝜏, 𝜎 ∈ 𝑇∗ be two label sequences starting from some reachable state 𝑞 ∈ [𝑞0⟩ (see Figure 12.7). The proof proceeds by “completing big diamonds” systematically, matching as many labels from 𝜏 as possible with labels from 𝜎, which is technically done by induction on the length of 𝜏.


12.4 Keller’s theorem, and home states

289

Corollary 12.13 Confluence Let (𝑄, →, 𝑇, 𝑞 0 ) a deterministic and persistent. Let 𝑞 1 and 𝑞 2 be two reachable states. Then there is some state 𝑞 3 which is reachable from 𝑞 1 and from 𝑞2. Proof: By the definition of reachability, there are two sequences 𝜏 and 𝜎 leading from the initial state 𝑞 0 to 𝑞 1 and to 𝑞 2 , respectively. The claim follows from Theorem 12.12, applied to 𝑞 = 𝑞 0 . 12.13 Lemma 12.14 Another permutation lemma 𝛾 𝜅𝛾 Let (𝑄, →, 𝑇, 𝑞 0 ) be a deterministic and persistent lts. Let 𝑞 −→ and 𝑞 −→ be two label sequences activated at some reachable state 𝑞. Then there is a 𝛾𝜅 ′

label sequence 𝑞 −→ such that P (𝜅) = P (𝜅 ′ ) and 𝜅𝛾 ≡𝑞 𝛾𝜅 ′ . That is, 𝜅 can as a whole be permuted with 𝛾, albeit, possibly, up to label re-orderings within 𝜅. • 𝛾 𝜅𝛾 𝛾 (𝜅 𝛾 − 𝛾) Proof: By Theorem 12.12, 𝑞 −→ and 𝑞 −→ imply that 𝑞 −→ −−− . Put 𝜅 ′ = 𝜅𝛾−• 𝛾. • ′ Clearly, P (𝜅𝛾− 𝛾) = P (𝜅), hence P (𝛾𝜅 ) = P (𝜅𝛾). By Lemma 12.11, 𝜅𝛾 ≡𝑞 𝛾𝜅 ′ follows since both sequences are activated at 𝑞. 12.14 Definition 12.15 Home state A state 𝑞e ∈ 𝑄 is called a home state if ∀𝑞 ∈ [𝑞 0⟩ : 𝑞e ∈ [𝑞⟩. Corollary 12.16 Finite persistent lts have home states Let (𝑄, →, 𝑇, 𝑞 0 ) be finite, deterministic and persistent. Then there exists a home state 𝑞e ∈ [𝑞 0⟩. Proof: As the lts is finite, we have [𝑞 0⟩ = {𝑞 0 , . . . , 𝑞 𝑚 } for some 𝑚 ∈ N. Put 𝑞e0 = 𝑞 0 . Inductively, select for each 𝑖 from 1 up to 𝑚 some state 𝑞e𝑖 reachable from 𝑞e𝑖−1 and 𝑞 𝑖 , which exists by Corollary 12.13. Then put 𝑞e = 𝑞e𝑚 . 12.16 A strongly connected component of (𝑄, →, 𝑇, 𝑞 0 ) is a maximal subset 𝐶 ⊆ 𝑆 such that ∀𝑞, 𝑞 ′ ∈ 𝐶 : 𝑞 ′ ∈ [𝑞⟩ ∧ 𝑞 ∈ [𝑞 ′ ⟩. Clearly, two strongly connected components are either equal or disjoint. Moreover, two strongly connected components 𝐶1 , 𝐶2 can be partially ordered by the “earlier than” relation 𝐶1 ⪯ 𝐶2 iff ∃𝑞 1 ∈ 𝐶1 ∃𝑞 2 ∈ 𝐶2 : 𝑞 2 ∈ [𝑞 1 ⟩. Clearly, the set of strongly connected components is a partial order with respect to ⪯. The next corollary states that for a finite persistent lts, it is also a directed partial order, that is, it has a unique maximal element.

290

12 Persistent Transition Systems and Choice-free Petri Nets

Corollary 12.17 The set of strongly connected components Let (𝑄, →, 𝑇, 𝑞 0 ) be finite, deterministic and persistent. Then the set of strongly connected components of (𝑄, →, 𝑇, 𝑞 0 ) is a directed partial order with respect to ⪯. Proof: By Corollary 12.16, the set of home states 𝐶 𝐻 is strongly connected and nonempty. For any two strongly connected components 𝐶1 and 𝐶2 , the home state property implies that 𝐶1 ⪯ 𝐶 𝐻 and 𝐶2 ⪯ 𝐶 𝐻 . Thus, the set of home states is maximal with respect to ⪯. This proves the claim. 12.17 Thus, the relation ⪯ has a top (“latest”) element, namely the set 𝐶 𝐻 of home states. If additionally 𝑄 = [𝑞 0⟩ (which follows from total reachability), it also has a bottom (“earliest”) element with respect to ⪯, namely {𝑞 ∈ 𝑄 | 𝑞 0 ∈ [𝑞⟩}. Corollary 12.18 Cycles may be pushed “forward” Let (𝑄, →, 𝑇, 𝑞 0 ) be deterministic and persistent, and let 𝑞 ∈ [𝑞 0 ⟩. For 𝜌

𝜉

any cycle 𝑞 −→ 𝑞, and for any state 𝑞e with 𝑞 −→ 𝑞e, there exists a cycle 𝜌 e

𝑞e −→ 𝑞e such that P (𝜌) = P (e 𝜌 ). 𝜉

Proof: Let 𝑞 −→ 𝑞e. Apply Lemma 12.14 in 𝑞 with 𝛾=𝜉 and 𝜅=𝜌. 12.18 In particular, all cycles (small – see next section – or otherwise) can be retrieved Parikh-equivalently at every state of the unique maximal strongly connected component.

12.5 Small cycle decompositions This section generalises some observations made earlier. In Figure 12.2, which shows a non-persistent lts, two (minimal, nontrivial) cycles have a label in common, but also, they do not have all labels in common. In Figure 12.3, which shows a persistent lts, all (minimal, nontrivial) cycles either have all labels in common, or they are label-disjoint. This property is general, and it is characteristic for finite persistent lts. From Definition 11.5, we recall: Definition 12.19 Parikh-equivalence and label-disjointness 𝛼

𝛽

Two sequences 𝑞 −→ and 𝑞 ′ −→ are Parikh-equivalent if P (𝛼) = P (𝛽), and label-disjoint (or Parikh-disjoint) if ∀𝑡 ∈ 𝑇 : P (𝛼)(𝑡) = 0 ∨ P (𝛽)(𝑡) = 0.

12.5 Small cycle decompositions

291

Example 12.20 Figure 12.3 The lts shown on the left-hand side of Figure 12.3 can be seen as the union of “small” cycles which are mutually either Parikh-equivalent or label-disjoint, two of them containing the labels 𝑎 and 𝑐, and the other two containing the labels 𝑏 and 𝑑. We need to define the notion of a “small cycle” in order to make this precise. Definition 12.21 Small cycles Let 𝑇 𝑆 = (𝑄, →, 𝑇, 𝑞 0 ) be a labelled transition system. For 𝑞 ∈ 𝑄, let 𝑡 𝜎 𝑞 • = {𝑡 ∈ 𝑇 | 𝑞 −→ }. A cycle 𝑞 −→ 𝑞 is nontrivial if 𝜎 ≠ 𝜀. An lts 𝜎 is called acyclic if it has no nontrivial cycles. A nontrivial cycle 𝑞 −→ 𝑞 𝜎′

around a state 𝑞 ∈ 𝑄 is called small if there is no nontrivial cycle 𝑞 ′ −→ 𝑞 ′ with 𝑞 ′ ∈ 𝑄 and P (𝜎 ′ ) < P (𝜎).a a

See Definition 3.12 for < on vectors.

Example 12.22 Figure 12.3 The lts shown on the left-hand side of Figure 12.3 has – up to permuting 𝑎𝑐

𝑏𝑑

𝑐𝑎

the states – four small cycles, namely 𝑞 0 −→ 𝑞 0 , 𝑞 0 −→ 𝑞 0 , 𝑞 −→ 𝑞, 𝑑𝑏

and 𝑞 −→ 𝑞. Note that the notion of a small cycle may differ from that of an elementary cycle in the sense of graph theory. For example, the cycle 𝑎𝑏𝑐𝑑

𝑞 0 −→ −−− 𝑞 0 is elementary (since no state is contained twice, except the first and the last one), but it is not small, because at least one nontrivial cycle 𝑎𝑐 (e.g. 𝑞 0 −→ 𝑞 0 ) has a smaller Parikh vector. Indeed, every small cycle is elementary, but the converse need not be true. By the basic property of T-invariants (Proposition 4.17), the cyclic Parikh vectors P (𝑎𝑐), P (𝑏𝑑) and P (𝑎𝑏𝑐𝑑) define realisable T-invariants of the Petri net shown on the left-hand side of Figure 12.3, such that the third one is additively composed from the other two. Under the conditions of the next theorem, a mutually label-disjoint set of small cycles always exists, such that all cycles can be composed additively from them, even if Petri net solvability is not assumed. Theorem 12.23 A decomposition theorem (general version) Let (𝑆, →, 𝑇, 𝑞 0 ) be finite, deterministic, persistent, and weakly periodic. 𝜌𝑖 There exist a reachable state 𝑞eand a finite set C = { 𝑞e −→ 𝑞e | 1 ≤ 𝑖 ≤ 𝑛 } of mutually label-disjoint cycles on 𝑞e, with 𝑛 ≤ |𝑇 |, such that for any reachable 𝜌 state 𝑞 and for any cycle 𝑠 −→ 𝑞, the Parikh vector of 𝜌 decomposes as Í𝑛 P (𝜌) = 𝑖=1 𝑘 𝑖 · P (𝜌𝑖 ) for some natural numbers 𝑘 𝑖 .

292

12 Persistent Transition Systems and Choice-free Petri Nets

If persistent Petri net solvability is additionally assumed, then this leads directly to the following result. Theorem 12.24 A decomposition theorem (Petri net version) Let (𝑄, →, 𝑇, 𝑞 0 ) be finite, persistent, and Petri net solvable. There exist 𝜌𝑖 a reachable state 𝑞e and a finite set C = {e 𝑞 −→ 𝑞e | 1 ≤ 𝑖 ≤ 𝑛} of mutually label-disjoint cycles on 𝑞e, with 𝑛 ≤ |𝑇 |, such that for any reachable 𝜌 state 𝑞 and for any cycle 𝑞 −→ 𝑞, the Parikh vector of 𝜌 decomposes as Í𝑛 P (𝜌) = 𝑖=1 𝑘 𝑖 · P (𝜌𝑖 ) for some natural numbers 𝑘 𝑖 . Proof: By Theorem 11.30 and Proposition 12.2 in Section 11.5, weak periodicity follows from Petri net solvability, and so does determinism, by Lemma 1.17. Hence the claim follows from Theorem 12.23. 12.24 If reversibility is additionally assumed, then every state is a home state, and from Theorem 12.24, we get the following: Corollary 12.25 Small cycles in bounded, reversible, persistent nets Let 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0 ) be a bounded, reversible, and persistent Petri net. There exists a finite set {𝑋1 , . . . , 𝑋𝑛 } of semipositive transition-disjoint T𝜌 invariants such that every cycle 𝑀 −→ 𝑀 in the reachability graph of 𝑁 𝜌1 𝜌2 can be decomposed, up to permutations, to some sequence 𝑀 −→ 𝑀 −→ 𝜌𝑛 𝑀 . . . −→ 𝑀 of cycles where all Parikh vectors P (𝜌𝑖 ) are in {𝑋1 , . . . , 𝑋𝑛 }. Moreover, {𝑋1 , . . . , 𝑋𝑛 } can be chosen as the set of Parikh vectors of small cycles through any fixed reachable marking of 𝑁.

Example 12.26 The relevance of transition-disjointness The reachability graph shown in Figure 12.3 has two (sets of) transitiondisjoint cycles, (𝑎𝑐) ∗ and (𝑏𝑑) ∗ , which are executable from 𝑀0 . They correspond to two realisable, minimal, transition-disjoint T-invariants, (1 0 1 0) T and (0 1 0 1) T . From every state and for each one of these T-invariants, a cycle can be executed which has it as a Parikh vector. By contrast, consider Figure 12.2. The reachability graph shown on the left-hand side of this figure also has two cycles, (𝑎𝑐) ∗ and (𝑏𝑐) ∗ . However, they are not transition-disjoint, because 𝑐 belongs to both. It will be seen later (Chapter 15) that the small cycles in this decomposition correspond to minimal T-invariants. In essence, in a bounded, reversible, persistent Petri net, realisable minimal T-invariants describe “independent” repetitive behaviours, and this observation can be extended: Suppose that 𝑋1 , . . . , 𝑋𝑛 are as in the previous corollary. Then there are 𝑛 bounded, persistent and reversible nets 𝑁1 , . . . , 𝑁 𝑛 , such that each net 𝑁𝑖 has exactly one minimal realisable T-invariant 𝑋𝑖 and the reachabil-

12.6 Bibliographical remarks and further reading

293

ity graph of 𝑁 is isomorphic to the reachability graph of the place-disjoint union of the nets 𝑁1 , . . . , 𝑁 𝑛 . Example 12.27 Petri net decomposition For Figure 12.3, this means that the net is reachability-graph-isomorphic to a disjoint composition of two simpler ones, realising (𝑎𝑐) ∗ and (𝑏𝑑) ∗ , respectively. They arise from the original Petri net by making two disjoint copies, keeping all the places (and their tokens) and erasing all transitions which are not contributing to the T-invariant under consideration (and their adjacent arrows). The result of this is shown on the left-hand side of Figure 12.8, and the reachability graph of this new disjoint composition is shown on the right-hand side of Figure 12.8; it is isomorphic to the one shown before.

𝑞0 𝑎

𝑐 𝑎

𝑑 𝑐 𝑏 𝑑 𝑎

𝑏 𝑏

𝑐

𝑑

Fig. 12.8: Left-hand side: the net resulting as a decomposition of the net shown in Figure 12.3. Right-hand side: its reachability graph. In the case of – not necessarily connected – T-systems (and a fortiori, marked graphs), Theorem 5.31 ensures that all entries in the Parikh vector of a small cycle are either zero or one. This implies (again) that the labelled transition system shown in Figure 12.5 cannot be solved by a T-system, much less by a marked graph, because its only (up to state reordering) small cycle has a Parikh vector with an entry 2 for label 𝑐.

12.6 Bibliographical remarks and further reading The class of choice-free Petri nets has been introduced in [CM75] by Stefano CrespiReghizzi and Dino Mandrioli. See also [TCS97] (Enrique Teruel, José M. Colom, Manuel Silva) where it is shown that a strongly connected CF net satisfies (PT) iff it has a T-invariant and has at most one minimal T-invariant (as opposed to the net in Figure 12.3 which has two minimal T-invariants). Persistent transition systems and persistent Petri nets go back to Lawrence H. Landweber and Edward L. Robertson [LR78] where they have been credited to Richard M. Karp and Raymond E. Miller for parallel program schemata [KM69] and, in the context of Petri nets, to Robert M. Keller [Kel74].

294

12 Persistent Transition Systems and Choice-free Petri Nets

Figure 12.4 does not display all structurally defined Petri net classes that can be found in the literature, but only those that are studied in the present book. In [LR78], a class of nets called conflict-free nets has also been defined; these nets should not be confused with choice-free Petri nets. The class of conflict-free nets is smaller than the class of choice-free nets. It has been studied, for example, by Rodney R. Howell and Louis E. Rosier in [HR89]. The class of equal-conflict Petri nets generalises both the free-choice and choice-free classes; it has been introduced and analysed for instance by Enrique Teruel and Manuel Silva in [TS93] and [TS96]. There are also various duals, reverses and variants of these classes, for instance: join-free nets (Section 5.9); fork-attribution nets which are both choice-free and join-free; and homogeneous nets where for every place 𝑠, and for all 𝑡1 , 𝑡2 ∈ 𝑠• , 𝐹 (𝑠, 𝑡1 ) = 𝐹 (𝑠, 𝑡2 ). The paper [HD18] (by Thomas Hujsa and Raymond Devillers) has a containment diagram for these and a variety of other classes. A persistent and Petri net solvable lts without a choice-free PN solution will be given in Chapter 14 (see Figure 14.3). Section 12.4 is based on Bob Keller’s paper [Kel74]. The class of persistent Petri nets has been studied from different perspectives and extended in various ways; see, e.g., [BO09] by Kamila Barylska and Edward Ochmanski. The results quoted in Section 12.5, in particular, Theorems 12.23 and 12.24, are based on [BD09] (Eike Best and Philippe Darondeau). David de Frutos Escrig and Colette Johnen proved in [FEJ89] that the home state property is decidable in general nets. This result was extended in [JL22] (Petr Jancar and Jérôme Leroux) to the problem of deciding whether home spaces from a semilinear set can be reached from configurations in a semilinear set. See Chapter 17 for the concept of semilinearity. In [BE16] (Eike Best and Javier Esparza), the problem whether an unmarked Petri net has a home state was shown to be decidable and as hard as liveness.

12.7 Exercises Exercise 12.1 Straight nets Call a Petri net straight if its reachability graph is finite and linearly ordered. (i) Is a straight Petri net always persistent? (ii) Can one make a statement about the number of home states? (iii) For a given natural number 𝑛, design a straight and asymmetric choice Petri net with the following properties: (a) its size is a polynomial in 𝑛 (to show this, count precisely all places, all transitions, all arcs, and all tokens); (b) it has exactly 2𝑛 states.

12.7 Exercises

295

Exercise 12.2 Keller’s theorem Show that Keller’s theorem is not applicable to the (non-persistent) lts shown on the left-hand side of Figure 12.2. To this end, exhibit two firing sequences 𝜏 and 𝜎 (both enabled at 𝑞 0 ) such that neither 𝜏(𝜎−• 𝜏) nor 𝜎(𝜏−• 𝜎) are enabled at 𝑞 0 . Exercise 12.3 Weak periodicity Show that weak periodicity is essential amongst the premises of Theorem 12.23. To this end, exhibit a finite, deterministic, and persistent lts which does not satisfy the decomposition into small and Parikh-disjoint cycles as stated in this theorem. Exercise 12.4 Persistence versus choice-freeness [difficult] Exhibit an lts which has to following properties: • finite; • deterministic; • persistent; • reversible; • all small cycles have Parikh vector 1; • solvable by a plain and pure Petri net; and yet, is not solvable by a choice-free Petri net.34 Solution 12.1 (Exercise 12.1) (i) Yes. The situation depicted on the left-hand side of Figure 12.1 never occurs. (ii) There is exactly one home state, namely the final state of the (finite) reachability graph. (iii) The idea is to count from 0 to 2𝑛 − 1 using a “stratified” bit net. For 𝑛 = 4, it is shown in Figure 12.9 how this can be done; the reachability graph of this net is linearly ordered and has exactly 24 = 16 states. In general, there are 2·𝑛 places, 𝑛 transitions, 𝑛 tokens, and 𝑛·(𝑛+1) arcs. Observe that the net is asymmetric choice. Solution 12.2 (Exercise 12.2) 𝜏 = 𝑎 and 𝜎 = 𝑏 (both enabled at 𝑞 0 ) already do the job, since 𝜎−• 𝜏 = 𝑏 and 𝜏−• 𝜎 = 𝑎, and neither 𝜏(𝜎−• 𝜏) = 𝑎𝑏 nor 𝜎(𝜏−• 𝜎) = 𝑏𝑎 are enabled at 𝑞 0 . Solution 12.3 (Exercise 12.3) See Figure 12.10. 3 In a sense, the existence of such a transition system highlights the “distance” between the (“large”) class of persistent lts and the (“much smaller”) class of choice-free Petri net solvable lts. 4 As an aside, backward persistence is absent from the above list of properties. A non-choice-freely solvable lts satisfying all of these properties plus backward persistence can necessarily be solved by a marked graph (and thus, a fortiori, choice-freely). This result is proved in Chapter 14 below.

296

12 Persistent Transition Systems and Choice-free Petri Nets

Fig. 12.9: Solution of Exercise 12.1 for 𝑛 = 4. 𝑏 𝑎 𝑎 𝑏

Fig. 12.10: Solution of Exercise 12.3.

Solution 12.4 (Exercise 12.4) See Figure 12.11. This solution is taken from [BD15b]. To “see” the lts, which has 89 states and 180 state transitions, we are afraid that it is helpful to run the reachability graph generating algorithm of apt (or any other such algorithm taken from the literature) on the file shown in Figure 12.11. The reader may consult the above paper for a fuller explanation of the reasons why there is no choice-free Petri net generating this 89-state transition system.

.places p1 p2 p3 p4 p5 p6 p7 p8 p9 p10 p11 py q1 q2 q3 q4 q5 q6 q7 q8 q9 q10 q11 q12 r1 r2 r3 r4 r5 r6 r7 r8 r9 r10 r11 r12 .transitions a1 a2 a3 b1 b2 b3 x y1 y2 z .flows a1: q1,q2,r1 -> p1 a2: p1,p9,q3,q4,r2 -> p2,p3,r8 a3: p2,p3,p4,q5,r8 -> q1,q3,r3,r4,r5,r6,r9,r10,r11 b1: q6,q7,r6 -> p5 b2: p5,p10,q8,q9,r3 -> p3,p6,r10 b3: p3,p6,p7,q10,r10 -> q6,q8,r1,r2,r4,r7,r8,r11,r12 x: p8,r4,r11 -> q2,q4,q5,q7,q9,q10,q11,q12 y1: p3,p11,q11,r4,r5,r7,r8,r10 -> py y2: py -> p3,p4,p7,p8,r8,r10 z: q12,r8,r9,r10,r11,r12 -> p9,p10,p11 .initial_marking q1,q2,q3,q4,q5,q6,q7,q8,q9,q10,q11,q12, 2*r1,2*r2,2*r3,2*r4,2*r5,2*r6,2*r7,2*r8,2*r9,2*r10,2*r11,2*r12

Fig. 12.11: This text describes a (3-bounded, plain, and pure) Petri net in apt format. Its reachability graph has the properties required in Exercise 12.4.

Chapter 13

Divide-and-Conquer Methods for Synthesis

Abstract This chapter describes several general techniques aimed at accelerating Petri net synthesis from labelled transition systems. One technique, called presynthesis, eliminates non-synthesisable transition systems before actual synthesis is started. Other techniques, called divide-and-conquer, aim at exploiting a transition system’s structure in order to speed up synthesis. In the divide-and-conquer approach, the synthesis problem is broken up into several subproblems which are smaller and easier to solve, and whose solution can be combined to an overall solution. One method attempts to detect hidden concurrency in a given transition system, and to determine independent parts which consequently can be factored out. Another method attempts to detect loose couplings between transition system parts, and then to separate these parts into smaller subproblems.

Contents 13.1 13.2 13.3 13.4 13.5 13.6

Introduction . . . . . . . . . . . . . . . . . Pre-synthesis . . . . . . . . . . . . . . . . Extracting concurrency from an lts . . . . . Detecting loose coupling in an lts . . . . . Bibliographical remarks and further reading Exercises . . . . . . . . . . . . . . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

297 298 300 308 318 318

13.1 Introduction The synthesis algorithm described in the proof of Theorem 11.18 has a polynomial worst case complexity in terms of the size of the given lts. Indeed, there is a quadratic number of separation problems, and for each of them, we have to solve a (polynomial) © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 E. Best and R. Devillers, Petri Net Primer, Computer Science Foundations and Applied Logic, https://doi.org/10.1007/978-3-031-48278-6_13

297

298

13 Divide-and-Conquer Methods for Synthesis

system of linear constraints. Í Each such system is homogeneous, i.e., composed of constraints of the form 𝑖 𝑘 𝑖 ·𝑥 𝑖 ⊲⊳ 0, where ⊲⊳ ∈ {=, ≠, ≤, }, so that any positive multiple of a solution is also a solution. As a consequence, we may search for a solution in the domain of rational numbers, for which there are polynomial algorithms, and afterwards multiply it by a suitable factor in order to convert it to the domain of integers. This is no longer true if we add non-homogeneous constraints, for instance if we want to limit a priori the markings of places and/or the weights of arcs (leading to constraints of the kind 𝑥 𝑖 ≤ 𝑐 𝑖 , where 𝑐 𝑖 denotes some possibly non-null constant). This is particularly the case if we search for a plain or for a safe solution. It is also no longer true if we want to optimise the solution, for instance if we want to minimise the number of places, the maximal marking, or the sum of all arc weights. In cases such as these, the problem usually becomes NP-complete. At any rate, a synthesis may be lengthy and we may search for strategies reducing the burden of the computations. First, we may observe that it is not always necessary to apply a linear solver to each separation problem: we may check if a region previously constructed does not already do the job. As a general kind of strategy, we may also search for a pair of conjugate operators (△, ▽) acting on Petri nets and transition systems, respectively, such that if 𝑇 𝑆1 , 𝑇 𝑆2 are two lts, 𝑇 𝑆1 ▽ 𝑇 𝑆2 has a Petri net solution iff 𝑇 𝑆1 and 𝑇 𝑆2 separately have solutions 𝑁1 and 𝑁2 , and a possible solution for 𝑇 𝑆1 ▽𝑇 𝑆2 is 𝑁1 △ 𝑁2 . Of course, we may have other solutions too,1 and transition systems as well as Petri nets have to be considered up to isomorphism.2 Also, we need to be able to decompose a given lts 𝑇 𝑆 into a form 𝑇 𝑆1 △ 𝑇 𝑆2 rather easily, and to compose the two individual solutions sol(𝑇 𝑆1 ) and sol(𝑇 𝑆2 ) into a global solution sol(𝑇 𝑆1 ) △ sol(𝑇 𝑆2 ) easily as well. This is a kind of divide-and-conquer strategy, based on the idea that the complexity to perform a decomposition of a transition system, plus two smaller syntheses, plus a Petri net composition, may be expected to be much smaller than a direct global synthesis. In both cases, it may turn out to be useful if non-synthesisable transition system can be rejected in a pre-synthesis step, before any divide-and-conquer computations, or, for that matter, any true synthesis steps, are performed. Section 13.2 introduces pre-synthesis and explains the use of necessary synthesisability conditions. Sections 13.3 and 13.4 describe two applications of the divideand-conquer method, suggesting the use of two different pairs (△, ▽) of conjugate operators.

13.2 Pre-synthesis From Section 11.5, necessary and sufficient conditions are already known for a given lts to be Petri net synthesisable. One way of checking these conditions is by 1 If a synthesis succeeds, there are infinitely many solutions. 2 The identities of the states in a transition system and of the places in a Petri net are irrelevant.

13.2 Pre-synthesis

299

actually performing synthesis; but this begs the question, of course. It is therefore also interesting to weaken this exact equivalence, for instance by considering a set of conditions which is only necessary and not sufficient, but is easier to check than the full set of conditions. If a given lts does not satisfy one of these necessary conditions, then it is certainly not synthesisable. For instance, consider the property of determinism formalised in Definition 12.1, which is true for any Petri net reachability graph, and is therefore necessary for synthesisability. The transition system 𝑇 𝑆2 shown in Figure 11.3 does not satisfy this property. Hence it can be rejected immediately as non-synthesisable (with an explanation, which may be used to modify the given lts before another try). The same is true if an lts is not backward deterministic. For example, the RT&TX specification in Figure 11.7 is neither forward nor backward deterministic and could thus be rejected immediately; no (exact) Petri net solution exists. As a third example, consider weak periodicity, also defined in Definition 12.1 The transition system 𝑇 𝑆5 shown in Figure 11.3 is not weakly periodic and can be rejected straight away because of Proposition 12.2. We shall also need other general properties of Petri net reachability graphs. Notation 13.1 ± symbol denoting forward and backward transitions In what follows, let 𝑇 𝑆 = (𝑄, →, 𝑇, 𝑞 0 ) with states 𝑄, labels 𝑇, arcs →, and initial state 𝑞 0 , be a labelled transition system. We shall make heavy use of 𝛼 ← − backward arrows and abbreviate (𝑇 ∪ 𝑇 ) to ±𝑇. For the notation 𝑞 −→ 𝑞 ′ , where 𝛼 is a general path in (±𝑇) ∗ , we refer to Definition 11.24.

Definition 13.2 Two-way chains and strong cycle consistency An lts 𝑇 𝑆 = (𝑄, →, 𝑇, 𝑞 0 ) has a forward two-way uniform chain if there is a 𝛼 sequence 𝛼 ∈ 𝑇 ∗ and states 𝑞 𝑖 ∈ 𝑆 for 𝑖 ∈ Z such that 𝑞 𝑖 −→ 𝑞 𝑖+1 for each 𝑖 ∈ Z and ∀𝑖, 𝑗 ∈ Z : (𝑖 ≠ 𝑗 ⇒ 𝑞 𝑖 ≠ 𝑞 𝑗 ). It has a general two-way uniform chain if the same is true for some 𝛼 ∈ (±𝑇) ∗ . 𝛼 𝑇 𝑆 is strongly cycle-consistent if, whenever there is a general path 𝑞 −→ 𝛽1

𝛽2

𝛽𝑛

𝑞 ′ , general cycles 𝑞 1 −→ 𝑞 1 , 𝑞 2 −→ 𝑞 2 , . . . , 𝑠 𝑛 −→ 𝑞 𝑛 and numbers Í𝑛 𝑘 1 , 𝑘 2 , . . . , 𝑘 𝑛 ∈ Q such that P (𝛼) = 𝑖=1 𝑘 𝑖 · P (𝛽𝑖 ), then 𝑞 = 𝑞 ′ . Intuitively, this means that a combination of Parikh vectors of cycles cannot produce non-cycles. Figure 13.1 depicts examples of two-way uniform chains. The following result extends Proposition 12.2 to the new necessary conditions. Proposition 13.3 More properties of Petri net reachability graphs Let 𝑁 = (𝑃, 𝑇, 𝐹, 𝑀0 ) be any Petri net. Its reachability graph RG(𝑁) contains neither two-way forward nor two-way general chains. Moreover, it is strongly cycle-consistent.

300

13 Divide-and-Conquer Methods for Synthesis 𝑞−2

𝑞−1

𝑞0

𝑞1

𝑞2

...

... 𝑎

𝑎

𝑏

′ 𝑞−2

𝑎

𝑏

𝑞0′

′ 𝑞−1

𝑏

𝑎

𝑏

𝑏

𝑞2′

𝑞1′

... 𝑎

𝑎

𝑏

𝑎

𝑎

𝑏

...

𝑏

Fig. 13.1: A forward two-way uniform chain (top, with 𝛼 = 𝑎𝑏), and a general ← − two-way uniform chain (bottom, with 𝛼 = 𝑎 𝑏 ).

Proof: The proof of strong cycle-consistency is similar to the proof of weak periodicity in Proposition 12.2. For the non-existence of forward two-way chains, suppose that such a chain 𝛼

𝛼

𝛼

𝛼

𝛼

. . . 𝑀−2 −→ 𝑀−1 −→ 𝑀0 −→ 𝑀1 −→ 𝑀2 −→ . . . is present in 𝑁. If any two marking 𝑀𝑖 , 𝑀 𝑗 (𝑖 ≠ 𝑗) on this chain are equal, then all states are equal, by weak periodicity. Hence all markings on the chain are different. Pick any one, say 𝑀𝑖 , of them. By Dickson’s lemma (Lemma 3.20), there is a number 𝑘 ≥ 1 such that 𝛼 𝛼 𝑀𝑖 −→ . . . −→ 𝑀𝑖+𝑘 and 𝑀𝑖 < 𝑀𝑖+𝑘 | {z } 𝑘 times

But then also

𝛼

𝛼

𝑀𝑖−𝑘 < 𝑀𝑖 with 𝑀𝑖−𝑘 −→ . . . −→ 𝑀𝑖 | {z } 𝑘 times

and so on. This is impossible because every place has a constant lower limit for the number of tokens it can carry. The same argument is true for general two-way chains, since it depends only on the Parikh vector of 𝛼. 13.3 Hence, if during pre-synthesis, any violation of the new properties introduced in Definition 13.2 (or any of the first three properties of Definition 12.1) is detected, the lts under consideration is not Petri net synthesisable and can be rejected before synthesis is started. In the next chapters, we shall find other general properties, dedicated to several net subclasses, usable for targeted pre-syntheses.

13.3 Extracting concurrency from an lts A product of two disjoint lts (i.e., with disjoint label sets) is again an lts. Its states are pairs of states of the two lts and an edge exists if one of the underlying states can do the transition. An example is shown in Figure 13.2.

13.3 Extracting concurrency from an lts

301

Definition 13.4 Product of two disjoint lts Let 𝑇 𝑆1 = (𝑄 1 , →1 , 𝑇1 , 𝑞 0 1 ) and 𝑇 𝑆2 = (𝑄 2 , →2 , 𝑇2 , 𝑞 0 2 ) be two lts with disjoint label sets (𝑇1 ∩ 𝑇2 = ∅). The  (disjoint) product 𝑇 𝑆1 ⊗ 𝑇 𝑆′2 is the  lts 𝑄 1 × 𝑄 2 , →, 𝑇1 ∪ 𝑇2 , (𝑞 0 1 , 𝑞 0 2 ) , where → = { (𝑞 1 , 𝑞 2 ), 𝑡1 , (𝑞 1 , 𝑞 2 ) |  (𝑞 1 , 𝑡1 , 𝑞 1′ ) ∈ →1 } ∪ { (𝑞 1 , 𝑞 2 ), 𝑡2 , (𝑞 1 , 𝑞 2′ ) | (𝑞 2 , 𝑡2 , 𝑞 2′ ) ∈ →2 }.

0

𝑎

1

𝑇𝑆1 :

𝑇𝑆2 :

𝑞0

𝑇𝑆3 :

𝑎

(0, 𝑞0 )

𝑏 𝑞1

(0, 𝑞1 )

𝑏

𝑏 𝑞2

(0, 𝑞2 )

(1, 𝑞0 ) 𝑏

𝑎 𝑏

(1, 𝑞1 ) 𝑏

𝑎

(1, 𝑞2 )

Fig. 13.2: Example for a disjoint product. We have 𝑇 𝑆1 ⊗ 𝑇 𝑆2 = 𝑇 𝑆3 . When a product is given and the individual label sets 𝑇1 and 𝑇2 are known, the factors can be computed by only following edges with labels in 𝑇1 , respectively 𝑇2 , from the initial state: Corollary 13.5 Factors of a product If 𝑇 𝑆 = (𝑄, →, 𝑇1 ∪ 𝑇2 , 𝑞 0 ) ≡ 𝑇 𝑆1 ⊗ 𝑇 𝑆2 where 𝑇 𝑆1 and 𝑇 𝑆2 are totally reachable and 𝑇1 ∩ 𝑇2 = ∅, then 𝑇 𝑆1 ≡ ⟨𝑞 0 ⟩ 𝑇1 , where 𝛼

⟨𝑞 0 ⟩ 𝑇1 = (𝑄 1 , →1 , 𝑇1 , 𝑞 0 ) with 𝑄 1 = {𝑞 ∈ 𝑄 | ∃𝛼1 ∈ (±𝑇1 ) ∗ : 𝑞 0 −→ 𝑞} and →1 = {(𝑞 1 , 𝑡1 , 𝑞 2 ) ∈ → | 𝑞 1 , 𝑞 2 ∈ 𝑄 1 , 𝑡1 ∈ 𝑇1 } and similarly for 𝑇 𝑆2 . It is easy to see that, up to isomorphism, the disjoint product of lts is commutative, associative and has a neutral (the lts with a single state and no label). Hence, if {𝑇 𝑆𝑖 } is Ëa family of transition systems with pairwise disjoint label sets, we may denote by 𝑖 {𝑇 𝑆 𝑖 } their disjoint product in any commutative and/or associative form. Example 13.6 Producing a cube Figure 13.3 shows how a three-dimensional cubea can be produced as a 3-fold product of three one-dimensional cubes. a

Well, at least an object that is, in some sense, homeomorphic to a cube.

There is an interesting relation between transition system products and Petri nets: if two nets are disjoint, putting them side by side yields a new net whose reachability graph is (up to isomorphism) the disjoint product of the reachability graphs of the two

302

13 Divide-and-Conquer Methods for Synthesis 024

𝑎 0 ⊗

𝑎



𝑏

𝑐 𝑏

4

2



𝑐

124

𝑎

025

𝑐 034

𝑏

𝑏 125

1

3

5

𝑐

134

𝑎

035

𝑏 𝑐

𝑎 135

Fig. 13.3: Building a three-dimensional cube (right-hand side) out of three onedimensional cubes (left-hand side).

original nets. For example, the three-dimensional cube in Figure 13.3 is isomorphic to the reachability graph of the 𝑛-bit system shown in Figure 7.2, when 𝑛 = 3. This leads to the following definition.3 Definition 13.7 (Disjoint) sum of Petri nets Let 𝑁1 = (𝑆1 , 𝑇1 , 𝐹1 , 𝑀01 ) and 𝑁2 = (𝑆2 , 𝑇2 , 𝐹2 , 𝑀02 ) be two Petri nets with disjoint transition sets (𝑇1 ∩ 𝑇2 = ∅). Since we consider nets up to isomorphism, we may also assume that 𝑆1 ∩ 𝑆2 = ∅. The disjoint sum 𝑁1 ⊕ 𝑁2 is defined (up to isomorphism) as the net 𝑁 = (𝑆1 ∪𝑆2 , 𝑇1 ∪𝑇2 , 𝐹1 ∪𝐹2 , 𝑀01 ∪𝑀02 ). Again, up to isomorphism, the disjoint sum of Petri nets is commutative, associative É and has a neutral (the empty net), and we may use the notation 𝑖 {𝑁𝑖 } for the disjoint sum in any commutative and/or associative form of a family {𝑁𝑖 } of pairwise disjoint nets. An additional remark that may be precious for applications is that many subclasses of Petri nets found in the literature (free-choice, choice-free, join-free, fork-attribution, homogeneous, for which see Chapters 5 and 12) are compatible with the above (de)composition, in the sense that a disjoint sum of nets belongs to such a subclass if and only if each component belongs to the same subclass. Corollary 13.8 Reachability graph of a sum of nets The reachability graph of a disjoint sum of nets is isomorphic to the disjoint product of the reachability graphs of the composing nets. There is also a reverse property:

3 Compare also Section 2.5.1 where the disjoint sum of Petri nets was used in order to prove the closure of Petri net languages with respect to the shuffle operator.

13.3 Extracting concurrency from an lts

303

Proposition 13.9 Petri net solution of a disjoint product of lts A disjoint product of lts has a Petri net solution iff each composing lts has a Petri net solution, and a possible solution is the disjoint sum of the latter. The addition of disjoint Petri nets is easy, and synthesising true (.i.e., not just onestate) factors of an lts will usually be much simpler than the synthesis of the whole system. It remains to find an (optimal) factorisation for the given lts. To this aim, Corollary 13.5 alone is not useful because, while it is not too difficult to check whether a tentative decomposition 𝑇 = 𝑇1 ∪ 𝑇2 leads to 𝑇 𝑆 ≡ ⟨𝑞 0 1 ⟩ 𝑇1 ⊗ ⟨𝑞 0 2 ⟩ 𝑇2 , if it does not work we will have to try with another decomposition of the label set (ideally a partition), and there is an exponential number of them. To solve this problem, we shall introduce a suitable equivalence relation between the labels of a transition system. Definition 13.10 Label separation An lts 𝑇 𝑆 = (𝑄, →, 𝑇, 𝑞 0 ) with 𝑇1 , 𝑇2 ⊆ 𝑇 is {𝑇1 , 𝑇2 }-separated if for all 𝛽

𝛼

𝑞, 𝑞 ′ ∈ 𝑄 and for all 𝛼 ∈ (±𝑇1 ) ∗ and 𝛽 ∈ (±𝑇2 ) ∗ : (𝑞 −→ 𝑞 ′ ∧ 𝑞 −→ 𝑞 ′ ) ⇒ 𝑞 = 𝑞′. This means that, if 𝑞 ≠ 𝑞 ′ , it is not possible to go from 𝑞 to 𝑞 ′ while using only labels (and reverse labels) from 𝑇1 as well as only labels (and reverse labels) from 𝑇2 . Trivially, 𝑇 𝑆 is always {∅, 𝑇 }- as well as {𝑇, ∅}-separated. Proposition 13.11 Product implies separation Let 𝑇 𝑆1 = (𝑄 1 , →1 , 𝑇1 , 𝑞 0 1 ) and 𝑇 𝑆2 = (𝑄 2 , →2 , 𝑇2 , 𝑞 0 2 ) be two lts with disjoint label sets, then 𝑇 𝑆1 ⊗ 𝑇 𝑆2 is {𝑇1 , 𝑇2 }-separated. Another remarkable property of a product lts is the following. Definition 13.12 General diamond property An lts 𝑇 𝑆 = (𝑄, →, 𝑇, 𝑞 0 ) enjoys the general diamond property for two ← − distinct labels 𝑎, 𝑏 ∈ 𝑇 if ∀𝑞, 𝑞 1 , 𝑞 2 ∈ 𝑄 ∀𝑢 ∈ {𝑎, ← 𝑎−} ∀𝑣 ∈ {𝑏, 𝑏 }: 𝑢

𝑣

𝑣

𝑢

𝑞 −→ 𝑞 1 ∧ 𝑞 −→ 𝑞 2 ⇒ (∃𝑞 ′ ∈ 𝑄 : 𝑞 1 −→ 𝑞 ′ ∧ 𝑞 2 −→ 𝑞 ′ ) If 𝑇1 , 𝑇2 ⊆ 𝑇 with 𝑇1 ∩ 𝑇2 = ∅, then 𝑇 𝑆 will be called {𝑇1 , 𝑇2 }-gdiam if it enjoys the general diamond property for each pair of labels 𝑎 ∈ 𝑇1 , 𝑏 ∈ 𝑇2 . In other words, 𝑇 𝑆 has the general diamond property for 𝑎 ≠ 𝑏 ∈ 𝑇 if, whenever there are two adjacent edges in a diamond as in Figure 13.4, the other two are also present. Any lts 𝑇 𝑆 = (𝑆, →, 𝑇, 𝑞 0 ) is both {∅, 𝑇 }- and {𝑇, ∅}-gdiam. Figure 13.4 shows that the general diamond property captures both forms of persistence (forward and

304

13 Divide-and-Conquer Methods for Synthesis

backward), as well as (either weak or strong) transpositions, as defined in Chapters 8 and 12. 𝑞3 𝑎

𝑏 𝑞1

𝑞4 𝑎

𝑏 𝑞2

𝑎

𝑏

𝑎

𝑏

𝑎

𝑏

𝑎

𝑏

𝑎

𝑏

𝑏

𝑎

𝑏

𝑎

𝑎

𝑏

𝑞1 −→ 𝑞2 ∧ 𝑞1 −→ 𝑞3 ⇒ ∃𝑞4 : 𝑞3 −→ 𝑞4 ∧ 𝑞2 −→ 𝑞4 (forward persistence) 𝑞3 −→ 𝑞4 ∧ 𝑞2 −→ 𝑞4 ⇒ ∃𝑞1 : 𝑞1 −→ 𝑞2 ∧ 𝑞1 −→ 𝑞3 (backward persistence) 𝑞1 −→ 𝑞2 ∧ 𝑞2 −→ 𝑞4 ⇒ ∃𝑞3 : 𝑞1 −→ 𝑞3 ∧ 𝑞3 −→ 𝑞4 (transposition) 𝑞1 −→ 𝑞3 ∧ 𝑞3 −→ 𝑞4 ⇒ ∃𝑞2 : 𝑞1 −→ 𝑞2 ∧ 𝑞2 −→ 𝑞4 (transposition)

Fig. 13.4: General diamond property.

The general diamond property can be extended to paths, as follows. Proposition 13.13 Projections and permutation 𝛼 Let 𝑇 𝑆 = (𝑄, →, 𝑇, 𝑞 0 ) be a {𝑇1 , 𝑇2 }-gdiam lts with 𝑇1 ∩𝑇2 = ∅. If 𝑞 −→ 𝑞 ′ for some 𝑞, 𝑞 ′ ∈ 𝑆 and general path 𝛼 ∈ ((±𝑇1 ) ∪ (±𝑇2 )) ∗ , let 𝛼1 be the projection of 𝛼 on 𝑇1 (i.e., 𝛼 where all the elements in ±𝑇2 are dropped) and 𝛼2 be the projection of 𝛼 on 𝑇2 (thus dropping the elements in ±𝑇1 ). Then 𝛼1 𝛼2 there are states 𝑞 1 , 𝑞 2 ∈ 𝑄 such that 𝑞 −→ 𝑞 1 −→ 𝑞 ′ and 𝛼2 𝛼1 𝑞 −→ 𝑞 2 −→ 𝑞 ′ . This implies a variant of Keller’s theorem 12.12. Proposition 13.14 General diamonds imply big general diamonds 𝛼1 Let 𝑇 𝑆 = (𝑄, →, 𝑇, 𝑞 0 ) be a {𝑇1 , 𝑇2 }-gdiam lts with 𝑇1 ∩𝑇2 = ∅. If 𝑞 −→ 𝑞 1 𝛼2 and 𝑞 −→ 𝑞 2 for some 𝑞, 𝑞 1 , 𝑞 2 ∈ 𝑆, 𝛼1 ∈ (±𝑇1 ) ∗ and 𝛼2 ∈ (±𝑇2 ) ∗ , then 𝛼2 𝛼1 for some 𝑞 ′ ∈ 𝑆, 𝑞 1 −→ 𝑞 ′ and 𝑞 2 −→ 𝑞 ′ . Taken together, Propositions 13.13 and 13.14 may be interpreted as the fact that big general diamonds are filled with small ones. Now, the interest of the notion of general diamonds arises from the following observation: Proposition 13.15 Product implies general diamonds Let 𝑇 𝑆1 = (𝑄 1 , →1 , 𝑇1 , 𝑞 0 1 ) and 𝑇 𝑆2 = (𝑄 2 , →2 , 𝑇2 , 𝑞 0 2 ) be two labelled transition systems with disjoint label sets, then 𝑇 𝑆1 ⊗ 𝑇 𝑆2 enjoys the general diamond property for each 𝑎 ∈ 𝑇1 and 𝑏 ∈ 𝑇2 .

13.3 Extracting concurrency from an lts

305

Unfortunately, the reverse property does not hold in all generality, as shown by the counterexamples in Figure 13.5. On the left, in 𝑇 𝑆1 , 𝑎 and 𝑏 form general diamonds; however, 𝑇 𝑆1 is not a product of two lts based on label sets {𝑎} and {𝑏}, since 𝑎← 𝑎−

𝑏

the general paths 𝑞 0 −→ 𝑞 2 and 𝑞 0 −→ 𝑞 2 show that the lts is not {{𝑎}, {𝑏}}separated, hence contradicting Proposition 13.11. On the right, in 𝑇 𝑆2 , 𝑎 and 𝑏 also ← − 𝑏𝑏

← 𝑎− 𝑎

form general diamonds, but we have 𝑞 5 −→ 𝑞 2 and 𝑞 5 −→ 𝑞 2 , contradicting again Proposition 13.11. But note that 𝑇 𝑆1 is backward non-deterministic while 𝑇 𝑆2 is forward nondeterministic. Hence neither of them can have a Petri net solution. 𝑞5

𝑇𝑆2 : 𝑇𝑆1 :

𝑞2 𝑎

𝑏

𝑏 𝑞3

𝑎 𝑞0

𝑏 𝑞0

𝑎

𝑏

𝑞4

𝑞1

𝑎

𝑎

𝑏

𝑞2

𝑏

Fig. 13.5: General diamond property does not imply product. In the following, we extend the ⟨ ⟩ notation already used in Corollary 13.5. Notation 13.16 The ⟨ ⟩ notation If 𝑞 ∈ 𝑄 and 𝑇 ′ ⊆ 𝑇, we shall denote by ⟨𝑞⟩𝑇 ′ the subset of states generally reachable from 𝑞 while only using labels from 𝑇 ′ , i.e., 𝛼

{𝑞 ′ ∈ 𝑆 | 𝑞 −→ 𝑞 ′ for some 𝛼 ∈ (±𝑇 ′ ) ∗ } ′

and by ⟨𝑞⟩ 𝑇 the corresponding subsystem, i.e., (⟨𝑞⟩𝑇 ′ , →𝑇 ′ , 𝑇 ′ , 𝑞), where 𝑎

→𝑇 ′ = {𝑞 ′ −→ 𝑞 ′′ ∈ → with 𝑞 ′ , 𝑞 ′′ ∈ ⟨𝑞⟩𝑇 ′ and 𝑎 ∈ 𝑇 ′ } The main result of this section is then: Theorem 13.17 General diamonds and separation imply product Let 𝑇 𝑆 = (𝑄, →, 𝑇, 𝑞 0 ) be a connected lts with 𝑇 = 𝑇1 ∪ 𝑇2 and 𝑇1 ∩ 𝑇2 = ∅. There are two lts 𝑇 𝑆1 = (𝑄 1 , →1 , 𝑇1 , 𝑞 0 1 ) and 𝑇 𝑆2 = (𝑄 2 , →2 , 𝑇2 , 𝑞 0 2 ) such that 𝑇 𝑆 ≡ 𝑇 𝑆1 ⊗ 𝑇 𝑆2 iff 𝑇 𝑆 is {𝑇1 , 𝑇2 }-separated and {𝑇1 , 𝑇2 }-gdiam. Moreover, we then have 𝑇 𝑆1 ≡ ⟨𝑞 0 ⟩ 𝑇1 and 𝑇 𝑆2 ≡ ⟨𝑞 0 ⟩ 𝑇2 .

306

13 Divide-and-Conquer Methods for Synthesis

Given an lts and two disjoint sets 𝑇1 and 𝑇2 of labels, it is easy to check if the lts is {𝑇1 , 𝑇2 }-gdiam: For each state a property that only considers its immediate neighborhood has to be examined. We call such a property local. In contrast, checking if the lts is {𝑇1 , 𝑇2 }-separated has to consider paths of arbitrary length and is global. A fundamental consequence is that, for Petri net solvability, the local property suffices: Theorem 13.18 Divide-and-conquer synthesisability using factors If a totally reachable lts 𝑇 𝑆 = (𝑄, →, 𝑇1 ∪ 𝑇2 , 𝑞 0 ) with 𝑇1 ∩ 𝑇2 = ∅ is forward as well as backward deterministic and satisfies the general diamond property for each pair of labels 𝑎 ∈ 𝑇1 and 𝑏 ∈ 𝑇2 , then it is Petri net synthesisable iff so are ⟨𝑞 0 ⟩ 𝑇1 and ⟨𝑞 0 ⟩ 𝑇2 ; moreover, we then have 𝑇 𝑆 ≡ ⟨𝑞 0 ⟩ 𝑇1 ⊗ ⟨𝑞 0 ⟩ 𝑇2 and therefore a possible solution of the synthesis problem for 𝑇 𝑆 is the disjoint sum of a solution of ⟨𝑞 0 ⟩ 𝑇1 and a solution of ⟨𝑞 0 ⟩ 𝑇2 . Proof: Let us first assume 𝑇 𝑆 is Petri net synthesisable. By Propositions 12.2 and 13.3, it is weakly periodic, has no general two-way uniform chains and is strongly cycle-consistent. If it has the general diamond property for {𝑇1 , 𝑇2 }, let us show that it is also {𝑇1 , 𝑇2 }-separable. Let us assume that for some states 𝑟 0 , 𝑟 1 ∈ 𝑆, 𝛼 ∈ (±𝑇1 ) ∗ 𝛽

𝛼

and 𝛽 ∈ (±𝑇2 ) ∗ , we have 𝑟 0 −→ 𝑟 1 ∧ 𝑟 0 −→ 𝑟 1 . Then, from Proposition 13.14, we 𝛽

𝛼

𝛽

𝛼

have for some 𝑟 2 as well as some 𝑟 −1 : 𝑟 1 −→ 𝑟 2 ∧𝑟 1 −→ 𝑟 2 ∧𝑟 −1 −→ 𝑟 0 ∧𝑟 −1 −→ 𝑟 0 , and we may continue the construction forward as well as backward. Since the reachability graph of a Petri net has no general two-way uniform chains, we must have 𝑟 𝑖 = 𝑟 𝑗 for some 𝑖 ≠ 𝑗 ∈ Z. But then, from the strong cycle consistency (as well as the weak periodicity), we must have 𝑟 𝑖 = 𝑟 𝑗 for all 𝑖, 𝑗 ∈ Z, and in particular 𝑟 0 = 𝑟 1 , hence the separation property for {𝑇1 , 𝑇2 }. From Theorem 13.17 and Proposition 13.9, it follows that a possible solution of the synthesis problem is the disjoint sum of the solutions of ⟨𝑞 0 ⟩ 𝑇1 and ⟨𝑞 0 ⟩ 𝑇2 . Conversely, let us assume that ⟨𝑞 0 ⟩ 𝑇1 and ⟨𝑞 0 ⟩ 𝑇2 are Petri net solvable. Let us show that 𝑇 𝑆 satisfies the separation property for {𝑇1 , 𝑇2 }. Like in the first part, since by hypothesis we have the general diamond property, if for some 𝑟 0 , 𝑟 1 ∈ 𝑆, 𝛼 ∈ (±𝑇1 ) ∗ 𝛼

𝛽

and 𝛽 ∈ (±𝑇2 ) ∗ we have 𝑟 0 −→ 𝑟 1 ∧ 𝑟 0 −→ 𝑟 1 , we may construct a structure 𝛽

𝛼

𝑟 𝑖 −→ 𝑟 𝑖+1 ∧ 𝑟 𝑖 −→ 𝑟 𝑖+1 . Since the lts is totally reachable, let us assume that 𝛾 𝑞 0 −→ 𝑟 0 for some 𝛾 ∈ 𝑇 ∗ , with 𝛾1 being its projection on 𝑇1 and 𝛾2 being its projection on 𝑇2 . 𝛾1 𝛼𝑖

𝛾2

From Proposition 13.13, for each 𝑖 ∈ Z we have 𝑞 0 −→ 𝑟 𝑖′ −→ 𝑟 𝑖 for some 𝑟 𝑖′ . 𝛾2

From forward and backward determinism, each 𝑟 𝑖′ is unique, and since 𝑟 𝑖′ −→ 𝑟 𝑖 𝛾2

and 𝑟 ′𝑗 −→ 𝑟 𝑗 for any 𝑖, 𝑗 ∈ Z, 𝑟 𝑖′ = 𝑟 ′𝑗 iff 𝑟 𝑖 = 𝑟 𝑗 ; hence all the states 𝑟 𝑖′ are distinct 𝛼

′ iff this is true for all the states 𝑟 𝑖 . But in ⟨𝑞 0 ⟩ 𝑇1 , 𝑟 𝑖′ −→ 𝑟 𝑖+1 for each 𝑖 ∈ Z. Since 𝑇 ′ 1 ⟨𝑞 0 ⟩ is Petri net solvable, Proposition 13.3 ensures that 𝑟 𝑖 = 𝑟 ′𝑗 for some 𝑖 ≠ 𝑗 ∈ Z (no two-way general uniform chains) and 𝑟 𝑖′ = 𝑟 ′𝑗 for all 𝑖, 𝑗 ∈ Z (weak periodicity),

13.3 Extracting concurrency from an lts

307

and consequently 𝑟 𝑖 = 𝑟 𝑗 for all 𝑖, 𝑗 ∈ Z. In particular 𝑟 0 = 𝑟 1 , hence the claimed separation property. And from Theorem 13.17 and Proposition 13.9, we thus have that a possible solution of the synthesis problem is the disjoint sum of the solutions of ⟨𝑞 0 ⟩ 𝑇1 and ⟨𝑞 0 ⟩ 𝑇2 . 13.18 In some sense, this means that the missing (global) separation property is hidden in the Petri net synthesis of the subsystems ⟨𝑞 0 ⟩ 𝑇1 and ⟨𝑞 0 ⟩ 𝑇2 , when this succeeds. It remains to find suitable subsets of labels 𝑇1 and 𝑇2 , partitioning 𝑇 and satisfying the general diamond property. To this end, one may rely on the following, which is again a local property. Definition 13.19 Connected labels Let 𝑇 𝑆 = (𝑄, →, 𝑇, 𝑞 0 ) be an lts and 𝑎, 𝑏 ∈ 𝑇 be two distinct labels. We shall denote by 𝑎 ↔ 𝑏 the fact that they do not form general diamonds, 𝑢 𝑣 i.e., there are states 𝑞, 𝑞 1 , 𝑞 2 ∈ 𝑆, 𝑞 −→ 𝑞 1 , 𝑞 −→ 𝑞 2 with 𝑢 ∈ {𝑎, ← 𝑎−} and ← − 𝑣 𝑢 𝑣 ∈ {𝑏, 𝑏 }, such that there is no state 𝑞 ′ with (𝑞 1 −→ 𝑞 ′ ∧ 𝑞 2 −→ 𝑞 ′ ). If 𝑎 ↔ 𝑏, then, in any decomposition, 𝑎 and 𝑏 must belong to the same component. In other words, 𝑎 ∈ 𝑇1 ⇒ [𝑎] ⊆ 𝑇1 , where [𝑎] = {𝑏 ∈ 𝑇 | 𝑎 ↔∗ 𝑏} and ↔∗ is the reflexive and transitive closure of ↔ (intuitively meaning that the labels connected by it are “non-diamondisable”). But from Theorem 13.18, we know that this is enough: for each equivalence class, either the synthesis works and we have a global solution by taking the disjoint sum of all the solutions, or one (or more) of the subproblems fails, and we know there is no global solution for the whole system. Our proposed algorithm now works as follows: First, iterate over all states of the given lts, and for each state, check if the adjacent edges form general diamonds. If not, their labels must be in the same equivalence class of ↔∗ . Then, for each equivalence class [𝑎], try Petri net synthesis on ⟨𝑞 0 ⟩ [𝑎] . If it works, the result is the disjoint sum of the computed Petri nets. This constructs the equivalence relation by repeatedly joining classes, but it also allows to stop the iteration early when only one equivalence class remains. Example 13.20 Example 13.6 continued In Figure 13.3, all pairs of labels (𝑎, 𝑏), (𝑎, 𝑐) and (𝑏, 𝑐) provide for general diamonds around every state. So, the transition system can be factored into three mutually independent and smaller transition systems. This checking has to be done for every state, but it is local, for each of them. Concerning the estimation of the efficiency of this procedure, we may observe that, from the forward and the backward determinism, any state can be connected to 2 · |𝑇 | edges (each label may occur at most once in forward and in backward direction). Checking the presence of general diamonds requires examining each pair of these edges, so it requires time in 𝑂 (|𝑇 | 2 ) for each state, and 𝑂 (|𝑆| · |𝑇 | 2 ) for the full lts. However, in practice, in the non-factorisable case, the procedure is likely to be a lot faster since one may stop as soon as it is discovered that there is a single equivalence

308

13 Divide-and-Conquer Methods for Synthesis

class; in this case the factorisation procedure is useless, but the extra burden may be expected to be neglible with respect to the time of the proper synthesis. For a factorisable lts, if there are 𝑓 factors approximately of the same size, the size of each subsystem to synthesise is the 𝑓 th root of the original size. This divides the polynomial degree of the synthesis by 𝑓 , i.e., replacing 𝑂 (|𝑆| 𝑑 ) with 𝑂 ( 𝑓 · |𝑆| 𝑑/ 𝑓 ). However, this concerns the worst case complexity analysis and the true one may be rather different, which may be checked by conducting various experiments. Also, if the factorisation procedure is placed at the end of the pre-synthesis phase which checks the structure of the lts to determine whether there is a chance that the synthesis succeeds, the factorisation will not be entered at all if a failure is detected before.

13.4 Detecting loose coupling in an lts A second pair of conjugated operators is provided by an articulation feature. Intuitively, an articulation is a state which, when deleted, causes a given lts to be separated into two or more disconnected components. Definition 13.21 Articulation of labelled transition systems Let 𝑇 𝑆 = (𝑄, →, 𝑇, 𝑞 0 ) be a labelled transition system (we shall assume it is deterministic, totally reachable, weakly live, defined up to isomorphism, and that 𝑇 ≠ ∅). 𝑡 𝑡 Let 𝑇1 ⊆ 𝑇. We shall denote by adj(𝑇1 ) = {𝑞 ∈ 𝑄 | ∃𝑡 ∈ 𝑇1 : 𝑞 −→ or −→ a 𝑞} the adjacency set of 𝑇1 , i.e., the set of states connected to 𝑇1 . Let ∅ ⊂ 𝑇1 ⊂ 𝑇, 𝑇2 = 𝑇 \ 𝑇1 and 𝑞 ∈ 𝑄. We shall say that 𝑇 𝑆 is articulated by 𝑇1 and 𝑇2 around 𝑞 if adj(𝑇1 ) ∩ 𝛼1 adj(𝑇2 ) = {𝑞}, ∀𝑞 1 ∈ adj(𝑇1 )∃𝛼1 ∈ 𝑇1∗ : 𝑞 0 −→ 𝑞 1 and ∀𝑞 2 ∈ adj(𝑇2 )∃𝛼2 ∈ 𝛼2

𝑇2∗ : 𝑞 −→ 𝑠𝑞 2 . Let 𝑇 𝑆1 = (𝑄 1 , →1 , 𝑇1 , 𝑞 0 1 ) and 𝑇 𝑆2 = (𝑄 2 , →2 , 𝑇2 , 𝑞 0 2 ) be two (totally reachable) transition systems with 𝑇1 ∩ 𝑇2 = ∅ and 𝑞 ∈ 𝑄 1 . Thanks to isomorphisms, we may assume that 𝑄 1 ∩ 𝑄 2 = {𝑞} and 𝑞 0 2 = 𝑞. We shall then denote by 𝑇 𝑆1 ⊳ 𝑞 ⊲ 𝑇 𝑆2 = (𝑄 1 ∪ 𝑄 2 , 𝑇1 ∪ 𝑇2 , →1 ∪ →2 , 𝑞 0 1 ) the articulation of 𝑇 𝑆1 and 𝑇 𝑆2 around 𝑞. 𝑡

The notation 𝑞 −→ is meant to indicate that 𝑞 has an outgoing arrow labelled 𝑡, while 𝑡 −→ 𝑞 means that 𝑞 has an incoming 𝑡-arrow.

a

Several easy but interesting properties may be derived for this articulation operator. Note first that the operator is only defined up to isomorphism, since we may need to rename the state sets (usually 𝑄 2 ; but we may also rename 𝑄 1 , or both). The only

13.4 Detecting loose coupling in an lts

309

constraint is that, after the relabellings, 𝑞 is the unique common state of 𝑇 𝑆1 and 𝑇 𝑆2 , and it is the state where the two systems are to be articulated. Figure 13.6 illustrates this operator. It also shows that the articulation highly depends on the state around which the articulation takes place. It may also be observed that, if 𝑇 𝑆0 = ({𝑞 0 }, ∅, ∅, 𝑞 0 ) is the trivial empty LTS, then for any state 𝑞 of 𝑇 𝑆, 𝑇 𝑆 ⊳ 𝑞 ⊲ 𝑇 𝑆0 ≡ 𝑇 𝑆, i.e., we have a kind of right neutral trivial articulation. Similarly, 𝑇 𝑆0 ⊳ 𝑞 0 ⊲ 𝑇 𝑆 ≡ 𝑇 𝑆, i.e., we have a kind of left neutral trivial articulation. However, these neutrals will play no role in the remaining part of this section, so that we shall exclude them from our considerations (which is why the edge label sets were assumed to be non-empty). Corollary 13.22 Both forms of articulation are equivalent If 𝑇 𝑆 = (𝑄, →, 𝑇, 𝑞 0 ) is articulated by 𝑇1 and 𝑇2 around 𝑞, then the structures 𝑇 𝑆1 = (adj(𝑇1 ), →1 , 𝑇1 , 𝑞 0 ) and 𝑇 𝑆2 = (adj(𝑇2 ), →2 , 𝑇2 , 𝑞), where →1 is the restriction of → to 𝑇1 (i.e., →1 =→ ∩adj(𝑇1 ) ×𝑇1 ×adj(𝑇1 )), and similarly for →2 , are (totally reachable) transition systems, 𝑇 𝑆 ≡ 𝑇 𝑆1 ⊳ 𝑞 ⊲ 𝑇 𝑆2 (in that case we do not need to apply a relabelling to 𝑇 𝑆1 and 𝑇 𝑆2 ). Conversely, 𝑇 𝑆1 ⊳ 𝑞 ⊲ 𝑇 𝑆2 is articulated by the label sets of 𝑇 𝑆1 and 𝑇 𝑆2 around 𝑞, if these transition systems are totally reachable.

𝑞0

𝑐 𝑎

𝑏

𝑒

𝑏

𝑐

𝑔 𝑞

𝑞0

𝑞 𝑑 𝑇𝑆2

𝑇𝑆1

𝑞0

𝑎

𝑑

𝑒 𝑔 𝑞 𝑑

𝑏

𝑔 𝑞′

𝑞′

𝑇𝑆3 ≡ 𝑇𝑆1 ⊳ 𝑞 ⊲ 𝑇𝑆2

𝑒

𝑐 𝑞0

𝑎

𝑞

𝑇𝑆4 ≡ 𝑇𝑆1 ⊳ 𝑞0 ⊲ 𝑇𝑆2

Fig. 13.6: Some articulations.

Corollary 13.23 Evolutions (i.e., paths) of an articulation 𝛼 If 𝑇 𝑆 ≡ 𝑇 𝑆1 ⊳ 𝑞 ⊲ 𝑇 𝑆2 , then 𝑞 0 −→ 𝑞 ′ is an evolution of 𝑇 𝑆 iff it is an alternation of evolutions of 𝑇 𝑆1 and 𝑇 𝑆2 separated by occurrences of 𝑞, i.e., either 𝛼 ∈ 𝑇1∗ or 𝛼 = 𝛼1 𝛼2 . . . 𝛼𝑛 such that 𝛼𝑖 ∈ 𝑇1∗ if 𝑖 is odd, 𝛼𝑖 ∈ 𝑇2∗ if 𝑖 is 𝛼𝑖

𝛼1

𝛼𝑖+1

even, 𝑞 0 −→ 𝑞 and ∀𝑖 ∈ {1, 2, . . . , 𝑛 − 1} : −→ 𝑞 −→ .

Example 13.24 Illustrating the alternation of evolutions In 𝑇 𝑆3 from Figure 13.6, a possible evolution is 𝑎𝑏𝑐

𝑔𝑒𝑑𝑒

𝑏

𝑞 0 −→ 𝑞 −→ 𝑞 −→ 𝑞 0

310

13 Divide-and-Conquer Methods for Synthesis

using 𝑇 𝑆1 , 𝑇 𝑆2 , and 𝑇 𝑆1 (in this order), but also, equivalently, 𝑎

𝜀

𝑔𝑒

𝑏𝑐

𝜀

𝑑𝑒

𝑏

𝑞 0 −→ 𝑞 −→ 𝑞 −→ 𝑞 −→ 𝑞 −→ 𝑞 −→ 𝑞 −→ 𝑞 0 (where 𝜀 is the empty sequence), alternating several more times between 𝑇 𝑆1 and 𝑇 𝑆2 . The two sequences are the same, but they differ in terms of which constituent lts is chosen at each step.

Corollary 13.25 Associativity of articulations Let us assume that 𝑇 𝑆1 , 𝑇 𝑆2 and 𝑇 𝑆3 are three transition systems whose label sets 𝑇1 , 𝑇2 and 𝑇3 are pairwise disjoint. Let 𝑞 1 be a state of 𝑇 𝑆1 and 𝑞 2 be a state of 𝑇 𝑆2 . Then 𝑇 𝑆1 ⊳ 𝑞 1 ⊲ (𝑇 𝑆2 ⊳ 𝑞 2 ⊲ 𝑇 𝑆3 ) ≡ (𝑇 𝑆1 ⊳ 𝑞 1 ⊲ 𝑇 𝑆2 ) ⊳ 𝑞 2′ ⊲ 𝑇 𝑆3 where 𝑞 2′ in 𝑇 𝑆1 ⊳ 𝑞 1 ⊲ 𝑇 𝑆2 corresponds to 𝑞 2 in 𝑇 𝑆2 (let us recall that the articulation operator may rename the states of the second operand). This corollary is illustrated by Figure 13.7.

𝑏

𝑖

𝑒

𝑐 𝑞0

𝑞 ℎ

𝑇𝑆5

𝑞0

𝑔 𝑞′

𝑞 𝑎

𝑖

𝑑

𝑞 ′′ ℎ

(𝑇𝑆1 ⊳ 𝑞 ⊲ 𝑇𝑆2 ) ⊳ 𝑞 ′ ⊲ 𝑇𝑆5 ≡ 𝑇𝑆1 ⊳ 𝑞 ⊲ (𝑇𝑆2 ⊳ 𝑞 ⊲ 𝑇𝑆5 )

Fig. 13.7: Associativity of articulations, with the systems from Figure 13.6.

Corollary 13.26 Commutativity of articulations If 𝑇 𝑆1 = (𝑄 1 , →1 , 𝑇, 𝑞 0 1 ) and 𝑇 𝑆2 = (𝑄 2 , →2 , 𝑇, 𝑞 0 2 ) with disjoint label sets. Then 𝑇 𝑆1 ⊳ 𝑞 0 1 ⊲ 𝑇 𝑆2 ≡ 𝑇 𝑆2 ⊳ 𝑞 0 2 ⊲ 𝑇 𝑆1 . For instance, in Figure 13.6, 𝑇 𝑆4 ≡ 𝑇 𝑆1 ⊳ 𝑞 0 ⊲ 𝑇 𝑆2 ≡ 𝑇 𝑆2 ⊳ 𝑞 0 ⊲ 𝑇 𝑆1 . Corollary 13.27 Commutative associativity of articulations Let us assume that 𝑇 𝑆1 , 𝑇 𝑆2 and 𝑇 𝑆3 are three transition systems whose label sets 𝑇1 , 𝑇2 and 𝑇3 are pairwise disjoint. Let 𝑞 2 and 𝑞 3 be two states of 𝑇 𝑆1 (𝑞 2 = 𝑞 3 is allowed). Then (𝑇 𝑆1 ⊳ 𝑞 2 ⊲ 𝑇 𝑆2 ) ⊳ 𝑞 3 ⊲ 𝑇 𝑆3 ≡ (𝑇 𝑆1 ⊳ 𝑞 3 ⊲ 𝑇 𝑆3 ) ⊳ 𝑞 2 ⊲ 𝑇 𝑆2

13.4 Detecting loose coupling in an lts

311 𝑏



𝑒

𝑐 𝑞 ′′ 𝑖

𝑔 𝑞′

𝑞

𝑞0

𝑎

𝑑

(𝑇𝑆1 ⊳ 𝑞 ⊲ 𝑇𝑆2 ) ⊳ 𝑞0 ⊲ 𝑇𝑆5 ≡ (𝑇𝑆1 ⊳ 𝑞0 ⊲ 𝑇𝑆5 ) ⊳ 𝑞 ⊲ 𝑇𝑆2

Fig. 13.8: Commutative associativity of articulations.

Corollary 13.28 Sequence of articulations If 𝑇 𝑆1 = (𝑄 1 , →1 , 𝑇, 𝑞 0 1 ) and 𝑇 𝑆2 = (𝑄 2 , →2 , 𝑇, 𝑞 0 2 ) have disjoint label 𝛼1 sets, if ∀𝑞 1 ∈ 𝑄 1 ∃𝛼1 ∈ 𝑇1∗ : 𝑞 1 −→ 𝑞 (that is, 𝑞 is a home state in 𝑇 𝑆1 ) and 𝑡1

š𝑡 1 ∈ 𝑇1 : 𝑞 −→ (that is, 𝑞 is a deadlock in 𝑇 𝑆1 ), then 𝑇 𝑆1 ⊳ 𝑞 ⊲ 𝑇 𝑆2 behaves like a sequence, i.e., once 𝑇 𝑆2 has started, it is no longer possible to execute 𝑇1 . 𝛼2 The same occurs when 𝑞 0 2 does not occur in a non-trivial cycle, i.e., 𝑞 0 2 −→ 𝑞 0 2 ∧ 𝛼2 ∈ 𝑇2∗ ⇒ 𝛼2 = 𝜀: once 𝑇 𝑆2 has started, it is no longer possible to execute 𝑇1 . This is illustrated in Figure 13.9. It may be observed that ordinary sequences correspond to the intersection of both cases.

𝑏

𝑒 𝑔 𝑐

𝑇𝑆6

𝑐

𝑔 𝑞

𝑞0

𝑞

𝑞0 𝑇𝑆7

𝑞0

𝑔 𝑞 𝑑

𝑐 𝑞′

𝑞′

𝑇𝑆6 ⊳ 𝑞 ⊲ 𝑇𝑆2

𝑞0

𝑎

𝑞

𝑇𝑆1 ⊳ 𝑞0 ⊲ 𝑇𝑆7

Fig. 13.9: Sequential articulations. If an articulation is solvable, then each component is individually solvable too: Proposition 13.29 Synthesis of components of an articulation Let us assume that 𝑇 𝑆 = (𝑆, →, 𝑇1 ∪ 𝑇2 , 𝑞 0 ) is articulated by 𝑇1 and 𝑇2 around 𝑞, so that 𝑇 𝑆 ≡ 𝑇 𝑆1 ⊳ 𝑞 ⊲ 𝑇 𝑆2 with 𝑇 𝑆1 = (adj(𝑇1 ), →1 , 𝑇1 , 𝑞 0 ) and 𝑇 𝑆2 = (adj(𝑇2 ), →2 , 𝑞). If 𝑇 𝑆 is PN-solvable, so is each component 𝑇 𝑆1 and 𝑇 𝑆2 . Moreover, in the corresponding solution for 𝑇 𝑆1 , the marking corresponding to 𝑞 is not dominated by any other reachable marking. The same happens in 𝑇 𝑆2 if the latter is finite.

312

13 Divide-and-Conquer Methods for Synthesis

Proof: Let 𝑁 = (𝑃, 𝑇, 𝐹, 𝑀0 ) be a solution for 𝑇 𝑆. It is immediate that 𝑁1 = (𝑃, 𝑇1 , 𝐹1 , 𝑀0 ), where 𝐹1 is the restriction of 𝐹 to 𝑇1 , is a solution for 𝑇 𝑆1 (but there may be many other ones). Similarly, if 𝑀 is the marking of 𝑁 (and 𝑁1 ) corresponding to 𝑞, it may be seen that 𝑁2 = (𝑃, 𝑇2 , 𝐹2 , 𝑀), where 𝐹2 is the restriction of 𝐹 to 𝑇2 , is a solution for 𝑇 𝑆2 (but there may be many other ones). 𝑡2

Moreover, if 𝑞 −→ for some label 𝑡 2 ∈ 𝑇2 and 𝑀 ′ is a marking of 𝑁1 corresponding 𝑡2 to some state 𝑞 ′ in 𝑇 𝑆1 with 𝑀 ′ ≩ 𝑀, then 𝑞 ≠ 𝑞 ′ , 𝑞 ′ −→ and 𝑞 is not the unique articulation between 𝑇1 and 𝑇2 . If 𝑀 ′ is a reachable marking of 𝑁2 with 𝑀 ′ ≩ 𝑀, then 𝑁2 is unbounded, hence 𝑇 𝑆2 may not be finite. 13.29 This is illustrated in Figure 13.10. Conversely, let us now assume that 𝑇 𝑆 = 𝑇 𝑆1 ⊳ 𝑞 ⊲ 𝑇 𝑆2 is an articulated LTS and that it is possible to solve 𝑇 𝑆1 and 𝑇 𝑆2 . Is it possible from that to build a solution of 𝑇 𝑆? To this end, we shall add the constraint already mentioned in Proposition 13.29 that, in the solution of 𝑇 𝑆1 as well as in the solution of 𝑇 𝑆2 , the marking corresponding to 𝑞 is not dominated by another reachable marking. If this is satisfied, we shall say that the solution is adequate (with respect to 𝑞). Hence, in the treatment of the system in Figure 13.10, we want to avoid considering the solution 𝑁1′ of 𝑇 𝑆1 ; by contrast, 𝑁1 or 𝑁1′′ will be acceptable. As already mentioned, if 𝑇 𝑆2 is finite, then the initial marking 𝑀02 (corresponding to 𝑞) in the solution of 𝑇 𝑆2 is not dominated by any reachable marking. Otherwise, 𝛼 there is a path 𝑀02 −→ 𝑀 with 𝑀02 ≨ 𝑀 in that solution, and an infinite path 𝛼∞

𝛼∞

𝑀02 −→ , hence also an infinite path 𝑞 0 2 −→ in 𝑇 𝑆2 , contradicting the finiteness assumption. If 𝑇 𝑆1 is finite and reversible, from a similar argument, no marking reachable in the solution of 𝑇 𝑆1 is dominated by another one, so that the constraint on 𝑞 is satisfied. Otherwise, it is possible to force such a solution (if there is one) in the following way: Proposition 13.30 Forcing an adequate solution for 𝑇 𝑆1 𝑢 Let us add to 𝑇 𝑆1 an arc 𝑞 −→ 𝑞 where 𝑢 is a fresh label. Let 𝑇 𝑆1′ be the transition system so obtained. If 𝑇 𝑆1′ is not solvable, there is no adequate solution. Otherwise, we may solve 𝑇 𝑆1′ and erase 𝑢 from the solution. Let 𝑁1 be the net obtained by the procedure just described; it is a solution of 𝑇 𝑆1 with the adequate property that the marking corresponding to 𝑞 is not dominated by another one. Proof: If there is an adequate solution 𝑁1 of 𝑇 𝑆1 , with a marking 𝑀 corresponding to 𝑞, let us add a new transition 𝑢 to it such that, for each place 𝑝 of 𝑁1 , 𝐹 ( 𝑝, 𝑢) = 𝑀 ( 𝑝) = 𝐹 (𝑢, 𝑝): the reachability graph of this new net is (isomorphic to) 𝑇 𝑆1′ , since 𝑢 is enabled by the marking 𝑀 (or any larger one, but there is none) and does not

13.4 Detecting loose coupling in an lts 𝑞0 𝑎 𝑞1 𝑎

313

𝑞4

𝑞0 1

𝑏 𝑞3

𝑞1

𝑞4 𝑎

𝑏

𝑏 𝑞3

𝑎

𝑞2

𝑏 𝑞0 2

𝑞2

𝑇𝑆 = 𝑇𝑆1 ⊳ 𝑠2 ⊲ 𝑇𝑆2

𝑎

𝑏

𝑇𝑆1

𝑇𝑆2

𝑎

𝑏 2

2

2

2 𝑁1

𝑁

𝑎 𝑁1′

𝑁2

𝑎

𝑏 𝑁2′

𝑁1′′

Fig. 13.10: The lts 𝑇 𝑆 is articulated around 𝑞 2 , with 𝑇1 = {𝑎} and 𝑇2 = {𝑏}, hence leading to 𝑇 𝑆1 and 𝑇 𝑆2 . It is solved by 𝑁, and the corresponding solutions for 𝑇 𝑆1 and 𝑇 𝑆2 are 𝑁1 and 𝑁2 , respectively. 𝑇 𝑆1 also has the solution 𝑁1′ but the marking corresponding to 𝑞 2 is then empty, hence it is dominated by the initial marking (as well as by the intermediate one). This is not the case for the other solution 𝑁1′′ (obtained from 𝑁1 by erasing the useless isolated place: we never claimed that 𝑁1 is a minimal solution). 𝑇 𝑆2 also has the solution 𝑁2′ .

modify the marking. Hence, if there is no adequate solution of 𝑇 𝑆1 , then there is no solution of 𝑇 𝑆1′ . Let us now assume there is a solution 𝑁1′ of 𝑇 𝑆1′ . The marking 𝑀 corresponding to 𝑞

𝑞 is not dominated. Otherwise, there would be a loop 𝑀 ′ −→ 𝑀 ′ elsewhere in the reachability graph of 𝑁1′ , hence also in 𝑇 𝑆1′ . Hence, dropping 𝑢 in 𝑁1′ will lead to an adequate solution of 𝑇 𝑆1 . 13.30 ′′ For instance, when applied to 𝑇 𝑆1 in Figure 13.10, this will lead to 𝑁1 , and not 𝑁1′ (𝑁1 could also be produced, but it is likely that a “normal” synthesis procedure will not construct the additional isolated place).

314

13 Divide-and-Conquer Methods for Synthesis

Now, to understand how one may generate a solution for 𝑇 𝑆 from the ones obtained for 𝑇 𝑆1 and 𝑇 𝑆2 , let us first consider the example illustrated in Figure 13.11. The following construction generalises the idea of introducing side conditions as in this example. 𝑞0 1

𝑞0 𝑏 𝑞

𝑎

𝑑

𝑐

𝑎

𝑏

𝑞0 2

𝑞 𝑐

𝑑

𝑞2

𝑞2

𝑇𝑆 = 𝑇𝑆1 ⊳ 𝑞 ⊲ 𝑇𝑆2

𝑇𝑆1

𝑇𝑆2 𝑐

𝑎

𝑏

𝑐

𝑑

𝑑 𝑏 𝑎

𝑁1

𝑁2

𝑁

Fig. 13.11: The lts 𝑇 𝑆 is articulated around 𝑞, with 𝑇1 = {𝑎, 𝑏} and 𝑇2 = {𝑐, 𝑑}, hence leading to 𝑇 𝑆1 and 𝑇 𝑆2 . It is solved by 𝑁, and the corresponding solutions for 𝑇 𝑆1 and 𝑇 𝑆2 are 𝑁1 and 𝑁2 , respectively. In 𝑁, we may recognise 𝑁1 and 𝑁2 , connected by two kinds of side conditions: one of them connects the label 𝑏 out of 𝑞 in 𝑇 𝑆1 to the initial marking of 𝑁2 , and the other one connects the label 𝑐 out of 𝑞 0 2 in 𝑇 𝑆2 to the marking of 𝑁1 corresponding to 𝑞.

Construction 13.31 Solving an articulated transition system Let 𝑇 𝑆 = 𝑇 𝑆1 ⊳ 𝑞 ⊲ 𝑇 𝑆2 be an articulation of the LTS 𝑇 𝑆 around 𝑞 for the partition 𝑇 = 𝑇1 ∪ 𝑇2 with 𝑇1 ∩ 𝑇2 = ∅. Let 𝑁1 be a Petri net solution of 𝑇 𝑆1 , with a non-dominated marking 𝑀1 corresponding to 𝑞, and 𝑁2 be a (bounded) Petri net solution of 𝑇 𝑆2 , with an initial marking 𝑀2 that we know to be non-dominated. Let us assume that the places of 𝑁1 and 𝑁2 are disjoint, which is possible since we consider nets up to isomorphism, and let us put them side by side. For each transition 𝑡1 out of 𝑞 in 𝑇 𝑆1 , and each place 𝑝 2 such that 𝑀2 ( 𝑝 2 ) > 0, let us create a side condition 𝐹 (𝑡 1 , 𝑝 2 ) = 𝐹 ( 𝑝 2 , 𝑡1 ) = 𝑀2 ( 𝑝 2 ). For each transition 𝑡2 out of 𝑞 0 2 in 𝑇 𝑆2 , and each place 𝑝 1 such that 𝑀1 ( 𝑝 1 ) > 0, let us create a side condition 𝐹 (𝑡2 , 𝑝 1 ) = 𝐹 ( 𝑝 1 , 𝑡2 ) = 𝑀1 ( 𝑝 1 ). The result is a Petri net 𝑁.

13.4 Detecting loose coupling in an lts

315

Proposition 13.32 Synthesis of articulation If 𝑇 𝑆1 or 𝑇 𝑆2 are not solvable, neither is 𝑇 𝑆. Otherwise, the net 𝑁 produced by Construction 13.31 is a solution of 𝑇 𝑆. Proof: The property arises from the observation that 𝑁1 with the additional side conditions behaves like the original 𝑁1 provided that, when we reach 𝑀1 , 𝑁2 does not leave 𝑀2 . Similarly, 𝑁2 with the added side conditions behaves like the original 𝑁2 provided 𝑁1 reached 𝑀1 and stays there, until 𝑁2 returns to 𝑀2 . 13.32 Taken together, Propositions 13.29 and 13.32, combined with Construction 13.31, provide a divide-and-conquer technique for lts which are (nontrivially) articulated. It remains to investigate when and how a transition system may be decomposed by an articulation (or several ones). Let us thus consider some transition system 𝑇 𝑆 = (𝑄, →, 𝑇, 𝑞 0 ). We may assume that it is totally reachable (the states which may not be reached from 𝑞 0 play no role in the evolutions of the system) and that the label set 𝑇 is finite (otherwise, it may happen that the finest decomposition is infinite). Usually we shall also assume that the state set 𝑄 is also finite, otherwise there may be a problem to implement the procedure we are about to describe in a true 𝑡

𝑡′

algorithm. We may also assume it is deterministic, i.e., (𝑠 −→ ∧𝑠 −→ ) ⇒ 𝑡 = 𝑡 ′ 𝑡

𝑡′

and ( −→ 𝑠∧ −→ 𝑠) ⇒ 𝑡 = 𝑡 ′ for any state 𝑠 ∈ 𝑆 and labels 𝑡, 𝑡 ′ ∈ 𝑇, otherwise there may be no unlabelled Petri net solution. First, we may observe that, for any two distinct labels 𝑡, 𝑡 ′ ∈ 𝑇, if |adj({𝑡}) ∩ adj({𝑡 ′ })| > 1, 𝑡 and 𝑡 ′ must belong to a same subset for defining an articulation (if any). Let us extend the function adj to non-empty subsets of labels by stating adj(𝑇 ′ ) = ∪𝑡 ∈𝑇 ′ adj(𝑡) when ∅ ⊂ 𝑇 ′ ⊂ 𝑇 . We then have that, if ∅ ⊂ 𝑇1 , 𝑇2 ⊂ 𝑇 and we know that all the labels in 𝑇1 must belong to a same subset for defining an articulation (and similarly for 𝑇2 ), |adj(𝑇1 ) ∩ adj(𝑇2 )| > 1 implies that 𝑇1 ∪ 𝑇2 must belong to a same subset of labels defining an articulation (if any). If we get the full set 𝑇, it means that there is no possible articulation (but trivial ones, that we excluded from this study). Hence, starting from any partition T of 𝑇 (initially, if 𝑇 = {𝑡 1 , 𝑡2 , . . . , 𝑡 𝑛 }, we shall start from the finest partition T = {{𝑡1 }, {𝑡2 }, . . . , {𝑡 𝑛 }}), we shall construct the finest partition compatible with the previous rule: Algorithm 13.33 Computing the finest suitable partition while there are 𝑇1 , 𝑇2 ∈ T such that 𝑇1 ≠ 𝑇2 and |adj(𝑇1 ) ∩ adj(𝑇2 )| > 1 → replace 𝑇1 and 𝑇2 in T by 𝑇1 ∪ 𝑇2 endwhile At the end, if T = {𝑇 }, we may stop with the result “there is no non-trivial articulation”. However, if Algorithm 13.33 stops with T ≠ {𝑇 }, then we may define a finite bipartite undirected graph whose nodes are the members of the partition T and some states of 𝑄, such that if 𝑇𝑖 , 𝑇 𝑗 ∈ T , 𝑇𝑖 ≠ 𝑇 𝑗 and adj(𝑇𝑖 ) ∩ adj(𝑇 𝑗 ) = {𝑞},

316

13 Divide-and-Conquer Methods for Synthesis

there is a node 𝑞 in the graph, connected to 𝑇𝑖 and 𝑇 𝑗 (and this is the only reason to have a state as a node of the graph). Since 𝑇 𝑆 is weakly live and totally reachable, this graph is connected, and each state occurring in it has at least two neighbours (on the contrary, a subset of labels may be connected to a single state). Indeed, since 𝑇 𝑆 is weakly live, ∪𝑇 ′ ∈ T adj(𝑇 ′ ) = 𝑄. Each state 𝑞 occurring as a node in the graph is connected to at least two members of the T , by the definition of the introduction of 𝑞 in the graph. Let 𝑇1 be the member of T such that 𝑞 0 ∈ adj(𝑇1 ), let 𝑇𝑖 be any 𝛼 other member of T , and let us consider a path 𝑞 0 −→ ending with some 𝑡 ∈ 𝑇𝑖 (we may restrict our attention to a short such path, but this is not necessary): each time there is a sequence 𝑡 ′ 𝑡 ′′ in 𝛼 such that 𝑡 ′ and 𝑡 ′′ belong to two different members 𝑇 ′ 𝑡′

𝑡 ′′

and 𝑇 ′′ of T , we have −→ 𝑞 −→ , where 𝑞 is the only state-node connected to 𝑇 ′ and 𝑇 ′′ , hence in the graph we have 𝑇 ′ → 𝑞 → 𝑇 ′′ . This will yield a path in the constructed graph going from 𝑇1 to 𝑇𝑖 , hence the connectivity. If there is a cycle in this graph, this means that there is no way to group the members of T in this cycle in two subsets such that the corresponding adjacency sets only have a single common state. Hence we need to fuse all these members, for each such cycle, leading to a new partition, and we also need to go back to the refinement of the partition in order to be compatible with the intersection rule, and to the construction of the graph. Finally, we shall get an acyclic graph 𝐺, with at least three nodes (otherwise we stopped the articulation algorithm). We shall now define a procedure articul(𝑆𝐺) that builds a transition system expression based on articulations from a subgraph 𝑆𝐺 of 𝐺 with a chosen state-node root. We shall then apply it recursively to 𝐺, leading finally to an articulation-based (possibly complex) expression equivalent to the original transition system 𝑇 𝑆. The basic case will be that, if 𝑆𝐺 is a graph composed of a state 𝑞 connected to a subset node 𝑇𝑖 , articul(𝑆𝐺) will be the transition system 𝑇 𝑆𝑖 = (adj(𝑇𝑖 ), 𝑇𝑖 , →𝑖 , 𝑞) (as usual →𝑖 is the projection of → on 𝑇𝑖 ; by construction, it will always be the case that 𝑞 ∈ adj(𝑇𝑖 )). First, if 𝑞 0 is a state-node of the graph, 𝐺 then has the form of a star with root 𝑞 0 and a set of satellite subgraphs 𝐺 1 , 𝐺 2 , ..., 𝐺 𝑛 (𝑛 is at least 2). Let us denote by 𝑆𝐺 𝑖 the subgraph with root 𝑞 0 connected to 𝐺 𝑖 : the result will then be the (commutative, see Corollary 13.26) articulation around 𝑞 0 of all the transition systems articul(𝑆𝐺 𝑖 ). Otherwise, let 𝑇1 be the (unique) label subset in the graph such that 𝑞 0 ∈ adj(𝑇1 ). 𝐺 may then be considered as a star with 𝑇1 at the center, surrounded by subgraphs 𝑆𝐺 1 , 𝑆𝐺 2 , ..., 𝑆𝐺 𝑛 (here 𝑛 may be 1), each one with a root 𝑠𝑖 connected to 𝑇1 (we have here that 𝑞 𝑖 ∈ adj(𝑇1 ), and we allow 𝑞 𝑖 = 𝑞 𝑗 ): the result is then ((. . . ((adj(𝑇1 ), 𝑇1 , →1 , 𝑞 0 ) ⊳𝑞 1 ⊲articul(𝑆𝐺 1 )) ⊳𝑞 2 ⊲articul(𝑆𝐺 2 )) . . .) ⊳𝑞 𝑛 ⊲articul(𝑆𝐺 𝑛 )). Note that, if 𝑛 > 1, the order in which we consider the subgraphs is irrelevant from Corollary 13.27. Finally, if a subgraph starts from a state 𝑞 ′ , followed by a subset 𝑇 ′ , itself followed by subgraphs 𝑆𝐺 1 , 𝑆𝐺 2 , ..., 𝑆𝐺 𝑛 (𝑛 ≥ 1; if it is 0 we have the base case), each one with a root 𝑠𝑖 connected to 𝑇 ′ (we have here that 𝑞 ′ ∈ adj(𝑇 ′ ), and we allow 𝑞 𝑖 = 𝑞 𝑗 ): the result is then ((. . . ((adj(𝑇 ′ ), 𝑇 ′ , →′ , 𝑞 ′ ) ⊳ 𝑞 1 ⊲ articul(𝑆𝐺 1 )) ⊳ 𝑞 2 ⊲

13.4 Detecting loose coupling in an lts

317

articul(𝑆𝐺 2 )) . . .) ⊳ 𝑞 𝑛 ⊲ articul(𝑆𝐺 𝑛 )). Again, if 𝑛 > 1, the order in which we consider the subgraphs is irrelevant from Corollary 13.27. This procedure is illustrated in Figure 13.12. 𝑞3

𝑓

𝑓

𝑇𝑆: 𝑒 𝑞0

𝑎 𝑞1

𝑑

𝑏

𝑘

𝑐

𝑖

ℎ 𝑞2

𝑞7

𝑘

𝑗

𝑔

𝑞3

{𝑓}

{𝑐, 𝑑, 𝑒}

𝑞2

{ℎ, 𝑔}

{𝑖, 𝑗 }

𝑞7

{𝑘}

𝑞3

𝑇𝑆3 :

𝑞0 3

𝐺:

{𝑎, 𝑏}

𝑞0

𝑇𝑆1 :

𝑞1

𝑇𝑆2 :

𝑓

𝑒 𝑎

𝑏

𝑞0 2

𝑑 𝑓

𝑐 𝑞1

𝑞2

𝑞0 4

𝑇𝑆4 :

𝑇𝑆6 :

𝑞0 5

𝑇𝑆5 :

𝑞0 6 𝑘

𝑔



𝑗

𝑖 𝑘 𝑞7

𝑇𝑆 ≡ 𝑇𝑆1 ⊳ 𝑞1 ⊲ ( ( (𝑇𝑆2 ⊳ 𝑞3 ⊲ 𝑇𝑆3 ) ⊳ 𝑞2 ⊲ 𝑇𝑆4 ) ⊳ 𝑞2 ⊲ (𝑇𝑆5 ⊳ 𝑞7 ⊲ 𝑇𝑆6 ) )

Fig. 13.12: The lts 𝑇 𝑆 gives rise to the graph 𝐺. The corresponding components are 𝑇 𝑆1 to 𝑇 𝑆6 , which may easily be synthesised; note that, from the total reachability of 𝑇 𝑆, they are all totally reachable themselves. This leads to the articulated expression shown at the bottom of the figure.

318

13 Divide-and-Conquer Methods for Synthesis

13.5 Bibliographical remarks and further reading For the NP-completeness of synthesis with non-homogenous constraints, especially when safe nets are targeted, compare [BBD15] (by Eric Badouel, Luca Bernardinello, Philippe Darondeau), [DR96] (by Jörg Desel and Wolfgang Reisig), and [NRT92] (by Mogens Nielsen, Grzegorz Rozenberg, and P.S. Thiagarajan). The analysis of decomposition-composition techniques, either with products of transition systems and additions of Petri nets, or with articulations, is described in more detail, and with full proofs and with experimental results, in [Dev16; Dev18; Dev19; Dev21] (by Raymond Devillers). Articulated points, introduced in Definition 13.21, have some similarity with the cut vertices (or articulation points) introduced for connected, unlabelled, undirected graphs, whose removal disconnects the graph (more precisely: increases the number of connected components). They have been used, for instance, to decompose such graphs into biconnected components.4 The definition of strong cycle-consistency is slightly more demanding than a similar one in [BD16] (Eike Best and Raymond Devillers), since here we allow general paths and cycles.

13.6 Exercises Exercise 13.1 General diamonds (a) For the transition system 𝑇 𝑆1 in Figure 13.5, compute a minimal (Petri net solvable) overapproximation 𝑇 𝑆1′ in the sense of Section 11.7 with start state ′ ; compute ⟨𝑞 ′ ⟩ {𝑎} and ⟨𝑞 ′ ⟩ {𝑏} ; check whether 𝑇 𝑆 ′ satisfies the general 𝑞 01 01 01 1 diamond property with respect to labels 𝑎 and 𝑏; and if so, construct the factors. More precisely: (i) Overapproximation: 𝐿 (𝑇 𝑆1 ) ⊆ 𝐿 (𝑇 𝑆1′ ). (ii) Solvability: 𝑇 𝑆1′ satisfies SSP and ESSP (Sections 11.4 and 11.5). (iii) Minimality: A transition system 𝑇 𝑆 ′ satisfying SSP and ESSP and 𝐿(𝑇 𝑆1 ) ⊆ 𝐿(𝑇 𝑆) ⊆ 𝐿 (𝑇 𝑆1′ ) also satisfies 𝐿 (𝑇 𝑆) = 𝐿 (𝑇 𝑆1′ ). ′ ⟩ {𝑎} and ⟨𝑞 ′ ⟩ {𝑏} , and check (iv) General diamond property: construct ⟨𝑞 01 01 ′ whether 𝑇 𝑆1 enjoys the general diamond property with respect to 𝑎 and 𝑏. (v) Solution: If so, construct the two factors; solve them individually and show that the reachability graph of the product is isomorphic to 𝑇 𝑆1′ . (b) Similarly, construct and examine a minimal overapproximation 𝑇 𝑆2′ for the transition system 𝑇 𝑆2 in Figure 13.5.

4 See https://en.wikipedia.org/wiki/Biconnected_component.

13.6 Exercises

319

Exercise 13.2 Weak periodicity and cycle-consistency (a) Exhibit a finite, totally reachable, deterministic, backward deterministic, and persistent, labelled transition system which is weakly periodic but not strongly cycle-consistent. (b) Prove that if a finite labelled transition system is strongly cycle-consistent, then it is also weakly periodic. (b) Clarify the relationship between the existence of (forward/general) two-way chains, weak periodicity, and strong cycle-consistency. Solution 13.1 (Exercise 13.1) See Figure 13.13.

𝑞1

𝑇𝑆1′ :

′ 𝑞01

𝑎 𝑞1

𝑎

𝑇𝑆2′ :

𝑏

′ 𝑞02

𝑞3 𝑎

𝑏 𝑏

𝑏

𝑞2

𝑞1

𝑞0 ′ 𝑎

𝑞1 ′ ⟩ {𝑎} = ⟨𝑞01

𝑞0 ′

𝑎

′ ⟩ {𝑎} = ⟨𝑞02

𝑞0 ′

′ ⟩ {𝑏} = ⟨𝑞02

′ ⟩ {𝑏} = ⟨𝑞01

𝑏 𝑞2

𝑏 ′ ⟩ {𝑎} ⊗ ⟨𝑞 ′ ⟩ {𝑏} 𝑇𝑆1′ = ⟨𝑞01 01

Solution of 𝑇𝑆1′ :

𝑞0 ′

𝑎

𝑏

′ ⟩ {𝑎} ⊗ ⟨𝑞 ′ ⟩ {𝑏} 𝑇𝑆2′ = ⟨𝑞02 02

Solution of 𝑇𝑆2′ :

𝑎

𝑏

Fig. 13.13: Solution of Exercises 13.1(a) (left-hand side) and 13.1(b) (right-hand side).

320

13 Divide-and-Conquer Methods for Synthesis 𝑞1

𝑎

𝑞0

𝑏

𝑞2

𝑎

Fig. 13.14: Strong cycle-consistency fails because 𝑎 is both a cycle (from 𝑞 2 ) and a non-cycle (from 𝑞 0 ).

Solution 13.2 (Exercise 13.2) (a): See Figure 13.14. (b): 𝜎

𝜎

𝜎

If 𝑞 1 −→ 𝑞 2 −→ 𝑞 3 −→ . . . for states 𝑞 1 , 𝑞 2 , 𝑞 3 , then, by finiteness, two states 𝜎𝑘

must be equal, which implies that 𝑞 −→ 𝑞, for some state 𝑞 and some integer 𝑘 ≥ 1. By strong cycle-consistency, since P (𝜎) = 𝑘1 ·P (𝜎 𝑘 ), 𝜎 must also be cyclic, and hence, 𝑞 1 = 𝑞 2 = 𝑞 3 = . . .; i.e., we have weak periodicity. (c): By definition, a finite lts may not have two-way chains. Figure 13.1 above shows that an infinite lts may have a forward two-way chain (hence also a general one) and still be weakly periodic, strongly cycle-consistent, forward and backward deterministic. It is not totally reachable, but the variant shown in Figure 13.15 is.

𝑞−2 ...

𝑎

𝑏 𝑐

𝑞−1

𝑎

𝑏

𝑞0

𝑎

𝑏

𝑞1

𝑎

𝑏

𝑞2 ...

𝑐

Fig. 13.15: A forward two-way uniform chain which is totally reachable.

Chapter 14

Marked Graph Synthesis

Abstract If marked graphs are targeted by synthesis from labelled transition systems, it is possible to devise a very fast algorithm. To this end, the necessary conditions for synthesisability considered earlier can be strengthened to a full characterisation, that is, a set of necessary and sufficient conditions. The present chapter presents a standalone characterisation of the class of (connected, bounded and live) marked graph reachability graphs. A specialised, efficient, synthesis algorithm can be derived from this characterisation and, at the same time, serves as a proof of the latter.

Contents 14.1 14.2 14.3 14.4 14.5 14.6 14.7 14.8 14.9

Introduction . . . . . . . . . . . . . . . . . . Some examples and counterexamples . . . . Short and general paths . . . . . . . . . . . . Connected, bounded and live marked graphs Distances, restricted lts, and lattices . . . . . Synthesis using sequentialising states . . . . Minimality, and place bounds . . . . . . . . Bibliographical remarks and further reading . Exercises . . . . . . . . . . . . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

321 322 324 327 329 331 335 336 337

14.1 Introduction We are still considering labelled transition systems as our primary objects of interest. Of course, the reachability graphs of weakly connected, bounded and live marked graphs are a (tiny, if you wish) subclass thereof. In this section, we shall see which of the properties of this subclass are characteristic for it, so that we can say, conversely, © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 E. Best and R. Devillers, Petri Net Primer, Computer Science Foundations and Applied Logic, https://doi.org/10.1007/978-3-031-48278-6_14

321

322

14 Marked Graph Synthesis

that every lts satisfying those properties is generated by some weakly connected, bounded and live marked graph. So, consider a labelled transition system 𝑇 𝑆 = (𝑄, →, 𝑇, 𝑞 0 ) satisfying1 𝑇 𝑆 is finite 𝑇 𝑆 is totally reachable 𝑇 𝑆 is deterministic 𝑇 𝑆 is reversible 𝑇 𝑆 is persistent 𝑇 𝑆 is backward persistent P1: the Parikh vector of any small cycle in 𝑇 𝑆 equals 1

(14.1)

It turns out that the reachability graph of a weakly connected, bounded and live marked graph satisfies all of these properties.2 In this chapter, a converse result is proved: if a labelled transition system 𝑇 𝑆 satisfies Properties (14.1), then it is isomorphic to the reachability graph of a weakly connected, bounded, and live marked graph.

14.2 Some examples and counterexamples

𝑞4

𝑐

𝑞0

𝑐

𝑞2 𝑁1

𝑇𝑆1 𝑏

𝑏 𝑎 𝑞5

𝑎 𝑐

𝑞1

𝑎 𝑐

𝑎

𝑏

𝑐

𝑞3

Fig. 14.1: A transition system 𝑇 𝑆1 (l.h.s.) and a (2-bounded) Petri net 𝑁1 solving it (r.h.s.). 𝑁1 is not minimal, in the sense that the side-place around 𝑎 can be dropped without changing the reachability graph. Our running example will be the labelled transition system 𝑇 𝑆1 shown on the lefthand side of Figure 14.1.

1 Total reachability means that all states are accessible from the initial state by a directed path. Determinism means that no two equally labelled arrows emanate from a state. Reversibility means that the initial state is reachable from every reachable state. Persistence is what has been called the “diamond property”, and backward persistence is the same property in backward direction. 1 is the all-ones T-vector, i.e., 1(𝑡 ) = 1 for every edge label 𝑡 ∈ 𝑇. 2 Recall that marked graphs are always plain.

14.2 Some examples and counterexamples

323

Example 14.1 𝑇 𝑆1 in Figure 14.1 𝑇 𝑆1 enjoys all properties in (14.1). For example, P1 is satisfied because all 𝑏𝑐𝑎

small cycles (e.g. 𝑞 0 −→ 𝑞 0 ) are of length 3 and have Parikh vector 1.a 𝑇 𝑆1 is (isomorphic to) the reachability graph of the net 𝑁1 shown on the right-hand side of this figure, with 𝑞 0 representing 𝑁1 ’s initial marking. a

Strictly speaking, 𝑇𝑆1 contains 12 small cycles. However, it is customary to identify cycles 𝑏𝑐𝑎

𝑐𝑎𝑏

which only differ in the ordering of their states. For instance, 𝑞0 −→ 𝑞0 , 𝑞5 −→ 𝑞5 and 𝑎𝑏𝑐

𝑞1 −→ 𝑞1 are considered the same cycle. This way, four essentially different small cycles remain in 𝑇𝑆1 , and all of them are of length 3 and have Parikh vector 1.

Compare Figure 14.1 with 𝑇 𝑆2 and 𝑁2 , as shown in Figure 14.2. 𝑇 𝑆1 and 𝑇 𝑆2 are edge-reversals of each other (in the sense of the following definition), and the marked graphs 𝑁1 and 𝑁2 happen to be arc-reversals of each other. Definition 14.2 Reverse lts Let a labelled transition system 𝑇 𝑆 = (𝑄, →, 𝑇, 𝑞 0 ) be given. The reverse lts ←− is 𝑇 𝑆 = (𝑄, ←, 𝑇, 𝑞 0 ) with (𝑞, 𝑡, 𝑞 ′ ) ∈← iff (𝑞 ′ , 𝑡, 𝑠) ∈→.

𝑞4

𝑐

𝑞0

𝑇𝑆2 𝑎

𝑏 𝑐

𝑎

𝑐 𝑏 𝑐

𝑞2 𝑁2 𝑎 𝑎

𝑞5

𝑞1

𝑏

𝑐

𝑞3

Fig. 14.2: Another transition system and a Petri net solving it (reversing both the transition system and the net shown in Figure 14.1)

Example 14.3 𝑇 𝑆2 in Figure 14.2 𝑇 𝑆2 again satisfies all properties of the list (14.1). It represents the reachability graph of 𝑁2 , with the same initial marking as 𝑁1 .

Example 14.4 𝑇 𝑆3 and 𝑇 𝑆4 in Figures 14.3 and 14.4 𝑇 𝑆3 and 𝑇 𝑆4 , as shown in Figures 14.3 and 14.4, are counterexamples. 𝑇 𝑆3 satisfies all properties in (14.1) except P1, since its only small cycle has Parikh vector 2 instead of 1. 𝑇 𝑆4 satisfies all properties including P1, but it is not backward persistent (consider the state 𝑟 and its incoming arrows 𝑎 and 𝑑 shown in blue). Two different solutions of 𝑇 𝑆4 are depicted in Figure 14.4. Later, it will be proved that neither 𝑇 𝑆3 nor 𝑇 𝑆4 has a marked graph solution. In Chapter 15, it will also be proved that 𝑇 𝑆3 has no choice-free solution.

324

14 Marked Graph Synthesis 𝑀0 𝑇𝑆3

𝑐

𝑐 𝑎

𝑎 2

2

𝑁3 𝑏 𝑐

𝑎 2 𝑏

𝑏

Fig. 14.3: 𝑇 𝑆3 satisfies all properties but P1. The non-choice-free Petri net 𝑁3 shown on the right-hand side solves it. There is no choice-free solution, and, a fortiori, no marked graph solution. 𝑎 𝑑 𝑞

𝑁4′

𝑐

𝑁4

𝑇𝑆4

𝑎

𝑑 𝑎 𝑏

𝑏

2

𝑞0 𝑐

𝑏

𝑏

𝑑 𝑎

𝑑

𝑐 𝑎

𝑑

𝑏

𝑟

Fig. 14.4: 𝑇 𝑆4 satisfies all properties except backward persistence at state 𝑟. Both 𝑁4 and 𝑁4′ solve 𝑇 𝑆4 . 𝑁4 is plain but not choice-free. 𝑁4′ is not plain but choice-free. Both are minimal in the sense that no place is redundant. There is no marked graph solution.

14.3 Short and general paths We investigate the structure of paths in labelled transition systems satisfying Properties (14.1). Definition 14.5 Short paths Let 𝑇 𝑆 = (𝑄, →, 𝑇, 𝑞 0 ) be a labelled transition system. For 𝑞, 𝑞 ′ ∈ 𝑄, with 𝜏 𝑞 ′ ∈ [𝑞 ⟩, a path 𝑞 −→ 𝑞 ′ is called short if |𝜏| ≤ |𝜏 ′ | for every path 𝜏′

𝑞 −→ 𝑞 ′ .a a

Where | 𝜏 | denotes the length of 𝜏.

Example 14.6 Figure 14.4 𝑎

𝑏𝑎𝑐𝑎𝑑

In 𝑇 𝑆4 , 𝑞 0 −→ 𝑞 is a short path from 𝑞 0 to 𝑞, and 𝑞 0 −→ 𝑞 is a non-short (but graph-theoretically still elementary) path from 𝑞 0 to 𝑞.

14.3 Short and general paths

325

Lemma 14.7 Cyclic extensions Let 𝑇 𝑆 = (𝑄, →, 𝑇, 𝑞 0 ) be finite, totally reachable, deterministic, reversible, and persistent. Also assume that 𝑇 𝑆 satisfies P1. 𝛼 Suppose 𝑞 ∈ 𝑄 and 𝑞 −→ , with 𝛼 ∈ 𝑇 ∗ and P (𝛼) ≤ 1. Then there is a 𝜅 small cycle 𝑞 −→ 𝑞 such that 𝛼 is a prefix of 𝜅. 𝛼 e

Proof: Let e 𝛼 be such that 𝑞 −→ 𝑞 and 1 = P (e 𝛼). Such a sequence e 𝛼 exists because – by Corollary 12.18 – small cycles can be pushed, Parikh-equivalently, to all home 𝛼 states, and – by reversibility – all states of 𝑇 𝑆 are home states. Suppose 𝑞 −→ 𝑞 ′ . • 𝛼 𝛼 e− 𝛼 By Keller’s theorem (Theorem 12.12), 𝑞 −→ 𝑞 ′ −→ 𝑞 ′′ . By P (𝛼) ≤ 1 = P (e 𝛼), P (e 𝛼) = P (𝛼(e 𝛼−• 𝛼)). By the cyclicity of e 𝛼, 𝑞 ′′ = 𝑞. Choosing 𝜅 = 𝛼(e 𝛼−• 𝛼) proves the lemma. 14.7 Example 14.8 Figure 14.4 𝑎

𝑎𝑏𝑐𝑑

In 𝑇 𝑆4 , 𝑞 0 −→ can be extended to the small cycle 𝑞 0 −→ 𝑞 0 .a a

𝑇𝑆4 does not satisfy the full list of Properties (14.1), but a subset of them is enough to prove Lemma 14.7.

Lemma 14.9 Characterisation of short paths Let 𝑇 𝑆 = (𝑄, →, 𝑇, 𝑞 0 ) be finite, totally reachable, deterministic, reversible, and persistent. Also assume that 𝑇 𝑆 satisfies P1. 𝜏 𝜏 Suppose that 𝑞 −→ 𝑞 ′ . Then 𝑞 −→ 𝑞 ′ is short iff ¬(1 ≤ P (𝜏)). 𝜏

Proof: (⇒): By contraposition. Suppose that 𝑞 −→ 𝑞 ′ and that 1 ≤ P (𝜏). By 𝜅 Lemma 14.7, there is some cycle 𝑞 −→ 𝑞 with P (𝜅) = 1. By Keller’s theorem, • 𝜅 𝜏− 𝜅 𝑞 −→ 𝑞 −→ 𝑞 ′′ . By P (𝜅) = 1 ≤ P (𝜏), P (𝜏) = P (𝜅(𝜏−• 𝜅)), and therefore, 𝑞 ′ = 𝑞 ′′ (by determinism). Since neither 𝜅 nor 𝜏 is the empty sequence, and by the 𝜏 fact that 𝜅 contains every transition at least once, |𝜏−• 𝜅| < |𝜏|. Hence 𝑞 −→ 𝑞 ′ is not short. 𝜏′

𝜏

(⇐): Suppose that 𝑞 −→ 𝑞 ′ and ¬(1 ≤ P (𝜏)). Consider any other path 𝑞 −→ 𝑞 ′ 𝜌𝜏 from 𝑞 to 𝑞 ′ . By reversibility, there is some path 𝜌 from 𝑞 ′ to 𝑞. Hence both 𝑞 ′ −→ 𝑞 ′ 𝜌𝜏 ′

and 𝑞 ′ −→ 𝑞 ′ are cycles at 𝑞 ′ . By Keller’s theorem, 𝜌𝜏 ′

𝑞 ′ −→ 𝑞 ′

• (𝜌𝜏 ) − (𝜌𝜏 ′ )

−→

𝑞′

• 𝜏− 𝜏′ Hence 𝑞 ′ −→ 𝑞 ′ , and since this is a cycle, P (𝜏−• 𝜏 ′ ) is a multiple of 1. In view of ¬(1 ≤ P (𝜏)), this can only be the case if P (𝜏−• 𝜏 ′ ) = 0, i.e., 𝜏−• 𝜏 ′ = 𝜀. This

326

14 Marked Graph Synthesis 𝜏

implies, in particular, that P (𝜏) ≤ P (𝜏 ′ ) and that |𝜏| ≤ |𝜏 ′ |. Therefore, 𝑞 −→ 𝑞 ′ is short. 14.9 Lemma 14.10 Parikh-uniqueness of short Parikh vectors Let 𝑇 𝑆 = (𝑄, →, 𝑇, 𝑞 0 ) be finite, totally reachable, deterministic, reversible, and persistent. Also assume that 𝑇 𝑆 satisfies P1. 𝜏′

𝜏

Suppose that 𝑞 −→ 𝑞 ′ and 𝑞 −→ 𝑞 ′ are both short. Then P (𝜏) = P (𝜏 ′ ). Proof: By Lemma 14.9, both ¬(1 ≤ P (𝜏)) and ¬(1 ≤ P (𝜏 ′ )). As in the second part of the previous proof, we may conclude, using some suitable (in fact any) path • • 𝜌 𝜏− 𝜏′ 𝜏′− 𝜏 𝑞 ′ −→ 𝑞, both 𝑞 ′ −→ 𝑞 ′ and 𝑞 ′ −→ 𝑞 ′ . Therefore, both P (𝜏) ≤ P (𝜏 ′ ) and P (𝜏 ′ ) ≤ P (𝜏), implying P (𝜏) = P (𝜏 ′ ). 14.10 Lemma 14.11 Characterisation of Parikh vectors of paths Let 𝑇 𝑆 = (𝑄, →, 𝑇, 𝑞 0 ) be finite, totally reachable, deterministic, reversible, and persistent. Also assume that 𝑇 𝑆 satisfies P1. 𝜏′

𝜏

Suppose that 𝑞 −→ 𝑞 ′ and that 𝑞 −→ 𝑞 ′ is a short path. Then P (𝜏) = P (𝜏 ′ ) + 𝑚·1, for some number 𝑚 ∈ N. 𝜏

Proof: Assume that 𝑞 −→ 𝑞 ′ . Let 𝑚 be the maximal number in N such that 𝜅𝑚

𝜅

P (𝑚·1) ≤ P (𝜏). Let 𝑞 −→ 𝑞 be some cycle with P (𝜅) = 1. Then also 𝑞 −→ 𝑞, 𝜅𝑚

𝜏′

with P (𝜅 𝑚 ) = 𝑚·1. By Keller’s theorem, 𝑞 −→ 𝑞 −→ 𝑞 ′ , with 𝜏 ′ = 𝜏−• 𝜅 𝑚 . By the 𝜏′

maximality of 𝑚, 𝑞 −→ 𝑞 ′ is short, and by P (𝜅 𝑚 ) ≤ P (𝜏), P (𝜏) can be written as P (𝜏) = P (𝜏 ′ ) + P (𝜅 𝑚 ). By Lemma 14.10, the choice of 𝜏 ′ is arbitrary. 14.11 Example 14.12 Figure 14.4 𝑎

𝑏𝑎𝑐𝑎𝑑

In 𝑇 𝑆4 , the two paths 𝜏 ′ = 𝑞 0 −→ 𝑞 and 𝜏 = 𝑞 0 −→ 𝑞 satisfy P (𝑏𝑎𝑐𝑎𝑑) = P (𝑎) + (1 1 1 1 ) T .

Lemma 14.13 Existence of short paths Let 𝑇 𝑆 = (𝑄, →, 𝑇, 𝑞 0 ) be finite, totally reachable, deterministic, reversible, and persistent. Also assume that 𝑇 𝑆 satisfies P1. Suppose that 𝑞, 𝑞 ′ are states. There is a short path from 𝑞 to 𝑞 ′ . 𝜏

𝜏′

Proof: By reversibility, 𝑞 −→ 𝑞 ′ for some 𝜏. Just take the path 𝑞 −→ 𝑞 ′ from the proof of Lemma 14.11. 14.13

14.4 Connected, bounded and live marked graphs

327

14.4 Connected, bounded and live marked graphs Theorem 14.14 below confirms that the reachability graphs of weakly connected, bounded and live marked graphs satisfy all properties listed in (14.1). Theorem 14.17 at the end of this section states the converse, that is: all labelled transition systems satisfying (14.1) can be synthesised into a weakly connected, bounded and live marked graph. Theorem 14.14 Necessary conditions for marked graphs Let 𝑁 be a net with an initial marking and reachability graph 𝑅𝐺 (𝑁). (a) 𝑁 is bounded ⇒ 𝑅𝐺 (𝑁) is finite [and conversely] (b) 𝑅𝐺 (𝑁) is totally reachable and deterministic (c) 𝑁 is a live marked graph ⇒ 𝑅𝐺 (𝑁) is reversible (d) 𝑁 is a marked graph ⇒ 𝑅𝐺 (𝑁) is persistent [the same for choice-free nets] (e) 𝑁 is a live marked graph ⇒ 𝑅𝐺 (𝑁) is backward persistent [the same for T-systems, but not for choice-free nets] (f) 𝑁 is a connected marked graph ⇒ 𝑅𝐺 (𝑁) satisfies P1 [the same for connected T-systems, but not for choice-free nets] The proof of Theorem 14.14 employs the following limited converse of the marking equation.3 Lemma 14.15 Realisability of a semipositive T-vector Let 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0 ) be a live marked graph, let 𝐶 be its incidence matrix, and let 𝑋 : 𝑇 → N be a T-vector. If 𝑀 ∈ [𝑀0⟩ and 𝑀 ′ = 𝑀 + 𝐶·𝑋 ≥ 0, then 𝜏 there is a sequence 𝜏 ∈ 𝑇 ∗ such that P (𝜏) = 𝑋 and 𝑀 −→ 𝑀 ′ . Proof: By induction on |𝑋 |, the sum of the entries in 𝑋. |𝑋 | = 0. Then 𝑀 ′ = 𝑀, and 𝜏 = 𝜀 (the empty sequence) does the job. |𝑋 | > 0. For every 𝑡 ∈ 𝑇 with 𝑋 (𝑡) > 0, define a “smaller” T-vector 𝑋−𝑡 as follows:  𝑋 (𝑢)−1 if 𝑢 = 𝑡 for 𝑢 ∈ 𝑇: 𝑋−𝑡 (𝑢) = 𝑋 (𝑢) if 𝑢 ≠ 𝑡 The main idea is to show that some 𝑡 ∈ 𝑇 with 𝑋 (𝑡) > 0 is enabled at 𝑀. Such a 𝑡 𝑡 e for some marking 𝑀 e ∈ [𝑀⟩ ⊆ [𝑀0 ⟩, and the can then be fired, giving 𝑀 −→ 𝑀 ′ e induction hypothesis can be applied to 𝑀 = 𝑀 + 𝐶·𝑋−𝑡 ≥ 0. Suppose, on the contrary, that none of the transitions 𝑡 with 𝑋 (𝑡) > 0 can be fired at 𝑀. This implies that 𝑀 + 𝐶·P (𝑡) ≱ 0 for all 𝑡 with 𝑋 (𝑡) > 0. Consider any such 𝑡 and consider a place 𝑠 for which 3 For the marking equation, see Section 4.3.

328

14 Marked Graph Synthesis

𝑀 (𝑠) + (𝐶·P (𝑡))(𝑠) < 0

(14.2)

By the definition of 𝐶, we have 𝑠 ∈ • 𝑡 and by (14.2) and the fact that 𝑁 is plain, we have 𝑀 (𝑠) = 0. By the fact that 𝑁 is a (live) marked graph, there is a unique transition 𝑢 ∈ • 𝑠, and by the premise 𝑀 + 𝐶·𝑋 ≥ 0, we have 𝑀 (𝑟) + (𝐶·𝑋)(𝑟) ≥ 0

(14.3)

for every 𝑟 ∈ 𝑆, and in particular, for 𝑟 = 𝑠. Hence, in order to switch the left-hand side in (14.2) from negative to semipositive (for 𝑟 = 𝑠) in (14.3), we must have 𝑋 (𝑢) > 0, since, by the net being a marked graph, every line of 𝐶 contains at most one positive entry. Therefore, for every 𝑡 with 𝑋 (𝑡) > 0, there is a place 𝑠 ∈ • 𝑡 with 𝑀 (𝑠) = 0 and 𝑋 (𝑢) = 0 for 𝑢 ∈ • 𝑠. This contradicts the liveness of 𝑁. 14.15 Proof: (of Theorem 14.14) (a) and (b): See Proposition 12.2. (c): We can use Theorem 5.31((a)⇒(c)) in the following way. If 𝑁 is live at the initial marking, then 𝑁 is live at any reachable marking. Thus the Parikh vector 1 (an all-ones vector) can be realised at any reachable marking. The claim follows, since a firing sequence with this vector reproduces the initial marking, by | • 𝑠| = 1 = |𝑠• | for every place 𝑞. (d): See Proposition 12.5. 𝑎

𝑏

(e): Assume that 𝑀1 −→ 𝑀 and 𝑀2 −→ 𝑀, with 𝑎 ≠ 𝑏. Since 𝑁 is a marked graph, this implies that 𝑎 • ∩ 𝑏 • = ∅. It follows that there is a marking 𝑀 ′ ≥ 0 that can be reached either by back-firing 𝑏 from 𝑀1 or by back-firing 𝑎 from 𝑀2 . We only need to show that 𝑀 ′ ∈ [𝑀0⟩. To this end, we use Lemma 14.15. 𝛼

𝛽

By 𝑀1 , 𝑀2 ∈ [𝑀0 ⟩, we get 𝑀0 −→ 𝑀1 and 𝑀0 −→ 𝑀2 . Moreover, 𝛼 contains a letter 𝑏 and 𝛽 contains a letter 𝑎. This follows from Lemma 14.11. Define 𝑋 = P (𝛼−• 𝑏) or (alternatively) 𝑋 = P (𝛽−• 𝑎). Then 𝑋 ≥ 0, and by Lemma 𝜏 e with P (𝜏) = 𝑋. By the construction of 14.15, we get a firing sequence 𝑀0 −→ 𝑀 𝜏 ′ ′ e 𝑋 and 𝑀 , we have 𝑀 = 𝑀 and hence 𝑀0 −→ 𝑀 ′ , as was to be proved. 𝜌

(f): Let 𝑀 −→ be any small cycle in 𝑅𝐺 (𝑁); we show that P (𝜌) = 1. A reproducing sequence is either empty or contains every transition at least once, since 𝑁 is connected. If a nonempty reproducing sequence exists, then every cycle in the net contains at least one token, and the net is live by Theorem 5.31((b)⇒(a)). But then, every small cycle contains every transition at most once, hence exactly once. 14.14 Example 14.16 Figures 14.3 and 14.4 Theorem 14.14 implies that neither 𝑇 𝑆3 (shown in Figure 14.3) nor 𝑇 𝑆4 (shown in Figure 14.4) can be solved by a marked graph. The former does not satisfy P1 and the latter is not backward persistent.

14.5 Distances, restricted lts, and lattices

329

Theorem 14.17 Synthesis of connected, bounded, live marked graphs Let 𝑇 𝑆 be a labelled transition system which is • finite • totally reachable • deterministic • reversible • persistent • backward persistent • and satisfies P1. Then 𝑇 𝑆 is isomorphic to the reachability graph of a weakly connected, bounded, and live marked graph. Theorem 14.17 will be proved by the algorithm developed in Sections 14.5 and 14.6 below.

14.5 Distances, restricted lts, and lattices The existence and the Parikh-uniqueness of short paths allow to define a coherent notion of a distance between states, as follows. Definition 14.18 Distances Let 𝑇 𝑆 = (𝑄, →, 𝑇, 𝑞 0 ) be finite, totally reachable, deterministic, reversible, and persistent. Also assume that 𝑇 𝑆 satisfies P1. For two states 𝑞 and 𝑞 ′ with 𝑞 ′ ∈ [𝑞 ⟩, the distance Δ𝑞,𝑞 ′ : 𝑇 → N is defined as Δ𝑞,𝑞 ′ = P (𝜏), where 𝜏 𝑞 −→ 𝑞 ′ is any short path. Lemmata 14.10 and 14.13 ensure that this definition is sound, i.e., that Δ𝑞,𝑞 ′ is well-defined for any two states 𝑞, 𝑞 ′ .4 Definition 14.19 Restricted lts, and the order ⪯ 𝑥 Let 𝑇 𝑆 = (𝑄, →, 𝑇, 𝑞 0 ) be finite, totally reachable, deterministic, reversible, and persistent. Also assume that 𝑇 𝑆 satisfies P1. For a label 𝑥 ∈ 𝑇, the restricted lts TS-𝑥 is the lts (𝑄, {(𝑞, 𝑡, 𝑞 ′ ) ∈→| 𝑡 ≠ 𝑥}, 𝑇 \ {𝑥}, 𝑞 0 ) obtained 𝛼 by dropping all 𝑥-labelled transitions. For 𝑞, 𝑞 ′ ∈ 𝑄, let 𝑞 ⪯ 𝑥 𝑞 ′ iff 𝑞 −→ 𝑞 ′ for some 𝛼 ∈ (𝑇 \{𝑥}) ∗ . By (part of) the following Lemma 14.20, ⪯ 𝑥 is acyclic.

4 Note that by reversibility, all states are reachable from each other.

330

14 Marked Graph Synthesis

← − Lemma 14.20 Properties of TS and TS-𝑥 Let 𝑇 𝑆 = (𝑄, →, 𝑇, 𝑞 0 ) be finite, totally reachable, deterministic, reversible, and persistent. Also assume that 𝑇 𝑆 satisfies P1. ← − ← − ← − TS has the same state set and label set as TS; it satisfies TS = TS and has Properties (14.1) whenever the same is true for TS. TS-𝑥 has state set 𝑄 and label set 𝑇 \{𝑥}; it is acyclic and its paths are precisely the short paths of TS not containing 𝑥. ← − ← − ← − Proof: TS has the same constituent sets 𝑄 and 𝑇 as TS, and TS = TS, by definition. Moreover, finiteness and P1 are individually preserved by reversal. ← − If 𝑇 𝑆 is totally reachable and reversible, then TS (and hence also TS) is strongly connected. In that case, total reachability and reversibility are self-reversal-invariant, and persistence and backward persistence correspond to each other through arrow 𝑎 reversal. To see the reversal-invariance of determinism, assume 𝑞 1 −→ 𝑞 and 𝛽1

𝑎

𝑎

𝑞 2 −→ 𝑞. By strong connectedness, there are two cyclic sequences 𝑞 −→ 𝑞 1 −→ 𝑞 𝛽2

𝑎

and 𝑞 −→ 𝑞 2 −→ 𝑞. Using Lemma 14.11, we may assume, without loss of • 𝛽1 𝛽2 − 𝛽1 generality, that P (𝛽1 𝑎) ≤ P (𝛽2 𝑎). Keller’s theorem yields 𝑞 −→ 𝑞 1 −→ 𝑞 ′ and 𝛽2

𝜀

𝑞 −→ 𝑞 2 −→ 𝑞 ′ , hence 𝑞 2 = 𝑞 ′ . But 𝛽2 −• 𝛽1 = (𝛽2 𝑎)−• (𝛽1 𝑎) is cyclic, hence also 𝑞 1 = 𝑞 ′ = 𝑞 2 .5 As for the claims about TS-𝑥, acyclicity arises from the fact that every nontrivial cycle in TS contains at least one 𝑥 by property P1. The property on short paths results from Lemma 14.9. The property on the constituent sets is true by the definition of TS-𝑥. 14.20 More than that, TS-𝑥 turns out to be a lattice with respect to ⪯ 𝑥 : Proposition 14.21 TS-𝑥 is a lattice Let 𝑇 𝑆 = (𝑄, →, 𝑇, 𝑞 0 ) be finite, totally reachable, deterministic, reversible, and persistent. Also assume that 𝑇 𝑆 satisfies P1. For any label 𝑥 ∈ 𝑇, TS-𝑥 is a lattice for the order ⪯ 𝑥 . Proof: First, let us show that TS-𝑥 is connected. In TS, for any pair of states 𝑞, 𝑞 ′ 𝛼 there is a short path 𝑞 −→ 𝑞 ′ . From P1, and Lemma 14.7, we may replace each 𝑥 in it by a path in the reverse order where each label but 𝑥 occurs once. We then obtain a non-directed path between 𝑞 and 𝑞 ′ without 𝑥, hence in TS-𝑥.6

5 Persistence was used at this point. 6 Observe that P1 is really instrumental in order to get connectedness. For instance, one may define structures such as 𝑇𝑆-𝑥 also in 𝑇𝑆3 (cf. Figure 14.3), for 𝑥 ∈ {𝑎, 𝑏, 𝑐}; but two disconnected acyclic lts are obtained in these cases, one of them consisting only of an isolated state.

14.6 Synthesis using sequentialising states

331

In TS-𝑥, if 𝑞 ⪯ 𝑥 𝑞 1 and 𝑞 ⪯ 𝑥 𝑞 2 , applying Keller’s theorem yields a state 𝑞 ′ such that 𝑞 1 ⪯ 𝑥 𝑞 ′ and 𝑞 2 ⪯ 𝑥 𝑞 ′ . Similarly, using backward persistence, if 𝑞 1 ⪯ 𝑥 𝑞 and 𝑞 2 ⪯ 𝑥 𝑞, applying Keller’s theorem yields a state 𝑞 ′ such that 𝑞 ′ ⪯ 𝑥 𝑞 1 and 𝑞 ′ ⪯ 𝑥 𝑞 2 . 𝛼 Applying iteratively those properties to any undirected path 𝑞 −→ 𝑞 ′ , we shall get ′ ′ two states 𝑞 1 , 𝑞 2 such that 𝑞 1 ⪯ 𝑥 𝑞, 𝑞 1 ⪯ 𝑥 𝑞 , 𝑞 ⪯ 𝑥 𝑞 2 and 𝑞 ⪯ 𝑥 𝑞 2 . Applying iteratively this property to any finite set 𝑅 ⊆ 𝑄, we shall get two states 𝑞 1 , 𝑞 2 such that 𝑞 1 ⪯ 𝑥 𝑟 and 𝑟 ⪯ 𝑥 𝑞 2 for any 𝑟 ∈ 𝑅. Moreover, for any finite non-empty set 𝑅 ⊆ 𝑄, let 𝑞 1 ⪯ 𝑥 𝑅 as above. For any 𝑞, if 𝑅 ⪯ 𝑥 𝑞, we must have Δ𝑞1 ,𝑠 ≥ max𝑟 ∈ 𝑅 {Δ𝑞1 ,𝑟 }. But Keller’s theorem exactly constructs a state 𝑞 ′ such that Δ𝑞1 ,𝑞 ′ = max𝑟 ∈ 𝑅 {Δ𝑞1 ,𝑟 }, hence there is a unique minimal majorant of 𝑅 for ⪯ 𝑥 . Similarly, there is a unique maximal minorant of 𝑅 for ⪯ 𝑥 . Hence the claim. 14.21 Notation 14.22 Minimal state 𝑟 𝑥 and maximal state 𝑞 𝑥 In what follows, the minimal state of the restricted lts 𝑇 𝑆-𝑥 is denoted by 𝑟 𝑥 . The maximal state is denoted by 𝑞 𝑥 .

Example 14.23 Figure 14.1 Figure 14.5 shows the lts 𝑇 𝑆 = 𝑇 𝑆1 known from Figure 14.1, and all three restricted lts 𝑇 𝑆-𝑥, for 𝑥 ∈ {𝑎, 𝑏, 𝑐}. Note that indeed, all of them are acyclic and connected, and they have exactly one minimal state and one maximal state. The states 𝑟 𝑥 and 𝑞 𝑥 , for every 𝑥 ∈ {𝑎, 𝑏, 𝑐}, are also given in the figure. Intuitively, if 𝑇 𝑆 belongs to a marked graph, the lattice 𝑇 𝑆-𝑥 has the following significance. Its maximal state 𝑞 𝑥 is such that the next firable transition is definitely 𝑥. Similarly, its minimal state 𝑟 𝑥 is such that the last transition that has occurred has definitely been 𝑥. Between 𝑟 𝑥 and 𝑞 𝑥 , 𝑇 𝑆-𝑥 describes the maximally possible firings of non-𝑥 transitions. (It does not mean that all of those must be fired before another 𝑥 can occur, nor that, from the previous 𝑥 backwards, all of them must have occurred.)

14.6 Synthesis using sequentialising states It will now be shown how a marked graph can be created from a labelled transition system satisfying (14.1). As previously, the set of transitions is defined as 𝑇, the set of labels. Unlike previously, relevant places can be introduced in a canonical way, leading to a unique (up to isomorphism) minimal solution. Example 14.24 Figures 14.1 and 14.2 revisited Neither 𝑁1 in Figure 14.1 nor 𝑁2 in Figure 14.2 are minimal. Omitting the side-place around 𝑎 yields minimal nets with the same state spaces.

332

14 Marked Graph Synthesis 𝑞4

𝑞0

𝑐

𝑞4 =𝑟𝑎

𝑞2

𝑐

𝑞0

𝑐

𝑐

𝑞2

𝑐

𝑞3 =𝑞𝑎

𝑇𝑆 𝑇𝑆-𝑎 𝑏

𝑏

𝑎

𝑏

𝑎 𝑐

𝑞5 𝑞4

𝑐

𝑞1 𝑐

𝑏

𝑎

𝑞0

𝑞3 𝑐

𝑐

𝑞5

𝑞2 =𝑞𝑏

𝑇𝑆-𝑏

𝑞1

𝑞4 =𝑞𝑐

𝑞0

𝑞2

𝑇𝑆-𝑐 𝑏 𝑎 𝑞5 =𝑟𝑏

𝑎 𝑐

𝑞1

𝑎 𝑐

𝑞3

𝑎

𝑞5

𝑎 𝑞3 =𝑟𝑐

𝑞1 𝑠1

𝑁 Seq(𝑎) = {𝑞2 } Seq(𝑏) = {𝑞3 , 𝑞4 } Seq(𝑐) = {𝑞2 }

𝑏

𝑎

𝑠2

𝑎

𝑐

𝑏 𝑠2

𝑠4

Fig. 14.5: The running example of Figure 14.1 revisited. 𝑇 𝑆 is isomorphic to 𝑇 𝑆1 but 𝑁 is not isomorphic to 𝑁1 .

Places need to be introduced in order to prevent the occurrence of a transition 𝑥 in states of TS that do not enable 𝑥. It is therefore reasonable to consider the set of states that do not enable 𝑥 but for which 𝑥 is necessarily enabled after one further step, called “sequentialising states”. In the following, let 𝑇 𝑆 be an lts satisfying Properties (14.1), and let 𝑥 ∈ 𝑇 be a fixed label of 𝑇 𝑆. Definition 14.25 Sequentialising states with respect to a label 𝑥 𝑥 𝑎 𝑎𝑥 Seq(𝑥) = { 𝑞 ∈ 𝑄 | ¬(𝑞 −→ ) ∧ ∀𝑎 ∈ 𝑇 : (𝑞 −→ ) ⇒ (𝑞 −→ ) }

Example 14.26 the running example In Figure 14.5, the sets Seq(𝑎), Seq(𝑏), Seq(𝑐) are also given.

Lemma 14.27 Properties of Seq(𝑥) 1. Let 𝑥 ∈ 𝑇 and 𝑞 ∈ Seq(𝑥); then Δ𝑟𝑥 ,𝑞 has exactly two entries that are zero (all other entries are positive). 2. For any 𝑥 ∈ 𝑇, Seq(𝑥) ⊆ {𝑞 𝑎 | 𝑎 ∈ 𝑇 \{𝑥}}. Proof: (1.) Δ𝑟𝑥 ,𝑠 (𝑥) = 0, because 𝑟 𝑥 is the unique minimal state in TS-𝑥. 𝛼 Assume that all other entries of Δ𝑟𝑥 ,𝑠 are positive; there is a path 𝑟 𝑥 −→ 𝑞 with 𝛽

P (𝛼) = Δ𝑟𝑥 ,𝑠 . By Lemma 14.7 and P1, there is a cycle 𝑟 𝑥 −→ 𝑟 𝑥 with P (𝛽) = 1,

14.6 Synthesis using sequentialising states

333

• 𝛽− 𝛼 𝛼 𝑥 hence 𝛽 contains 𝑥. Thus, by Keller’s theorem, 𝑟 𝑥 −→ 𝑠 −→ , so that 𝑞 −→ , contradicting 𝑞 ∈ Seq(𝑥). Therefore, Δ𝑟𝑥 ,𝑠 has at least two entries 0. 𝑎𝑥 Assume that 𝑎 ∈ 𝑠• and 𝑞 −→ 𝑞 ′ . This is possible by 𝑞 ∈ Seq(𝑥). By Lemma 14.7, 𝛾 𝑎𝑥 there is a small cycle 𝑞 −→ 𝑞 ′ −→ 𝑞 where every label except 𝑎 and 𝑥 occurs once 𝛿𝛾

𝛿

in 𝛾. Let 𝑟 𝑥 −→ 𝑞 ′ be any short path (not containing 𝑥). Then 𝑟 𝑥 −→ 𝑞 is a path from 𝑟 𝑥 to 𝑞 not containing 𝑥, and therefore short, but containing all transitions in TS-𝑥 except 𝑎. Therefore, Δ𝑟𝑥 ,𝑠 has at most two entries 0. 𝑎

(2.) The proof of (1.) implies that a label 𝑎 with 𝑞 −→ is uniquely determined by the choice of 𝑥 and 𝑞, and that 𝑞 = 𝑞 𝑎 , the unique state enabling only 𝑎. The claimed property results. 14.27 The following algorithm associates a net MG(TS) with TS. The idea is that a place 𝑝 𝑞,𝑥 is introduced for every pair (𝑥, 𝑞) where 𝑥 is a label 𝑥 ∈ 𝑇 and 𝑞 is a state 𝑞 ∈ Seq(𝑥). From Part (2) of the previous lemma, it is known that 𝑞 = 𝑞 𝑎 for a unique label 𝑎 ≠ 𝑥. Then place 𝑝 𝑞,𝑥 leads from 𝑎 to 𝑥, and its marking prohibits 𝑥 at 𝑞 but allows 𝑥 after executing 𝑎 in 𝑞. Intuitively, this is because in 𝑞 𝑎 , all other transitions (other than 𝑎 or 𝑥) have already been fired as often as possible. Initially, this place has to have Δ𝑟𝑥 ,𝑞0 (𝑎) many tokens, because this is how often occurrences of 𝑎 before 𝑞 0 allow 𝑥 to fire before the next 𝑎. Algorithm 14.28 Synthesis of a marked graph input a labelled transition system (𝑆, →, 𝑇, 𝑞 0 ) satisfying (14.1); for every label 𝑥 ∈ 𝑇 do for every state 𝑞 ∈ Seq(𝑥) do determine 𝑎 ∈ 𝑇 for which 𝑞 = 𝑞 𝑎 ; define a place 𝑝=𝑝 𝑞, 𝑥 with • 𝑝={𝑎}, 𝐹 (𝑎, 𝑝)=1 and 𝑝 • ={𝑥}, 𝐹 ( 𝑝, 𝑥)=1; put 𝑀0 ( 𝑝 𝑞,𝑥 ) = Δ𝑟𝑥 ,𝑞0 (𝑎) tokens on 𝑝 𝑞, 𝑥 end for end for

Example 14.29 𝑇 𝑆 and 𝑁 of Figure 14.5 Consider 𝑥 = 𝑐 and 𝑞 2 ∈ Seq(𝑐). We determine label 𝑏 as the one satisfying 𝑞 2 = 𝑞 𝑏 . Thus, the algorithm yields a place 𝑝 leading from 𝑏 to 𝑐. In the net 𝑁, this place is named 𝑠2 . This place gets Δ𝑟𝑥 ,𝑞0 (𝑎) tokens initially, which in our case evaluates to Δ𝑞3 ,𝑞0 (𝑏) tokens, i.e. 1 token. The other places of 𝑁 are created in a similar way by going through all the 𝑥 ∈ {𝑎, 𝑏}, and for each of them, going through all the states in Seq(𝑥).a We do not need to consider 𝑥 = 𝑐 anymore, since Seq(𝑐) = {𝑞2 }, and this was already processed. a

Lemma 14.30 The result is a marked graph The result of the previous algorithm is a pure marked graph.

334

14 Marked Graph Synthesis

Proof: Every place 𝑝 𝑞, 𝑥 has exactly one input transition, viz. 𝑎, and exactly one output transition, viz. 𝑥, with 𝑎 ≠ 𝑥, and the net is plain. 14.30 ← − Observe that Algorithm 14.28 yields the reverse net for the reverse lts, since in TS, 𝑞 𝑎 and 𝑟 𝑥 exchange their roles with 𝑟 𝑎 and 𝑞 𝑥 , respectively. Fixing (𝑥, 𝑞 𝑎 ) as above, let R : 𝑆 → N be the function defined by R(𝑞) = Δ𝑟𝑥 ,𝑞 (𝑎) for any state 𝑞 ∈ 𝑆. Example 14.31 Figure 14.5; Example 14.29 continued Instantiate 𝑥 = 𝑐 and 𝑎 = 𝑏 (i.e., for the generic 𝑎, fix the specific 𝑏). Then (𝑥, 𝑞 𝑎 ) = (𝑐, 𝑞 𝑏 ) = (𝑐, 𝑞 2 ). Then R(𝑞 0 ) = Δ𝑟𝑐 ,𝑞0 (𝑏) = Δ𝑞3 ,𝑞0 (𝑏) = 1, because on a short path from 𝑞 3 to 𝑞 0 , exactly one 𝑏 occurs. Moreover, define  B(𝑡) =

 1 if 𝑡 = 𝑥 0 if 𝑡 ≠ 𝑎 and F(𝑡) = 0 if 𝑡 ≠ 𝑥 1 if 𝑡 = 𝑎

(14.4)

Lemma 14.32 (R, B, F) is a region The triple (R, B, F), as constructed above, is a region in TS. 𝑡

Proof: Suppose 𝑞 1 −→ 𝑞 2 . We prove the second region property, (R2), in Definition 11.9, R(𝑞 2 ) = R(𝑞 1 ) + F(𝑡) − B(𝑡), by case distinction. Case 1: 𝑡 ≠ 𝑥. Then B(𝑡) = 0 by (14.4), and we have to prove R(𝑞 2 ) = R(𝑞 1 ) + F(𝑡). 𝛼 This follows from the definition of R, the definition of F, and the fact that if 𝑟 𝑥 −→ 𝑞 1 𝛼𝑡 is a short path from 𝑟 𝑥 to 𝑞 1 , the path 𝑟 𝑥 −→ 𝑞 2 is also short by 𝑡 ≠ 𝑥. Case 2: 𝑡 = 𝑥. Then B(𝑡) = 1 ∧ F(𝑡) = 0, and we have to prove R(𝑞 2 ) = R(𝑞 1 ) − 1. 𝛽

𝑥𝛿

Let 𝑞 1 −→ 𝑞 2 be a small cycle from Lemma 14.7 and suppose that 𝑟 𝑥 −→ 𝑞 2 is a 𝛽𝛿

short (𝑥-free) path from 𝑟 𝑥 to 𝑞 2 . Then 𝑟 𝑥 −→ 𝑞 1 is also a short path from 𝑟 𝑥 to 𝑞 1 . As a consequence (counting the single letter 𝑎 in 𝛿), R(𝑞 1 ) = R(𝑞 2 ) + 1. The first region property, (R1), viz. R(𝑞 1 ) ≥ B(𝑡), is implied by the semipositivity of R(𝑞 1 ) if 𝑡 ≠ 𝑥, by R(𝑞 1 ) ≥ 1 (Case 2) if 𝑡 = 𝑥, and by the definition of B(𝑡). 14.32 Theorem 14.33 Isomorphism of TS and RG(𝑁, 𝑀0 ) Let 𝑁 with initial marking 𝑀0 be the Petri net constructed according to the above procedure. Then TS and the reachability graph RG(𝑁, 𝑀0 ) of (𝑁, 𝑀0 ) are isomorphic. Moreover, 𝑁 is weakly connected. Proof: The set of regions constructed above satisfies ESSP (cf. Chapter 11). For 𝑥 let 𝑞 be such that ¬(𝑞 −→ ). Then by Lemma 14.27(1) and the fact that Seq(𝑥) comprises the ⪯ 𝑥 -maximal states not enabling 𝑥, there is some 𝑞 ∈ Seq(𝑥) with

14.7 Minimality, and place bounds 𝛼

𝛽

335 𝑎

a short path 𝑟 𝑥 −→ 𝑟 −→ 𝑞 −→ such that 𝛼𝛽 contain neither 𝑎 nor 𝑥. Hence R(𝑟 𝑥 ) = R(𝑟) = R(𝑞) = 0 for the region associated with place 𝑝 𝑞, 𝑥 , which prevents 𝑥 at state 𝑞. By (the proof of) Theorem 11.18(2), this ensures that TS and RG(𝑁, 𝑀0 ) are language-equivalent. To see that SSP is also satisfied, assume that 𝑞 1 and 𝑞 2 in TS are mapped to the same marking 𝑀 reachable in (𝑁, 𝑀0 ). By the strong connectedness of TS, there is 𝜎 𝜎 a sequence 𝑞 1 −→ 𝑞 2 . Since 𝑀 −→ by language equivalence, and because 𝑞 2 is 𝜎 mapped to 𝑀, there is also a sequence 𝑞 2 −→ 𝑞 3 , etc. By the finiteness of 𝑇 𝑆, we 𝜎ℓ

get 𝑞 𝑖 −→ 𝑞 𝑖 for some 𝑖, ℓ ≥ 1. P1 implies that every label occurs equally often in 𝜎 ℓ , and hence also equally often in 𝜎. Thus 𝜎 is cyclic, entailing 𝑞 1 = 𝑞 2 . Note that 𝑁 has no isolated places. Hence it is weakly connected, since otherwise, each connected component generates small cycles which do not satisfy P1. 14.33 Corollary 14.34 Live and bounded marked graph reachability graphs A labelled transition system is isomorphic to the reachability graph of a weakly connected, bounded, and live marked graph iff it satisfies the properties listed in (14.1). Proof: For (⇒), see Theorem 14.14. For (⇐), see Theorem 14.33.

14.34

14.7 Minimality, and place bounds This section contains supplementary remarks. We use the same premises and the same notation as before. In particular, let a 𝑇 𝑆 satisfying (14.1) be given. Lemma 14.35 Minimality Any marked graph solution of TS contains (a copy of) the net MG(TS). Proof: Let us consider some 𝑥 ∈ 𝑇 and 𝑞 𝑎 ∈ Seq(𝑥) as above. There must be a place 𝑝 𝑥,𝑎 in the solution that excludes 𝑥 at 𝑞 𝑎 , that is 𝑀𝑞𝑎 ( 𝑝 𝑥,𝑎 ) = 0 since the net 𝑎𝑥 is a marked graph, hence plain. Since 𝑞 −→ and 𝑝 𝑥,𝑎 belongs to a marked graph • solution of TS, we must have 𝑝 𝑥,𝑎 = {𝑎} and 𝑝 •𝑥,𝑎 = {𝑥}, so that 𝑝 𝑥,𝑎 = 𝑝 𝑞, 𝑥 , with the same initial marking. The property results. 14.35 The key role played by backward persistence should be emphasised. If backward persistence fails to hold, then the state 𝑟 𝑥 is not well-defined, as 𝑇 𝑆-𝑥 may have more than one minimal state. Also, Lemma 14.27 fails. Similarly, the uniqueness of a minimal solution is not guaranteed, as shown in Figure 14.4.

336

14 Marked Graph Synthesis

Corollary 14.34 characterises the structure of the reachability graph of a connected, live and bounded marked graph. Let us now look more carefully at this bound. It can be recovered from the paths of TS as follows. Lemma 14.36 Exact bound of MG(TS) The bound of the marked graph MG(TS) is 𝐾 = max{Δ𝑞𝑎 ,𝑞𝑥 (𝑎) | 𝑥 ∈ 𝑇, 𝑞 𝑎 ∈ Seq(𝑥)} Proof: We already saw that 𝑀𝑟 ( 𝑝 𝑞, 𝑥 ) = Δ𝑟𝑥 ,𝑟 (𝑎) for each 𝑥 ∈ 𝑇, 𝑞 = 𝑞 𝑎 ∈ Seq(𝑥) and 𝑟 ∈ 𝑆, and 𝑀𝑠 ( 𝑝 𝑞, 𝑥 ) = 0. Hence, the maximum marking for that place is 𝑀𝑞𝑥 ( 𝑝 𝑞,𝑥 ) = Δ𝑟𝑥 ,𝑞𝑥 (𝑎) so that, if 𝑞 = 𝑞 𝑎 , 𝑀𝑞𝑥 ( 𝑝 𝑞, 𝑥 ) = Δ𝑟𝑥 ,𝑞𝑥 (𝑎) = Δ𝑟𝑥 ,𝑞𝑎 (𝑎) + Δ𝑞𝑎 ,𝑞𝑥 (𝑎) = Δ𝑞𝑎 ,𝑞𝑥 (𝑎), and this is the maximal marking of that place. The claimed bound results. 14.36 From the minimality of the construction, we thus have: Corollary 14.37 Live and bounded marked graph reachability graphs TS is isomorphic to the reachability graph of a connected, live, 𝑘-bounded marked graph iff the properties listed in (14.1) are satisfied and 𝑘 ≥ max{Δ𝑞𝑎 ,𝑞𝑥 (𝑎) | 𝑥 ∈ 𝑇, 𝑞 𝑎 ∈ Seq(𝑥)}. Thus 𝐾 is the tightest possible bound for a marked graph realising 𝑇 𝑆: this results from Lemmata 14.36 and 14.35. As a consequence, the constructed marked graph is not only minimal, but also unique. Example 14.38 Uniqueness of the minimal marked graph While 𝑁1 solves 𝑇 𝑆1 in Figure 14.1, it is not the minimal solution. Instead, the marked graph 𝑁 shown in Figure 14.5 also solves 𝑇 𝑆1 and is guaranteed to be the unique minimal such net (up to isomorphism).

14.8 Bibliographical remarks and further reading The result of the present chapter is contained in [BD14a] (by Eike Best and Raymond Devillers). Marked graphs differ from T-systems, defined in Section 5.3, only by requiring | • 𝑠| = 1 = |𝑠• | for every place 𝑞 ∈ 𝑄, instead of the weaker property | • 𝑠| ≤ 1 ≥ |𝑠• |. This suggests that generalisations to bounded (or even unbounded) T-systems are likely to be possible, and such results have, indeed, been obtained in [BD15a; BD17]. Lemma 14.15 (using the marking equation as a sufficient condition for reachability) holds for some classes of Petri nets which properly include live marked graphs, such as the class of live, plain choice-free nets (consult Theorem 17 in [Mur89], by

14.9 Exercises

337

Tadao Murata). There are counterexamples, however, for the full class of live, not necessarily plain, choice-free nets.7 If backward persistence is dropped but all other properties (including forward persistence) are kept, one can find solvable examples which cannot be solved by choice-free Petri nets, even if arbitrary arc weights and arbitrary side-conditions are allowed. Compare Exercise 12.4. Note that Figure 14.3 does not qualify as such an example because it does not satisfy P1. State space characterisations are hard to achieve in general. At the time of writing, for example, no exact, stand-alone, state space characterisation seems to be known for safe free-choice nets. On the one hand, this is hardly a surprise, given that some salient properties of theirs are NP-complete, such as non-liveness (Proposition 5.28) and reachability [Esp98] (by Javier Esparza). On the other hand, quite a few properties of FC-nets can be decided in polynomial time [Esp96] (Javier Esparza).

14.9 Exercises Exercise 14.1 Generalised marked graph synthesis Consider the three-dimensional cube from the right-hand side of Figure 13.3. Which of the P1 properties are satisfied? Produce a marked graph whose reachability graph is isomorphic to this cube. Hint: Solve this exercise “intuitively”. The present chapter does not contain an algorithm which can deal with the cube, but you may be inspired by the ideas exhibited in this chapter and in the previous one. Exercise 14.2 More generalised marked graph synthesis Produce a live marked graph whose reachability graph is isomorphic to the (infinite) transition system shown in Figure 14.6. Hint: Solve this exercise “intuitively”. Chapter 11 does not contain an algorithm which can deal with infinite inputs, but you may be inspired by the ideas exhibited in this chapter.

𝑏

𝑎

𝑏

𝑎

... 𝑐

𝑐

Fig. 14.6: A transition system to Exercise 14.2.

Solution 14.1 (Exercise 14.1) All P1 properties except reversibility are satisfied. 7 See Figure 4.7(r.h.s.) where the marking equation does not imply reachability, even though the net is choice-free and live.

338

14 Marked Graph Synthesis

The 3-bit marked graph shown for 𝑛 bits in Figure 7.2 produces a reachability graph which is isomorphic to the cube. Solution 14.2 (Exercise 14.2) See Figure 14.7.

𝑠1 𝑠3 𝑎

𝑏

𝑐

𝑠2

Fig. 14.7: Solution of Exercise 14.2.

𝑠4

Chapter 15

Bounded Choice-free Net Synthesis

Abstract In the hierarchy of structurally defined Petri net classes, choice-free nets lie above marked graphs, but their reachability graphs are still persistent. Thus, they inherit the properties of Petri nets with persistent reachability graphs. In this chapter, we show that they also enjoy properties that resemble some of the properties of marked graphs. This leads to a rich assortment of necessary synthesisability properties that may be exploited for pre-synthesis and ultimately for synthesis, when a synthesis problem targets a choice-free net.

Contents 15.1 15.2 15.3 15.4 15.5 15.6 15.7

Introduction . . . . . . . . . . . . . . . . . . Concurrent implementations of Petri nets . . Some properties of bounded choice-free-nets Pre-synthesis . . . . . . . . . . . . . . . . . Choice-free synthesis . . . . . . . . . . . . . Bibliographical remarks and further reading . Exercises . . . . . . . . . . . . . . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

339 340 342 348 349 351 351

15.1 Introduction In the previous chapter 14, a fast synthesis algorithm was developed for the synthesis of marked graphs from a labelled transition specification. This algorithm relies on an exact characterisation of the reachability graphs of marked graphs, strengthening the necessary synthesisability conditions developed in Chapters 12 and 13 (Propositions 12.2 and 13.3) for Petri nets in general.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 E. Best and R. Devillers, Petri Net Primer, Computer Science Foundations and Applied Logic, https://doi.org/10.1007/978-3-031-48278-6_15

339

340

15 Bounded Choice-free Net Synthesis

An exact state space characterisation is not available for choice-free nets. Nevertheless, it is possible to add a multitude of properties to the ones we already discussed, such that all of these properties must be satisfied for an lts to be synthesisable. For example, if bounded choice-free Petri nets are targeted, then we may add to this list the disjoint cycle decomposition which was proved to be valid for finite persistent transition systems in Chapter 12. The present chapter describes several further necessary conditions of this type. In a sense, these properties adapt and generalise the lemmata of Section 14.3 which were needed in order to prove the correctness of the marked graph synthesis. They will be listed in Section 15.3 below, and their efficient use for a bounded choicefree Petri net (pre-) synthesis algorithm will be described in Sections 15.4 and 15.5. Before embarking on these results, however, we discuss, in Section 15.2, the possible benefits of the free-choice property of Petri nets in the context of the implementation of a concurrent system.

15.2 Concurrent implementations of Petri nets Since both transition systems and Petri nets permit concurrency between transition firings, a natural implementation strategy for Petri nets would be to build a system composed of data structures to model places and tokens, and parallel agents, one for each transition, interacting only through their competition to access their needed resources, one place at a time to obtain a fully distributed realisation. Because memory resources are finite, we need to restrict our attention to bounded net systems and finite transition systems from the start, in this view. The structure of each implemented transition can be sketched as follows: % tokens are treated as resources that may be accessed by transitions repeat check the availability of needed resources if some are missing, retry after some time otherwise, collect the needed tokens process the action of the transition (possibly using some hidden information of the collected black tokens) produce the output tokens (possibly with suitable hidden information, depending on the information hidden in some of the used tokens) There are variants of this scheme, however. For instance, the production of the output tokens may be performed one by one during the action’s progress, and not all together at the end of the processing. Similarly, the collection of tokens may be performed one by one at the beginning, and not all together at once.

15.2 Concurrent implementations of Petri nets

341

However, there is potentially a big problem with this procedure: between the checking phase and the collection phase, the situation may have changed, due to the parallel action of other transitions, and it may happen that the tokens which were available during the checking phase are no longer there. It may even be possible that this is only discovered in the middle of the collection phase, and that we need to “give back” the tokens that were absorbed during the beginning of the phase. This is illustrated by the system 𝑁 in Figure 15.1. We may partly avoid this problem by absorbing the remainder of the needed tokens whenever the transition discovers they have been produced, but it may still be necessary to give back the absorbed tokens if we observe that it takes too long to get the remaining ones, possibly meaning that we created a deadlock, even if there is no deadlock in the original system. It is possible to avoid this kind of problem by blocking in a critical section all the data structures implementing the places and tokens, in order to check and absorb the needed tokens without being bothered by the other transitions, relaxing the critical section to wait a bit if not all the needed tokens were present, or to start the process and produce phase. However, this seriously reduces the parallelism of the implementation, may lead to starvation problems, and is certainly not distributed in the way we searched for. We may also slightly alleviate this technique by only blocking together the input places of the transition, by protecting each place separately by a critical section and nesting them while following a predefined ordering in order to avoid deadlocks. However, this still reduces the parallelism of the implementation, may still lead to starvation problems, needs that the transitions first agree on a common ordering of the places, and is again not distributed as expected.

𝑁:

...

𝑡1

...

𝑝1

𝑡1 𝑝1

𝑢

...

𝑡2

...

𝑝2 ...

𝑁 ′:

𝑝2 𝑢

Fig. 15.1: A Petri net system 𝑁, where transition 𝑢 may see that all its needed tokens are present, but by the time it decides to take them, one after the other, conflicting transitions 𝑡1 and/or 𝑡2 may have taken them before. If 𝑢 takes a token from 𝑝 1 and then sees that the token in 𝑝 2 has disappeared, 𝑢 must give back the token to 𝑝 1 , or the culprit must give a token back to 𝑝 2 . 𝑁 ′ shows that this problem already occurs in free-choice systems. It may be the case that 𝑢 grabs the green token while, in the meantime, while 𝑢 also tries to take the red token, 𝑡 1 grabs it before. Then one of the transitions has to return a token, or else, we create an artificial deadlock. In a naive implementation, this leads either to a deadlock, or to a livelock (i.e., an internal divergence) as discussed in Chapter 8.

342

15 Bounded Choice-free Net Synthesis

These problems accentuate the difficulties encountered when one tries to realise a distributed hardware implementation of Petri nets. However, they disappear, whatever the marking, if places are not shared, that is, if each place has at most a single output transition. In that case, it is never necessary to give back some absorbed token(s), and it is even possible to fuse the check and collect phases, leading, for each transition, to a (parallel) procedure of the following kind: repeat for each input place do while the needed tokens are not present do wait for some time grab the needed tokens process the action of the transition and produce the output tokens Of course, it is necessary to protect the data structures representing places by semaphore-like or monitor-like devices, in order to ensure that if many producers or the consumer and some producer(s) access the place, the result may be serialised. For instance, if two producers add one token in some common place, the result will be to add two tokens, and not a single one, the last addition wiping out the first one as it can happen in a – badly implemented – parallel system.

15.3 Some properties of bounded choice-free-nets We shall now be interested in general properties of a bounded choice-free net, as they may be read off a transition system isomorphic to its reachability graph. We shall thus not be interested in properties relying on the exact token counts on places of reachable markings, but instead, continue to treat the latter as unstructured states, rather than vectors of integers. We already know from Chapters 12 and 13 that every reachability graph is totally reachable (valid for any Petri net); forward and backward deterministic, weakly periodic, free of two-way uniform chains, and strongly cycle-consistent (valid for any net); finite (valid for any bounded net); and persistent (valid for any choice-free net). And from persistence, we may infer, from Chapter 12, that if the net is bounded, there are home markings; for each cycle and home state, there is a cycle around that home state with the same Parikh vector; for any two small cycles, either they have the same Parikh vector or they are transition-disjoint; and for each cycle, its Parikh vector is a linear combination of Parikh vectors of small cycles. The other properties that we shall develop now concern the small cycles, the short paths and the (minimal) T-invariants. Small cycles have been defined previously in Section 12.5, while short paths have been defined in Section 14.3. 𝜎

Let us first recall, from Proposition 4.17, that if 𝑞 −→ 𝑞 (with 𝜎 ≠ 𝜀), then P (𝜎) is a (non-trivial) T-invariant of the net under consideration, and conversely, if P (𝜎) 𝜎 is a T-invariant, then 𝑞 −→ 𝑞 whenever 𝑞 enables 𝜎.

15.3 Some properties of bounded choice-free-nets

343

In order to understand the following lemma, it is useful to examine the general form of a place 𝑠 in a choice-free Petri net (see Figure 15.2). All incoming places of 𝑥 have this form, with varying numbers 𝑘, ℎ, 𝑘 𝑗 . Suppose that 𝐴(𝑥) = 𝑇 \{𝑥}. If 𝐽 is a T-invariant, then the identity ∑︁ 𝑘 · 𝐽 (𝑥) = 𝑘 𝑎 · 𝐽 (𝑎) 𝑎∈ 𝐴( 𝑥 ) 𝜎

holds by definition.1 Now consider a firing sequence 𝑀1 −→ 𝑀2 such that 𝑀1 enables 𝑥. We ask for conditions under which 𝑀2 (re-)enables 𝑥: 𝜎

𝑀1 −→ 𝑀2 ↓𝑥 ↓ 𝑥? If it so happens that for each 𝑎 ∈ 𝐴(𝑥), the ratio between the number of occurrences of 𝑎 in 𝜎 and the number of occurrences of 𝑥 in 𝜎 exactly equals 𝐽 (𝑎)/𝐽 (𝑥), for some T-invariant 𝐽 with 𝐽 (𝑥) > 0, then the marking on places • 𝑥 (such as 𝑠) is exactly reproduced; and if this ratio even exceeds 𝐽 (𝑎)/𝐽 (𝑥), then the marking on • 𝑥 is also exceeded. In both cases, 𝑀2 enables 𝑥. 𝑎1 𝑘𝑎1 𝑎2

𝑘𝑎2

ℎ 𝑥

.. . 𝑘 𝑎𝑛

𝑠

𝑘+ℎ

𝑎𝑛

Fig. 15.2: A general pure (ℎ = 0) or non-pure (ℎ > 0) choice-free place 𝑠. It is assumed that 𝑘 + ℎ > 0, i.e., that 𝑥 is a true outgoing transition of 𝑠, and that 𝑇 = {𝑥, 𝑎 1 , . . . , 𝑎 𝑛 }, i.e., that the set {𝑎 1 , . . . , 𝑎 𝑛 } contains all other transitions. By choice-freeness, 𝐹 (𝑠, 𝑎 𝑗 ) = 0. The weights 𝑘 𝑎 𝑗 = 𝐹 (𝑎 𝑗 , 𝑠) could be = 0 (𝑎 𝑗 is not connected to 𝑠) or > 0 (the firing of 𝑎 𝑗 increases the number of tokens on 𝑠).

The following lemma generalises this observation to possibly undirected sequences. It is a first, rather general, property, which is, however, specific for choice-free nets, and not for all the persistent ones.

1 And by intuition, since the marking on 𝑠 is reproduced if such a T-invariant is fired.

344

15 Bounded Choice-free Net Synthesis

Lemma 15.1 Re-enabling along a general path 𝑥 In a choice-free net 𝑁, suppose that 𝑀1 −→ ; that 𝑥 belongs to the support 𝜎 of some T-invariant 𝐽; that 𝑀1 −→ 𝑀2 with 𝜎 ∈ (±𝑇) ∗ ;a and that ∀𝑎 ∈ 𝐴(𝑥) : P (𝜎)(𝑎) ≥ P (𝜎)(𝑥) · 𝐽 (𝑎)/𝐽 (𝑥) 𝑥

Then 𝑀2 −→ . a

For (±𝑇 ), see Notation 13.1.

Proof: We only need to show that ∀𝑠 ∈ • 𝑥 : 𝑀2 (𝑠) ≥ 𝑀1 (𝑠). Referring to the general form of a place 𝑠 ∈ • 𝑥 in Figure 15.2, we know that Í 𝑀2 (𝑠) = 𝑀1 (𝑠) + 𝑎∈ 𝐴( 𝑥 ) 𝑘 𝑎 · P (𝜎)(𝑎) − 𝑘 · P (𝜎)(𝑥) by the marking equation. Í Since 𝐽 is a T-invariant, 𝑘 · 𝐽 (𝑥) = 𝑎∈ 𝐴( 𝑥 ) 𝑘 𝑎 · 𝐽 (𝑎), and 𝐽 (𝑥) > 0 since 𝑥 belongs to the support of 𝐽. Hence ∑︁ 𝑀2 (𝑠) = 𝑀1 (𝑠) + 𝑘 𝑎 · [P (𝜎)(𝑎) − P (𝜎)(𝑥) · 𝐽 (𝑎)/𝐽 (𝑥)] 𝑎∈ 𝐴( 𝑥 )

The claimed property arises since no 𝑘 𝑎 is negative.2 15.1 This may be (re-) interpreted as follows: if there is a general path 𝜎 from some marking 𝑀1 to some marking 𝑀2 , and some (possibly fractional, possibly negative) factor 𝑓 ∈ Q such that, for each 𝑠 ∈ • 𝑥, P (𝜎) ≥ 𝑓 · 𝐽 on 𝐴(𝑥), with 𝑓 = P (𝜎)(𝑥)/𝐽 (𝑥) (or 𝑥 𝑥 P (𝜎)(𝑥) = 𝑓 · 𝐽 (𝑥)), then 𝑀1 −→ ⇒ 𝑀2 −→ ; i.e., we have a sufficient condition for the re-enabling of 𝑥. An interesting consequence may be derived from this property. Intuitively, it states that “paths can be rearranged in such a way that the full T-invariants occur last”. Proposition 15.2 Path decomposition 𝜎 Let 𝐽 be a T-invariant of a choice-free net and let 𝑀 −→ 𝑀 ′ be a (directed) 𝜎0 𝜎𝐽 path in it. Then there is also a path 𝑀 −→ 𝑀 ′ ( −→ 𝑀 ′ ) ℓ with the same Parikh vector, where P (𝜎0 ) ≱ 𝐽, P (𝜎𝐽 ) = 𝐽 and ℓ is a suitable integer. Proof: The property is trivial if P (𝜎) ≱ 𝐽 (we may take 𝜎0 = 𝜎 and ℓ = 0). Let 𝜎′

𝜎𝐽

us thus assume that P (𝜎) ≥ 𝐽. We shall first prove that 𝑀 −→ 𝑀 ′ −→ 𝑀 ′ with P (𝜎 ′ ) = P (𝜎) − 𝐽 and P (𝜎𝐽 ) = 𝐽 . The claimed property will then result from a repeated application of this property (it may happen that we get different 𝜎𝐽 ’s, but since they yield cycles around 𝑀 ′ with the same Parikh vector, we may interchange them at will). The new property is trivial if P (𝜎) = 𝐽 (we may take 𝜎 ′ = 𝜀 and 𝜎𝐽 = 𝜎). Let us thus 𝜎1 𝜎2 𝜎 𝑥 assume that P (𝜎) > 𝐽. 𝑀 −→ may be rewritten as 𝑀 −→ 𝑀1 −→ 𝑀2 −→ with 2 By contrast, P ( 𝜎) (𝑎) and P ( 𝜎) ( 𝑥 ) may be negative since 𝜎 is a general path.

15.3 Some properties of bounded choice-free-nets

345

P (𝜎1 ) ≤ 𝐽 and P (𝜎)(𝑥) = 𝐽 (𝑥) (i.e., 𝑥 is the first transition “in excess” in 𝜎 with ← 𝜎−1

respect to 𝐽). Let us consider the reverse path 𝑀1 −→ 𝑀. We have that 𝑥 belongs to 𝑥 the support of 𝐽; that 𝑀1 −→ ; and that, for all 𝑎 ∈ 𝑇 \{𝑥}, we have P (𝜎1 )(𝑎) ≤ 𝐽 (𝑎), together with 𝐽 (𝑥) = P (𝜎1 )(𝑥), so that P ( ← 𝜎−1 )(𝑎) ≥ −𝐽 (𝑎) = P ( ← 𝜎−1 )(𝑥) · 𝐽 (𝑎)/𝐽 (𝑥). 𝑥

We may thus apply Lemma 15.1 and deduce 𝑀 −→ . By Keller’s theorem, we then • • 𝑥 𝜎− 𝑥 𝜎− 𝑥 get 𝑀 −→ 𝑀 ′ −→ and we may resume the argument to 𝑀 ′ −→ , leading to a 𝜎′

𝜎𝐽

decomposition 𝑀 −→ 𝑀 ′ −→ 𝑀 ′ with P (𝜎𝐽 ) = 𝐽 (since 𝐽 is a T-invariant), as expected. 15.2 Proposition 15.2 is a counterpart of Lemma 14.11.3 We shall usually apply this proposition to minimal nontrivial T-invariants. Definition 15.3 Primality A T-vector 𝐽 ∈ N𝑇 will be called prime if gcd{𝐽 (𝑡) | 𝑡 ∈ 𝑇 } = 1, i.e., if its entries do not have a non-trivial common factor. A sequence 𝜎 ∈ 𝑇 ∗ is prime if P (𝜎) is prime. An important consequence of Proposition 15.2 is then: Proposition 15.4 Small cycles are prime In the reachability graph of a bounded choice-free net, all small cycles are prime and correspond to minimal T-invariants. 𝜎

Proof: If a cycle 𝑀 −→ 𝑀 is not prime, there is an integer 𝑘 > 1 dividing P (𝜎), but then P (𝜎)/𝑘 is still a T-invariant, smaller than P (𝜎), and from Proposition 15.2 there is a cycle (around 𝑀) with Parikh vector P (𝜎)/𝑘, so that 𝜎 may not be small. If a cycle does not yield a minimal T-invariant, then from the decomposition proposition 15.2, the cycle may not be short. 15.4 Observing that the all-ones vector is prime, the first part of this proposition is a counterpart of Theorem 14.14(f). The second part is related to Corollary 12.25.4 Moreover, not only the Parikh vector of any cycle may be decomposed into small cycle Parikh vectors that could occur elsewhere, but the same is true “in place”. Proposition 15.5 Cycle decompositions 𝜎 In a bounded choice-free net, if 𝑀 −→ 𝑀 is a cycle, there are small cycles Í 𝜎𝑖 𝑀 −→ 𝑀 such that P (𝜎) = 𝑖 𝑘 𝑖 · P (𝜎𝑖 ), with adequate 𝑘 𝑖 ∈ N.

3 However, it is not a generalisation, since the premises differ. Lemma 15.2 applies to nets while Lemma 14.11 applies to lts satisfying P1. 4 Again, these results do not generalise each other. It is important to compare their respective premises.

346

15 Bounded Choice-free Net Synthesis

Proof: Immediate application of Proposition 15.2 and 15.4. 15.5 Since choice-free nets are persistent, we know from Theorem 12.23 that any two small cycles either have the same Parikh vector or are transition disjoint. Note however that there may be minimal T-invariants which never occur as the Parikh vector of a small cycle. Additionally, an important property of short paths may be derived from Proposition 15.2. Proposition 15.6 Short paths have the same Parikh vector 𝜎1 𝜎2 In a bounded choice-free net, if 𝑀0 −→ 𝑀 and 𝑀0 −→ 𝑀 are two short paths from 𝑀0 to some reachable marking 𝑀, then P (𝜎1 ) = P (𝜎2 ). Moreover, the Parikh vector of a short path may not be larger (or equal) than the Parikh vector of a small cycle, and more generally than a minimal T-invariant. Proof: By definition, 𝜎1 and 𝜎2 must have the same length. Let us assume that P (𝜎1 ) ≠ P (𝜎2 ). • • 𝜎1 − 𝜎2 𝜎2 − 𝜎1 By Keller’s theorem, we have 𝑀 −→ and 𝑀 −→ . Since P (𝜎1 (𝜎2 −• 𝜎1 )) = • • 𝜎1 − 𝜎2 𝜎2 − 𝜎1 P (𝜎2 (𝜎1−• 𝜎2 )), by general determinism 𝑀 −→ 𝑀1 and 𝑀 −→ 𝑀1 for some common marking 𝑀1 We also know that 𝜎1 −• 𝜎2 and 𝜎1 −• 𝜎2 are transition-disjoint. • • 𝜎1 − 𝜎2 𝜎2 − 𝜎1 But then we also have 𝑀1 −→ 𝑀2 and 𝑀1 −→ 𝑀2 , and we may repeat this evolution as we want. By weak periodicity and boundedness, we then know that all 𝑀𝑖 ’s are the same, so that P (𝜎1 −• 𝜎2 ) and P (𝜎2 −• 𝜎1 ) are (non-trivial) T-invariants. But then, still from Proposition 15.2, 𝜎1 and 𝜎2 may be decomposed from 𝑀0 to 𝑀 and they may not be short. Contradiction. Similarly, if P (𝜎1 ) ≥ P (𝜏) for a small cycle 𝜏, since P (𝜏) is a T-invariant, Propo15.6 sition 15.2 again shows that 𝜎1 may not be short. Proposition 15.6 is a counterpart of Lemma 14.10. Definition 14.18 can also be transferred, as follows. Definition 15.7 State distance In a transition system 𝑇 𝑆 = (𝑄, 𝑇, →, 𝑞 0 ) where the short paths to each state 𝑠 ∈ 𝑄 all have the same Parikh vector (this will thus apply in particular to the reachability graph of a bounded choice-free net), if 𝜎 is a short path from 𝑞 0 to 𝑠, we shall call Δ(𝑠) = P (𝜎) the distance of 𝑠 from the initial state. From the general determinism and from Proposition 15.6, we have that two distinct reachable markings also have distinct distances from the initial marking. Next, we show that amongst the reachable markings, full firings of T-invariants can always be “cut away” consistently with distances. To formulate this concisely, we introduce residues. This transfers Definition 12.8 from sequences to T-vectors.

15.3 Some properties of bounded choice-free-nets

347

Definition 15.8 Residues Let 𝐽1 , 𝐽2 ∈ N𝑇 be T-vectors. The residue 𝐽1 −• 𝐽2 is defined by ∀𝑡 ∈ 𝑇 : (𝐽1 −• 𝐽2 )(𝑡) = max(𝐽1 (𝑡) − 𝐽2 (𝑡), 0)

Lemma 15.9 Residues of words and vectors If 𝜎, 𝜏 ∈ 𝑇 ∗ , then P (𝜎−• 𝜏) = P (𝜎)−• P (𝜏).

Proposition 15.10 The reduced distance property In the reachability graph of a bounded choice-free net 𝑁 = (𝑃, 𝑇, 𝐹, 𝑀0 ), let 𝑀 be a reachable marking and let 𝐽 be a T-invariant. Then there is a reachable marking 𝑀 ′ such that Δ(𝑀 ′ ) = Δ(𝑀)−• 𝐽. Proof: If Δ(𝑀) ≤ 𝐽, Δ(𝑀)−• 𝐽 is the null vector and the property is trivial (𝑀 ′ = 𝑀0 ). Otherwise, we may apply the reasoning at the heart of the proof of Proposition 15.2. 𝜎1 𝜎2 𝜎 𝑥 We may decompose a short path 𝑀0 −→ 𝑀 into 𝑀0 −→ 𝑀1 −→ 𝑀2 −→ 𝑀 (𝜎 = 𝜎1 𝑥𝜎2 ) with P (𝜎1 ) ≤ 𝐽 and P (𝜎)(𝑥) = 𝐽 (𝑥) (i.e., 𝑥 is the first transition ← 𝜎−1

“in excess” in 𝜎 with respect to 𝐽). Let us consider the reverse path 𝑀1 −→ 𝑀0 . 𝑥 We have that 𝑥 belongs to the support of 𝐽, 𝑀1 −→ and ∀𝑎 ∈ 𝑇 \ {𝑥} we have P (𝜎1 )(𝑎) ≤ 𝐽 (𝑎) together with 𝐽 (𝑥) = P (𝜎1 )(𝑥) so that P ( ← 𝜎−1 )(𝑎) ≥ −𝐽 (𝑎) = ← − P ( 𝜎1 )(𝑥) · 𝐽 (𝑎)/𝐽 (𝑥). 𝑥

We may thus apply Lemma 15.1 and deduce 𝑀0 −→ 𝑀”. By Keller’s theorem, we • 𝑥 𝜎− 𝑥 then get 𝑀0 −→ 𝑀” −→ 𝑀 and we may resume the argument to (the smaller • 𝜎− 𝑥 path) 𝑀” −→ , until we get a path with no transition in excess. This will lead to a • 𝜎− 𝐽 path 𝑀0 −→ 𝑀 ′ . Since P (𝜎−• 𝐽) ≤ P (𝜎), P (𝜎−• 𝐽) is smaller than any small cycle Parikh vector and is thus short. Hence Δ(𝑀 ′ ) = Δ(𝑀)−• 𝐽 as required. 15.10 We have seen (Corollary 12.18) that in a finite, deterministic, and persistent transition system, cycles are forward transported Parikh-equivalently (in the sense that, if there 𝜏

𝜎

𝜙

is a cycle 𝑞 −→ 𝑞 and a directed path 𝑞 −→ 𝑞 ′ , then there is a cycle 𝑞 ′ −→ 𝑞 ′ with P (𝜙) = P (𝜏)). We shall now see that, in a bounded choice-free net, cycles may also be backward transported Parikh equivalently, up to some extent. Corollary 15.11 The earliest Parikh cycles property In a bounded choice-free net, let 𝐽 be a T-invariant. Let 𝑀 be a reachable marking with a (small) cycle around it with Parikh vector Φ disjoint from 𝐽. There is a cycle with Parikh vector Φ around the (unique) marking at distance Δ(𝑀)−• 𝐽 (which exists by the previous property).

348

15 Bounded Choice-free Net Synthesis

Proof: Let 𝑀 ′ be the marking at distance Δ(𝑀)−• 𝐽 (from 𝑀0 ). Any short path from 𝑀 ′ to 𝑀 has no transition in excess with respect to 𝐽. On the contrary, since the cycle considered around 𝑀 is disjoint from 𝐽, if we prolongate this path with that cycle, all the added transitions will be in excess, and if we apply the (by now, familiar) re-enabling property we shall transport the cycle as is around 𝑀 ′ , as claimed. 15.11 We may now iterate that property, getting that there is a copy of the considered cycle around any marking at distance Δ(𝑀)−• (𝐽) ℓ , for any natural integer ℓ. In particular, if 𝜎 is some small cycle around 𝑀 with Parikh vector Φ, let ΔΦ be the T-vector such that, for any small cycle Parikh vector 𝐽 different from Φ and transition 𝑡 in the support of 𝐽 (let us recall that all the small cycle Parikh vectors are disjoint), ΔΦ (𝑡) = min(Δ(𝑀)(𝑡), 𝐽 (𝑡)), and ΔΦ (𝑡) = Δ(𝑀)(𝑡) otherwise (thus wiping out all the transitions occurring in excess with respect to all 𝐽’s different from Φ), we immediately get: Corollary 15.12 The wiping property In a bounded choice-free net, let 𝑀 be a reachable marking with a small cycle around it, with Parikh vector Φ. There is a copy of that cycle around the (unique) marking 𝑀 ′ at distance ΔΦ . And of course, there is a small cycle with Parikh vector Φ around each successor marking of 𝑀 ′ since cycles are transported Parikh-equivalently in any bounded persistent net.

15.4 Pre-synthesis When one needs to synthesise a Petri net from a given finite labelled transition system 𝑇 𝑆 = (𝑄, 𝑇, →, 𝑞 0 ), one generally starts by performing some preliminary checks, like total reachability and determinacy. But if the target is a choice-free net, such as discussed here, the list of checks may be much more rich. From the results of the previous section, we should for instance perform the following checks (if one of them fails, we may stop and produce an informative message explaining the reason of the failure, but we may also continue to get a more complete set of features we need to manage if we want to later submit a modified synthesis problem with some chance of success afterwards): (i) General connectivity: 𝑇 𝑆 must be forward and backward deterministic, totally reachable and persistent. (ii) The small cycles property: Letting G be the set of Parikh vectors of small cycles, all the members of G must be prime (from persistence, we already know they are transition-disjoint, and we also know that if the transition system is solvable, then the members of G correspond to minimal realisable T-invariants).

15.5 Choice-free synthesis

349

(iii) The short distance property: For each state 𝑞 of 𝑇 𝑆, all the short paths from 𝑞 0 to 𝑞 have the same Parikh vector (namely, distance Δ𝑞 to 𝑞), and none of those distances may dominate a member of G. (iv) The reduced distance property: For each state 𝑞 in 𝑇 𝑆 and member 𝐽 ∈ G, there must be a state 𝑟 in 𝑇 𝑆 such that Δ𝑟 = Δ𝑞 −• 𝐽. (v) The wiping property: For any state 𝑞 in 𝑇 𝑆 with a small cycle around it with Parikh vector 𝐽, there is a cycle with Parikh vector Φ around the state 𝑟 at distance ΔΦ (which exists from an iterated application of the previous property). These properties may (rather easily) be checked by constructing a depth-first spanning tree of 𝑇 𝑆 and by analysing the chords (i.e., the arcs of 𝑇 𝑆 not in the spanning tree). For instance, the (unique) path from 𝑞 0 to any state 𝑞 yields a short path to the latter. Note that some properties obtained in the previous section are more general than the ones used here, since they concern (small) cycles and not only their Parikh vectors, but it is easier to record Parikh vectors than exact cycles.

15.5 Choice-free synthesis Let us assume that the finite transition system 𝑇 𝑆 = (𝑄, 𝑇, →, 𝑞 0 ) to be synthesised survived the pre-synthesis described in the previous section. We may assume that, during this process, for each state 𝑞 its distance Δ𝑞 was constructed, and that the set G of Parikh vectors of small cycles was also constructed. It remains to build, if possible, an adequate region (or place) solving each needed separation problem. Let (𝑞, 𝑎) be an event/state separation problem (i.e., such that there is no arc labelled 𝑎 from state 𝑞). We need to find a place 𝑝 compatible with 𝑇 𝑆 which excludes to perform 𝑎 at the marking corresponding to 𝑞. Since the target is a choice-free net, we need | 𝑝 • | ≤ 1; hence, to exclude 𝑎 somewhere, we shall have 𝑝 • = {𝑎} (and the cardinality of the output transitions of 𝑝 will be exactly 1, and not 0). The variables to solve will be 𝑚 (the initial marking of the place 𝑝 to be built), 𝑥 𝑎 (yielding 𝐹 ( 𝑝, 𝑎), the other ones being null) and, for each 𝑡 ∈ 𝑇, a variable 𝑦 𝑡 (yielding 𝐹 (𝑡, 𝑝)). They all belong to N. The linear homogeneous constraints they need to satisfy are: Í • ∀𝑟 ∈ 𝑄, 𝑡 ∈ 𝑇 from 𝑟 : 𝑚 − Δ𝑟 (𝑎) · 𝑥 𝑎 + 𝑧 ∈𝑇 Δ𝑟 (𝑧) · 𝑦 𝑧 ≥ 𝑥 𝑡 (meaning that 𝑀 ( 𝑝) ≥ 𝐹 ( 𝑝, 𝑡) at state 𝑟), Í • ∀𝐽 ∈ G : 𝑧 ∈𝑇 𝐽 (𝑧) · 𝑦 𝑧 = 𝐽 (𝑎) · 𝑥 𝑎 (meaning that the place is compatible with the small cycles - hence with all the cycles; this in particular ensures that if 𝑧 𝑞 −→ 𝑞 ′ , the marking of 𝑝 at state 𝑞 ′ is the same as the one in state 𝑞 plus 𝑦 𝑧 , minus 𝑥 𝑧 if 𝑧 = 𝑎, and that if two different paths lead to a same state they will yield the same marking for 𝑝), Í • 𝑀 − Δ𝑞 (𝑎) · 𝑥 𝑎 + 𝑧 ∈𝑇 Δ𝑞 (𝑧) · 𝑦 𝑧 < 𝑥 𝑎 (meaning that 𝑀 ( 𝑝) does not allow 𝑎 at state 𝑞).

350

15 Bounded Choice-free Net Synthesis

In fact, it is not necessary to rely to a linear solver for each event/state separation problem: for each of them we may first check if a region previously built does the job (this was already noticed in Section 13.1). An interesting point for the synthesis of a bounded choice-free net is that, if the various checks for the pre-synthesis phase were positive and the event/state separation problems were all solved, the state separation problems are automatically solved and thus do not need to be solved additionally. Proposition 15.13 Automatic state separation Let 𝑇 𝑆 = (𝑄, 𝑡, →, 𝑞 0 ) be finite transition system satisfying the pre-synthesis checks and let 𝑁 be the choice-free net corresponding to the solutions of the event/state separation problems (we shall denote by 𝑀𝑞 the marking of 𝑁 corresponding to any state 𝑞). Then 𝑀𝑞1 = 𝑀𝑞2 implies 𝑞 1 = 𝑞 2 (i.e., the state separation properties are also solved). Proof: By construction, the language of 𝑁 is the same as the one of 𝑇 𝑆, but the reachability graph of 𝑁 is not necessarily isomorphic to 𝑇 𝑆 since it could happen that two distinct states correspond to the same marking, and we need to show this may not be the case. We shall proceed by contradiction. Let us thus assume that 𝑞 1 ≠ 𝑞 2 and 𝑀𝑞1 = 𝑀𝑞2 . Since 𝑇 𝑆 is finite, deterministic 𝜎 and persistent, there are home states. Let 𝑟 2 be a home state and 𝑞 2 −→ 𝑟 2 be a path 𝜎 𝜎 to it, so that also 𝑀𝑞2 −→ 𝑀𝑞2 . Since 𝑀𝑞1 = 𝑀𝑞2 , we must also have 𝑞 1 −→ 𝑟 1 for some state 𝑟 1 . We may not have 𝑟 1 = 𝑟 2 , otherwise by backward determinism we would have 𝑞 1 = 𝑞 2 , but by general determinism, 𝑀𝑟1 = 𝑀𝑟2 . 𝜏

Since 𝑟 2 is a home state, we must have a (directed) path 𝑟 1 −→ 𝑟 2 , but since 𝜏 𝑀𝑟1 = 𝑀𝑟2 we must also have 𝑟 2 −→ 𝑟 3 for some state 𝑟 3 , and we may continue building a series of paths 𝜏. Since 𝑇 𝑆 is finite, at some point we shall close a cycle, and by backward determinism again all the 𝑟 𝑖 ’s (hence in particular 𝑟 1 and 𝑟 2 ) will be on that cycle, meaning in particular that 𝑟 1 is also a home state. We thus have a cycle with Parikh vector 𝑘 · P (𝜏) for some 𝑘 ∈ N, and since 𝑇 𝑆 is Í persistent, 𝑘 · P (𝜏) = 𝐽 ∈ G 𝑘 𝐽 · 𝐽 for some coefficients 𝑘 𝐽 ∈ N. But since the T-vectors 𝐽 are Í prime and disjoint, we must have that 𝑘 divides each 𝑘 𝐽 . We thus have P (𝜏) = 𝐽 ∈ G 𝑘 ′𝐽 · 𝐽 with 𝑘 ′𝐽 = 𝑘 𝐽 /𝑘. Since 𝑟 1 is a home state, for each 𝐽 ∈ G, we have a cycle around Í 𝑟 1 with Parikh vector 𝐽. There is thus also a cycle around 𝑟 1 with Parikh vector 𝐽 ∈ G 𝑘 ′𝐽 · 𝐽 and by persistence (Keller’s theorem again), we have 𝑟 2 = 𝑟 1 , a contradiction. 15.13 A similar proof may be used to justify the statement at the end of point 2 in the system solving an event/state problem.

15.7 Exercises

351

15.6 Bibliographical remarks and further reading The difficulties encountered when one tries to realise a distributed hardware implementation of Petri nets are detailed, for instance, by Suhas Patil in [Pat72], and in [Cor+00] (Jordi Cortadella, Michael Kishinevsky, Alex Kondratyev, Luciano Lavagno, and Alexandre Yakovlev). Semaphores were introduced by Edsger W. Dijkstra in [Dij68a], and monitors by C.A.R. Hoare in [Hoa74]. The analysis and synthesis of choice-free Petri nets was initiated in [CM75] (Stefano Crespi-Reghizzi and Dino Mandrioli), followed up in (amongst others) [TCS97] (Enrique Teruel, José Manuel Colom, Manuel Silva Suárez) and was later developed in a series of papers by the authors and their collaborators [BD14b; BD15c; BDE20; BDS18; Bes+20] (Eike Best, Raymond Devillers, Evgeny Erofeev, Uli Schlachter, Harro Wimmel). The relationship between various necessary conditions for the synthesisability of Petri nets from labelled transition systems, especially those that apply to choice-free synthesis (and including several of the properties described in the previous chapters), has been explored in [BD18] (by Eike Best and Raymond Devillers).

15.7 Exercises

𝑎 𝑞1

𝑞5

𝑇𝑆: 𝑑

𝑏

𝑑

𝑏

𝑐

𝑒 𝑎

𝑞0 𝑞3 𝑑

𝑏 𝑞2

𝑞7 𝑞4

𝑑

𝑏 𝑐

𝑎

𝑒 𝑞6

Fig. 15.3: A transition system 𝑇 𝑆 (Exercise 15.1).

Exercise 15.1 (Choice-free) synthesis Examine the labelled transition system 𝑇 𝑆 shown in Figure 15.3. Check the properties “total reachability”, “determinism”, “weak periodicity”, “freeness of two-way chains”, “strong cycle-consistency” from Chapters 12 and 13, and draw conclusions about the (choice-free) synthesis of 𝑇 𝑆. Also, specify two short paths from 𝑞 0 to 𝑞 7 and compute, if possible, the state distance between 𝑞 0 and 𝑞 7 . Furthermore,

352

15 Bounded Choice-free Net Synthesis

compute G of Section 15.4 (the set of Parikh vectors of all small cycles) and check all items (i)–(v) listed in that section, as well as Property P1 from Chapter 14 and disjoint cycle decomposition from Chapter 12.

𝑞0

𝑇𝑆1 : 𝑎 𝑏

𝑠2

𝑎 𝑐 𝑠3

𝑠4

𝑎 𝑇𝑆2 :

𝑏

𝑏

𝑎

𝑎

𝑞0

𝑐

𝑠1 𝑏

𝑠5

Fig. 15.4: Two labelled transition systems for Exercise 15.2.

Exercise 15.2 Some lts properties Consider the two transition systems 𝑇 𝑆1 and 𝑇 𝑆2 of Figure 15.4. Determine which of the following properties are satisfied (or not satisfied) by them: Full forward and backward determinism; strong cycle-consistency; absence of two-way chains; prime cycle property; weak periodicity; persistence; synthesisability into a Petri net.5 Solution 15.1 (Exercise 15.1) 𝑇 𝑆 is • totally reachable; • deterministic (backward and forward); • weakly periodic; • free of two-way chains (forward and general); • strongly cycle-consistent. These conditions are necessary for (choice-free) synthesisability, and they are satisfied. Thus, so far, 𝑇 𝑆 could, or could not, be (choice-freely) Petri net synthesisable. 𝑏𝑎𝑐

𝑑𝑎𝑒

Two short paths from 𝑞 0 to 𝑞 7 are 𝑞 0 −→ 𝑞 7 and 𝑞 0 −→ 𝑞 7 . The set G is {P (𝑏𝑐), P (𝑑𝑒)}. We check P1, cycle decomposition, and the remaining conditions listed as (i)–(v) in Section 15.4. • 𝑇 𝑆 satisfies P1 (which implies, in particular, the prime cycle property). • Small cycle decomposition holds true. • 𝑇 𝑆 is persistent. (But not reversible; which does not matter, however, because reversibility is not a necessary condition for choice-free synthesisability.) • It is not possible to define the state distance between 𝑞 0 and 𝑞 7 by means of Definition 15.7, since the short paths between 𝑞 0 and 𝑞 7 are not Parikh-equivalent (more concretely, P (𝑏𝑎𝑐) ≠ P (𝑑𝑎𝑒)).6 5 Full determinism extends determinism to Parikh-equivalent paths instead of just labels. 6 It is possible to modify the definition of a unique “distance” by considering the cycle-reduced paths between two states. For 𝑇𝑆, cycle-reduction of 𝑏𝑎𝑐 yields 𝑎 (since 𝑏𝑐 is a small cycle)

15.7 Exercises

353

• The properties of path decomposition, reduced distance, earliest Parikh and wiping are not valid, essentially because Proposition 15.10 and all of its consequences fail to hold. Because of the last (failed) check(s), 𝑇 𝑆 is not choice-freely synthesisable. However, it could still be Petri net synthesisable (and indeed, it is). Solution 15.2 (Exercise 15.2) 𝑇 𝑆1 is fully forward deterministic but not fully backward deterministic (since 𝑞 0 [𝑎𝑏𝑐⟩𝑠3 , 𝑠3 [𝑎𝑐𝑏⟩𝑠3 , P (𝑎𝑏𝑐) = P (𝑎𝑐𝑏), and 𝑞 0 ≠ 𝑠3 ). 𝑇 𝑆1 is also not strongly cycle-consistent (for the same reason), but note that there is no non-cyclic path 𝑎𝑐𝑏, 𝑐𝑏𝑎 or 𝑏𝑎𝑐. There are no (two-way) infinite chains in 𝑇 𝑆1 . Also, 𝑇 𝑆1 satisfies the prime cycle property. 𝑇 𝑆1 is persistent, and it has home states. By non-backwarddeterminism, 𝑇 𝑆1 is not Petri net synthesisable. 𝑇 𝑆2 satisfies all properties, including the existence of a home state, except Petri net synthesisability. For the last point, this results from the following observation:7 to exclude 𝑎 after the execution of 𝑎𝑏, we need to have, for some place 𝑠, 𝑀0 (𝑠) − 𝐹 (𝑠, 𝑎) < 𝐸 𝑎 + 𝐸 𝑏 (where 𝐸 𝑎 is the effect of transition 𝑎, and similarly for 𝐸 𝑏 ) while allowing 𝑎 initially and after executing 𝑎𝑏𝑏𝑎 needs that 𝑀0 (𝑠) − 𝐹 (𝑠, 𝑎) ≥ 0 and 𝑀0 (𝑠) − 𝐹 (𝑠, 𝑎) ≥ 2 · 𝐸 𝑎 + 2 · 𝐸 𝑏 ; this means that we must have both 𝐸 𝑎 + 𝐸 𝑏 > 0 and 𝐸 𝑎 + 𝐸 𝑏 < 0, a contradiction.

and cycle-reduction of 𝑑𝑎𝑒 also yields 𝑎 (since 𝑑𝑒 is a small cycle). However, Proposition 15.10 ensures that in the persistent framework, we should then have a state which is reachable from 𝑞0 by a single 𝑎, which is not the case in 𝑇𝑆, and also, that the cycle-reduced paths are exactly the small paths. 7 Which corresponds to an application of Theorem 11.34 to this case.

Chapter 16

Model Checking Safe, Strongly Persistent Petri Nets

Abstract System properties can often be expressed as formulae of a temporal logic. In this chapter, a small logic called 𝑆4 is introduced, which is however strong enough to express properties such as reachability and liveness. A model checker is an algorithm with two input parameters, deciding the truth or falsehood of a given temporal logic formula with respect to a given system. We shall describe a simple model checking algorithm which allows 𝑆4 formulae to be checked on a restricted class of Petri net systems. Eventually, this leads to a polynomial-time algorithm which can check whether a given marked graph satisfies any of the properties expressible in 𝑆4 .

Contents 16.1 Introductory Remarks . . . . . . . . . . . . . . . 16.2 Syntax and semantics of 𝑆4 . . . . . . . . . . . . . 16.3 The model checking problem . . . . . . . . . . . . 16.4 Safe, strongly persistent Petri net systems . . . . . 16.5 Conjunctions of literals . . . . . . . . . . . . . . . 16.6 Elimination of diamonds . . . . . . . . . . . . . . 16.7 The model checker . . . . . . . . . . . . . . . . . 16.8 Computing Last 𝜒 polynomially for marked graphs 16.9 Bibliographical remarks and further reading . . . . 16.10 Exercises . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . .

. . . . . . . . . .

. . . . . . . . . .

. . . . . . . . . .

. . . . . . . . . .

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 E. Best and R. Devillers, Petri Net Primer, Computer Science Foundations and Applied Logic, https://doi.org/10.1007/978-3-031-48278-6_16

. . . . . . . . . .

. . . . . . . . . .

. . . . . . . . . .

. . . . . . . . . .

356 356 359 360 362 364 366 368 370 371

355

356

16 Model Checking Safe, Strongly Persistent Petri Nets

16.1 Introductory Remarks Model checking is a generic name for an algorithm which allows one to check an entire class of properties of a system, rather than a single, specific, property such as boundedness or liveness. In order to express system properties, various types of logics can be considered. Often, these logics are called temporal, because they allow one to reason about the evolutions of systems. Historically, temporal logics are based on modal logics which allow the reasoning about possibilities and necessities. In Section 16.2, we introduce a simple temporal logic called 𝑆4 . In Sections 16.3 to 16.7, an algorithm will be described by which a class of Petri nets can be modelchecked with respect to this logic, leading to a decision procedure for all properties which can be expressed in it. Section 16.8 specialises this algorithm to marked graphs. We shall limit ourselves to safe Petri nets systems. In such systems, arc weights greater than 1 do not make sense, since a transition with such an arc is necessarily dead. Hence we limit our attention to plain, safe systems.

16.2 Syntax and semantics of 𝑺4 In the following, we use the upper-case Greek letter Φ for formulae of the logic. The logic refers to an underlying (safe) Petri net Σ. For this reason, places 𝑠 and transitions 𝑡 are allowed to appear in a formula. For historical reasons (see Section 16.9), the logic is called 𝑆4 . Definition 16.1 Syntax of 𝑆4 Formulae Φ adhere to the following syntax. Φ ::= true |𝑠 |𝑡≤𝑘 | ¬Φ | Φ1 ∧ Φ2 | ♦Φ

( logical constant ) ( where 𝑠 is a place ) ( where 𝑡 is a transition and 𝑘 ∈ N ∪ {−1} ∪ {∞} ) ( logical negation ) ( logical conjunction ) ( intuitively: “Φ holds now, or possibly later” )

Lines 2, 3 and 6 of this syntax will be explained in due course. Lines 1, 4 and 5 have the same significance as in propositional logic. As usual in logic, a literal is a (positive or negated) primitive formula. Here, a literal has one of the four forms 𝑠, or ¬𝑠, or (𝑡 ≤ 𝑘), or ¬(𝑡 ≤ 𝑘). The unary ♦ operator should intuitively be read as “possibly”. It is supposed to mean that there is some (future) point in time at which the formula is true, independently of whether or not it is guaranteed that such a point in time will actually be reached. (This is emphatically different from “eventually”.)

16.2 Syntax and semantics of 𝑆4

357

A formula such as ♦ Φ can be evaluated for a given notion of behaviour of the underlying Petri net. So far, at least two such notions are available: firing sequences (Section 1.3), and unfoldings (Section 9.3). Both notions play an important part in the remainder of this chapter. Remark 16.2 Remarks and abbreviations Here are some derived operators: false Φ1 ∨ Φ2 Φ1 ⇒ Φ2 □Φ

= ¬true = ¬(¬Φ1 ∧ ¬Φ2 ) = ¬Φ1 ∨ Φ2 = ¬♦ ¬Φ ( intuitively: “Φ holds now and later” )

The □ operator should be read as “always”. It means that at all possible (future) points in time, the formula is true. The operators ♦ and □ are often called modalities. Anticipating the semantics, it will also turn out that the following identifications can be made: (𝑡 ≤ −1) is equivalent to false (𝑡 ≤ ∞) is equivalent to true

Definition 16.3 Semantics of 𝑆4 Let Σ = (𝑆, 𝑇, 𝐹, 𝑀0 ) be a safe (and hence also, without loss of generality, plain), Petri net system. Let 𝜏 be a finite firing sequence from 𝑀0 , i.e., 𝜏 𝑀0 −→ . Let Φ be a temporal logic formula according to the syntax in Definition 16.1. We define a relation |= (read as satisfies, or models). 𝜏 𝜏 𝜏 𝜏 𝜏

|= true |= 𝑠 |= (𝑡 ≤ 𝑘) |= ¬Φ |= Φ1 ∧ Φ2

𝜏 |= ♦ Φ

iff iff iff iff iff

true ( i.e., this is always true ) 𝜏 𝑀0 −→ 𝑀 ∧ 𝑀 (𝑠) = 1 ( note that 𝑀 is unique ) P (𝜏)(𝑡) ≤ 𝑘 ( where P (𝜏) is the Parikh vector of 𝜏 ) ¬(𝜏 |= Φ) (𝜏 |= Φ1 ) ∧ (𝜏 |= Φ2 ) 𝜏′

iff ∃𝜏 ′ : (𝑀0 −→ ) ∧ (𝜏 is a prefix of 𝜏 ′ ) ∧ 𝜏 ′ |= Φ

Σ |= Φ iff 𝜀 |= Φ (where 𝜀 is the empty sequence). If Σ |= Φ is true, then we will say that the system Σ satisfies the property defined by the formula Φ. If Φ does not involve any modalities, then its truth on 𝜏 can be determined simply by checking the marking reached after 𝜏 and the Parikh vector of 𝜏. However, if Φ involves modalities, checking it does not appear to be straightforward, since the future behaviour after 𝜏 (and hence, the entire behaviour of Σ if 𝜏 = 𝜀) has to be explored exhaustively, and possibly several times over. In the sequel, we

358

16 Model Checking Safe, Strongly Persistent Petri Nets

shall restrict ourselves to a class of persistent systems Σ, for which a (relatively) straightforward, efficient, algorithm checking Σ |= Φ exists. Example 16.4 The system Σ in Figure 16.1 By safeness, place names can be interpreted as Boolean variables. We use the phrase “𝜏 can be extended to 𝜏 ′ ”, or “𝜏 ′ is an extension of 𝜏”, if 𝜏 is a prefix of 𝜏 ′ . In Figure 16.1, we have: Σ Σ Σ Σ Σ

Σ:

|= ̸ |= |= |= ̸ |=

𝑠1 𝑠2 ♦ (¬𝑠1 ∧ 𝑠2 ∧ 𝑠4 ) ( but 𝑡 1 𝑡 2 ̸ |= ♦ (¬𝑠1 ∧ 𝑠2 ∧ 𝑠4 ) ) □ ♦ 𝑠3 ( since every 𝜏 can be extended to 𝜏 ′ with 𝜏 ′ |= 𝑠3 ) ♦ □ 𝑠2 ( since there is always an extension 𝜏 ′ with 𝜏 ′ ̸ |= 𝑠2 )

𝑠1

𝑡1 𝑠1 𝑠4

𝑠2

𝑡3

𝑡2

𝑡4 𝑠5

𝑠2

𝑡2 𝑒2

𝑠4

𝑒3 𝑡3

𝑠3

𝑡1 𝑒1

𝑠5

𝑒4 𝑡4

𝑠5

𝑒5 . . . 𝑡4

𝑠3

Fig. 16.1: A safe Petri net system Σ (left-hand side) and its maximal unfolding (right-hand side). Example 16.4 shows that diverse properties can be expressed in the same logic, even though they were treated separately in the previous parts of this book. • Reachability (exemplified by the third line). • Liveness (indicated by the fourth line). • Finite delay (alluded to by the fifth line). Full expressiveness is not achieved by 𝑆4 . Indeed, 𝑆4 is not strong enough to formulate liveness properties in the sense of Section 8.1. For instance, the property “Will some given marking eventually (rather than just possibly), be reached?” cannot be expressed in 𝑆4 . The semantics allows a number of logical relationships to be proved. Lemma 16.5 First lemma ♦ (Φ1 ∨ Φ2 ) is equivalent to (♦ Φ1 ) ∨ (♦ Φ2 ).

16.3 The model checking problem

359

Proof: (⇒): Let 𝜏 |= ♦ (Φ1 ∨ Φ2 ). Then 𝜏 can be extended to 𝜏 ′ such that 𝜏 ′ |= (Φ1 ∨ Φ2 ). By Remark 16.2 and Definition 16.3, 𝜏 ′ |= Φ1 or 𝜏 ′ |= Φ2 . If 𝜏 ′ |= Φ1 , the same extension (𝜏 ′ ) proves that 𝜏 |= ♦ Φ1 . If 𝜏 ′ |= Φ2 , then, symmetrically, 𝜏 |= ♦ Φ2 . (⇐): If 𝜏 |= ♦ Φ1 , then also 𝜏 |= ♦ (Φ1 ∨ Φ2 ), since (Φ1 ∨ Φ2 ) simply weakens Φ1 . The same is analogously true if 𝜏 |= ♦ Φ2 . 16.5 Lemma 16.6 Second lemma □ (Φ1 ⇒ Φ2 ) entails (□ Φ1 ⇒ □ Φ2 ). Proof: Suppose (i) that every extension of 𝜏 satisfies (Φ1 ⇒ Φ2 ). Further, suppose that (ii) every extension satisfies Φ1 . Consider an arbitrary extension 𝜏 ′ of 𝜏. By (ii), 𝜏 ′ satisfies Φ1 . By (i), 𝜏 ′ also satisfies (Φ1 ⇒ Φ2 ). Hence 𝜏 ′ satisfies Φ2 . Taken together, 𝜏 satisfies (□ Φ1 ⇒ □ Φ2 ). 16.6 Lemma 16.7 Third lemma □ Φ entails □ □ Φ. Symmetrically, ♦ ♦ Φ entails ♦ Φ. Proof: Suppose that every extension of 𝜏 satisfies Φ. Consider an extension 𝜏 ′ of 𝜏 and an extension 𝜏 ′′ of 𝜏 ′ . Then 𝜏 ′′ is also an extension of 𝜏, hence 𝜏 ′′ satisfies Φ. The symmetric version has a similar proof. 16.7

16.3 The model checking problem We address the following decision problem relating to safe nets and to 𝑆4 : Decision Problem 16.8 Model checking 𝑆4 on safe Petri nets Input: A safe Petri net system Σ = (𝑆, 𝑇, 𝐹, 𝑀0 ) and a formula Φ of the logic 𝑆4 defined in Section 16.2. To decide: Does Σ |= Φ? Even for very simple systems Σ, Problem 16.8 is computationally difficult. Remark 16.9 Model checking 𝑆4 is NP-hard for bit-nets Consider a class of systems Σ1 , Σ2 , Σ3 , . . .where Σ𝑖 :

𝑠𝑖

𝑠1 ... 𝑡1

𝑡𝑖

Claim: The problem “Given a number 𝑖 ∈ N and a formula Φ, does Σ𝑖 |= Φ?” is NP-hard. The claim is proved by reduction from SAT (the satisfiability

360

16 Model Checking Safe, Strongly Persistent Petri Nets

problem of propositional logic). Let 𝜙 denote a formula of propositional logic with variables 𝑥1 , . . . , 𝑥 𝑘 . Define Φ by replacing all 𝑥 𝑗 by 𝑠 𝑗 and transforming the formula into a form having only ¬ and ∨ as operators. Then ♦ Φ is a formula of 𝑆4 which satisfies Σ 𝑘 |= Φ if and only if 𝜙 is satisfiable. However, this is not that bad, because for the class Σ1 , Σ2 , . . ., the computational difficulty depends exclusively on the complexity of a formula. As systems Σ tend to be much larger than interesting formulae Φ, it is desirable to develop algorithms which are “fast in the size of Σ” but, possibly, “slow in the size of Φ”. Such an algorithm will be given, in the next sections, for a system class that is larger than the class of bit-nets, and also larger than the class of T-systems, but also slightly smaller than the class of (safe) persistent systems.

16.4 Safe, strongly persistent Petri net systems The class of Petri nets defined in this section have maximal unfoldings whose underlying occurrence nets are free of conflicts. The class of conflict-free occurrence nets has a dedicated name: Definition 16.10 Causal nets A (possibly infinite) occurrence net 𝑂 = (𝐵, 𝐸, 𝐺) is called a causal net if it is additionally conflict-free, that is, ∀𝑏 ∈ 𝐵 : |𝑏 • | ≤ 1 (and hence, # = ∅).

Definition 16.11 Strong persistence A safe Petri net system Σ = (𝑆, 𝑇, 𝐹, 𝑀0 ) is called strongly persistent if 𝑡1 𝑡2 ∀𝑡1 , 𝑡2 ∈ 𝑇 ∀𝑀 ∈ [𝑀0⟩ : ( 𝑡1 ≠ 𝑡2 ∧ 𝑀 −→ ∧ 𝑀 −→ ) ⇒ • 𝑡 1 ∩ • 𝑡 2 = ∅. Strong persistence directly implies persistence (using safeness), but this is not true the other way round. Moreover, every safe T-system is strongly persistent, but not conversely. The differences are shown in Figure 16.2. Proposition 16.12 Unfoldings of safe, strongly persistent nets Let 𝑂 = (𝐵, 𝐸, 𝐺) be the maximal unfolding of a safe, strongly persistent Petri net system Σ = (𝑆, 𝑇, 𝐹, 𝑀0 ). Then 𝑂 is a (unique) causal net. Proof: Uniqueness (up to isomorphism) already follows from Theorem 9.11. We only need to prove that 𝑂 is a causal net. Assume that 𝑒 1 and 𝑒 2 are two events in the unfolding such that 𝑒 1 ≠ 𝑒 2 and • 𝑒 1 ∩ • 𝑒 2 ≠ ∅. Without loss of generality, consider an earliest such pair. We aim to derive a contradiction. Case 1: 𝑒 1 , 𝑒 2 have different labels. This contradicts strong persistence.

16.4 Safe, strongly persistent Petri net systems Σ1 :

361

𝑠4 𝑠3

𝑠3

𝑠2

𝑠3

𝑐

𝑠2

𝑐

𝑠3

𝑠2

𝑐 ...

𝑎

𝑐

𝑏 𝑎 𝑠4

𝑠2

𝑠1

...

𝑎

𝑏 𝑠4

𝑠1

𝑠1 𝑎 𝑠

𝑎

...

𝑏

𝑠 Σ2 :

𝑠

𝑎

𝑏

𝑎 𝑠 ... 𝑏 𝑏 Σ3 :

𝑠1

𝑠2

𝑎

𝑏

𝑠3

𝑠5

𝑏

𝑠3

𝑐

𝑠4

𝑎

𝑠3

𝑐

𝑠4

𝑠2 𝑠5

𝑐 𝑠1 𝑠4

Fig. 16.2: Three Petri net systems (left-hand side) and their maximal unfoldings (right-hand side). Σ1 is safe and strongly persistent; its maximal unfolding is a causal net. Σ2 is safe and persistent, but not strongly persistent. Σ3 is persistent but not safe. Neither the maximal unfolding of Σ2 nor the maximal unfolding of Σ3 is a causal net. Case 2: 𝑒 1 , 𝑒 2 have the same label. • 𝑒 = • 𝑒 contradicts the unfolding property (no duplication of transitions, last item 1 2 in Definition 9.9). • 𝑒 ⊆ • 𝑒 and • 𝑒 ⊆ • 𝑒 are both impossible because 𝑒 and 𝑒 have the same label. 1 2 2 1 1 2

362

16 Model Checking Safe, Strongly Persistent Petri Nets

Hence there are conditions 𝑏 1 , 𝑏 2 such that 𝑏 1 ∈ • 𝑒 1 \ • 𝑒 2 , 𝑏 2 ∈ • 𝑒 2 \ • 𝑒 1 , and 𝑏 1 , 𝑏 2 are labelled by the same place 𝑠. What is the temporal relationship between 𝑏 1 and 𝑏 2 ? 𝑏 1 #𝑏 2 is impossible since 𝑒 1 , 𝑒 2 were chosen as an earliest pair. 𝑏 1 ≺ 𝑏 2 is impossible for the same reason. Hence 𝑏 1 and 𝑏 2 are concurrent, implying that place 𝑠 can carry two tokens in a reachable marking. This contradicts safeness. No cases remain. Hence the assumption was wrong. No two different events 𝑒 1 , 𝑒 2 with • 𝑒 1 ∩ • 𝑒 2 ≠ ∅ exist, and 𝑂 is a causal net, as was claimed. 16.12 To trace the proof, consider Σ3 in Figure 16.2. The events 𝑒 1 and 𝑒 2 mentioned in Case 2 are the two events labelled by 𝑐; 𝑏 1 and 𝑏 2 (also mentioned in Case 2) are the two conditions labelled by 𝑠3 ; and 𝑠 = 𝑠3 is the unsafe place. Thus, it turns out that the maximal process of a safe, strongly persistent Petri net system Σ is actually a (finite or infinite) acyclic T-system. This is the case even if Σ is not itself a T-system; Σ1 in Figure 16.2 shows an example. In the same figure, Σ2 shows that strong persistence cannot be replaced by persistence, and Σ3 shows that safeness cannot be replaced by (2-)boundedness.

16.5 Conjunctions of literals In propositional logic, the satisfiability of a formula 𝜙 can be checked by bringing 𝜙 into disjunctive normal form, i.e., into a disjunction of conjunctions of literals. In this form, every conjunction can be checked individually as to whether there is a satisfying assignment, and the whole formula is satisfiable iff at least one of these terms is satisfiable.1 We may envisage a similar procedure for 𝑆4 formulae if it is possible to transform a formula ♦ 𝜒, where 𝜒 is a conjunction of literals, into a conjunction of literals not containing the modality ♦ . In the following, let Σ be a safe, strongly persistent Petri net system. For T-vectors P and Q, let max(P, Q) be defined by max(P, Q)(𝑡) = max(P (𝑡), Q (𝑡)), for all 𝑡 ∈ 𝑇

Proposition 16.13 Extending the |= relation Let 𝜒 be a conjunction of literals. Let 𝜎1 , 𝜎2 be firing sequences of Σ such that both 𝜎1 |= 𝜒 and 𝜎2 |= 𝜒. Then every firing sequence 𝜎 with P (𝜎) = max(P (𝜎1 ), P (𝜎2 )) satisfies 𝜎 |= 𝜒. Proof: We shall argue on the maximal unfolding 𝑂 = (𝐵, 𝐸, 𝐺, 𝑝) where (𝐵, 𝐸, 𝐺) is a causal net (by Proposition 16.12) and 𝑝 is the inscription function.2 The proof 1 However, in the worst case, all known methods of bringing a formula into disjunctive normal form are exponential in the formula’s size. 2 See Definition 9.9.

16.5 Conjunctions of literals

363 𝜎

can be traced on Figure 16.3 and Example 16.14. For a firing sequence 𝑀0 −→ , let Cut(𝜎) be the cut reached after firing 𝜎.3 The set of cuts of 𝑂 can be partially ordered by 𝑐 ⊑ 𝑐′

⇐⇒

∀𝑏 ∈ 𝑐 ∃𝑏 ′ ∈ 𝑐 ′ : 𝑏 ⪯ 𝑏 ′ (where ⪯ = 𝐺 ∗ )

This is a lattice, which is complete if 𝑂 is finite (i.e., the greatest lower bound ⊓ is always defined, but the smallest upper bound ⊔ is only defined for finite subsets). Claim: let 𝜎1 , 𝜎2 be firing sequences of Σ (from the initial marking). There are prolongations 𝜌1 , 𝜌2 such that • 𝜎1 𝜌1 and 𝜎2 𝜌2 are firable from the initial marking, and • P (𝜎1 𝜌1 ) = P (𝜎1 𝜌1 ) = max(P (𝜎1 ), P (𝜎2 )). Proof of the claim: assume 𝜏 is such that Cut(𝜏) = Cut(𝜎1 ) ⊔ Cut(𝜎2 ). Then 𝜌1 , 𝜌2 can be chosen as 𝜌1 = 𝜏−• 𝜎1 and 𝜌2 = 𝜏−• 𝜎2 . Since two Parikh-equivalent sequences behave identically with regard to the |= relation, it suffices to prove the claim of the proposition for a particular sequence 𝜎 satisfying the condition on the Parikh vector. Let 𝑐 1 = Cut(𝜎1 ) and 𝑐 1 = Cut(𝜎1 ). Choose 𝜎 as one of the linearisations satisfying Cut(𝜎) = 𝑐 1 ⊔ 𝑐 2 . We prove the claim separately for the possible literals in 𝜒, first for the positive and then for the negative ones: (i) 𝜎1 |= (𝑡 ≤ 𝑘) ∧ 𝜎2 |= (𝑡 ≤ 𝑘) ⇒ 𝜎 |= (𝑡 ≤ 𝑘). Let P (𝜎1 )(𝑡) ≤ 𝑘 and P (𝜎2 )(𝑡) ≤ 𝑘. Then also P (𝜎)(𝑡) = max(P (𝜎1 )(𝑡), P (𝜎2 )(𝑡)) ≤ 𝑘. (ii) 𝜎1 |= 𝑠 ∧ 𝜎2 |= 𝑠 ⇒ 𝜎 |= 𝑠. Let 𝜎1 |= 𝑠 and 𝜎2 |= 𝑠. Then there are conditions 𝑏 1 ∈ 𝑐 1 and 𝑏 2 ∈ 𝑐 2 such that 𝑝(𝑏 1 ) = 𝑝(𝑏 2 ) = 𝑠, where 𝑝 is the inscription function of the unfolding. If 𝑏 1 = 𝑏 2 = 𝑏, then 𝑏 ∈ 𝑐 1 ∩ 𝑐 2 and hence also 𝑏 ∈ 𝑐 1 ⊔ 𝑐 2 . If 𝑏 1 ≠ 𝑏 2 , then by safeness, 𝑏 1 and 𝑏 2 are not concurrent. Therefore, either 𝑏 1 ≺ 𝑏 2 (in which case, 𝑏 2 ∈ 𝑐 1 ⊔ 𝑐 2 ) or 𝑏 2 ≺ 𝑏 1 (in which case, 𝑏 1 ∈ 𝑐 1 ⊔ 𝑐 2 ). (iii) 𝜎1 |= (𝑡 > 𝑘) ∧ 𝜎2 |= (𝑡 > 𝑘) ⇒ 𝜎 |= (𝑡 > 𝑘). Let P (𝜎1 )(𝑡) > 𝑘 and P (𝜎2 )(𝑡) > 𝑘. Then also P (𝜎)(𝑡) = max(P (𝜎1 )(𝑡), P (𝜎2 )(𝑡)) > 𝑘. (iv) 𝜎1 |= ¬𝑠 ∧ 𝜎2 |= ¬𝑠 ⇒ 𝜎 |= 𝑠. Let 𝑐 1 , 𝑐 2 be the cuts reached by 𝜎1 , 𝜎2 , respectively. By 𝜎1 |= ¬𝑠 and 𝜎2 |= ¬𝑠, 𝑠 ∉ 𝑝(𝑐 1 ) and 𝑠 ∉ 𝑝(𝑐 2 ). Because 𝑐 1 ⊔ 𝑐 2 ⊆ 𝑐 1 ∪ 𝑐 2 , also 𝑠 ∉ 𝑝(𝑐 1 ⊔ 𝑐 2 ), and hence 𝜎 |= ¬𝑠. 16.13

3 This notation is consistent with the notation introduced in Section 9.4, since firing sequences are simply linearisations of configurations.

364 Σ:

16 Model Checking Safe, Strongly Persistent Petri Nets 𝑠1 𝑠2

𝑐1

𝑡1

𝑏2

𝑡2

𝑠3 𝑏3

𝑠4

𝑏1

𝑠2

𝑡1 𝑒1

𝑐6 𝑐5

𝑠1 𝑡3

𝑐7

𝑐2 𝑐4

𝑐3 𝑏5

𝑡2 𝑠4

𝑡4 𝑠5

𝑏4

𝑡3

𝑠5

𝑏6

𝑡4

𝑠5

... 𝑡4

𝑠3

Fig. 16.3: The safe Petri net system Σ and its maximal unfolding from Figure 16.1. The (initial) conditions 𝑏 𝑖 , events 𝑒 𝑖 , and cuts 𝑐 𝑖 , of the unfolding are drawn in blue.

Example 16.14 Figure 16.3 The following list gives all the cuts that are visible in Figure 16.3. There are, of course, many more if the lower line (continually executing 𝑡4 ) is expanded. 𝑐1 𝑐2 𝑐3 𝑐4 𝑐5 𝑐6 𝑐7

= = = = = = =

{𝑏 1 } {𝑏 2 , 𝑏 4 } {𝑏 2 , 𝑏 5 } {𝑏 2 , 𝑏 6 } {𝑏 3 , 𝑏 4 } {𝑏 3 , 𝑏 5 } {𝑏 3 , 𝑏 6 }

= = = = = = =

Cut(𝜀) Cut(𝑡 1 ) Cut(𝑡 1 𝑡 3 ) Cut(𝑡 1 𝑡 3 𝑡4 ) Cut(𝑡 1 𝑡 2 ) Cut(𝑡 1 𝑡 2 𝑡3 ) = Cut(𝑡 1 𝑡3 𝑡2 ) Cut(𝑡 1 𝑡 2 𝑡3 𝑡4 ) = Cut(𝑡1 𝑡3 𝑡2 𝑡4 ) = Cut(𝑡1 𝑡3 𝑡 4 𝑡 2 )

We have, e.g., 𝑐 2 ⊑ 𝑐 5 , but neither 𝑐 3 ⊑ 𝑐 5 nor 𝑐 5 ⊑ 𝑐 3 . Also, 𝑐 3 ⊔ 𝑐 5 = 𝑐 6 . With 𝜎1 = 𝑡1 𝑡 3 and 𝜎2 = 𝑡 1 𝑡2 , we have 𝑐 3 = Cut(𝜎1 ) and 𝑐 5 = Cut(𝜎5 ). With 𝜌1 = 𝑡2 and 𝜌2 = 𝑡 3 , we have 𝑐 6 = Cut(𝜎1 𝜌1 ) = Cut(𝜎2 𝜌2 ).

Remark 16.15 Disjunctions of literals In Proposition 16.13, the conjunction may not be replaced by a disjunction. In Figure 16.3, for instance, 𝑡1 𝑡2 |= (𝑠2 ∨ 𝑠4 ) and 𝑡1 𝑡 3 |= (𝑠2 ∨ 𝑠4 ), but 𝑡 1 𝑡 2 𝑡 3 ̸ |= (𝑠2 ∨ 𝑠4 ).

16.6 Elimination of diamonds When it comes to model checking, diamonds are not one’s best friends. Using Proposition 16.13, it will be shown how, for the system model we are considering, “inner” diamonds which are followed by a conjunction of literals can be replaced

16.6 Elimination of diamonds

365

by a fresh conjunction of literals. Since the diamond ♦ denotes a possibility, we are interested in maximal executions before such a possibility vanishes. This is what the generalised T-vector Last 𝜒 , defined as follows, can achieve, relative to a conjunction 𝜒 of literals. Definition 16.16 The generalised T-vector Last 𝜒 Let 𝜒 be a conjunction of literals. To 𝜒, we associate a vector Last 𝜒 : 𝑇 → N ∪ {−1} ∪ {∞} by setting, for 𝑡 ∈ 𝑇, ( 𝜎 −1 , if no firing sequence 𝑀0 −→ satisfies 𝜎 |= 𝜒 Last 𝜒 (𝑡) = 𝜎 sup{ P (𝜎)(𝑡) | 𝑀0 −→ ∧ 𝜎 |= 𝜒 } , otherwise Intutively, Last 𝜒 (𝑡) indicates how often 𝑡 can fire (arbitrarily often if Last 𝜒 (𝑡) = ∞) in some firing sequence without losing the property that some larger sequence satisfies 𝜒.

Example 16.17 See Figure 16.3 Here are various Last vectors for Figure 16.3. Last𝑡2 ≤1∧¬𝑠3 Last𝑡2 ≤1∧¬𝑠5 Last¬𝑠2 Last¬(𝑡2 ≤1) Last 𝑠2 ∧𝑠3

= = = = =

(1, 0, 1, ∞) (1, 1, 0, 0) (1, 1, 1, ∞) (−1, −1, −1, −1) (−1, −1, −1, −1)

Actually, if Last has a −1 entry, then all of its entries are −1.

Lemma 16.18 Key property of Last Let 𝜎 be a firing sequence of Σ (from its initial marking) and let 𝜒 be a conjunction of literals. Then the following are equivalent: (i) There is a firing sequence 𝜏 which extends 𝜎 and for which 𝜏 |= 𝜒. (ii) P (𝜎) ≤ Last 𝜒 . Proof: (i)⇒(ii): Immediate by the definition of Last. (ii)⇒(i): Suppose (ii) is true, i.e., P (𝜎) ≤ Last 𝜒 .

366

16 Model Checking Safe, Strongly Persistent Petri Nets

P (𝜎) ≤ Last 𝜒 ⇒ [since Last 𝜒 dominates all firing sequences satisfying 𝜒] ∃ firing sequence 𝜏 ′ : 𝜏 ′ |= 𝜒 ∧ P (𝜎) ≤ P (𝜏 ′ ) ≤ Last 𝜒 ⇒ max(P (𝜎), P (𝜏 ′ )) = P (𝜏 ′ ) ⇒ ∃𝜌 : 𝜎𝜌 is a firing sequence with P (𝜎𝜌) = P (𝜏 ′ ) and 𝜎𝜌 |= 𝜒 [by Proposition 16.13 and its proof] Taking 𝜏 = 𝜎𝜌 proves (i). 16.18 In fact, given that we are dealing with persistent systems, it should come as no surprise that the last step in this proof (as also the proof of Proposition 16.13) is very reminiscent of Keller’s theorem. Theorem 16.19 Eliminating ♦ Let Σ be a safe, strongly persistent Petri net system. Let 𝜎 be a firing sequence from 𝑀0 and let 𝜒 be a conjunction of literals. Then 𝜎 |= ♦ 𝜒

⇐⇒

∀𝑡 ∈ 𝑇 : (𝑡 ≤ Last 𝜒 (𝑡))

Proof: By the definition of |=, 𝜎 |= ♦ 𝜒 iff there is an extension 𝜏 of 𝜎 such that 𝜏 |= 𝜒. By Lemma 16.18, this is the case iff P (𝜎) ≤ Last 𝜒 . Again by the definition of |=, P (𝜎) ≤ Last 𝜒 iff ∀𝑡 ∈ 𝑇 : (𝑡 ≤ Last 𝜒 (𝑡)). 16.19

16.7 The model checker Keeping in mind the previous section, we will define a normal form for 𝑆4 formulae. The idea is that the innermost subformulae are conjunctions of literals, preceded by a ♦ modality. Such innermost formulae can be subjected to the transformation indicated by Theorem 16.19. Definition 16.20 Normal form of 𝑆4 formulae An 𝑆4 formula is in normal form if (i) it contains no derived connectives (in particular, no □ ) and (ii) every innermost subformula is of the form ♦ 𝜒, where 𝜒 is a conjunction of literals.

Proposition 16.21 Normal forms exist Every 𝑆4 formula can be transformed equivalently into an 𝑆4 formula in normal form. Proof: This can be done by applying the definition of □ , and using Lemma 16.5, in conjunction with transformations of propositional logic. 16.21

16.7 The model checker

367

Example 16.22 A sample transformation into normal form Let Φ = □ ♦ (𝑠5 ∧ (𝑠2 ∨ 𝑠3 )). Φ can be transformed equivalently as follows. Φ = □ ♦ (𝑠5 ∧ (𝑠2 ∨ 𝑠3 )) ≡ ¬♦ ¬♦ ((𝑠5 ∧ 𝑠2 ) ∨ (𝑠5 ∧ 𝑠3 )) [□ = ¬♦ ¬ , and distributivity] ≡ ¬♦ ¬(♦ (𝑠5 ∧ 𝑠2 ) ∨ ♦ (𝑠5 ∧ 𝑠3 )) [Lemma 16.5] The last line already displays a normal form of Φ. The model checking algorithm performs normalisation followed by eliminating inner ♦ modalities until none remain: Algorithm 16.23 Model checking safe, strongly persistent nets Inputs: A safe, strongly persistent, Petri net system Σ = (𝑆, 𝑇, 𝐹, 𝑀0 ) and an 𝑆4 formula Φ Output: Σ |= Φ or Σ ̸ |= Φ while Φ has modalities (♦ or □ ) do Φ := Φ in normal form; replace inner subformulae of the form ♦ 𝜒 by ∀𝑡 ∈ 𝑇 : 𝑡 ≤ Last 𝜒 (𝑡) endwhile; { comment: Φ has no modalities any more} check whether or not 𝜀 |= Φ and output the answer.

Example 16.24 A worked example (Σ in Figure 16.3) We wish to check whether Φ = □ ♦ (𝑠5 ∧ (𝑠2 ∨ 𝑠3 )) is true in Figure 16.3. □ ♦ (𝑠5 ∧ (𝑠2 ∨ 𝑠3 ))

[this is not yet the normal form]

≡ ¬♦ ¬(♦ (𝑠5 ∧ 𝑠2 ) ∨ ♦ (𝑠5 ∧ 𝑠3 )) [by the definition of □ and by Lemma 16.5; now we have normal form] ≡ ¬♦ ¬((𝑡 1 ≤1 ∧ 𝑡2 ≤0 ∧ 𝑡3 ≤1 ∧ 𝑡 4 ≤∞) ∨ (𝑡1 ≤1 ∧ 𝑡 2 ≤1 ∧ 𝑡 3 ≤1 ∧ 𝑡4 ≤∞)) [by Theorem 16.19, since Last 𝑠5 ∧𝑠2 = (1 0 1 ∞) and Last 𝑠5 ∧𝑠3 = (1 1 1 ∞)] ≡ ¬♦ ¬((𝑡 1 ≤1 ∧ 𝑡3 ≤1) ∧ (𝑡2 ≤0 ∨ 𝑡 2 ≤1)) [by (𝑡 4 ≤∞) = true and because of distributivity] ≡ ¬♦ ¬(𝑡 1 ≤1 ∧ 𝑡2 ≤1 ∧ 𝑡3 ≤1)) [since (𝑡2 ≤0) implies (𝑡 2 ≤1)] ≡ ¬♦ (𝑡 1 >1 ∨ 𝑡 2 >1 ∨ 𝑡3 >1) [by de Morgan’s law]

368

16 Model Checking Safe, Strongly Persistent Petri Nets

≡ ¬(♦ (𝑡1 >1) ∨ ♦ (𝑡2 >1) ∨ ♦ (𝑡 3 >1)) [by Lemma 16.5] ≡ ¬(𝑡 1 ≤−1 ∧ 𝑡 2 ≤−1 ∧ 𝑡 3 ≤−1 ∧ 𝑡 4 ≤−1) [3 times Theorem 16.19, since Last𝑡1 >1 =Last𝑡2 >1 =Last𝑡3 >1 =(−1 −1 −1 −1)] ≡ true

[by (𝑡𝑖 ≤−1) ≡ false]

Evaluating true on 𝜀 remains true, and so, the property holds for Σ. Algorithm 16.23 has two severe bottlenecks. Firstly, the normalisations between getting rid of innermost modalities and moving enclosing modalities into an innermost position can be very costly. Secondly, it is not clear how the vector Last 𝜒 can be computed for a given conjunction 𝜒. The first problem can hardly be avoided, since model checking is NP-difficult in the size of a formula. The second problem will be addressed in the next section for the class of safe marked graphs.

16.8 Computing Last 𝝌 polynomially for marked graphs Every safe marked graph is also strongly persistent. Hence the preceding theory can be applied to safe marked graphs. In this section, let Σ = (𝑆, 𝑇, 𝐹, 𝑀0 ) be a safe and weakly connected marked graph. Let 𝐶 be the incidence matrix of Σ, and let 𝜒 be a conjunction of 𝑆4 literals. Proposition 16.25 Computing 𝐿𝑎𝑠𝑡 𝜒 The generalised T-vector 𝐿𝑎𝑠𝑡 𝜒 can be computed in polynomial time in the size of Σ. Proof: (Sketch) Let 𝑋 : 𝑇 → Q be a solution of the following linear programming problem: ∑︁ Maximise 𝑋 (𝑡), subject to the conditions in (16.2) (16.1) 𝑡 ∈𝑇

where (16.2) is the set of inequalities specified below. The idea is that 𝑋 approximates the Parikh vector of an infinite firing sequence satisfying 𝜒. Suppose that 𝜒 = 𝜒1 ∧ 𝜒2 ∧ . . . 𝜒𝑛 , where the 𝜒𝑖 are literals. Also, let 𝑇𝑑 be the set of dead transitions of Σ. This set can be determined in polynomial time.

16.8 Computing Last 𝜒 polynomially for marked graphs

𝑋≥0 [𝑋 is nonnegative] 𝑀0 + 𝐶·𝑋 ≥ 0 [after 𝑋, a marking is reached] ∀𝑡 ∈ 𝑇𝑑 : 𝑋 (𝑡) = 0 [dead transitions cannot fire] (𝑀0 + 𝐶·𝑋)(𝑠) = 1 [if 𝜒𝑖 = 𝑠] 𝑋 (𝑡) ≤ 𝑘 [if 𝜒𝑖 = (𝑡 ≤ 𝑘)] (𝑀0 + 𝐶·𝑋)(𝑠) = 0 [if 𝜒𝑖 = ¬𝑠] 𝑋 (𝑡) > 𝑘 [if 𝜒𝑖 = (𝑡 > 𝑘)]

369

(16.2)

One of the last four lines is included individually for every 𝜒𝑖 (1 ≤ 𝑖 ≤ 𝑛). The linear programming problem (16.1) has the following properties: • Every optimal solution is integral, with entries in Z. • If there is no solution, then Last 𝜒 = (−1, . . . , −1). • If there are solutions, but no optimal solution, then Last 𝜒 = (∞, . . . , ∞). • If there is an optimal solution 𝑋, then ∀𝑡 ∈ 𝑇 : 𝑋 (𝑡) ∈ N and Last 𝜒 = 𝑋. 16.25 Corollary 16.26 Polynomiality of model checking For safe, connected marked graphs Σ, the model checking algorithm is polynomial in the size of Σ. Proof: Linear programming is polynomial.4 16.26 If individual properties are considered separately, rather than a whole bunch of properties at the same time (as in a temporal logic), we also get corollaries such as the following one. Corollary 16.27 Polynomiality of individual properties For safe, connected marked graphs Σ, the following problems (amongst others) are solvable in polynomial time: • Is Σ live? • For two transitions 𝑡 1 , 𝑡2 , can 𝑡 1 and 𝑡2 be executed concurrently? • For a marking 𝑀, is 𝑀 reachable from the initial marking? Proof: These properties can be expressed in 𝑆4 . 16.27 Of course, we already knew that liveness is polynomial for marked graphs (see Theorem 5.31 and Section 5.10), but it does not necessarily harm to have it confirmed from a more general principle.

4 In practice, the simplex algorithm will likely be used, even though it is exponential in the worst case.

370

16 Model Checking Safe, Strongly Persistent Petri Nets

16.9 Bibliographical remarks and further reading Lemma 16.5 holds in every modal logic. Lemma 16.6 is typical for the temporal interpretation of modal logic (Saul A. Kripke). Lemma 16.7 also holds in temporal interpretations of modal logic; it originates from a modal logic which is known as 𝑆4 (by Clarence I. Lewis), and which is akin to the logic described in this chapter5 (explaining the terminology used in this chapter). This chapter closely follows [BE91], by Eike Best and Javier Esparza. The missing part of the proof of Proposition 16.25 can be found there. The linear-algebraic method described in Section 16.8, as well as the unfolding approach to model checking in general, has been generalised for larger Petri net classes; see, for instance, [Esp94] (by Javier Esparza) and [Gra97] (by Burkhard Graves). It has also been extended to more powerful logics; see, e.g., [EH00] (by Javier Esparza and Keijo Heljanko). Model checking is a very active research area in computer aided verification and design, though most implemented tools are based on transition systems, rather than on unfoldings. Normally, these days, much stronger logics than 𝑆4 are being considered, such as CTL6 (computation tree logic) and LTL7 (linear temporal logic), or generalisations thereof, such as CTL∗ .8 There also exist fast model checking tools, such as https://lip6.github.io/ITSTools-web/ctl.html for CTL and https://nusmv.fbk.eu for CTL and for LTL, supporting also infinite state spaces. LTL model-checking tools based on unfoldings have been reported in [EH01] (by Javier Esparza and Keijo Heljanko) and in [SK04] (by Claus Schröter and Victor Khomenko). 𝑆4 as defined in the present chapter is most easily seen as a subset of CTL, in the sense that both are interpreted over the states of a transition system. Both can express safety properties (in the sense of Section 8.1), but lack the ability to express liveness properties, such as “progress” or “fairness” (see Chapter 8). CTL can express all properties which can be expressed in 𝑆4 , but also others, such as “𝑠1 is marked until 𝑠2 is marked”. By contrast, LTL is interpreted over paths and can express progress and fairness properties, such as “place 𝑠 will eventually become marked”. The causal net semantics used in this chapter (Definition 16.10) has been derived from unfoldings, defined in Chapter 9. However, there is also a direct definition; see [BD87] (Eike Best and Raymond Devillers) where the relationship between causal semantics and firing sequence semantics is described. The third Petri net in Figure 16.2 is of independent interest, since it is one of the basic examples in the so-called “individual tokens” versus “collective tokens” controversy, the tale of which has been told in [Gla05] (Rob van Glabbeek).

5 See also https://en.wikipedia.org/wiki/Modal_logic. 6 See https://en.wikipedia.org/wiki/Computation_tree_logic. 7 See https://en.wikipedia.org/wiki/Linear_temporal_logic. 8 See https://en.wikipedia.org/wiki/CTL*.

16.10 Exercises

371

16.10 Exercises Exercise 16.1 Liveness in 𝑆4 Reconsider the Petri net system Σ from Figure 16.3. (i) Consider the 𝑆4 formula Δ1 = □ ♦ 𝑠 2 Using the model checking algorithm, check the validity of this formula. Display and comment every step of the algorithm in detail. Interpret your result. (ii) Do the same with the 𝑆4 formula Δ2 = □ ♦ 𝑠 5 Exercise 16.2 Computing the Last vector For the marked graph shown in Figure 16.4, a) compute the incidence matrix 𝐶; b) display the inequality system (16.2) for 𝜒 = (𝑠1 ∧ 𝑠2 ) and compute Last 𝜒 using (16.2); c) do the same for 𝜒 = 𝑠2 ; d) and again the same for 𝜒 = (𝑐 ≤ 2).

𝑎

𝑠2

𝑐

𝑠3

𝑠4

𝑠1

𝑏

Fig. 16.4: A marked graph (Exercise 16.2).

372

16 Model Checking Safe, Strongly Persistent Petri Nets

Solution 16.1 (Exercise 16.1) (i): Since a token on 𝑠2 enables 𝑡2 , Δ1 means, intuitively, that “𝑡2 is live”. Δ1 = ≡ ≡ ≡ ≡ ≡ ≡ ≡ ≡ ≡ ≡ ≡

□ ♦ 𝑠2 ¬♦ ¬♦ 𝑠2 ¬♦ ¬(𝑡1 ≤1 ∧ 𝑡2 ≤0 ∧ 𝑡 3 ≤1 ∧ 𝑡 4 ≤∞) ¬♦ ¬(𝑡1 ≤ 1 ∧ 𝑡 2 ≤ 0 ∧ 𝑡3 ≤ 1) ¬♦ (𝑡1 > 1 ∨ 𝑡2 > 0 ∨ 𝑡3 > 1) ¬(♦ (𝑡 1 >1) ∨ ♦ (𝑡2 >0) ∨ ♦ (𝑡3 >1)) ¬((𝑡1 ≤−1) ∧ . . . ∧ (𝑡4 ≤−1)) ∨ . . .) ¬(false ∨ ♦ (𝑡 2 >0) ∨ ♦ (𝑡 3 >1)) ¬(false ∨ ♦ (𝑡 2 >0) ∨ false) ¬(♦ (𝑡 2 >0)) ¬(𝑡 1 ≤ 1 ∧ 𝑡2 ≤ 1 ∧ 𝑡 3 ≤ 1 ∧ 𝑡4 ≤ ∞) (𝑡 1 > 1 ∨ 𝑡 2 > 1 ∨ 𝑡 3 > 1)

( definition of □ ) ( Last 𝑠2 = (1 0 1 ∞); Thm. 16.19 ) ( 𝑡 4 ≤∞ ≡ true ) ( de Morgan’s law ) ( Lemma 16.5 ) ( Last𝑡1 >1 = (−1 −1 −1 −1) ) ( propositional logic ) ( similarly for 𝑡 3 ) ( propositional logic ) ( Last𝑡2 >0 = (1 1 1 ∞) ) ( de Morgan etc., as above )

Finally, checking the validity of Δ1 on the empty string (i.e., the initial marking): 𝜀 |= Δ1

⇐⇒

𝜀 |= (𝑡 1 > 1 ∨ 𝑡 2 > 1 ∨ 𝑡 3 > 1)

⇐⇒

false

since 𝜀 contains neither 𝑡1 nor 𝑡2 nor 𝑡 3 . This means that 𝑡 2 is not live. (ii): Since a token on 𝑠5 enables 𝑡4 , Δ2 means, intuitively, that “𝑡4 is live”. Δ2 = ≡ ≡ ≡ ≡ ≡ ≡ ≡

□ ♦ 𝑠5 ¬♦ ¬♦ 𝑠5 ¬♦ ¬(𝑡 1 ≤ 1 ∧ 𝑡2 ≤ 1 ∧ 𝑡 3 ≤ 1) ¬♦ (𝑡 1 > 1 ∨ 𝑡 2 > 1 ∨ 𝑡 3 > 1) ¬(♦ (𝑡 1 >1) ∨ ♦ (𝑡 2 >1) ∨ ♦ (𝑡3 >1)) ¬((𝑡1 ≤−1) ∧ . . . ∧ (𝑡 4 ≤−1)) ¬(false) true

( definition of □ ) ( Last 𝑠5 = (1 1 1 ∞); Thm. 16.19 ) ( de Morgan ) ( Lemma 16.5 ) ( Last𝑡𝑖 >1 =(−1 −1 −1 −1), 𝑖=1, 2, 3 ) ( de Morgan etc., as above )

Checking the validity of Δ2 is even easier: 𝜀 |= true. This means that 𝑡4 is live. Solution 16.2 (Exercise 16.2) (a): 0 1 −1 © ª ­−1 0 1 ® 𝐶=­ ® ­ 1 −1 0 ® «−1 1 0 ¬

16.10 Exercises

373

(b): 𝑋𝑏 − 𝑋𝑐 ≥ 0 − 𝑋𝑎 + 𝑋𝑐 ≥ 0 1 + 𝑋 𝑎 − 𝑋𝑏 ≥0 − 𝑋 𝑎 + 𝑋𝑏 ≥0 𝑋𝑏 − 𝑋𝑐 = 1 and − 𝑋𝑎 + 𝑋𝑐 = 1 where the last line comes from 𝜒 = (𝑠1 ∧ 𝑠2 ). This inequality system is unsolvable, hence Last 𝑠1 ∧𝑠2 = (−1 − 1 − 1). Indeed, no firing sequence leads to a marking in which 𝑠1 and 𝑠2 simultaneously carry tokens. (c): The same, except that the last line is replaced by −𝑋𝑎 + 𝑋𝑐 = 1 because of 𝜒 = 𝑠2 . This inequality system has infinitely many solutions 𝑋𝑎 = 𝜆, 𝑋𝑏 = 𝜆 + 1, 𝑋𝑐 = 𝜆 + 1 where 𝜆 is an arbitrary natural number. Hence Last 𝑠2 = (∞ ∞ ∞). Indeed, for every firing sequence marking 𝑠2 , there is a longer one which also marks 𝑠2 . (d): The same, except that the last line is replaced by 𝑋𝑐 ≤ 2 because of 𝜒 = (𝑐 ≤ 2). This inequality system has exactly one solution 𝑋𝑎 = 2, 𝑋𝑏 = 3, 𝑋𝑐 = 2 𝑏𝑐𝑎𝑏𝑐𝑎𝑏

Hence Last 𝑐≤2 = (2 3 2). Indeed, 𝑀0 −−−−−→ −−− contains 𝑐 two times, and there is no such firing sequence with a larger Parikh vector.

Chapter 17

Semilinearity

Abstract In this chapter, the reachability graphs of – possibly unbounded – persistent Petri nets will be examined. It turns out that they, too, have a special structure. Even though the reachability sets of unbounded persistent nets contain infinitely many markings, it turns out that they are still semilinear. Semilinearity is a generalisation of the notion of “ultimately periodic”, known for infinite words,1 to vectors of natural numbers. Semilinear sets are decidable, a fact which can be exploited in proving that persistence is decidable, too. Semilinearity is also related to a special logic on the natural numbers called Presburger logic.

Contents 17.1 Introduction . . . . . . . . . . . . . . . . . . . 17.2 Semilinear sets . . . . . . . . . . . . . . . . . 17.3 Presburger arithmetic and Presburger sets . . . 17.4 Minimal nondecreasing transition sequences . 17.5 Persistent nets have semilinear reachability sets 17.6 Decidability of persistence . . . . . . . . . . . 17.7 Weakly persistent Petri nets . . . . . . . . . . 17.8 Some non-semilinear Petri nets . . . . . . . . 17.9 Bibliographical remarks and further reading . . 17.10 Exercises . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . .

. . . . . . . . . .

. . . . . . . . . .

. . . . . . . . . .

. . . . . . . . . .

. . . . . . . . . .

. . . . . . . . . .

. . . . . . . . . .

. . . . . . . . . .

. . . . . . . . . .

. . . . . . . . . .

376 376 378 381 384 387 389 391 393 394

1 An infinite word 𝑤 is ultimately periodic if it can be written in the form 𝑤 = 𝑤0 𝑤1 𝑤1 𝑤1 . . .

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 E. Best and R. Devillers, Petri Net Primer, Computer Science Foundations and Applied Logic, https://doi.org/10.1007/978-3-031-48278-6_17

375

376

17 Semilinearity

17.1 Introduction Persistent Petri nets and persistent labelled transition systems have been the main topics of Chapters 12 and 14. Keller’s theorem, described in Chapter 12, is valid for all deterministic and persistent labelled transition systems, even for infinite ones, and hence also for all persistent Petri nets. For the small cycle decompositions, also described in Chapter 12, finiteness and weak periodicity were added as premises; these decomposition theorems are thus also valid for bounded persistent Petri nets. In Chapters 14 and 15, the focus has been narrowed to subclasses of persistent transition systems (marked graph and choice-freely synthesisable ones, respectively). In the present chapter, we shall, instead, widen the focus and consider what happens for infinite persistent lts, where we assume from the beginning that they are synthesisable by a finite (not necessarily choice-free) Petri net. Of course, such a Petri net cannot be bounded. The reachability graphs of such nets are infinite, but it turns out that persistence enforces a certain structure called semilinearity. We start with a definition of semilinearity.

17.2 Semilinear sets The reachability graph of a bounded persistent Petri net 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀0 ) has a special structure, as described in Section 12.4. In particular, it has a unique maximal strongly connected component (Corollary 12.17) which contains precisely all the home states of 𝑁 (Corollary 12.16), and from any state of which all cycles can be executed. Of course, it could be the case that some (infinite) firing sequence in such a net always remains outside the set of home states, but – informally and very roughly speaking – the more tokens are produced by such a sequence, the greater the chance that a home state will eventually be reached. In a sense, firing (small) cycles inside the home state component is analogous to concatenating the periods 𝑤1 of an ultimately periodic word 𝑤0 𝑤1 𝑤1 𝑤1 . . ., while firing sequences that lead from 𝑀0 into the set of home states are broadly analogous to the initial offset 𝑤0 . For unbounded persistent Petri nets, a similar statement can be made. This similarity is manifested in the notion of a semilinear set of markings. Since the idea of semilinearity pertains not just to sets of markings, but to arbitrary subsets of vectors in N𝑛 , the next definitions are rather general ones. In order to formulate them concisely, a notation that is inspired by regular expressions can be used. Notation 17.1 Notation for linear and semilinear subsets of N𝑛 For a vector 𝑣 ∈ N𝑛 , 𝑣 also denotes the singleton set {𝑣}. For two subsets 𝑋, 𝑌 ⊆ N𝑛 , 𝑋 + 𝑌 denotes the set {𝑥 + 𝑦 | 𝑥 ∈ 𝑋 ∧ 𝑦 ∈ 𝑌 }. For a subset 𝑋 ⊆ N𝑛 , N·𝑋 denotes the set {𝑤 ∈ N𝑛 | ∃𝑘 1 , . . . , 𝑘 𝑟 ∈ N ∃𝑥1 , . . . , 𝑥𝑟 ∈ 𝑋 : 𝑤 = 𝑘 1 ·𝑥1 +𝑘 2 ·𝑥2 +. . .+𝑘 𝑟 ·𝑥𝑟 }.

17.2 Semilinear sets

377

The last line simply means that N·𝑋 contains all semipositive linear combinations of the vectors in 𝑋 (including 0). Example 17.2 An example illustrating Notation 17.1 Let 𝑛 = 2. The expression (0, 1) + N·(1, 0) denotes the infinite set of vectors {(0, 1), (1, 1), (2, 1), (3, 1), . . .}. The expression N·(0, 3) + N·(2, 0) denotes the infinite set of vectors (𝑣, 𝑤) such that 𝑣 is divisible by 2 and 𝑤 is divisible by 3.

Definition 17.3 Linear and semilinear subsets of N𝑛 Let 𝑋 ⊆ N𝑛 . • 𝑋 is called linear if there is a vector 𝑣 ∈ N𝑛 and a finite set of vectors 𝑃 ⊆ N𝑛 such that 𝑋 = 𝑣 + N·𝑃. In this representation, 𝑣 is called the offset or the base and the elements of 𝑃 are called periods. • 𝑋 is called semilinear if it is the union of finitely many linear sets, that is, if 𝑋 = (𝑣1 + N·𝑃1 ) ∪ . . . ∪ (𝑣 𝑘 + N·𝑃 𝑘 ) with 𝑘 offsets and 𝑘 sets of periods.

Example 17.4 An example of a semilinear reachability set Figure 17.1 reproduces the net shown in Figure 1.1. Its reachability set has a representation as a semilinear subset of N3 as follows: (0 1 2) T ∪ (0 0 4) T ∪ (1 0 3) T + N·(0 0 1) T ∪ (2 0 2) T + N·(0 0 1) T ∪ (3 0 1) T + N·(0 0 1) T ∪ (4 0 0) T + N·(1 0 0) T + N·(0 0 1) T ∪ (1 1 1) T + N·(0 0 1) T ∪ (2 1 0) T + N·(1 0 0) T + N·(0 0 1) T with eight bases and two periods combined in appropriate ways. Note that the first two lines are subsumed by the definition of linear sets, using 𝑃 = ∅. The net is choice-free and, therefore, persistent. The existence of a semilinear representation for it is guaranteed by Theorem 17.21 below (the main result of this chapter).

378

17 Semilinearity 𝑡3

𝑡1 𝑠3

𝑠1

𝑠2 𝑡2

Fig. 17.1: A Petri net whose reachability set [𝑀0⟩ is semilinear.

17.3 Presburger arithmetic and Presburger sets In this section, we define a predicate logic which allows one to talk about semilinear sets. First, we briefly chart some of the wider terrain occupied by predicate logic. The language of first-order predicate logic consists of formulae 𝐹 which are composed of: • propositional connectives ¬ and ∨; • the existential first-order quantifier ∃; • operation symbols such as + and ∗; • predicate symbols such as = and ≤ and >; • constant symbols such as 0 and 1; • variables that may be assigned values from some domain. A formula is closed if it does not contain any free variables, such as, for instance, ∀𝑛∃𝑚 : 𝑚 > 𝑛 + 1. Closed formulae are also called sentences. Presburger formulae are distinguished by restricting operation symbols and predicate symbols. Basically, only the operation symbol + (to denote addition) and the predicate symbol = (to denote equality) are allowed. Importantly, the multiplication symbol is not allowed inside a Presburger formula, nor will it (in general) turn out to be derivable from the other operators and predicates. Some operators can be derived “syntactically”, though, including: • propositional operators such as ∧ (e.g., 𝐹1 ∧ 𝐹2 abbreviates ¬(¬𝐹1 ∨ ¬𝐹2 )); • the universal first-order quantifier ∀: ∀𝐹 abbreviates ¬∃¬𝐹; • further predicate symbols such as ≤ and > (for instance, 𝑥 ≤ 𝑦 abbreviates ∃𝑧(𝑥 + 𝑧 = 𝑦), and 𝑥 > 𝑦 abbreviates ¬(𝑥 = 𝑦) ∧ ∃𝑧(𝑥 = 𝑦 + 𝑧).2 2 These abbreviations make sense when considering, as is usually done, Presburger logic to be interpreted over the set N.

17.3 Presburger arithmetic and Presburger sets

379

Formulae of the first-order predicate logic will be interpreted over some domain, say D. The shorthand F = (D, op, pred, const) denotes the set of first-order formulae, to be interpreted in the domain D, whose operation symbols are op, whose predicate symbols are pred, and whose constant symbols are const. Every sentence in F has a truth value (either true or false). Nonclosed formulae have a truth value only when all free variables have been assigned a value from the domain. The set of all sentences in F whose truth value is true is denoted by Th(F ) and is called a theory. In particular, Th(N, +, ∗, =, 𝑛 + 1) is logically equivalent to the negation of 𝐹1 , and hence false on N. Therefore, 𝐹1′ ∉ Th(N, +).a • 𝐹2 = ∃𝑚∀𝑛 : 𝑚 > 𝑛 + 1. While 𝐹2 is not logically equivalent to 𝐹1′ , it is still false on N. Again, 𝐹2 ∉ Th(N, +). • 𝐹3 = ∀𝑎∃𝑏∀𝑥, 𝑦 : (𝑏 > 𝑎 ∧ (𝑥, 𝑦 > 1 → 𝑥 ∗ 𝑦 ≠ 𝑏)) is in Th(N, +, ∗, =, < , 0).b • 𝐹4 = ∀𝑎∃𝑏∀𝑥, 𝑦 : (𝑏 > 𝑎 ∧ (𝑥, 𝑦 > 1 → (𝑥 ∗ 𝑦 ≠ 𝑏 ∧ 𝑥 ∗ 𝑦 ≠ 𝑏+2))) is a sentence which is not known to be in Th(N, +, ∗, =, 0. Nevertheless, one always finds a covering sequence. We will see in a moment how this can be done in general. In our example (Figure 18.1), the firing se𝜎′

quence 𝜎 ′ = 𝑡2 𝑡2 𝑡 1 𝑡 2 𝑡 2 𝑡 1 𝑡2 𝑡2 𝑡1 𝑡2 𝑡3 with 𝑊 (𝜎 ′ ) = (1, 1, 1) T , 𝑣 −→𝐺 𝑣, and 𝜎′

(0, 2, 0) T −→ (1, 3, 1) T ≤ 𝜔 (𝜔, 𝜔, 𝜔) T does the job of getting a positive effect on all places.

Lemma 18.13 Existence of covering firing sequences 𝜎 • If 𝜎 is covering, then (𝑣, 𝑀) −→𝐺 max(𝑣, 𝑀) in 𝐶𝑜𝑣(𝐺, 𝑣, 𝑁, 𝑀). • A covering sequence can always be constructed.

1 Using, amongst other things, Lemma 18.5 and Lemma 18.7.

18.3 Seeds, and seed-transition-sequences

405

Proof: The first claim is clearly true. A covering sequence can be constructed by finding a path from (𝑣, 𝑀) to max(𝑣, 𝑀) by which, in all additional 𝜔-components of max(𝑣, 𝑀) (as compared with those of (𝑣, 𝑀)), values greater than those of 𝑀 are reached. By Lemma 18.10, this is always possible. 18.13

18.3 Seeds, and seed-transition-sequences Now we will restrict the class of graphs which are permitted to control our coverability graphs. The allowed graphs are called seeds, or pre-coverability graphs. Seeds are related to the coverability graph. Their nodes must be generalised markings of the Petri net under consideration, and edges are introduced only if they can correctly model the firing of a transition. However, it is allowed to neglect edges. Moreover, seeds have to be strongly connected. This implies that no 𝜔’s can be newly introduced in any seed. Because of this, seeds are arguably even more closely related to reachability graphs than to coverability graphs. Definition 18.14 Seed Let 𝐺 = (𝑉, 𝐸, Σ) be a graph and let 𝑁 = (𝑆, 𝑇, 𝐹) be a Petri net. 𝐺 is called a seed (or a pre-coverability graph) of 𝑁 if it enjoys the following properties: • 𝐺 is finite • 𝐺 is locally deterministic • 𝐺 is strongly connected • 𝑉 ⊆ (N 𝜔 ) 𝑆 •Σ=𝑇 𝑡 𝑡 • ∀𝑀, 𝑀 ′ ∈ 𝑉 ∀𝑡 ∈ 𝑇: (𝑀 −→𝐺 𝑀 ′ =⇒ 𝑀 −→ 𝑀 + 𝑊 (𝑡) ≤ 𝜔 𝑀 ′ ). Thus, the edges of a seed are annotated by transitions of 𝑁, while the nodes of a seed are generalised states of 𝑁. The following lemma shows that every path in a seed corresponds to a firing sequence in 𝑁. Lemma 18.15 Relationship between a seed and the underlying net Let 𝐺 = (𝑉, 𝐸, 𝑇) be a seed of a Petri net 𝑁 = (𝑃, 𝑇, 𝐹). • Every strongly connected subgraph of 𝐺 is again a seed of 𝑁. • ∀𝑀, 𝑀 ′ ∈ 𝑉 ∀𝑠 ∈ 𝑆: (𝑀 (𝑠) = 𝜔 ⇐⇒ 𝑀 ′ (𝑠) = 𝜔). 𝜎 𝜎 • ∀𝑀, 𝑀 ′ ∈ 𝑉 ∀𝜎 ∈ 𝑇 ∗ : (𝑀 −→𝐺 𝑀 ′ =⇒ 𝑀 −→ 𝑀 ′ ). Proof: The first claim is obvious, and the third claim follows directly from the second. But the second claim is also fairly obvious if one keeps in mind that 𝐺 is strongly connected. Consider, for instance, 𝑀1 (𝑠) ∈ N and 𝑀2 (𝑠) = 𝜔 for some 𝑠 ∈ 𝑆 and 𝑀1 , 𝑀2 ∈ 𝑉. Strong connectedness yields the existence of firing sequences 𝜎1 𝜎2 𝜎2 𝜎1 , 𝜎2 ∈ 𝑇 ∗ with 𝑀1 −→𝐺 𝑀2 −→𝐺 𝑀1 . But then, we must also have 𝑀2 −→ 𝑀2 + 𝑊 (𝜎2 ) ≤ 𝜔 𝑀1 , contradicting 𝑀2 (𝑠) > 𝜔 𝑀1 (𝑠). 18.15

406

18 Decidability of the Reachability Problem

From now on, we will restrict our attention to coverability graphs of a Petri net 𝑁 = (𝑆, 𝑇, 𝐹, 𝑀) that are controlled by a seed 𝐺 = (𝑉, 𝐸, 𝑇, 𝑣) of 𝑁 with 𝑀 ≤ 𝜔 𝑣. Example 18.16 Figure 18.4 Figure 18.4 depicts the (𝐺, 𝑣)-controlled coverability graph corresponding to the Petri net shown in Figure 18.1 with starting marking 𝑀 = (0, 2, 0) T , using the graph 𝐺 with start node 𝑣 = (0, 𝜔, 0) T shown in Figure 18.3.

𝑡2 0𝜔0 𝑡3 𝑡4

𝑡2 1𝜔0

2𝜔0

𝑡1

0𝜔1 𝑡3 0𝜔2

Fig. 18.3: A seed 𝐺 of the net 𝑁 shown in Figure 18.1.

0𝜔0, 020

𝑡2

𝑡2

1𝜔0, 110

𝑡3 𝑡4

2𝜔0, 200 𝑡1

0𝜔1, 011

0𝜔0, 0𝜔0

𝑡3

𝑡3 𝑡4

0𝜔2, 002

𝑡2

1𝜔0, 1𝜔0

𝑡2

2𝜔0, 2𝜔0

𝑡1

0𝜔1, 0𝜔1 𝑡3 0𝜔2, 0𝜔2

T

Fig. 18.4: A (𝐺, (0, 𝜔, 0) )-controlled coverability graph corresponding to the seed 𝐺 shown in Figure 18.3 and the net 𝑁 shown in Figure 18.1 with starting state (0 2 0) T . In order to be able to use the controlled coverability graph produced in this way as a new seed, we shall, first of all, identify it with its projection onto the second component.

18.3 Seeds, and seed-transition-sequences

407

Theorem 18.17 Projection theorem b ∈ (N 𝜔 ) 𝑆 , let 𝐺 = Let 𝑁 = (𝑆, 𝑇, 𝐹) be a Petri net with initial state 𝑀 b ≤ 𝜔 𝑀, (𝑉, 𝐸, 𝑇) be a seed of 𝑁 with an initial node 𝑀 ∈ (N 𝜔 ) 𝑆 satisfying 𝑀 b be the (𝐺, 𝑀)-controlled coverability graph and let 𝐶 = 𝐶𝑜𝑣(𝐺, 𝑀, 𝑁, 𝑀) b of (𝑁, 𝑀). b a (𝑉𝐶 , 𝐸𝐶 , 𝑇, (𝑀, 𝑀)) b ′ ∈ (N 𝜔 ) 𝑆 : ((𝑀 ′ , 𝑀 b ′ ) ∈ 𝑉𝐶 =⇒ 𝑀 b ′ ≤ 𝜔 𝑀 ′ ). • ∀𝑀 ′ , 𝑀 • The projection 𝜋2 : 𝑉 × (N 𝜔 ) 𝑆 → (N 𝜔 ) 𝑆 of 𝑉 × (N 𝜔 ) 𝑆 onto its second component is injective. • Let 𝜋2 (𝐶) arise from 𝐶 by replacing every node 𝑣 in 𝑉𝐶 by 𝜋2 (𝑣). Then no equally named nodes 𝜋2 (𝑣) arise in 𝜋2 (𝐶) by that transformation, and 𝜋2 (𝐶) is isomorphic to 𝐶. a

From now on, and in the remainder of this section, the capital letter 𝐶 stands for an abbreviation of a coverability graph, and no longer for the incidence matrix of a net.

Proof: The third claim follows immediately from the second. b ′ ) ∈ 𝑉𝐶 . Because, by construction, all nodes of 𝑉𝐶 are Ad Claim 1: Let (𝑀 ′ , 𝑀 𝜎 ′ b ′ ). b there is a sequence 𝜎 ∈ 𝑇 ∗ with (𝑀, 𝑀) b −→ reachable from (𝑀, 𝑀), 𝐶 (𝑀 , 𝑀 𝜎 𝜎 b −→ 𝑀 b + 𝑊 (𝜎) ≤ 𝜔 𝑀 b ′ ∧ 𝑀 −→𝐺 𝑀 ′ , Together with Lemma 18.5, we have 𝑀 𝜎 ′ ′ b+ hence also 𝑀 −→ 𝑀 , since 𝐺 is a seed. Therefore, 𝑀 = 𝑀 + 𝑊 (𝜎) ≥ 𝜔 𝑀 b ′ . In particular, this implies 𝑀 ′ (𝑠) = 𝑀 b ′ (𝑠), provided 𝑀 ′ (𝑠) ∈ N and 𝑊 (𝜎) ≤ 𝜔 𝑀 b ′ (𝑠) ∈ N. 𝑀 b ′ (𝑠) = 𝜔 already implies 𝑀 ′ (𝑠) = The only thing that remains to be proved is that 𝑀 b ′ (𝑠) = 𝜔. By Lemma 18.7, there exist, for every 𝑘 ∈ N, a sequence 𝜔. So, suppose 𝑀 𝜎𝑘 𝜎𝑘 b𝑘 ∈ (N 𝜔 ) 𝑆 with 𝑀 b −→ b𝑘 ≤ 𝜔 𝑀 b′ ∧ 𝑀 b𝑘 (𝑠) ≥ 𝑘 ∧ 𝑀 −→ 𝜎𝑘 ∈ 𝑇 ∗ and a state 𝑀 𝑀 𝐺 𝜎𝑘 b + 𝑊 (𝜎𝑘 ))(𝑠) = 𝑀 ′ , i.e., 𝑀 −→ 𝑀 ′ . Hence 𝑀 ′ ( 𝑝) = (𝑀 + 𝑊 (𝜎𝑘 ))(𝑠) ≥ ( 𝑀 b𝑘 (𝑠) ≥ 𝑘 for every 𝑘 ∈ N, and therefore, 𝑀 ′ (𝑠) = 𝜔. 𝑀 b1 ) and (𝑀2 , 𝑀 b2 ) be nodes in 𝑉𝐶 satisfying 𝑀 b1 = 𝑀 b2 . From Ad Claim 2: Let (𝑀1 , 𝑀 b b 𝑀1 ≤ 𝜔 𝑀1 and 𝑀2 ≤ 𝜔 𝑀2 it follows that 𝑀1 and 𝑀2 agree on all finite (non-𝜔) coordinates. Because, by Lemma 18.15, all nodes of a seed possess the same 𝜔coordinates, we already have 𝑀1 = 𝑀2 . 18.17 In what follows, we shall identify 𝐶 with 𝜋2 (𝐶). From now on, the initial node b instead of (𝑀, 𝑀); b max(𝑀, 𝑀) b is now called max 𝑀; b etc. We have is just 𝑀 b = (𝑀, 𝑀 + ) for a 𝑀 + ∈ (N 𝜔 ) 𝑆 . From 𝑀 b ≤ 𝜔 𝑀, Theorem 18.17 yields max(𝑀, 𝑀) b (= max(𝑀, 𝑀)). b 𝑀 + ≤ 𝜔 𝑀. If we identify 𝐶 with 𝜋2 (𝐶), 𝑀 + itself becomes max 𝑀 That is, in 𝜋2 (𝐶), the following is true. Corollary 18.18 b ≤ 𝜔 𝑀 implies max 𝑀 b ≤ 𝜔 𝑀. 𝑀 Unfortunately, projecting onto the second components does not suffice to create another seed out of a seed-controlled coverability graph (such as the one shown

408

18 Decidability of the Reachability Problem

in Figure 18.4). This is because the coverability graph may no longer be strongly connected (and indeed, it is not, in Figure 18.4). Since this state of affairs cannot be mended directly, there remains only one possibility, namely that of decomposing the controlled coverability graph into its strongly connected components. These components can then be used as new seeds.

𝑡2 020

0𝜔0

𝑡3 𝑡4

𝑡3 𝑡4

011 𝑡3

110

200

002 𝐾0

𝑡2 1𝜔0

2𝜔0

𝑡1

0𝜔1 𝑡3 0𝜔2

𝐾1

𝐾2

𝐾3

Fig. 18.5: Decomposition of the projection of the coverability graph shown in Figure 18.4 into its four strongly connected components 𝐾0 , 𝐾1 , 𝐾2 , 𝐾3 . One can retrieve the information given in Figure 18.4 if one knows the missing transitions, for instance in the form of a sequence 𝐾0 𝑡2 𝐾1 𝑡 2 𝐾2 𝑡1 𝐾3 . In order not to lose any information provided by the controlled coverability graph by such a decomposition, we keep track of the connections between the strongly connected components. In our running example (Figure 18.4), we obtain (and keep in mind) the sequence 𝐾0 𝑡 2 𝐾1 𝑡2 𝐾2 𝑡 1 𝐾3 . But in general, the coverability graph cannot be represented by a sequence. Up front, there is a connectivity problem: Implicitly, we have been assuming, so far, that the components are not only entered but also left at their starting states. If this is not the case, then a path from the entry point to the exit point can be copied and regarded as a chain of separate, new, strongly connected components. In this way, one can, for instance, traverse the original component from a starting point to a starting point, followed by traversing a separate chain from the entry point to the correct exit point. The object arising in this way is no longer a coverability graph, but it allows the same paths and the same firing sequences. Every path through the original coverability graph can be retrieved in terms of a suitable sequence of strongly connected components and their connectivities. In particular, no strongly connected component may occur more than once in such a sequence, since it is not possible to leave a strongly connected component and re-enter it later. It follows that all paths in the coverability graph can be described precisely by a finite set of such sequences. These sequences shall be called seed-transition sequences.

18.3 Seeds, and seed-transition-sequences

409

Definition 18.19 Seed-transition sequence Let 𝑁 = (𝑆, 𝑇, 𝐹) be a Petri net, let 𝑛 ∈ N, let 𝑡𝑖 , 1 ≤ 𝑖 ≤ 𝑛, be 𝑛 transitions, and let 𝐾𝑖 = (𝑉𝑖 , 𝐸 𝑖 , 𝑇), 0 ≤ 𝑖 ≤ 𝑛, be 𝑛 + 1 seeds of 𝑁. The alternating sequence F = 𝐾0 𝑡 1 𝐾1 𝑡 2 . . . 𝑡 𝑛 𝐾 𝑛 of seeds and transitions is called a seed-transition sequence of 𝑁. F is called marked, if every seed 𝐾𝑖 , 0 ≤ 𝑖 ≤ 𝑛, is annotated by three further states 𝑀𝑖 ∈ 𝑉𝑖 , 𝑀𝑖𝑖𝑛 , 𝑀𝑖𝑜𝑢𝑡 ∈ (N 𝜔 ) 𝑆 , with 𝑀𝑖𝑖𝑛 ≤ 𝜔 𝑀𝑖 and 𝑀𝑖𝑜𝑢𝑡 ≤ 𝜔 𝑀𝑖 , where 𝑀𝑖 is called the start state, 𝑀𝑖𝑖𝑛 is called the input, and 𝑀𝑖𝑜𝑢𝑡 is called the output of 𝐾𝑖 . 𝑀0𝑖𝑛 is also called the input of F, and 𝑀𝑛𝑜𝑢𝑡 is also called the output of F. A word 𝜎 ∈ 𝑇 ∗ is called a path through F if, for 0 ≤ 𝑖 ≤ 𝑛, there are cycles 𝜎𝑖 𝜎𝑖 in 𝐾𝑖 from 𝑀𝑖 to 𝑀𝑖 , i.e., 𝑀𝑖 −→𝐾𝑖 𝑀𝑖 , and if 𝜎 = 𝜎0 𝑡 1 𝜎1 𝑡 2 . . . 𝑡 𝑛 𝜎𝑛 holds. A path 𝜎 = 𝜎0 𝑡1 𝜎1 𝑡2 . . . 𝑡 𝑛 𝜎𝑛 through F is called a solution of F, or an F-firing e 𝑖𝑛 , 𝑀 e 𝑜𝑢𝑡 ∈ N𝑆 (without 𝜔!) such that: sequence, if ∀𝑖 ∈ {0, . . . , 𝑛} ∃ 𝑀 𝑖 𝑖 𝑖𝑛 𝑖𝑛 e ≤𝜔 𝑀 •𝑀 𝑖 𝑖 e 𝑜𝑢𝑡 ≤ 𝜔 𝑀 𝑜𝑢𝑡 •𝑀 𝑖 𝑖 𝜎𝑖 e 𝑖𝑛 −→ e 𝑜𝑢𝑡 •𝑀 𝑁𝑀 𝑖

𝑖

𝑡𝑖+1 e 𝑜𝑢𝑡 −→ •𝑀 𝑖

e 𝑖𝑛 (for 0 ≤ 𝑖 < 𝑛). 𝑁𝑀 𝑖+1

Example 18.20 Figure 18.5 The four seeds in our example shown in Figure 18.5 correspond to a marked seed-transition sequence F = 𝐾0 𝑡 2 𝐾1 𝑡 2 𝐾2 𝑡1 𝐾3 with 𝑀0 = 𝑀0𝑖𝑛 = 𝑀0𝑜𝑢𝑡 = (0, 2, 0) T , 𝑀1 = 𝑀1𝑖𝑛 = 𝑀1𝑜𝑢𝑡 = (1, 1, 0) T , 𝑀2 = 𝑀2𝑖𝑛 = 𝑀2𝑜𝑢𝑡 = (2, 0, 0) T , and 𝑀3 = 𝑀3𝑖𝑛 = 𝑀3𝑜𝑢𝑡 = (0, 𝜔, 0) T . The definition allows arbitrary natural numbers, instead of 𝜔, in 𝑀3𝑖𝑛 and 𝑀3𝑜𝑢𝑡 . But only with 𝑀3𝑖𝑛 = (0, 3, 0) T or with 𝑀3𝑖𝑛 = (0, 𝜔, 0) T does F have 𝑡1 e 𝑜𝑢𝑡 −→ e 𝑖𝑛 and 𝑀 e 𝑜𝑢𝑡 = 𝑀 𝑜𝑢𝑡 = (2, 0, 0) T , we a solution: Because of 𝑀 𝑀 3 2 2 2 e 𝑖𝑛 = (0, 3, 0) T . Because of 𝑀 e 𝑖𝑛 ≤ 𝜔 𝑀 𝑖𝑛 , we must then have must have 𝑀 3

𝑀3𝑖𝑛 = (0, 3, 0) T or 𝑀3𝑖𝑛 = (0, 𝜔, 0) T .

3

3

Figure 18.6 illustrates the general case. Observation 18.21 Consequences of the local determinism of seeds Suppose that F is a seed-transition sequence and that 𝜎 = 𝜎0 𝑡1 𝜎1 . . . 𝑡 𝑛 𝜎𝑛 is a F-firing sequence with the notations just introduced. Then we already have 𝜎0 𝑡𝑛 𝜎𝑛 𝑡1 𝜎1 e 𝑖𝑛 −→ e 𝑜𝑢𝑡 −→ e 𝑖𝑛 −→ e 𝑜𝑢𝑡 . . . 𝑀 e 𝑜𝑢𝑡 −→ e𝑛𝑖𝑛 −→ e𝑛𝑜𝑢𝑡 . 𝑀 𝑀 𝑀 𝑀 𝑀 𝑀 0 1 0 1 𝑛−1

410

18 Decidability of the Reachability Problem

F:

𝐾0

𝐾1

𝐾𝑛

𝜎0

𝜎1

𝜎𝑛

𝑀0

𝑀1

𝑀𝑛

Input of F =

𝑀0𝑜𝑢𝑡

𝑀0𝑖𝑛

𝑀1𝑖𝑛 𝑀1𝑜𝑢𝑡

≤𝜔

≤𝜔

≤𝜔

≤𝜔

≤𝜔

≤𝜔

•••

𝑀𝑛𝑜𝑢𝑡= Output

𝑀𝑛𝑖𝑛

Solution of F:

f𝑜𝑢𝑡 [𝑡1 ⟩ 𝑁 f𝑖𝑛 [ 𝜎0 ⟩ 𝑁 𝑀 𝑀 0 0

f𝑜𝑢𝑡 [𝑡2 ⟩ 𝑁 f𝑖𝑛 [ 𝜎1 ⟩ 𝑁 𝑀 𝑀 1 1

•••

≤𝜔

≤𝜔

≤𝜔

≤𝜔

≤𝜔

≤𝜔

of F

f𝑛𝑖𝑛 [ 𝜎𝑛 ⟩ 𝑁 𝑀

f𝑛𝑜𝑢𝑡 𝑀

Fig. 18.6: A seed-transition sequence F = 𝐾0 𝑡1 𝐾1 . . . 𝑡 𝑛 𝐾𝑛 with an F-solution 𝜎 = e 𝑖𝑛 are more specific than the 𝑀 𝑖𝑛 of 𝐾𝑖 and 𝜎0 𝑡1 𝜎1 . . . 𝑡 𝑛 𝜎𝑛 . In general, the 𝑀 𝑖 𝑖 e 𝑜𝑢𝑡 are more specific than the 𝑀 𝑜𝑢𝑡 of 𝐾𝑖 . The inputs 𝑀 𝑖𝑛 and the similarly, the 𝑀 𝑖 𝑖 𝑖 outputs 𝑀𝑖𝑜𝑢𝑡 of 𝐾𝑖 do not have to lie inside 𝐾𝑖 . In general, they are more specific than the starting state 𝑀𝑖 of 𝐾𝑖 which has to be a part of 𝐾𝑖 .

e 𝑖𝑛 uniquely determine all intermediate states 𝑀 e 𝑖𝑛 , In particular, 𝜎 and 𝑀 𝑖 0 e 𝑜𝑢𝑡 , 0 ≤ 𝑖 ≤ 𝑛, because both seeds and Petri nets are 1 ≤ 𝑖 ≤ 𝑛, as well as 𝑀 𝑖 locally deterministic.

18.4 Reduction of the reachability problem In this section, we present an algorithm which solves the reachability problem of Petri nets. Decision Problem 18.22 The reachability problem RP Input: A Petri net 𝑁 = (𝑆, 𝑇, 𝐹) and two markings 𝑀1 , 𝑀2 ∈ N𝑆 . 𝜎 To decide: Is there a sequence 𝜎 ∈ 𝑇 ∗ such that 𝑀1 −→ 𝑁 𝑀2 ? An algorithm which decides RP accepts a Petri net 𝑁 = (𝑆, 𝑇, 𝐹) and two markings 𝑀1 , 𝑀2 ∈ N𝑆 as inputs, and outputs “yes” if there is a firing sequence of 𝑁 which leads from 𝑀1 to 𝑀2 , and “no” otherwise. Of course, a firing sequence 𝜎 which leads from 𝑀1 to 𝑀2 can be constructed if this algorithm answers “yes”. In this case, one can adopt a brute-force method and check all possible sequences in the order of increasing lengths until a suitable one has been

18.4 Reduction of the reachability problem

411 𝜎

found (for given 𝑁, 𝜎, 𝑀1 and 𝑀2 , it is trivial to check whether 𝑀1 −→ 𝑁 𝑀2 or not). The algorithm given in the sections below even does that job along the way. It has the disadvantage of being very complex. Since it is based on the construction of coverability graphs whose sizes are not bounded, there is no easy way of estimating its complexity precisely.2 Abstracting away from this particular algorithm, it has long been possible to give a lower bound. The reachability problem requires at least exponential space, which is an extremely large amount. The proof of this statement goes back to a paper by R.J. Lipton [Lip76]. In that paper, it is shown that Petri nets of size 𝑂 (𝑛) can be simulated by register machines that can process numbers up to an upper bound of 𝑛 22 . The reachability problems for finite automata 𝐴 (“does there exist a path in 𝐴 which leads from 𝑀1 to 𝑀2 ?”), for pushdown automata, or for linearly bounded automata (“is it possible to reach a configuration 𝐶2 from a configuration 𝐶1 ”) are decidable, but the corresponding problem for Turing machines is undecidable. For Petri nets, this problem has been open for more than a decade, until E.W. Mayr [May80; May84] and S.R. Kosaraju [Kos82] found a solution independently of each other. In the remaining part of this chapter, we adopt and present a variant of this proof which has first been described by J.-L. Lambert [Lam92]. Unlike the proofs by Mayr and Kosaraju,3 Lambert’s proof does not use the concept of semilinearity. Instead, theorems about the solvability (in N) of non-homogenous linear equality systems (over Z) are employed. The solution of the reachability problem is obtained by reduction to the seedtransition problem, which is defined as follows. Decision Problem 18.23 The seed-transition problem Input: A seed-transition-sequence F = 𝐾0 𝑡 1 𝐾1 𝑡2 . . . 𝑡 𝑛 𝐾𝑛 . To decide: Is there a solution?

Lemma 18.24 Reducing RP to the seed-transition problem If there is an algorithm solving the seed-transition problem, then there is also an algorithm solving RP, the reachability problem 18.22 for Petri nets. Proof: Apparently, RP is a special case of the seed-transition problem. Suppose that 𝑁 = (𝑆, 𝑇, 𝐹) is a Petri net and that 𝑀, 𝑀 ′ ∈ N𝑆 . We define the “trivial” seed 𝐾 of 𝑁 as 𝐾 = (𝑉, 𝐸, 𝑇) with 𝑉 := {𝝎}, 𝝎 = (𝜔, . . . , 𝜔) T , and 𝐸 := {(𝝎, 𝑡, 𝝎) | 𝑡 ∈ 𝑇 }. 𝐾 is annotated with 𝝎 as start, 𝑀 as input, and 𝑀 ′ as output. Figure 18.7 depicts this for 𝑇 = {𝑡 1 , . . . , 𝑡 𝑛 }. 2 See Section 18.8. This paragraph and the next one have been left intact, even though they are already more than 10 years old. 3 Also unlike the proof by Leroux.

412

18 Decidability of the Reachability Problem

The only possible solution of the trivial seed-transition sequence 𝐾, consisting only 𝜎 of 𝐾, is a sequence 𝜎 ∈ 𝑇 ∗ with 𝑀 −→ 𝑁 𝑀 ′ . 18.24

𝑡1

𝑡𝑛

...

Input= 𝑀

≤𝜔

≤𝜔

𝝎

𝑀 ′ =Output

Fig. 18.7: The simplest form of a seed-transition sequence: a seed with a single node 𝝎 = (𝜔, . . . , 𝜔) T . At this point, we proceed as is often done in mathematics. Instead of solving a problem 𝑃 directly (here: the reachability problem RP), we solve a generalisation 𝑃′ of 𝑃 (here: the seed-transition problem). The complex structure of 𝑃′ allows a creative approach to the problem (here: diverse reduction steps). In the next sections, we present a solution of the seed-transition problem.

18.5 The characteristic equation We shall associate an inhomogenous linear equation system 𝐴x = b over Z to every marked seed-transition sequence F. In this system, b is a Z-vector and 𝐴 is a matrix over Z, both with pretty high dimensions. The vector x shall be written as x = (𝑥1 , . . . , 𝑥ℓ ) T . To support understanding this equation system, we shall name the coordinates 𝑖 of a variable 𝑥𝑖 in accordance with the intended meaning of 𝑥𝑖 . In the sequel, we shall use the following conventions. F is a seed-transition sequence F = 𝐾0 𝑡 1 𝐾1 . . . 𝑡 𝑛 𝐾𝑛 of a Petri net 𝑁 = (𝑆, 𝑇, 𝐹). Every 𝐾𝑖 is a seed 𝐾𝑖 = (𝑉𝑖 , 𝐸 𝑖 , 𝑇𝑖 ) Ð𝑛 which is annotated by a start 𝑀𝑖 , an input 𝑀𝑖𝑖𝑛 , and an output 𝑀𝑖𝑜𝑢𝑡 .4 𝐸 := 𝑖=0 𝐸 𝑖 is the set of all edges of all seeds (we shall assume that the seeds are pairwise disjoint). For every 𝑒 ∈ 𝐸, 𝑥(𝑒) is a variable in x, and 𝑊 (𝑒) is the effect 𝑊 𝑁 (𝑡), provided the edge 𝑒 is labelled by transition 𝑡. Let 𝐼 := {𝑖 𝑖𝑛 (𝑠) | 0 ≤ 𝑖 ≤ 𝑛 ∧ 𝑠 ∈ 𝑆} and 𝑂 := {𝑖 𝑜𝑢𝑡 (𝑠) | 0 ≤ 𝑖 ≤ 𝑛 ∧ 𝑠 ∈ 𝑆} be further sets of names of coordinates of the variable vector x. For every 𝑖, 0 ≤ 𝑖 ≤ 𝑛, and 𝑠 ∈ 𝑆, 𝑖 𝑖𝑛 (𝑠) and 𝑖 𝑜𝑢𝑡 (𝑠) are two coordinates, and 𝑥(𝑖 𝑖𝑛 (𝑠)) and 𝑥(𝑖 𝑜𝑢𝑡 (𝑠)) are two variables in x.

4 Compare Figure 18.6.


Definition 18.25 Characteristic equation of F
Let F be a marked seed-transition sequence of a Petri net 𝑁. Using the naming conventions just explained, the characteristic equation 𝐴x = b for F is determined by the following five equation systems:
∀𝑖 ∈ {0, . . . , 𝑛} ∀𝑠 ∈ 𝑆:
i) (entering a seed) 𝑥(𝑖^in(𝑠)) = 𝑀𝑖^in(𝑠), if 𝑀𝑖^in(𝑠) ≠ 𝜔 (definite token counts are reached precisely)
ii) (exiting a seed) 𝑥(𝑖^out(𝑠)) = 𝑀𝑖^out(𝑠), if 𝑀𝑖^out(𝑠) ≠ 𝜔 (definite token counts are reached precisely)
iii) $x(i^{out}(s)) - x(i^{in}(s)) - \sum_{e\in E_i} W(e)(s)\cdot x(e) = 0$ (marking equation)
∀𝑖 ∈ {0, . . . , 𝑛−1} ∀𝑠 ∈ 𝑆:
iv) $x((i{+}1)^{in}(s)) - x(i^{out}(s)) = W_N(t_{i+1})(s)$ (marking equation)
∀𝑖 ∈ {0, . . . , 𝑛} ∀𝑣 ∈ 𝑉𝑖:
v) $\sum_{e\in v^{\bullet}} x(e) - \sum_{e\in {}^{\bullet}v} x(e) = 0$ (paths inside seeds are cyclic).

These five equation systems can be written as 𝐴x = b. The values 𝑀𝑖^in(𝑠) ≠ 𝜔, 𝑀𝑖^out(𝑠) ≠ 𝜔, and 𝑊𝑁(𝑡𝑖+1)(𝑠) determine the vector b. If u is a solution of 𝐴x = b over Z, then u ∈ Z^{𝐸∪𝐼∪𝑂}, i.e., |𝐸 ∪ 𝐼 ∪ 𝑂| is the number of columns of 𝐴.
Suppose that 𝜎 = 𝜎0 𝑡1 𝜎1 . . . 𝑡𝑛 𝜎𝑛 is a solution of F with
$\widetilde{M}_0^{in} \xrightarrow{\sigma_0}_N \widetilde{M}_0^{out} \xrightarrow{t_1}_N \widetilde{M}_1^{in} \;\ldots\; \xrightarrow{t_n}_N \widetilde{M}_n^{in} \xrightarrow{\sigma_n}_N \widetilde{M}_n^{out}$
and suppose that $\widetilde{M}_i^{in} \le_\omega M_i^{in}$, $\widetilde{M}_i^{out} \le_\omega M_i^{out}$, and $\widetilde{M}_i^{in}, \widetilde{M}_i^{out} \in \mathbb{N}^S$ for 0 ≤ 𝑖 ≤ 𝑛. Equations i) and ii) indicate that finite values in the annotations of a seed have to be reached precisely in the solution on the corresponding places of the firing sequence 𝜎. Equations iii) and iv) demand that the effects of the corresponding parts of 𝜎 have to be correct when traversing either a seed or the transition connecting two seeds; however, nothing is specified about the hurdles 𝐻(𝜎) of 𝜎. Finally, equation v) disallows “loose ends” within a solution; more precisely, every 𝜎𝑖 has to form one or several cycles in 𝐾𝑖 (which do not need to be connected with each other).
One obtains a solution u ∈ N^{𝐸∪𝐼∪𝑂} of the characteristic equation 𝐴x = b of F by setting u as follows: for all 𝑖, 0 ≤ 𝑖 ≤ 𝑛,
∀𝑠 ∈ 𝑆 : 𝑢(𝑖^in(𝑠)) := $\widetilde{M}_i^{in}(s)$
∀𝑠 ∈ 𝑆 : 𝑢(𝑖^out(𝑠)) := $\widetilde{M}_i^{out}(s)$
∀𝑒 ∈ 𝐸𝑖 : 𝑢(𝑒) := P(𝜎𝑖)(𝑒)
This can be verified by simple calculations. Conditions i) to iv) are straightforward to verify. The only thing to be kept in mind is that $W(\sigma) = \sum_{e\in E} P(\sigma)(e)\cdot W(e)$, where P(𝜎) is the Parikh image of 𝜎.5 (Thus, P(𝜎)(𝑒) is simply #𝑒(𝜎), the number of occurrences of 𝑒 in 𝜎.)
5 At this point, we interpret the Parikh image P(𝜎) as a vector of edge counts in the seed, rather than as a vector of transition counts in the Petri net. This saves us from considering an algorithm to calculate back and forth between edge labels and transition labels, which would be pretty irrelevant; such an algorithm must exist because of the local determinism of seeds.


Moreover, v) holds because every 𝜎𝑖 represents a cycle from 𝑀𝑖 to 𝑀𝑖 in 𝐾𝑖 (referring to Figure 18.6), and for every such cycle, the number of edges arriving at a node equals the number of edges leaving that node.

Example 18.26 Example 18.20 continued
Reconsider the seed-transition sequence F = 𝐾0 𝑡2 𝐾1 𝑡2 𝐾2 𝑡1 𝐾3 from Example 18.20, using the seeds shown in Figure 18.5 relating to the Petri net 𝑁 depicted in Figure 18.1, except that we now choose the state 𝑀3^in := (0, 3, 0)^T. The edges of 𝐾0 will be named 𝑒1, 𝑒2, 𝑒3 (in the same order as in the graph); the edges in the cycle at the bottom of 𝐾3 will be named 𝑒4, 𝑒5, 𝑒6; and the edges of the cycle at the right-hand side will be named 𝑒7, 𝑒8, 𝑒9. Then we obtain the following characteristic equation:
i)  𝑥(0^in(𝑠1)) = 0   𝑥(0^in(𝑠2)) = 2   𝑥(0^in(𝑠3)) = 0
    𝑥(1^in(𝑠1)) = 1   𝑥(1^in(𝑠2)) = 1   𝑥(1^in(𝑠3)) = 0
    𝑥(2^in(𝑠1)) = 2   𝑥(2^in(𝑠2)) = 0   𝑥(2^in(𝑠3)) = 0
    𝑥(3^in(𝑠1)) = 0   𝑥(3^in(𝑠2)) = 3   𝑥(3^in(𝑠3)) = 0
ii) 𝑥(0^out(𝑠1)) = 0  𝑥(0^out(𝑠2)) = 2  𝑥(0^out(𝑠3)) = 0
    𝑥(1^out(𝑠1)) = 1  𝑥(1^out(𝑠2)) = 1  𝑥(1^out(𝑠3)) = 0
    𝑥(2^out(𝑠1)) = 2  𝑥(2^out(𝑠2)) = 0  𝑥(2^out(𝑠3)) = 0
    𝑥(3^out(𝑠1)) = 0                     𝑥(3^out(𝑠3)) = 0
iii) 𝑥(0^out(𝑠1)) − 𝑥(0^in(𝑠1)) = 0
     𝑥(0^out(𝑠2)) − 𝑥(0^in(𝑠2)) + 𝑥(𝑒1) + 𝑥(𝑒2) − 2𝑥(𝑒3) = 0
     𝑥(0^out(𝑠3)) − 𝑥(0^in(𝑠3)) − 𝑥(𝑒1) − 𝑥(𝑒2) + 2𝑥(𝑒3) = 0
     𝑥(1^out(𝑠1)) − 𝑥(1^in(𝑠1)) = 0
     𝑥(1^out(𝑠2)) − 𝑥(1^in(𝑠2)) = 0
     𝑥(1^out(𝑠3)) − 𝑥(1^in(𝑠3)) = 0
     𝑥(2^out(𝑠1)) − 𝑥(2^in(𝑠1)) = 0
     𝑥(2^out(𝑠2)) − 𝑥(2^in(𝑠2)) = 0
     𝑥(2^out(𝑠3)) − 𝑥(2^in(𝑠3)) = 0
     𝑥(3^out(𝑠1)) − 𝑥(3^in(𝑠1)) − 𝑥(𝑒7) − 𝑥(𝑒8) + 2𝑥(𝑒9) = 0
     𝑥(3^out(𝑠2)) − 𝑥(3^in(𝑠2)) + 𝑥(𝑒4) + 𝑥(𝑒5) − 2𝑥(𝑒6) + 𝑥(𝑒7) + 𝑥(𝑒8) − 3𝑥(𝑒9) = 0
     𝑥(3^out(𝑠3)) − 𝑥(3^in(𝑠3)) − 𝑥(𝑒4) − 𝑥(𝑒5) + 2𝑥(𝑒6) = 0
iv) 𝑥(1^in(𝑠1)) − 𝑥(0^out(𝑠1)) = 1    𝑥(1^in(𝑠2)) − 𝑥(0^out(𝑠2)) = −1    𝑥(1^in(𝑠3)) − 𝑥(0^out(𝑠3)) = 0
    𝑥(2^in(𝑠1)) − 𝑥(1^out(𝑠1)) = 1    𝑥(2^in(𝑠2)) − 𝑥(1^out(𝑠2)) = −1    𝑥(2^in(𝑠3)) − 𝑥(1^out(𝑠3)) = 0
    𝑥(3^in(𝑠1)) − 𝑥(2^out(𝑠1)) = −2   𝑥(3^in(𝑠2)) − 𝑥(2^out(𝑠2)) = 3     𝑥(3^in(𝑠3)) − 𝑥(2^out(𝑠3)) = 0
v)  𝑥(𝑒1) − 𝑥(𝑒3) = 0    𝑥(𝑒2) − 𝑥(𝑒1) = 0    𝑥(𝑒3) − 𝑥(𝑒2) = 0
    𝑥(𝑒4) + 𝑥(𝑒7) − 𝑥(𝑒6) − 𝑥(𝑒9) = 0
    𝑥(𝑒5) − 𝑥(𝑒4) = 0    𝑥(𝑒6) − 𝑥(𝑒5) = 0
    𝑥(𝑒8) − 𝑥(𝑒7) = 0    𝑥(𝑒9) − 𝑥(𝑒8) = 0
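To make Definition 18.25 concrete, the following Python sketch assembles the five equation groups for a given marked seed-transition sequence. The data layout (dicts and tuples, with None playing the role of 𝜔) is a hypothetical choice of ours, not the book's; equations like those of Example 18.26 are obtained by feeding in the seeds, effects and connecting transitions of that example.

```python
from collections import defaultdict

# Assumed (hypothetical) input layout:
#   places      : list of place names
#   W           : transition -> {place: effect as an integer}
#   seeds       : one dict per seed K_i with keys
#                 "nodes"  (list of node names),
#                 "edges"  (list of (edge_name, source_node, transition, target_node)),
#                 "M_in", "M_out" ({place: int or None}, None standing for omega)
#   connectors  : [t_1, ..., t_n], the transitions between consecutive seeds
# Variables are named ("in", i, s), ("out", i, s), ("edge", i, edge_name);
# every equation is returned as (coefficient dict, right-hand side).

def characteristic_equation(places, W, seeds, connectors):
    eqs = []
    for i, K in enumerate(seeds):
        for s in places:
            if K["M_in"][s] is not None:        # i) entering a seed
                eqs.append(({("in", i, s): 1}, K["M_in"][s]))
            if K["M_out"][s] is not None:       # ii) exiting a seed
                eqs.append(({("out", i, s): 1}, K["M_out"][s]))
            # iii) marking equation inside seed K_i
            coeff = {("out", i, s): 1, ("in", i, s): -1}
            for (name, _, t, _) in K["edges"]:
                coeff[("edge", i, name)] = -W[t].get(s, 0)
            eqs.append((coeff, 0))
        # v) paths inside seeds are cyclic: edge balance at every node
        for v in K["nodes"]:
            coeff = defaultdict(int)
            for (name, src, _, tgt) in K["edges"]:
                if src == v:
                    coeff[("edge", i, name)] += 1   # edge leaves v
                if tgt == v:
                    coeff[("edge", i, name)] -= 1   # edge enters v
            eqs.append((dict(coeff), 0))
    # iv) marking equation for the connecting transitions t_{i+1}
    for i, t in enumerate(connectors):
        for s in places:
            eqs.append(({("in", i + 1, s): 1, ("out", i, s): -1}, W[t].get(s, 0)))
    return eqs
```

The returned list of (coefficients, right-hand side) pairs is exactly the row-wise description of 𝐴x = b; assembling an actual integer matrix from it is routine.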



Definition 18.27 Solution u 𝜎 of 𝐴x = b associated with 𝜎 Let 𝜎 be a solution of F and let u, constructed as above from 𝜎, be a solution of the characteristic equation of F. Then u is called the solution of 𝐴x = b associated with 𝜎, and it is denoted by u 𝜎 .

Example 18.28 Example 18.26 continued
The sequence 𝜎 = 𝜎0 𝑡2 𝜎1 𝑡2 𝜎2 𝑡1 𝜎3, with 𝜎0 = 𝜎1 = 𝜎2 = 𝜀 and 𝜎3 = 𝑡3 𝑡3 𝑡4, is a solution of F as specified in Example 18.26. In the associated solution of the characteristic equation, we have 𝑥(𝑒1) = 𝑥(𝑒2) = 𝑥(𝑒3) = 𝑥(𝑒7) = 𝑥(𝑒8) = 𝑥(𝑒9) = 0, 𝑥(𝑒4) = 𝑥(𝑒5) = 𝑥(𝑒6) = 1, 𝑥(3^out(𝑠2)) = 3, while all other variables are as determined by the partial systems i) and ii). It is easily checked that this variable assignment satisfies all equations. If we replace 𝜎3 by 𝑡4 𝑡3 𝑡3, however, then 𝜎 is no longer a solution of F, but the solution of the characteristic equation associated with 𝜎 has exactly the same variable assignments.

Thus, not every solution u of the characteristic equation 𝐴x = b over N is a solution of F. If we define $\widetilde{M}_i^{in}(s) := u(i^{in}(s))$, $\widetilde{M}_i^{out}(s) := u(i^{out}(s))$ for 0 ≤ 𝑖 ≤ 𝑛 and 𝑠 ∈ 𝑆, and 𝜎𝑖 as a cycle which leads from 𝑀𝑖 to 𝑀𝑖 in 𝐾𝑖 and which satisfies P(𝜎𝑖) = 𝜋_{𝐸𝑖}(u), we have, on the one hand:
$\widetilde{M}_i^{in} + W(\sigma_i) = \widetilde{M}_i^{out}$ for 0 ≤ 𝑖 ≤ 𝑛
$\widetilde{M}_i^{out} + W(t_{i+1}) = \widetilde{M}_{i+1}^{in}$ for 0 ≤ 𝑖 < 𝑛
but, on the other hand, we cannot yet deduce that $\widetilde{M}_i^{in} \xrightarrow{\sigma_i}_N$ and $\widetilde{M}_i^{out} \xrightarrow{t_{i+1}}_N$, i.e., that the necessary hurdles 𝐻(𝜎𝑖) and 𝐻(𝑡𝑖+1) can be jumped.
The following is implied directly by the definition of the characteristic equation. If u0 is a solution of the homogeneous equation 𝐴x = 0, then u0 satisfies
i') u0(𝑖^in(𝑠)) = 0 for 0 ≤ 𝑖 ≤ 𝑛, 𝑠 ∈ 𝑆 with 𝑀𝑖^in(𝑠) ≠ 𝜔,
ii') u0(𝑖^out(𝑠)) = 0 for 0 ≤ 𝑖 ≤ 𝑛, 𝑠 ∈ 𝑆 with 𝑀𝑖^out(𝑠) ≠ 𝜔,
iv') u0((𝑖+1)^in(𝑠)) − u0(𝑖^out(𝑠)) = 0 for 0 ≤ 𝑖 < 𝑛, 𝑠 ∈ 𝑆,
and iii) and v) as stated previously. If we define the support supp 𝐴 of 𝐴 as
supp 𝐴 := {𝑖 ∈ 𝐸 ∪ 𝐼 ∪ 𝑂 | ∃u ∈ N^{𝐸∪𝐼∪𝑂} : (𝐴u = 0 ∧ 𝑢(𝑖) > 0)}
then i') and ii') imply immediately that
supp 𝐴 ⊆ 𝐸 ∪ {𝑖^in(𝑠) ∈ 𝐼 | 𝑀𝑖^in(𝑠) = 𝜔} ∪ {𝑖^out(𝑠) ∈ 𝑂 | 𝑀𝑖^out(𝑠) = 𝜔}
The support of a linear equation system will become important in Section 18.6.


Definition 18.29 Reverse graph, reverse net
Let 𝐺 be a directed graph. By the reverse graph 𝐺^rev, we mean the same graph as 𝐺, except that all edges are reversed. This operation can equally be applied to coverability graphs and to Petri nets. Of course, we always have (𝐺^rev)^rev = 𝐺.

Definition 18.30 Perfect seed-transition sequence, F-double sequence
Let F be a marked seed-transition sequence of a Petri net 𝑁 using the above naming conventions. Let 𝐶𝑖 := Cov(𝐾𝑖, 𝑀𝑖, 𝑁, 𝑀𝑖^in) be the (𝐾𝑖, 𝑀𝑖)-controlled coverability graph of (𝑁, 𝑀𝑖^in) and let 𝐶𝑖′ := Cov(𝐾𝑖^rev, 𝑀𝑖, 𝑁^rev, 𝑀𝑖^out) be the (𝐾𝑖^rev, 𝑀𝑖)-controlled coverability graph in the reverse net 𝑁^rev having 𝑀𝑖^out as start state. Also, let max_{𝐶𝑖} 𝑀𝑖^in be the maximal covering of 𝑀𝑖^in in 𝐶𝑖 and let max_{𝐶𝑖′} 𝑀𝑖^out be the maximal covering of 𝑀𝑖^out in 𝐶𝑖′, i.e., in the reverse net. Let 𝐴x = b be the characteristic equation of F. Then F is called perfect if
• supp 𝐴 = 𝐸 ∪ {𝑖^in(𝑠) ∈ 𝐼 | 𝑀𝑖^in(𝑠) = 𝜔} ∪ {𝑖^out(𝑠) ∈ 𝑂 | 𝑀𝑖^out(𝑠) = 𝜔}
• 𝑀𝑖 = max_{𝐶𝑖} 𝑀𝑖^in = max_{𝐶𝑖′} 𝑀𝑖^out.
A finite sequence of words 𝜂𝑖, 𝜇𝑖 ∈ 𝑇*, so that 𝜂𝑖 is a covering sequence in 𝐶𝑖 which is firable under 𝑀𝑖^in, and 𝜇𝑖^rev = 𝜇𝑖^R is a covering sequence in 𝐶𝑖′ which is firable under 𝑀𝑖^out, is called an F-double sequence.

Covering sequences always exist and are constructible by Lemma 18.13. Hence the same is true for reverse covering sequences, as well as for F-double sequences. The covering sequence 𝜂𝑖 mentioned above has its covering property both with regard to the seed 𝐾𝑖 (since $M_i \xrightarrow{\eta_i}_{K_i} M_i$ is trivially covering) and with regard to the controlled coverability graph 𝐶𝑖 (since $M_i^{in} \xrightarrow{\eta_i}_{C_i} \max_{C_i} M_i^{in}$ is covering, by construction of 𝜂𝑖). In a perfect seed-transition sequence, this covering property is also true with regard to the “pseudo path” $M_i^{in} \xrightarrow{\eta_i}_{K_i} M_i$, i.e., the path from the input to the start of seed 𝐾𝑖; this is so since 𝑀𝑖 = max_{𝐶𝑖} 𝑀𝑖^in.

Example 18.31 Examples 18.26 and 18.28 continued
The seed-transition sequence F from Example 18.26 is perfect. The support of 𝐴x = 0 includes the variables 𝑥(𝑒1) to 𝑥(𝑒9) and 𝑥(3^out(𝑠2)). All other variables are determined uniquely by the partial equation systems i') and ii'). These support variables must satisfy 𝑥(𝑒1) = 𝑥(𝑒2) = 𝑥(𝑒3), 𝑥(𝑒4) = 𝑥(𝑒5) = 𝑥(𝑒6), and 𝑥(𝑒7) = 𝑥(𝑒8) = 𝑥(𝑒9) = 𝑥(3^out(𝑠2)). These requirements are easily derived from a simplification of the homogeneous equation system 𝐴x = 0.
Incidentally, it is possible to find different F-double sequences (𝜂𝑖, 𝜇𝑖)_{0≤𝑖≤3}. For instance, 𝜂0 = (𝑡3 𝑡3 𝑡4)^𝑘 and 𝜇0 = (𝑡3 𝑡3 𝑡4)^𝑛 with 𝑘, 𝑛 ≥ 0 can be chosen arbitrarily. The sequences 𝜂1, 𝜇1, 𝜂2, 𝜇2 must be the empty words, since the seeds 𝐾1 and 𝐾2 do not possess any edges. For the seed 𝐾3, the situation is


more complex. Since in this case, we have 𝑀3𝑖𝑛 = (0, 3, 0) T ≠ (0, 𝜔, 0) T = 𝑀3 , the seed 𝐾3 is not identical with the controlled coverability graph 𝐶3 (however, due to 𝑀3 = 𝑀3𝑜𝑢𝑡 , 𝐶3′ is identical with 𝐾3𝑟 𝑒𝑣 ). 𝐶3 is shown in Figure 18.8. Apparently, we have max((0, 𝜔, 0) T , (0, 3, 0) T ) = ((0, 𝜔, 0) T , (0, 𝜔, 0) T ), or, in abbreviated notation, max(0, 3, 0) T = (0, 𝜔, 0) T , that is, max 𝑀3𝑖𝑛 = 𝑀3 . Because of 𝑀3𝑖𝑛 (𝑠2 ) ∈ N and 𝑀3 (𝑠2 ) = 𝜔, Definition 18.11 demands that the effect 𝑊 (𝜂3 ) has to be positive at place 𝑠2 . Therefore, we can choose 𝜂3 = 𝑡2 𝑡2 𝑡 1 (with 𝑊 (𝑡2 𝑡2 𝑡1 ) = (0, 1, 0) T ), but also, every sequence that can be composed of 𝑡2 𝑡2 𝑡1 (at least once) and 𝑡 3 𝑡 3 𝑡 4 (arbitrarily many times), since all such sequences lead from 𝑀3𝑖𝑛 to 𝑀3 in 𝐶3 . In 𝜇3 , on the other hand, the two cycles 𝑡2 𝑡 2 𝑡 1 and 𝑡 3 𝑡 3 𝑡 4 may occur arbitrarily often; 𝜇3 = 𝜀 is permitted as well.


Fig. 18.8: A (𝐾3, (0, 𝜔, 0)^T)-controlled coverability graph corresponding to the seed 𝐾3 shown in Figure 18.5 and the net 𝑁 shown in Figure 18.1 with start state (0, 3, 0)^T.

Lemma 18.32 Properties of double sequences of perfect F’s
Let F be a perfect seed-transition sequence and let (𝜂𝑖, 𝜇𝑖)_{0≤𝑖≤𝑛} be an F-double sequence. Then:
• $M_i^{in} \xrightarrow{\eta_i}_{C_i} M_i \xrightarrow{\eta_i}_{C_i} M_i$
• $M_i^{in} \xrightarrow{\eta_i}_{N} M_i^{in} + W_N(\eta_i) \le_\omega M_i \xrightarrow{\eta_i}_{N} M_i$
• $M_i^{out} \xrightarrow{\mu_i^{rev}}_{C_i'} M_i \xrightarrow{\mu_i^{rev}}_{C_i'} M_i$
• $M_i^{out} \xrightarrow{\mu_i^{rev}}_{N^{rev}} M_i^{out} + W_{N^{rev}}(\mu_i^{rev}) \le_\omega M_i \xrightarrow{\mu_i^{rev}}_{N^{rev}} M_i$
• $M_i \xrightarrow{\mu_i}_{N} M_i$ and $M_i^{out} - W_N(\mu_i) \xrightarrow{\mu_i}_{N} M_i^{out}$.


Proof: This follows directly from the covering properties of 𝜂𝑖 for 𝑀𝑖^in and 𝜇𝑖^rev for 𝑀𝑖^out, by 𝑀𝑖 = max_{𝐶𝑖} 𝑀𝑖^in = max_{𝐶𝑖′} 𝑀𝑖^out. For the last item, it has to be kept in mind that $W_{N^{rev}}(\sigma^{rev}) = -W_N(\sigma)$ is generally true. ∎ 18.32

18.6 A pumping lemma

First of all, we quote the following result without proof:

Theorem 18.33 Solutions on the support of 𝐴
Let 𝐴x = b be a linear equation system over Z, for which we are looking for solutions x = (𝑥1, . . . , 𝑥𝑛)^T ∈ N^𝑛 (with a suitable 𝑛).
• It is decidable whether 𝑥𝑖 (1 ≤ 𝑖 ≤ 𝑛) can become arbitrarily large in the solution space (over N) of 𝐴x = 0 (in other words, whether 𝑥𝑖 belongs to the support supp 𝐴 or not).
• It is decidable which values 𝑥𝑖 can have if 𝑖 is not in the support (these are finitely many).
• One can determine a solution u_supp of 𝐴x = 0, such that u_supp,𝑖 > 0 for all 𝑖 ∈ supp 𝐴 and u_supp,𝑖 = 0 for all 𝑖 ∉ supp 𝐴.
• One can decide whether 𝐴x = b has a solution (over N), and if so, one can calculate a solution u. In case there is such a solution, u + 𝑘·u_supp is also a solution, for every 𝑘 ∈ N.

This theorem is not quite easy to prove. The difficulties are essentially due to the fact that we restrict the solution space to N up front. If N is replaced by Z, simple linear algebra would suffice. However, we need to satisfy the requirements of the characteristic equation, and this means that we may only be interested in natural solutions.6
With this theorem, we can now proceed to solve the seed-transition problem for perfect seed-transition sequences.

Theorem 18.34 Solving the seed-transition problem for perfect F
Let F be a perfect seed-transition sequence of a Petri net 𝑁 using the above naming conventions. Then the following two statements are equivalent:
i) F has a solution.
ii) 𝐴x = b has a solution in N, and $M_i^{out} \xrightarrow{t_{i+1}}_N$ for 0 ≤ 𝑖 < 𝑛.

6 There are simple algorithms that yield a single solution of 𝐴x = b in N, but the calculation of a support solution and all values of variables 𝑥𝑖 ∉ supp 𝐴 is intricate and expensive. A variant of Farkas’ lemma can be employed in the proof.
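The first item of Theorem 18.33 — deciding whether a coordinate belongs to supp 𝐴 — can at least be illustrated with ordinary linear programming: for the homogeneous system, any nonnegative rational solution with 𝑥𝑖 > 0 scales to a natural one, so a rational LP check suffices for this particular item. The sketch below assumes NumPy/SciPy are available and that 𝐴 is given as a NumPy array; the remaining items of the theorem (in particular, solving 𝐴x = b over N) need genuinely integer methods and are not covered here.

```python
import numpy as np
from scipy.optimize import linprog

def in_support(A, i):
    """Is there u >= 0 with A u = 0 and u[i] > 0?

    Over the rationals this is an LP; a rational witness with u[i] > 0
    can be scaled to a solution over the naturals, and conversely.
    We maximise u[i] (i.e. minimise -u[i]) under A u = 0, u >= 0, u[i] <= 1.
    """
    n = A.shape[1]
    c = np.zeros(n)
    c[i] = -1.0                   # maximise u[i]
    bounds = [(0, None)] * n
    bounds[i] = (0, 1)            # keep the LP bounded
    res = linprog(c, A_eq=A, b_eq=np.zeros(A.shape[0]), bounds=bounds)
    return res.status == 0 and -res.fun > 1e-9

def support(A):
    """The set supp A of coordinates that can be positive in some solution of A u = 0."""
    return {i for i in range(A.shape[1]) if in_support(A, i)}
```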


Proof: From the previous paragraphs, we can deduce that every solution of F yields a natural solution of 𝐴x = b. Hence i) =⇒ ii). For ii) =⇒ i), we can use a pumping lemma, as follows. ∎ 18.34

Theorem 18.35 Lambert’s pumping lemma
Let F be a perfect seed-transition sequence with the characteristic equation 𝐴x = b. If 𝐴x = b has a solution in N and if $M_i^{out} \xrightarrow{t_{i+1}}_N$ for 0 ≤ 𝑖 < 𝑛, then, for every F-double sequence (𝜂𝑖, 𝜇𝑖)_{0≤𝑖≤𝑛}, a number 𝑘0 ∈ N and words 𝛼𝑖, 𝛽𝑖 ∈ 𝑇𝑖* for 0 ≤ 𝑖 ≤ 𝑛 can be constructed, satisfying
• $M_i \xrightarrow{\beta_i}_{K_i} M_i$ and $M_i \xrightarrow{\alpha_i}_{K_i} M_i$
• ∀𝑘 ≥ 𝑘0: $\eta_0^k \beta_0 \alpha_0^k \mu_0^k\, t_1\, \eta_1^k \beta_1 \alpha_1^k \mu_1^k\, t_2 \ldots t_n\, \eta_n^k \beta_n \alpha_n^k \mu_n^k$ is an F-firing sequence, that is, a solution of F.

Proof: Using Theorem 18.33, it is possible to determine whether 𝐴x = b has a solution over N, and if so, to construct one. Suppose that 𝐴x = b has a solution over N. Let u1 ∈ N^{𝐸∪𝐼∪𝑂} be a solution constructed in this way. Let u_supp be a support solution of 𝐴x = 0 with 𝜋_{supp 𝐴}(u_supp) ≥ 1; such a support solution can also be constructed by Theorem 18.33. For u := u1 + u_supp ∈ N^{𝐸∪𝐼∪𝑂}, we have 𝜋_{supp 𝐴}(u) ≥ 1, and therefore, in particular, 𝜋_{𝐸𝑖}(u) ≥ 1 for the edge set 𝐸𝑖 of every seed 𝐾𝑖. Of course, u is also a solution of 𝐴x = b. Because u1 and u_supp satisfy Equation v), the same is true for u: $\sum_{e\in v^{\bullet}} u(e) = \sum_{e\in {}^{\bullet}v} u(e)$ for all 𝑣 ∈ 𝑉𝑖 with 0 ≤ 𝑖 ≤ 𝑛. A theorem by Euler allows us to construct a cycle 𝛽𝑖 from 𝑀𝑖 to 𝑀𝑖 in 𝐾𝑖 which satisfies P(𝛽𝑖) = 𝜋_{𝐸𝑖}(u) (≥ 1).
For 0 ≤ 𝑖 < 𝑛 and 𝑎 ∈ N let
u_{𝑖,𝑎} := 𝑎·𝜋_{𝐸𝑖}(u_supp) − P(𝜂𝑖 𝜇𝑖)
For sufficiently large 𝑎, we have 𝜋_{𝐸𝑖}(u_{𝑖,𝑎}) ≥ 1. Because 𝜂𝑖 𝜇𝑖 is a cycle in 𝐾𝑖, we also have $\sum_{e\in v^{\bullet}} P(\eta_i\mu_i)(e) = \sum_{e\in {}^{\bullet}v} P(\eta_i\mu_i)(e)$. Hence, and because Equation v) holds for u_supp, it is also true for u_{𝑖,𝑎}. As a consequence, we may again apply Euler’s theorem and find a sufficiently large 𝑎1 and a cycle 𝛼_{𝑖,𝑎1} from 𝑀𝑖 to 𝑀𝑖 in 𝐾𝑖 satisfying
P(𝛼_{𝑖,𝑎1}) = 𝜋_{𝐸𝑖}(u_{𝑖,𝑎1}) = 𝑎1·𝜋_{𝐸𝑖}(u_supp) − P(𝜂𝑖 𝜇𝑖)
Hence we also have
𝑎1·𝜋_{𝐸𝑖}(u_supp) = P(𝜂𝑖 𝛼_{𝑖,𝑎1} 𝜇𝑖)
Hence all edges of the seed 𝐾𝑖 occur in the cycle 𝜂𝑖 𝛼_{𝑖,𝑎1} 𝜇𝑖. We would like, in addition, that 𝜂𝑖 and 𝜇𝑖 are actually firable in our solution, and that after the firing of 𝜂𝑖 (respectively, before the firing of 𝜇𝑖), the place variables of the support are positive, that is, that each of these places carries at least one token. For this purpose, 𝑎1 may be too small. The next investigations serve to determine the minimum value of 𝑎1. It is possible to find sufficiently large numbers 𝑎2 to 𝑎6 satisfying


$a_2\cdot u_{supp}(i^{in}(s)) + W_N(\eta_i)(s) \ge 1$ for all 𝑠 ∈ 𝑆 with 𝑀𝑖(𝑠) = 𝜔
$a_3\cdot u_{supp}(i^{out}(s)) + W_{N^{rev}}(\mu_i^{rev})(s) \ge 1$ for all 𝑠 ∈ 𝑆 with 𝑀𝑖(𝑠) = 𝜔
$(u + a_4\cdot u_{supp})(i^{in}) \ge H_N(\eta_i)$
$(u + a_5\cdot u_{supp})(i^{out}) \ge H_{N^{rev}}(\mu_i^{rev})$
$(u + a_6\cdot u_{supp})(i^{out}) \ge H_N(t_{i+1})$
These inequalities can be solved by putting the usual linear-algebraic paraphernalia to work, and, in addition, by keeping in mind the fact that 𝜇𝑖 and 𝜂𝑖 are covering, as well as the corresponding facts about 𝑊(𝜇𝑖) and 𝑊(𝜂𝑖). The details are omitted at this point.
We calculate the values 𝑎1 to 𝑎6 for every seed. After that, let 𝑎 be the maximum over all calculated values. Let 𝛼𝑖 be the cycle 𝛼_{𝑖,𝑎} from 𝑀𝑖 to 𝑀𝑖 in 𝐾𝑖. For 0 ≤ 𝑖 ≤ 𝑛 and 𝑘 ∈ N let
$\widetilde{M}_{i,k}^{in} := (u + k\cdot a\cdot u_{supp})(i^{in}) \in \mathbb{N}^S$
$\widetilde{M}_{i,k}^{out} := (u + k\cdot a\cdot u_{supp})(i^{out}) \in \mathbb{N}^S$
The only thing that is left to be proved now is
$\widetilde{M}_{i,k}^{in} \xrightarrow{\eta_i^k \beta_i \alpha_i^k \mu_i^k} \widetilde{M}_{i,k}^{out} \xrightarrow{t_{i+1}} \widetilde{M}_{i+1,k}^{in}$
While 𝜂𝑖^𝑘 and 𝜇𝑖^𝑘 present no problems, since they are covering, the sequences 𝛽𝑖 and 𝛼𝑖^𝑘 may need a large number of tokens in order to be firable. This is why the factor 𝑘 has been introduced. Using 𝑘, the number of tokens can be increased (“pumped”) by 𝜂𝑖 and, later, decreased by 𝜇𝑖 in a suitable way, so that 𝜂0^𝑘 𝛽0 𝛼0^𝑘 𝜇0^𝑘 𝑡1 . . . 𝑡𝑛 𝜂𝑛^𝑘 𝛽𝑛 𝛼𝑛^𝑘 𝜇𝑛^𝑘 is the required solution of F, for a suitable 𝑘. It is sufficient to choose 𝑘 such that 𝑘 ≥ 𝐻𝑁(𝛽𝑖 𝛼𝑖)(𝑠) and 𝑘 ≥ 𝐻_{𝑁^rev}(𝛼𝑖^rev)(𝑠), for every place 𝑠 ∈ 𝑆 with 𝑀𝑖(𝑠) = 𝜔. ∎ 18.35

Example 18.36 Example 18.26 etc. continued
For the perfect seed-transition sequence from Example 18.26, we choose the double sequence (𝜂𝑖, 𝜇𝑖)_{0≤𝑖≤3} with 𝜂0 = 𝜇0 = 𝑡3 𝑡3 𝑡4, 𝜂1 = 𝜇1 = 𝜂2 = 𝜇2 = 𝜇3 = 𝜀, and 𝜂3 = 𝑡2 𝑡2 𝑡1. Determining the numbers 𝑎𝑗 for 1 ≤ 𝑗 ≤ 6 and for all four seeds, according to the proof just given, yields 𝑎1 = 2 for the seed 𝐾0, 𝑎1 = 𝑎3 = 1 for the seed 𝐾3, and 𝑎𝑗 = 0 for all other 𝑎𝑗. This gives 𝑎 = 2, and one gets 𝛽0 = 𝑡3 𝑡3 𝑡4, 𝛽1 = 𝛽2 = 𝛼0 = 𝛼1 = 𝛼2 = 𝜀, 𝛽3 = 𝑡3 𝑡3 𝑡4 𝑡2 𝑡2 𝑡1, and 𝛼3 = 𝑡3 𝑡3 𝑡4. According to the last part of the proof, we need to choose 𝑘 such that 𝑘 ≥ 𝐻𝑁(𝛽𝑖 𝛼𝑖)(𝑠) and 𝑘 ≥ 𝐻_{𝑁^rev}(𝛼𝑖^rev)(𝑠) for all places 𝑠 with 𝑀𝑖(𝑠) = 𝜔 in every seed 𝑖. Since we only have 𝑀3(𝑠2) = 𝜔, and since this place needs two tokens in order to allow the firing of 𝛽3 𝛼3 or of 𝛼3^rev (in 𝑁^rev), choosing 𝑘 = 2 suffices. Hence
𝜎 := 𝜂0^2 𝛽0 𝛼0^2 𝜇0^2 𝜂1^2 𝛽1 𝛼1^2 𝜇1^2 𝜂2^2 𝛽2 𝛼2^2 𝜇2^2 𝜂3^2 𝛽3 𝛼3^2 𝜇3^2 = (𝑡3 𝑡3 𝑡4)^5 (𝑡2 𝑡2 𝑡1)^2 𝑡3 𝑡3 𝑡4 𝑡2 𝑡2 𝑡1 (𝑡3 𝑡3 𝑡4)^2


is a possible firing sequence with
$M_0^{in} \ge_\omega \widetilde{M}_0^{in} = (0, 2, 0)^T \xrightarrow{\sigma} (0, 5, 0)^T = \widetilde{M}_3^{out} \le_\omega M_3^{out}$

18.7 Decomposition of seed-transition sequences

Now we show how an arbitrary marked seed-transition sequence F can effectively be decomposed into a finite set of perfect seed-transition sequences K, such that a sequence 𝜎 is a solution of F if, and only if, 𝜎 is a solution of one of the members of K. This solves the general seed-transition problem, since it has already been solved for perfect seed-transition sequences. The next definition describes a method for creating new seed-transition sequences from a set of given seed-transition sequences.

Definition 18.37 Decomposition of seed-transition sequences
We shall say that a set K2 of seed-transition sequences arises out of a finite set K1 of seed-transition sequences by decomposition, if K2 is obtained by means of the following procedure:
K2 := K1;
1: choose F = 𝐾0 𝑡1 . . . 𝑡𝑛 𝐾𝑛 in K2;
   choose 𝐾𝑖 in F;
   choose a finite set K of seed-transition sequences all of which are “simpler” than 𝐾𝑖;
   for all G in K:
     let F_G arise from F by replacing 𝐾𝑖 in F by G;
     K2 := K2 ∪ {F_G};
   K2 := K2 \ {F};
2: go back to 1 or stop.

In general, such a decomposition scheme is likely to lead to an explosion of the size of K2. However, in every step, we will replace only a single seed by a finite set of “simpler” seed-transition sequences. The key notion of “simpler” shall be defined in such a way that a seed-transition sequence is the simpler, the nearer it is to perfection:

Definition 18.38 Relation “simpler” on seed-transition sequences
Suppose that in a seed-transition sequence F = 𝐾0 𝑡1 . . . 𝑡𝑛 𝐾𝑛 the seed 𝐾𝑖 is replaced by a seed-transition sequence G = 𝐾̃0 𝑡̃1 . . . 𝑡̃𝑗 𝐾̃𝑗. The sequence so obtained is called simpler than F if every seed in G satisfies one of the following:
a) in the start state 𝑀𝑖, it has fewer 𝜔 entries than 𝐾𝑖; or
b) if a) yields equality, it has fewer edges than 𝐾𝑖; or


c) if both a) and b) yield equality, it has fewer 𝜔 entries in the input 𝑀𝑖^in than 𝐾𝑖; or
d) if a), b) and c) yield equality, it has fewer 𝜔 entries in the output 𝑀𝑖^out than 𝐾𝑖.
A seed-transition sequence G is called simpler than a seed-transition sequence F if G arises out of F by replacing seeds by simpler seed-transition sequences.

Now we shall apply an incarnation of this decomposition technique to a possibly nonperfect seed-transition sequence F. We start with the set {F}. If F is already perfect, we are done. Otherwise, we replace F (and in general, every seed-transition sequence in an intermediate set) by the members of a set of simpler seed-transition sequences, such that the replaced seed-transition sequence has a solution if and only if one of the replacing seed-transition sequences has a solution, i.e., the solution space is not changed. Eventually, we end up with a set KF of seed-transition sequences such that F has a solution iff some element of KF has a solution.

Theorem 18.39 Decomposition theorem
Let F be a marked seed-transition sequence. There exists a finite, effectively constructible set KF of perfect seed-transition sequences with
𝜎 is a solution of F ⇐⇒ ∃G ∈ KF : 𝜎 is a solution of G

Proof: If F is perfect, we are done. In the remaining part of the proof, we show that for every non-perfect F it is possible to construct a finite set K of simpler marked seed-transition sequences with the property that F is solvable iff one of the members of K is solvable. Since any nonperfect (intermediate) seed-transition sequence can be further refined, the procedure eventually stops with a finite set KF of perfect seed-transition sequences satisfying
𝜎 is a solution of F ⇐⇒ ∃G ∈ KF : G is perfect and 𝜎 is a solution of G    (18.1)
Suppose that F = 𝐾0 𝑡1 𝐾1 . . . 𝑡𝑛 𝐾𝑛 is a seed-transition sequence of 𝑁 which has the characteristic equation 𝐴x = b. There may be five possible reasons for F not being perfect:
(1) A seed 𝐾𝑖 has an 𝜔 entry in 𝑀𝑖^in but the corresponding variable does not belong to the support; (2) analogously for 𝑀𝑖^out; (3) one of the edge variables does not belong to the support; (4) 𝑀𝑖 is not the maximum of 𝑀𝑖^in in 𝐶𝑖 := Cov(𝐾𝑖, 𝑀𝑖, 𝑁, 𝑀𝑖^in) (compare Definition 18.30); (5) 𝑀𝑖 is not the maximum of 𝑀𝑖^out in 𝐶𝑖′ := Cov(𝐾𝑖^rev, 𝑀𝑖, 𝑁^rev, 𝑀𝑖^out).
We shall analyse these five cases and show how one can deal with them.
Case 1: ∃𝑖 ∈ {0, . . . , 𝑛} ∃𝑠 ∈ 𝑆: (𝑀𝑖^in(𝑠) = 𝜔 ∧ 𝑖^in(𝑠) ∉ supp 𝐴).


By Theorem 18.33, we can calculate the finite set of values which 𝑖^in(𝑠) can have. For each of these values, we create a copy of F and replace the offending 𝜔 by this value. The solution space remains invariant in the sense specified above, and all modified copies of F are simpler than F.
Case 2: ∃𝑖 ∈ {1, . . . , 𝑛} ∃𝑠 ∈ 𝑆: (𝑀𝑖^out(𝑠) = 𝜔 ∧ 𝑖^out(𝑠) ∉ supp 𝐴). Analogous to Case 1.
Case 3: ∃𝑖 ∈ {0, . . . , 𝑛} ∃𝑒̂ ∈ 𝐸𝑖: 𝑒̂ ∉ supp 𝐴. By Theorem 18.33, we can calculate how often the edge 𝑒̂ can be traversed. For every possible number 𝑗 of traversals of 𝑒̂, we replace the seed 𝐾𝑖 by seed-transition sequences
$G = G_{M_i, M_{\hat e}}\; (\hat e\, G_{M_{\hat e}', M_{\hat e}})^{j-1}\; \hat e\, G_{M_{\hat e}', M_i}$.
The seed-transition sequences G_{𝑦,𝑧} have the form G_{𝑦,𝑧} = G_{𝑦0} 𝑡1 G_{𝑦1} 𝑡2 . . . 𝑡𝑘 G_{𝑦𝑘}, where $y = y_0 \xrightarrow{t_1} y_1 \xrightarrow{t_2} \ldots \xrightarrow{t_k} y_k = z$ is a simple (cycle-free) path in 𝐾𝑖 without the edge 𝑒̂, and G_{𝑦𝑟} is the strongly connected component of 𝐾𝑖\{𝑒̂} with start node 𝑦𝑟. That is, one has to take into account all (finitely many) possible values of traversals of 𝑒̂, as well as all (finitely many) simple paths in 𝐾𝑖\{𝑒̂}. The overall input and output of the seed-transition sequence so constructed are marked as in 𝐾𝑖, and the inner inputs and outputs of the sequence are marked with the start states of the corresponding seeds. This creates finitely many new seed-transition sequences, but every one of them is simpler than the original one, and the solution space is not changed (in the above sense).
Case 4: ∃𝑖 ∈ {0, . . . , 𝑛}: 𝑀𝑖 ≠ max_{𝐶𝑖} 𝑀𝑖^in. By Corollary 18.18, max 𝑀𝑖^in ≤_𝜔 𝑀𝑖. Because of max 𝑀𝑖^in ≠ 𝑀𝑖, we must have max 𝑀𝑖^in <_𝜔 𝑀𝑖. This implies that max 𝑀𝑖^in has fewer 𝜔-coordinates than 𝑀𝑖, i.e., it is more specific. We replace 𝐾𝑖 by all possible seed-transition sequences
G_{𝑀𝑖^in, 𝑀⁺} = G_{𝑦0} 𝑡1 G_{𝑦1} 𝑡2 . . . 𝑡𝑘 G_{𝑦𝑘},
where $M_i^{in} = y_0 \xrightarrow{t_1} y_1 \xrightarrow{t_2} \ldots \xrightarrow{t_k} y_k = M^+ \le_\omega \max M_i^{in}$ is a simple (cycle-free) path in 𝐶𝑖, and G_{𝑦𝑗} is the strongly connected component in 𝐶𝑖 containing the node 𝑦𝑗. We mark every seed G_{𝑦𝑗} by 𝑦𝑗 as a start, input, and output, except for the output of the last seed. This output is marked by min{𝑀𝑖^out, 𝑀⁺} (where 𝑀𝑖^out and 𝑀⁺ agree on their finite components anyhow). The solution space is not modified, and all sequences so constructed are simpler than 𝐾𝑖, since they have fewer 𝜔 entries in the start states of all of their seeds.
Case 5: ∃𝑖 ∈ {0, . . . , 𝑛}: 𝑀𝑖 ≠ max_{𝐶𝑖′} 𝑀𝑖^out. By symmetry of 𝑁, 𝑀𝑖^in, 𝑀𝑖, compared with 𝑁^rev, 𝑀𝑖^out, 𝑀𝑖, this is completely analogous to Case 4.
By definition (of perfectness), there are no other possibilities for a seed-transition sequence not to be perfect. Since we keep replacing seeds by simpler seeds or by simpler seed-transition sequences, the construction terminates and results in a finite set KF of perfect seed-transition sequences satisfying (18.1), as was claimed. ∎ 18.39

Theorem 18.39 reduces the seed-transition problem to perfect seed-transition sequences. Theorem 18.34 demonstrates the equivalence of the solvability of perfect


seed-transition sequences with the solvability of linear equation systems 𝐴x = b over N, which is effective by Theorem 18.33. This proves the decidability of the seed-transition problem, and hence, by Lemma 18.24, the decidability of the reachability problem for Petri nets.
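Taken together, Sections 18.5–18.7 describe a decision procedure. The following Python-style sketch shows only its top-level control flow; all helper names (is_perfect, decompose_one_step, characteristic_equation_of, has_natural_solution, enables, connector_checks, trivial_seed_sequence) are hypothetical placeholders standing for the constructions of Definition 18.30, Theorem 18.39, Definition 18.25, Theorem 18.33, the firing rule, and Lemma 18.24, respectively.

```python
def seed_transition_problem(F):
    """Decide whether the marked seed-transition sequence F has a solution.

    High-level driver only: the helper functions are placeholders for the
    constructions described in this chapter.
    """
    worklist = [F]
    while worklist:
        G = worklist.pop()
        if not is_perfect(G):                        # Definition 18.30
            worklist.extend(decompose_one_step(G))   # Theorem 18.39, Cases 1-5
            continue
        # Theorem 18.34: for perfect G, solvability reduces to solving the
        # characteristic equation over N plus enabledness of the connectors.
        A, b = characteristic_equation_of(G)
        if has_natural_solution(A, b) and all(
            enables(M_out, t) for (M_out, t) in connector_checks(G)
        ):
            return True
    return False

def reachability(N, M1, M2):
    """Lemma 18.24: RP reduces to the trivial seed-transition sequence."""
    return seed_transition_problem(trivial_seed_sequence(N, M1, M2))
```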

18.8 Bibliographical remarks and further reading

The pointers to the literature in the original text by Harro Wimmel have been left intact (see Section 18.4 above). Proofs that have been omitted in this chapter can be found in [PW03] (Lutz Priese and Harro Wimmel) and in [Wim08] (Harro Wimmel). See https://en.wikipedia.org/wiki/Seven_Bridges_of_Koenigsberg for the theorem by Leonhard Euler used in the proof of Theorem 18.35.
Recently, the reachability problem has been shown to be Ackermann-complete [LS15] (Jérôme Leroux and Sylvain Schmitz), [Cze+21] (Wojciech Czerwinski, Slawomir Lasota, Ranko Lazic, Jérôme Leroux, and Filip Mazowiecki), and [CO21] (Wojciech Czerwinski and Lukasz Orlikowski). In the proofs of these results, variants of the non-persistent net shown in Figure 17.7 play a prominent role.
The reachability problem has been used as a basis serving to decide many other problems related to Petri nets (see [Wim08] (Harro Wimmel) for a compendium up to 2007; other problems have joined the list since then). As a rule of thumb, many problems related to the reachability of markings turn out to be decidable, while problems related to Petri net languages in general tend to be undecidable. Amongst the undecidable problems related to Petri nets are Rabin’s result (already mentioned in Section 10.7) and the language equivalence problem for labelled nets [Hac75a] (by Michel Hack).
If Petri nets are extended by inhibitor arcs (see Section 8.6), then the reachability problem becomes undecidable. This is even true if only two inhibitor arcs are allowed [Hac75a]. The intuitive reason is that two inhibitor arcs can model two different counters, and two counters are sufficient to model Turing machines. However, one inhibitor arc is not enough: Klaus Reinhardt proved that reachability remains decidable even if (at most) one inhibitor arc is allowed [Rei08].


18.9 Exercises

Exercise 18.1 Reachability of (semi-)linear sets
• Consider the linear reachability set decision problem LDEC:
Decision Problem 18.40 LDEC
Input: A number 𝑛 ≥ 1, an unmarked Petri net 𝑁, two vectors 𝑣, 𝑤 ∈ N^𝑛, and two finite sets 𝑃, 𝑄 ⊆ N^𝑛 with 𝑃 = {𝑝1, . . . , 𝑝|𝑃|} and 𝑄 = {𝑞1, . . . , 𝑞|𝑄|}.
To decide: Are there markings 𝑀1 ∈ (𝑣 + N·𝑝1 + . . . + N·𝑝|𝑃|) and 𝑀2 ∈ (𝑤 + N·𝑞1 + . . . + N·𝑞|𝑄|) such that in 𝑁, 𝑀2 ∈ [𝑀1⟩?
In other words, LDEC asks to decide whether some marking out of a linear set 𝑄 can be reached from some marking out of a linear set 𝑃. Using the decidability of the reachability problem, show that LDEC is decidable.
• Further, show that the semilinear reachability set problem SLDEC is decidable, where SLDEC is defined as LDEC, except that finite unions of linear sets are given as the initial and the final markings.

Exercise 18.2 Decidability of persistence
Using Exercise 18.1, give an alternative proof of the decidability of persistence (Section 17.6). Hint: Fix two different transitions 𝑡1, 𝑡2 and describe semi-linearly what it means for a marking not to be persistent with respect to these two transitions.

Solution 18.1 (Exercise 18.1)
See Figure 18.9. Initially, only place 𝑠1 has a token, and all other places are token-free; call this marking 𝑀0. The transitions 𝑡𝑝𝑖 (one for each vector 𝑝𝑖) create on 𝑁 a marking out of the set {𝑝1, . . . , 𝑝|𝑃|}. The transition 𝑡𝑣 adds to this marking the vector 𝑣 and, at the same time, disables the creation of any more 𝑝𝑖 and marks the run place with a token. The run place is connected to every transition of 𝑁 by a side loop. As long as run has a token, the net 𝑁 can be run, generating reachable markings on 𝑁. The transition 𝑡𝑤 subtracts from any one of these follower markings the vector 𝑤. After that, 𝑁 can no longer be run, but any linear combination of vectors out of the set {𝑞1, . . . , 𝑞|𝑄|} can be subtracted. If a marking 𝑀0′ is reached which has a token on 𝑠2 but no tokens elsewhere, we know that there are two markings 𝑀1 and 𝑀2 of 𝑁 as required in LDEC (𝑀2 being reachable from 𝑀1 in 𝑁). Hence LDEC has been reduced to the question whether 𝑀0′ can be reached from 𝑀0, and this question is decidable by the main result of the present chapter.
The problem SLDEC can be decided easily,7 since the allowed sets of initial and final markings are just finite unions of linear sets. Hence we just have to apply LDEC as many times as there are pairs (initial and final ones) of such sets.
7 Not to be confused with “efficiently”.



Fig. 18.9: Solution of Exercise 18.1.
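The construction of Figure 18.9 can also be written down as a small Python sketch. The net encoding below (a transition as a pair of input/output arc-weight dicts over place names) is a hypothetical layout of ours; feeding the resulting net together with the markings 𝑀0 and 𝑀0′ to any reachability decision procedure settles LDEC.

```python
def ldec_net(transitions, v, w, P, Q):
    """Build the net of Fig. 18.9 around an unmarked Petri net N.

    transitions : dict name -> (pre, post); pre/post map N-places to arc weights
    v, w        : dicts place -> int (the base vectors of the linear sets)
    P, Q        : lists of dicts place -> int (the period vectors)
    Returns (new_transitions, M0, M0_target).
    """
    T = {}
    # every original transition keeps a side loop with the place "run"
    for name, (pre, post) in transitions.items():
        T[name] = ({**pre, "run": 1}, {**post, "run": 1})
    # t_pi: while s1 is marked, add p_i to the places of N (repeatable)
    for idx, p in enumerate(P):
        T[f"t_p{idx}"] = ({"s1": 1}, {**p, "s1": 1})
    # t_v: add v, disable the t_pi, start the "run" phase
    T["t_v"] = ({"s1": 1}, {**v, "run": 1})
    # t_w: subtract w, stop the "run" phase, enable the t_qi
    T["t_w"] = ({**w, "run": 1}, {"s2": 1})
    # t_qi: while s2 is marked, subtract q_i (repeatable)
    for idx, q in enumerate(Q):
        T[f"t_q{idx}"] = ({**q, "s2": 1}, {"s2": 1})
    M0 = {"s1": 1}          # one token on s1, everything else empty
    M0_target = {"s2": 1}   # one token on s2, everything else empty
    return T, M0, M0_target
```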

Solution 18.2 (Exercise 18.2)
Let (𝑁, 𝑀0) be an initially marked Petri net and let 𝑡 ≠ 𝑡′ be two different transitions. We want to describe the set of markings 𝑀 which enable both 𝑡 and 𝑡′, i.e., $M \xrightarrow{t}$ and $M \xrightarrow{t'}$, but do not enable the sequence 𝑡𝑡′, i.e., $\neg(M \xrightarrow{t t'})$, because in that case, persistence is violated. By the firing rule, the sequence 𝑡𝑡′ is not enabled at 𝑀 if and only if
∃𝑠 ∈ •𝑡 ∩ •𝑡′ : 𝑀(𝑠) < 𝐹(𝑠, 𝑡) − 𝐹(𝑡, 𝑠) + 𝐹(𝑠, 𝑡′)
Let the period vectors 𝑣𝑖 be constructed for indices 𝑖 satisfying
max{𝐹(𝑠, 𝑡), 𝐹(𝑠, 𝑡′)} ≤ 𝑖 < 𝐹(𝑠, 𝑡) − 𝐹(𝑡, 𝑠) + 𝐹(𝑠, 𝑡′)
Here, 𝑖 is meant to be a “critical” marking of 𝑠 (prohibiting the execution of 𝑡𝑡′), which is still so large that 𝑠 disables neither 𝑡 nor 𝑡′ (hence the max{𝐹(𝑠, 𝑡), 𝐹(𝑠, 𝑡′)} lower bound). All other places may not disable 𝑡, 𝑡′ either. For this reason,
𝑣𝑖(𝑟) = 𝑖 if 𝑟 = 𝑠, and 𝑣𝑖(𝑟) = max{𝐹(𝑟, 𝑡), 𝐹(𝑟, 𝑡′)} if 𝑟 ≠ 𝑠.
The period vectors 𝑝𝑟, for 𝑟 ≠ 𝑠, are defined as the “unit vectors” 𝑝𝑟(𝑟) = 1 and 𝑝𝑟(𝑟′) = 0 for 𝑟′ ≠ 𝑟. They describe possible token increases on places other than 𝑠. The linear set
$P_i = v_i + \sum_{r\in S\setminus\{s\}} \mathbb{N}\cdot p_r$

describes all markings in which all places carry sufficiently many tokens to fire both 𝑡 and 𝑡 ′ , but in which 𝑠 carries only 𝑖 tokens, which is not enough to fire 𝑡𝑡 ′ . With Exercise 18.1, this test can be carried out for every pair 𝑡 ≠ 𝑡 ′ of transitions (and all places in • 𝑡 ∩ • 𝑡 ′ ), solving persistence.
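The period-vector construction of Solution 18.2 is mechanical enough to spell out as code. The sketch below uses a hypothetical encoding of the flow relation (F as a dict from (place, transition) or (transition, place) pairs to arc weights) and returns, for one pair 𝑡, 𝑡′, the linear sets 𝑃𝑖 described above.

```python
def non_persistent_linear_sets(places, F, t, t_prime):
    """Linear sets of markings that enable t and t' but not the sequence t t'.

    Returns a list of (base_vector, period_vectors) pairs, one for every
    critical place s and every critical token count i, as in Solution 18.2.
    """
    linear_sets = []
    pre_t  = {s for s in places if F.get((s, t), 0) > 0}
    pre_t2 = {s for s in places if F.get((s, t_prime), 0) > 0}
    for s in pre_t & pre_t2:
        lower = max(F.get((s, t), 0), F.get((s, t_prime), 0))
        upper = F.get((s, t), 0) - F.get((t, s), 0) + F.get((s, t_prime), 0)
        for i in range(lower, upper):            # the "critical" counts on s
            base = {
                r: (i if r == s else max(F.get((r, t), 0), F.get((r, t_prime), 0)))
                for r in places
            }
            periods = [{r: 1} for r in places if r != s]   # unit vectors p_r
            linear_sets.append((base, periods))
    return linear_sets
```

Applying LDEC (Exercise 18.1) with each of these linear sets as the target set, for every pair of distinct transitions, decides persistence.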

Chapter 19

The Box Algebra 1/2: Refinement and Recursion

Abstract In the early days of parallel computing theory, Petri nets have been competing with process algebras for the attention of computer science theorists. These two models are distinguished by their inheritance: Petri nets stem from automata theory, and process algebras are abstractions from programming languages. As a consequence, while Petri nets cannot, in general, easily be constructed in a modular fashion out of small building blocks by well-defined syntactic rules, process-algebraic expressions can. Because both models feature concurrency (Petri nets implicitly, and process algebras explicitly, usually by an operation of parallel composition), multiple efforts have been underway ever since to find connections between them. This chapter and the next describe some key aspects of a particular one of these efforts (entitled the “box algebra”). This highlights, in a pars pro toto way, some issues (and limitations) that such efforts are likely to encounter. The present chapter concentrates on basic operations and on recursion, while the next chapter adds iteration and the treatment of data.

Contents
19.1 Introduction
19.2 Motivation
19.3 Simultaneous refinement and relabelling of nets
19.4 Compositional, denotational Petri net semantics
19.5 Compositional, operational Petri net semantics
19.6 Recursion
19.7 Bibliographical remarks and further reading
19.8 Exercises

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 E. Best and R. Devillers, Petri Net Primer, Computer Science Foundations and Applied Logic, https://doi.org/10.1007/978-3-031-48278-6_19



19.1 Introduction

Petri nets are not per se modular. Their definition does not specify a way of constructing larger nets from smaller building blocks. This is in stark contrast to the definition of programming languages, or of process algebras, which offer, from the start, a syntactic way of producing large programs (or process algebraic expressions) from smaller programs (or expressions).
Modular ways of composing or decomposing Petri nets can become helpful, and even very important, in practice. Various small examples have already been described, for instance in Chapters 2, 10, and 13. As another example, suppose that, for some reason, one wishes to describe a (parallel) system not only by a concurrent program or by a process-algebraic expression but also by a behaviourally equivalent Petri net, then it will be almost mandatory that the construction of such a net takes place in a syntax-driven, modular, way.
The present chapter and the next (Chapter 20) describe a general method for achieving compositionality and modularity in Petri nets. This method will essentially consist of providing Petri nets with interfaces in such a way that smaller nets can be “glued” to each other via interfaces, yielding a larger net. In its first half, the present chapter describes this interface mechanism. It will be explained on an idealised, very simple, process algebra which resembles an already well-established process algebra called CCS (Calculus of Communicating Systems). At first, this will be done without taking into account repetitive operators such as recursion or iteration. In the second half of the present chapter, recursion will be added. Chapter 20 will then be devoted to iteration (in its first half) and on data (in its second half), eventually obtaining Petri net semantics for a full parallel programming language.

19.2 Motivation

At around the same time as Carl Adam Petri and others first developed net theory, so-called high-level programming languages were being designed by which it was possible to translate algorithms from human-readable format into computer machine language. Although programs were initially meant to be executed on a single processor, the need soon arose to take into account concurrency and synchronisation between interacting components of a computer system. Process algebras such as CCS (Calculus of Communicating Systems), CSP (Communicating Sequential Processes) and COSY (Concurrent Systems Notation) have consequently been invented and investigated. They abstract from the data aspects of atomic actions and focus, instead, on the specification of control structures (such as sequential composition and parallel composition), with a particular emphasis on parallelism and synchronisation.
As a first motivating example, we consider a very simple “toy” process algebra


VS.TOYA:
  𝐸 ::= 𝑎       (atomic action execution)
      | 𝐸 + 𝐸   (choice)
      | 𝐸; 𝐸    (sequential execution)
      | 𝐸 ∥ 𝐸   (parallel execution)                                  (19.1)
where 𝑎 denotes a letter from some alphabet, representing some atomic action of some program. The VS.TOYA syntax allows process-algebraic expressions 𝐸 to be constructed. They mirror the most common basic control flow composition operators in sequential programs (sequence and choice), along with parallel composition. Notably absent are iterations and recursions; they will be discussed later in this chapter and the next.

Example 19.1 Some examples
The expression 𝑎 specifies action 𝑎 to be executed once, after which execution is finished. The expression 𝑎 + 𝑏 denotes either 𝑎 or 𝑏 to be executed, after either of which execution is finished. The expression 𝑎; 𝑏 specifies the execution of 𝑎, followed after termination by the execution of 𝑏. The expression 𝑎 ∥ 𝑏 denotes both 𝑎 and 𝑏 to be executed in parallel (without any interference). The expression (𝑎; 𝑏) ∥ 𝑐 specifies the execution of 𝑎 followed by 𝑏, in parallel with 𝑐, again without any interference.

Real programming languages, even the oldest ones, have lots more features. Crucially, there need to be ways of manipulating data. However, since VS.TOYA differs from classic languages in terms of the parallel operator, it seems justifiable, for the moment, to abstract away from data-related features in order to study ∥ in its own right, and its interplay with other control flow constructs such as choice and sequential composition. For instance, an atomic action 𝑎 could be some assignment using program variables. However, for the moment, we do not care about the internal structure of an atomic action.1
We now examine how VS.TOYA expressions can be modeled faithfully by means of Petri nets. Modelling process-algebraic expressions by Petri nets seems to be a reasonable idea, because parallelism plays an important role in Petri net theory, and because control structure aspects are more easily modelled in terms of nets than data structure aspects. Figure 19.1 shows examples. Notice how an outward-branching place effects the choice between 𝑎; 𝑏 and 𝑐; 𝑑 in the net for (𝑎; 𝑏)+(𝑐; 𝑑). In the net for (𝑎 ∥ 𝑏); (𝑐 ∥ 𝑑), care has been taken that both 𝑐 and 𝑑 can be executed only if and when both 𝑎 and 𝑏 have ended, regardless of the temporal relationship between 𝑎 and 𝑏. The net for (𝑎 ∥ 𝑏) + (𝑐 ∥ 𝑑) prevents both 𝑐 and 𝑑 being chosen for execution as soon as one of 𝑎 or 𝑏 has started, and conversely.
The 𝑒 and 𝑥 annotation of places has the following informal meaning: if all 𝑒-places are marked with one token (and no other place is), then execution can start; and


Fig. 19.1: Nets for 𝑎, (𝑎; 𝑏)+(𝑐; 𝑑), (𝑎 ∥ 𝑏); (𝑐 ∥ 𝑑), and (𝑎 ∥ 𝑏)+(𝑐 ∥ 𝑑), respectively.

when execution is finished, all 𝑥-places are marked (with one token) and all others are unmarked. There are also six places in this figure which are not annotated by 𝑒 or by 𝑥. These places denote intermediate (or internal) states. Later, we shall use an 𝑖 label for such places. The 𝑒/𝑖/𝑥 annotations are useful for creating the net corresponding to an expression compositionally. To see the idea, consider the VS.TOYA syntax tree of, say, (𝑎; 𝑏) + (𝑐; 𝑑). As in a brick building game, the construction of the corresponding net proceeds by starting with four basic building bricks, namely the nets for 𝑎, 𝑏, 𝑐, and 𝑑. Then we arrange the nets for 𝑎 and 𝑏 in a sequence and do the same with the nets for 𝑐 and 𝑑. Finally, we combine the two resulting nets in an outermost choice construct. The other nets in Figure 19.1 can be constructed in a similar fashion. In fact, we may suspect a general principle behind these constructions. Indeed, one can invent a general refinement operator on Petri nets in order to describe all of these control flow combinations. In the simple algebra VS.TOYA, not only data, but also potential synchronisation has been neglected. The parallel composition operator ∥ does not allow any synchronisation or interaction between the two expressions it combines. This differs from CCS where ∥ embodies synchronisations as well. Therefore, we will need to add a synchronisation mechanism to VS.TOYA. It turns out that a general relabelling operator is capable of expressing a large class of synchronisation and interaction regimes, including the CCS variant. General refinements and relabellings will be the subject of the next section.
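To make the “brick building” idea concrete before moving on, here is a small Python sketch of a compositional translation from VS.TOYA expressions to 𝑒/𝑖/𝑥-annotated nets. The place multiplication used for “;” and “+” is the one visible in Figure 19.1; the data layout (dicts of places, transitions and arc sets) and the helper names are ours, not the book’s, and arc weights are kept plain (0 or 1).

```python
import itertools

class Net:
    """An e/i/x-annotated labelled net (hypothetical layout)."""
    def __init__(self):
        self.place_label = {}   # place id -> 'e' | 'i' | 'x'
        self.trans_label = {}   # transition id -> action name
        self.pre = {}           # transition id -> set of input places
        self.post = {}          # transition id -> set of output places

_ids = itertools.count()
def _fresh(prefix):
    return f"{prefix}{next(_ids)}"

def atom(a):
    """Net for an atomic action: one e-place, one a-labelled transition, one x-place."""
    n, e, x, t = Net(), _fresh("p"), _fresh("p"), _fresh("t")
    n.place_label = {e: "e", x: "x"}
    n.trans_label[t] = a
    n.pre[t], n.post[t] = {e}, {x}
    return n

def _union(n1, n2):
    """Disjoint union (ids are globally fresh already)."""
    n = Net()
    n.place_label = {**n1.place_label, **n2.place_label}
    n.trans_label = {**n1.trans_label, **n2.trans_label}
    n.pre = {t: set(s) for t, s in list(n1.pre.items()) + list(n2.pre.items())}
    n.post = {t: set(s) for t, s in list(n1.post.items()) + list(n2.post.items())}
    return n

def _multiply(n, left, right, new_lbl):
    """Place multiplication: replace the places in `left` and `right` by one
    new place per pair; each new place inherits the arcs of both factors."""
    for p1, p2 in itertools.product(left, right):
        new = _fresh("p")
        n.place_label[new] = new_lbl
        for t in n.trans_label:
            if p1 in n.pre[t] or p2 in n.pre[t]:
                n.pre[t].add(new)
            if p1 in n.post[t] or p2 in n.post[t]:
                n.post[t].add(new)
    for p in set(left) | set(right):
        del n.place_label[p]
        for t in n.trans_label:
            n.pre[t].discard(p)
            n.post[t].discard(p)
    return n

def _places(n, lbl):
    return [p for p, l in n.place_label.items() if l == lbl]

def par(n1, n2):      # E1 || E2 : disjoint union, no interaction
    return _union(n1, n2)

def seq(n1, n2):      # E1 ; E2 : x-places of n1 multiplied with e-places of n2
    return _multiply(_union(n1, n2), _places(n1, "x"), _places(n2, "e"), "i")

def choice(n1, n2):   # E1 + E2 : entries with entries, exits with exits
    n = _multiply(_union(n1, n2), _places(n1, "e"), _places(n2, "e"), "e")
    return _multiply(n, _places(n1, "x"), _places(n2, "x"), "x")

# For instance, the net for (a;b) + (c;d) of Figure 19.1:
net = choice(seq(atom("a"), atom("b")), seq(atom("c"), atom("d")))
```

Running the last line yields one shared 𝑒-place branching into the 𝑎- and 𝑐-transitions, two internal 𝑖-places, and one shared 𝑥-place, exactly as in the figure; the general refinement operator of the next section subsumes these ad-hoc combinators.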


19.3 Simultaneous refinement and relabelling of nets

We present two ways of manipulating a Petri net, simultaneous refinement (Section 19.3.1), and general relabelling (Section 19.3.2). In Section 19.3.3, key behavioural properties of these operations are discussed. Refinement and relabelling are orthogonal to each other, so that they can be combined into a single operation called refinement with relabelling.

19.3.1 Simultaneous refinement

Let 𝑡 be some transition in a net 𝑁. We intend to define the refinement of 𝑡 by a second net, 𝑁′, and we assume that all places of 𝑁′ are annotated either by 𝑒, or by 𝑖, or by 𝑥. Just as the execution of 𝑡 subtracts tokens from all of its input places and adds tokens to all of its output places atomically, the intention is that 𝑁′ can be viewed (when fully executed) as moving tokens from all of its 𝑒 places to all of its 𝑥 places – not necessarily atomically, but possibly through some more or less lengthy computation (which, however, does not produce an intermediate state in which all 𝑥 places are already marked). It is therefore reasonable to combine, in some suitable way,2 the 𝑒 places of 𝑁′ with the input places •𝑡 of 𝑡, and likewise, the 𝑥 places of 𝑁′ with the output places 𝑡• of 𝑡.

Example 19.2 A refinement example
Consider the example in Figure 19.2. We wish to refine transition 𝑡 in the net 𝑁 by the net 𝑁′. When 𝑡 is replaced by 𝑁′, the two 𝑒-labelled places of 𝑁′ are merged with the three input places •𝑡 of 𝑡 in 𝑁,a and the two 𝑥-labelled places of 𝑁′ are merged with the two output places 𝑡• of 𝑡 in 𝑁. The 𝑖-labelled places shall be taken over without change, and all places inherit their annotations from 𝑁 (rather than from 𝑁′). Note that the 𝑖 places of 𝑁 are involved in the refinement. This is because they are in •𝑡 ∪ 𝑡•. By contrast, the 𝑖 places of 𝑁′ are not involved in the refinement. This is because they are not also 𝑒 or 𝑥 places.b
a

One of them is not annotated by 𝑒 but by 𝑖. The incoming places of transitions have been marked by a “small token”, because they are the places where one wants to see tokens if 𝑡 is enabled in the original 𝑁 . Later, it will be made sure that no initial or final token lands on an 𝑖 place. b Annotating a place by more than one of the labels 𝑒, 𝑖, 𝑥 will be avoided.

Refinement is a purely structural notion. No claims can yet be made about the behaviour of a refinement result with regard to the behaviours of the two constituent nets when they are provided with some (initial) marking. Of course, the orderly distribution of incoming/outgoing places of refined transitions and the 𝑒, 𝑖 and 𝑥 places of refining nets has to be sorted out (later, in the formal definition) so as to be able to make such statements. 2 Much like the place multiplication already used in Theorem 5.31 (Section 5.6), as well as in our previous examples (𝑎 ∥ 𝑏); (𝑐 ∥ 𝑑) and (𝑎 ∥ 𝑏) + (𝑐 ∥ 𝑑) shown in Figure 19.1.


Fig. 19.2: Refining 𝑡 in 𝑁 (above left) by 𝑁′ (above right) yields Ñ = 𝑁[𝑡 ← 𝑁′] (below). The transition 𝑡′ and its surrounding arcs (the blue ones) are not affected by the refinement, except for suitable arc replications. (In the figure, 𝑡 is replaced by the inner part of 𝑁′, 𝑡′ is retained, •𝑡 is duplicated by the 𝑒 places of 𝑁′, and 𝑡• is duplicated by the 𝑥 places of 𝑁′.)

Observation 19.3 Re-usability
𝑁 and 𝑁′ play different roles in Ñ = 𝑁[𝑡 ← 𝑁′], 𝑁 providing the transition 𝑡 to be refined, 𝑁′ being the net to be substituted for 𝑡. Nevertheless, due to the fact that both 𝑁 and 𝑁′ are annotated by 𝑒/𝑖/𝑥, the refinement result Ñ can be used either as an 𝑁 or as an 𝑁′ in further refinement steps. For instance, continued refinements such as
N̂[𝑡′ ← 𝑁[𝑡 ← 𝑁′]]
and repeated refinements such as
𝑁[𝑡 ← 𝑁′][𝑡′ ← 𝑁′′] = Ñ[𝑡′ ← 𝑁′′]
are well-defined.


Observation 19.4 Exchangeability of refinement
It may be checked on the example in Figure 19.2 that the nets 𝑁[𝑡 ← 𝑁′][𝑡′ ← 𝑁′] and 𝑁[𝑡′ ← 𝑁′][𝑡 ← 𝑁′] are isomorphic, and this is true even if the refining nets are not the same. In general, 𝑁[𝑡1, . . . , 𝑡𝑛 ← 𝑁1′, . . . , 𝑁𝑛′] is therefore well-defined, meaning that the 𝑡𝑖 are simultaneously substituted, respectively, by 𝑁𝑖′, for 1 ≤ 𝑖 ≤ 𝑛.

The next definition formalises the above intuitions. In it, 𝑡1, . . . , 𝑡𝑚 shall be refined simultaneously by 𝑁1, . . . , 𝑁𝑚, and 𝑢1, . . . , 𝑢𝑘 shall be refined simultaneously by 𝑁1′, . . . , 𝑁𝑘′.

Definition 19.5 Simultaneous refinement
Let 𝑁 be a plain net (with arc weights 0 or 1), and let 𝑠 be a place in 𝑁. Assume 𝑡1, . . . , 𝑡𝑚 ∈ •𝑠 and 𝑢1, . . . , 𝑢𝑘 ∈ 𝑠•. Let 𝑁1, . . . , 𝑁𝑚 and 𝑁1′, . . . , 𝑁𝑘′ be 𝑒/𝑖/𝑥-annotated nets such that every place carries exactly one of the labels 𝑒, 𝑖 or 𝑥. For every combination of 𝑚 + 𝑘 places 𝑠1, . . . , 𝑠𝑚, 𝑠1′, . . . , 𝑠𝑘′, such that 𝑠𝑗 is an 𝑥-annotated place of 𝑁𝑗 (1 ≤ 𝑗 ≤ 𝑚) and 𝑠𝑙′ is an 𝑒-annotated place of 𝑁𝑙′ (1 ≤ 𝑙 ≤ 𝑘), the refinement result shall contain a new place. The arcs around this place (and its annotation) are inherited canonically from the arcs of 𝑁, 𝑁1, . . . , 𝑁𝑚, 𝑁1′, . . . , 𝑁𝑘′ and the annotations of 𝑁 (respectively, the 𝑖 annotations of 𝑁1, . . . , 𝑁𝑚, 𝑁1′, . . . , 𝑁𝑘′).

Figure 19.2 illustrates what is meant by “canonical inheritance”:

Example 19.6 Figure 19.2
Places 16 and 17 are annotated by 𝑖 because place 1 carries an 𝑖. Places 48 and 49 carry an 𝑥 because place 4 in 𝑁 is annotated by 𝑥. The 𝑖 annotations of 𝑁′ are bequeathed unchanged to the corresponding places in 𝑁[𝑡 ← 𝑁′]. Let us examine the arcs around places 16, 26, 36 and 17, 27, 37 according to Definition 19.5. For instance, for place 1 of 𝑁, 𝑚 = 0 and 𝑘 = 1 with 𝑁1′ = 𝑁′, since only the output transition 𝑡, but no input transition, of 1 is refined. So, every combination of 1 with the 𝑒 places of 𝑁′ is considered, and 16 and 17 are obtained. Analogously, 26 and 27 are constructed from place 2, and 36 and 37 are obtained from place 3.

Example 19.7 In Figure 19.3, 𝑡 is refined by 𝑁′
Definition 19.5 does not exclude the possibility that 𝑡 is part of a loop; see Figure 19.3. Consider place 1 in 𝑁. This place has 𝑡 both as an input and as



an output transition. In terms of Definition 19.5, 𝑚 = 1 with 𝑁1 = 𝑁′, and 𝑘 = 1 with 𝑁1′ = 𝑁′. Hence place 1 is merged with all combinations, 24, 25, 34 and 35, of 𝑒 places and 𝑥 places of 𝑁′. The arcs arise canonically. For instance, Ñ contains an arc from 1.24 to 𝑎 because 𝑁′ has an arc from place 2 to transition 𝑎. The annotations also arise canonically. All four places in Ñ inherit 𝑖 from place 1 in 𝑁.
Informally speaking, Ñ describes an infinite loopa with body (𝑎 ∥ 𝑏) if all places are marked with 1 token initially. Note that by initially marking all 𝑒- and 𝑖-annotated places by one token, both 𝑁 and 𝑁′ are safe. However, Ñ is not, though it is 2-bounded.

Fig. 19.3: Another refinement example.
a

Which cannot yet be expressed in the simple VS.TOYA language.

Remark 19.8 Plainness
The net 𝑁, whose transition 𝑡 is refined, has been required to be plain. This requirement has a pragmatic background. It is possible to generalise the definition to properly weighted nets, if desired. However, there is more than one way of doing so, depending on one’s desires.

Remark 19.9 𝑒-directedness and 𝑥-directedness The intuition behind 𝑒 and 𝑥 places of 𝑁 ′ is that the former match the input places of the transition 𝑡 which it is intending to refine, while the latter match 𝑡’s output places. This is particularly obvious if 𝑁 ′ is 𝑒-directed and 𝑥-directed, that is, by definition, none of its 𝑒 places has an incoming arrow and none of its 𝑥 places has an outgoing arrow. In our previous examples, all nets enjoy this property.


Remark 19.10 𝑇-restrictedness and ex-restrictedness Refinement is defined even if • 𝑡 = ∅ or 𝑡 • = ∅ or both. However, this may lead to strange effects and seems to have little practical significance. The same is true if 𝑁 ′ has no 𝑒 places and/or no 𝑥 places. Let 𝑁 be called 𝑇-restricted if • 𝑡 ≠ ∅ ≠ 𝑡 • for all transitions 𝑡 of 𝑁,a and ex-restricted if it has at least one 𝑒-annotated place and at least one 𝑥-annotated place. These properties will be fulfilled in all nets pertaining to process algebras considered later on. In our examples so far, all nets are 𝑇-restricted and most, but not all of them, are also ex-restricted. a

A similar property has been introduced by the name of transition-regularity, see Chapter 9.

19.3.2 General relabelling

Simply refining a transition by a net can be boring. What if, during refinement, more interesting things were allowed to happen? Relabelling transitions could be more exciting, especially if such relabelling does not interfere with refinement.
Let us suppose that a refining net 𝑁′ carries some transition labels as well (in addition to place annotations 𝑒/𝑖/𝑥), while 𝑡, the transition to be refined, carries a more complex annotation that specifies a transformation of these labels – and possibly the transitions – into new ones. Such a transformation is called a relabelling. We may imagine that relabelling takes place during refinement.
We now allow transitions 𝑢 from 𝑁′ to be annotated by labels 𝜆(𝑢) from a set Γ. For real process algebras, Γ will normally contain letters from some alphabet denoting atomic actions (such as 𝑎 in Figure 19.1). 𝜆 is not necessarily injective. For example, 𝑎 ∥ 𝑎 is a valid VS.TOYA expression. By contrast, we allow 𝑡 to carry a relabelling relation
𝜌(𝑡) ⊆ mult(Γ) × Γ,
where mult(Γ) denotes the set of multisets over Γ.
Refining 𝑡 with label 𝜌(𝑡) by 𝑁′ with labels 𝜆(𝑢) for every transition 𝑢 in 𝑁′ is a two-step process as follows.
1. Places and place annotations of 𝑁′ are left unchanged. Transitions and arcs of 𝑁′ are changed according to 𝜌(𝑡), following the rule below. The new 𝑁′ is called 𝑁′[𝜌(𝑡)].
2. 𝑁[𝑡 ← 𝑁′[𝜌(𝑡)]] is constructed by the refinement rule of Section 19.3.1.


Rule 19.11 Net relabelling
Let 𝑢1, . . . , 𝑢𝑘 be some transitions of 𝑁′. Let 𝜇 = {𝜆(𝑢1), . . . , 𝜆(𝑢𝑘)} be some multiset in the domain of 𝜌, i.e., ∃𝑧 ∈ Γ : (𝜇, 𝑧) ∈ 𝜌(𝑡). Then (and only then), 𝑁′[𝜌(𝑡)] contains a new transition 𝑣 as follows:
• 𝑣 is annotated by 𝑧.a
• The arc weight between 𝑣 and any place 𝑠′ of 𝑁′ (and hence of 𝑁′[𝜌]) is the sum of the arc weights of arcs between 𝑢1, . . . , 𝑢𝑘 and 𝑠′.
To yield 𝑁′[𝜌], this construction is carried out for all suitable sets 𝑢1, . . . , 𝑢𝑘 of transitions of 𝑁′.

And can hence again be used in some refining net 𝑁 ′ .
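A minimal Python sketch of Rule 19.11 is given below. The encoding is hypothetical (a net as arc-weight dicts, a relabelling relation as a list of (label multiset, new label) pairs with multisets written as Counters), and – as suggested by 𝜌4 in the example that follows – we allow a transition to be used more than once within a single synchronisation, which is what produces arc weights larger than 1.

```python
from collections import Counter
from itertools import combinations_with_replacement

def _sum_arcs(arcs, group):
    """Sum the arc weights of all transitions in `group` towards each place."""
    total = Counter()
    for u in group:
        total.update(arcs.get(u, {}))
    return dict(total)

def relabel(trans_label, pre, post, rho):
    """Apply a relabelling relation rho to a labelled net N', yielding N'[rho].

    trans_label : transition -> label in Gamma
    pre, post   : transition -> {place: arc weight}
    rho         : list of (label multiset as a Counter, new label z)
    """
    new_label, new_pre, new_post = {}, {}, {}
    max_size = max((sum(mu.values()) for mu, _ in rho), default=0)
    fresh = 0
    for k in range(1, max_size + 1):
        for group in combinations_with_replacement(sorted(trans_label), k):
            mu = Counter(trans_label[u] for u in group)
            for dom_mu, z in rho:
                if mu == dom_mu:                 # (mu, z) is in rho
                    v = f"v{fresh}"; fresh += 1
                    new_label[v] = z
                    new_pre[v] = _sum_arcs(pre, group)
                    new_post[v] = _sum_arcs(post, group)
    return new_label, new_pre, new_post
```

For example, with 𝜌1 of Example 19.12 below, every transition reappears unchanged; with 𝜌4, only 𝜏-labelled transitions survive, plus one new 𝑏-labelled transition whose arcs accumulate the weights of two 𝑎-occurrences and one 𝑏-occurrence.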

Example 19.12 Relabelling example (Figure 19.4)
Let Γ = {𝑎, 𝑏, 𝜏}.
𝜌1 = { ({𝑧}, 𝑧) | 𝑧 ∈ Γ }  ( = {({𝑎}, 𝑎), ({𝑏}, 𝑏), ({𝜏}, 𝜏)} )
𝜌2 = { ({𝑧}, 𝑧) | 𝑧 ∈ Γ\{𝑏} }  ( = {({𝑎}, 𝑎), ({𝜏}, 𝜏)} )
𝜌3 = { ({𝑧}, 𝑧) | 𝑧 ∈ Γ } ∪ { ({𝑎, 𝑏}, 𝜏) }  ( = {({𝑎}, 𝑎), ({𝑏}, 𝑏), ({𝜏}, 𝜏), ({𝑎, 𝑏}, 𝜏)} )
𝜌4 = { ({𝑧}, 𝑧) | 𝑧 ∈ Γ\{𝑎, 𝑏} } ∪ { ({𝑎, 𝑎, 𝑏}, 𝑏) }  ( = {({𝜏}, 𝜏), ({𝑎, 𝑎, 𝑏}, 𝑏)} )
The effect of 𝜌1 is “no change”. Transitions constructed newly in 𝑁′[𝜌1] are exactly the old ones, with exactly the same labels. The effect of 𝜌2 is that transitions labelled 𝑏 are simply deleted (since the multiset {𝑏} is not matched by any multiset in the domain of 𝜌2), while other transitions are left unmodified. 𝜌3 creates a synchronised transition from two transitions labelled 𝑎 and 𝑏, while 𝑎 and 𝑏 are retained. 𝜌4 illustrates that arc weights may increase in a transition created by relabelling. This can, however, occur only if proper multisets are in the domain of the relabelling relation. 𝜌4 differs from 𝜌3 in the sense that synchronisations are created from two transitions, but the constituent transitions are deleted in the result.

Since relabelling does not change the structural properties of refinement, simultaneous refinement
𝑁[𝑡1 ← 𝑁1′, . . . , 𝑡𝑛 ← 𝑁𝑛′]    (19.2)
is still well-defined, where it is assumed that 𝑡1, . . . , 𝑡𝑛 are transitions of 𝑁 annotated with relabelling relations, and all transitions of 𝑁1′, . . . , 𝑁𝑛′ are annotated by elements of Γ. Then (19.2) yields another net annotated by labels from Γ. In general, we have

Fig. 19.4: Illustration of relabelling, showing 𝑁′ = 𝑁′[𝜌1], 𝑁′[𝜌2], 𝑁′[𝜌3], and 𝑁′[𝜌4].

Proposition 19.13 Stability of directedness and restrictedness
Let 𝑡 be a transition of 𝑁. If 𝑁 is 𝑒-directed (𝑥-directed) and all 𝑁𝑖′ are 𝑒-directed (𝑥-directed, respectively), the same is true for 𝑁(𝑁1′, . . . , 𝑁𝑛′). If 𝑁 is 𝑇-restricted and all 𝑁𝑖′ are ex-restricted, then the same is true for 𝑁(𝑁1′, . . . , 𝑁𝑛′).

Thus, in this case, we may view 𝑁 as an operator which takes 𝑛 Γ-annotated nets 𝑁1′, . . . , 𝑁𝑛′ and yields another Γ-annotated net Ñ. We shall therefore call 𝑁 an operator net (or, following the jargon of the Petri box calculus, an operator box). A notation such as
Ñ = 𝑁(𝑁1′, . . . , 𝑁𝑛′)
may be used: Ñ is the result of refining the 𝑛 transitions of 𝑁 by 𝑁1′, . . . , 𝑁𝑛′.

19.3.3 Behavioural properties of general refinement

Let 𝑁 and 𝑁𝑖′ be boxes, that is, 𝑒/𝑖/𝑥-annotated nets. How are the behaviours of 𝑁, 𝑁𝑖′ and 𝑁(𝑁1′, . . . , 𝑁𝑛′) related to each other? Executions of boxes are intended to start at the 𝑒 places and end at the 𝑥 places. More precisely, their canonical start and end markings ◦𝑁 and 𝑁◦ are defined as follows:

Definition 19.14 Start marking ◦𝑁 and end marking 𝑁◦
The start marking ◦𝑁 of 𝑁 is defined as 1 token on every 𝑒-place and zero tokens elsewhere. The end marking 𝑁◦ of 𝑁 is defined as 1 token on every 𝑥-place and zero tokens elsewhere.

438

19 The Box Algebra 1/2: Refinement and Recursion

Definition 19.15 Safe, clean, ex-exclusive, weakly live 𝑁 is called • safe, if every successor marking 𝑀 of ◦ 𝑁 or of 𝑁 ◦ satisfies 𝑀 (𝑠) ≤ 1 for every place 𝑠. • clean, if every successor marking 𝑀 of ◦ 𝑁 or of 𝑁 ◦ satisfies (∀𝑒-places 𝑠 : 𝑀 (𝑠) > 0) ⇒ 𝑀 = ◦ 𝑁 and (∀𝑥-places 𝑠 : 𝑀 (𝑠) > 0) ⇒ 𝑀 = 𝑁 ◦ • ex-exclusive, if every successor marking 𝑀 of ◦ 𝑁 or of 𝑁 ◦ satisfies ¬∃𝑒-place 𝑠, 𝑥-place 𝑠′ : 𝑀 (𝑠) > 0 ∧ 𝑀 (𝑠′ ) > 0 • weakly live if every transition can be fired at least once from ◦ 𝑁.

Definition 19.16 Factorisability 𝑁 is factorisable, if for any 𝑀 (reachable from ◦ 𝑁 or backward reachable from 𝑁 ◦ ), the set of places marked by 𝑀 is the disjoint union of the input places or the output places of a set of transitions.

Example 19.17 Illustration of factorisability Figure 19.5 depicts an N-shaped box that is safe, clean, and weakly live. However, it is not factorisable. To see this, consider the reachable marking 𝑀 placing one token on 𝑠2 , 𝑠4 , 𝑠6 and zero tokens elsewhere, which is indicated by a dashed red line in the figure. The places underlying this marking cannot by represented as a disjoint union of pre- and post-places of transitions. By contrast, all other reachable markings can. For instance, the marking 𝑀 ′ shown in the figure marks places 𝑠3 , 𝑠4 , 𝑠5 , and we have {𝑠3 , 𝑠4 , 𝑠5 } = 𝑎 • ∪ 𝑏 • (and also {𝑠3 , 𝑠4 , 𝑠5 } = • 𝑐 ∪ • 𝑑). The marking placing one token on {𝑠2 , 𝑠3 , 𝑠4 } (not shown in the figure) satisfies {𝑠2 , 𝑠3 , 𝑠4 } = 𝑎 • ∪ • 𝑏. By the way, this net also fails to satisfy ex-exclusivity; but the two properties are orthogonal, so that this is purely coincidental. The conditions concerning reachability from 𝑁 ◦ can be omitted if the box under consideration is 𝑥-directed. Otherwise, we might, for instance, have some 𝑁𝑖′ in which 𝑁𝑖′ ◦ allows further markings to be reached, without 𝑁𝑖′ ◦ being reachable. If such an 𝑁𝑖′ is refined into 𝑡 𝑖 , it may happen that previously non-reachable markings after 𝑁𝑖′ ◦ become reachable in the resulting net. If such a marking is not safe, then the resulting net may not be safe, even though 𝑁𝑖′ is.

19.4 Compositional, denotational Petri net semantics

𝑠1

𝑀′

𝑠3

439

𝑒

𝑒

𝑎

𝑏

𝑖

𝑖

𝑖

𝑠2

𝑠5

𝑠4

𝑠6

𝑐

𝑑

𝑥

𝑥

𝑠7

𝑀

Fig. 19.5: A non-factorisable net, the only culprit being 𝑀.

Theorem 19.18 Behavioural properties of refinement Let 𝑁 be safe, clean, weakly live, and factorisable. Let every 𝑁𝑖′ be safe and clean. In addition, assume that 𝑁𝑖′ is ex-exclusive if 𝑡𝑖 (the transition to be refined by 𝑁𝑖′ ) satisfies • 𝑡𝑖 ∩ 𝑡𝑖• ≠ ∅ (i.e. is part of a self-loop). Then 𝑁 (𝑁1′ , . . . , 𝑁 𝑛′ ) is safe and clean.

Example 19.19 Checking Theorem 19.18 on the examples The premises of Theorem 19.18 are true for all examples so far, except for Figure 19.3. In that figure, transition 𝑡 is part of a self-loop, and at the same time, 𝑁 ′ is not ex-exclusive. Hence Theorem 19.18 is not applicable; and indeed, the refinement result is not safe. Safeness and cleanness are of primary interest, since the box can then be claimed to “behave like a transition in the context of a safe net”. If 𝑁 is used as an operator box, it is desirable that the premises of this theorem are fulfilled, because this guarantees, by the theorem, that the results of refinement are again safe and clean and thus reusable as a transition of a safe net.

19.4 Compositional, denotational Petri net semantics In Section 19.4.1, a finite fragment of classical CCS is dealt with. In Section 19.4.2, a different, slightly idiosyncratic, process algebra is considered. Our aim is to provide every process-algebraic expression 𝐸 with a meaning (also called a semantics) in terms of a Petri net associated with 𝐸. Indirectly, this specifies the behaviour of 𝐸 as being the behaviour of the Petri net associated with it. This semantics will be given in

440

19 The Box Algebra 1/2: Refinement and Recursion

a compositional way, that is to say that the net corresponding to a process-algebraic expression is built in the same syntax-driven way as the expression itself. This type of semantics is called denotational (rather than operational) because it recurs to a pool of already known, well-understood, objects (Petri nets, in this case). An operational semantics, by contrast, specifies the behaviour of an expression directly, without recurring to known objects; see Section 19.5.

19.4.1 Classical finite CCS The sequential execution of two actions 𝑎 and 𝑏 is specified in CCS by prefixing 𝑎.(𝑏.nil), the choice between 𝑎 and 𝑏 is specified as (𝑎.nil) + (𝑏.nil), and the parallel execution of 𝑎 and 𝑏 is specified as (𝑎.nil) (𝑏.nil), using a basic process-algebraic primitive called nil (“inaction”, or “stop execution”). The binary parallel composition operator consists of two parts, the parallel execution of two processes and the synchronisation of so-called conjugated actions 𝑎 and b 𝑎 , which when synchronised result in an unobservable action denoted by 𝜏. For instance, the process (𝑎.nil) (b 𝑎 .nil) can execute either a sequence 𝑎b 𝑎 , or a sequence b 𝑎 𝑎, or the single action 𝜏. Apart from parallel composition, choice and prefixing, classical CCS also allows two operators called restriction and relabelling. For instance, the process (𝑎\𝑎) cannot execute any action (and in particular, no action 𝑎), while the process (𝑎[ℎ]), with ℎ(𝑎) = 𝑏, cannot execute an action 𝑎, but can execute an action 𝑏. Thus, CCS is based on a set 𝐴 = {𝑎, b 𝑎 , 𝑏, b 𝑏, 𝑐, b 𝑐 , . . .} of “observable” actions and a single “unobservable” action 𝜏 ∉ 𝐴. Every observable action 𝑎 in CCS has a so-called conjugate action b 𝑎 with which it can be synchronised. By definition, the conjugate of b 𝑎 is 𝑎, so that 𝐴 comes in pairs. Γ = 𝐴 ∪ {𝜏}, or possible relabellings thereof, will be defined as the set of permitted transition annotations. The syntax of finite CCS expressions 𝐸 is

19.4 Compositional, denotational Petri net semantics

441

𝐸 ::= n𝑖𝑙 | 𝐸 + 𝐸 | 𝑧.𝐸 | 𝐸 𝐸 | 𝐸 [ℎ] | 𝐸 \𝑎 where 𝑎 stands for actions in 𝐴, 𝑧 stands for actions in Γ = 𝐴∪{𝜏}, and ℎ stands for a CCS renaming function ℎ : Γ → Γ which commutes with conjugation, that is, ℎ(b 𝑎) = š ℎ(𝑎) for all 𝑎 ∈ 𝐴, and satisfies ℎ(𝜏) = 𝜏. Note that CCS has prefixing (of an expression 𝐸 with an observable action 𝑎 or with an unobservable action 𝜏), but no true sequential execution.3 Finite CCS is translated into Petri nets, using the “constant” boxes KB and operator boxes OB from Figure 19.6, and the relabelling relations 𝜌id 𝜌𝐿 𝜌𝑅 𝜌sy 𝜌 [ℎ] 𝜌\𝑎

= = = = = =

{({𝑧}, 𝑧) | 𝑧 ∈ Γ} {({𝑧}, 𝑧 𝐿 ) | 𝑧 ∈ Γ} {({𝑧}, 𝑧 𝑅 ) | 𝑧 ∈ Γ} {({𝑎 𝐿 , b 𝑎 𝑅 }, 𝜏) | 𝑎 ∈ 𝐴} ∪ {({𝑧 𝐿 }, 𝑧) | 𝑧 ∈ Γ} ∪ {({𝑧 𝑅 }, 𝑧) | 𝑧 ∈ Γ} {({𝑧}, ℎ(𝑧)) | 𝑧 ∈ Γ} {({𝑧}, 𝑧) | 𝑧 ∈ Γ\{𝑎, b 𝑎 }}

as follows: Net( nil) Net(𝐸 1 + 𝐸 2 ) Net(𝑧.𝐸) Net(𝐸 1 𝐸 2 ) Net(𝐸 [ℎ]) Net(𝐸 \𝑎)

= KBnil = OB+ (Net(𝐸 1 ), Net(𝐸 2 )) = OB; (KB(𝑧), Net(𝐸)) = OBoxsy (OB (OB 𝐿 (Net(𝐸 1 )), OB𝑅 (Net(𝐸 2 )))) = OB [ℎ] (Net(𝐸)) = OB\𝑎 (Net(𝐸))

(19.3)

𝑒

𝑒

𝑒

𝑧

𝑒

𝜌id

𝜌id

𝜌id

𝑒

𝑒

𝑒

𝑒

𝑒

𝑒

𝑒

𝜌𝐿

𝜌𝑅

𝜌id

𝜌id

𝜌sy

𝜌 [ℎ]

𝜌\𝑎

𝑥

𝑥

𝑥

𝑥

𝑥

OBsy

OB [ℎ]

OB\𝑎

𝜌id

𝑥

𝑥

𝑥

𝑥

𝑥

𝑥

KBnil

KB𝑧

OB+

OB;

OB 𝐿

OB𝑅

OB

Fig. 19.6: Boxes for finite CCS (with initial markings).

3 Which can, however, be “simulated” if desired.

442

19 The Box Algebra 1/2: Refinement and Recursion

The semantics of CCS composition is slightly more complex than the other semantics, owing to the fact that the original CCS idea has to be modelled: 𝐸 1 𝐸 2 requires to compose 𝐸 1 and 𝐸 2 in parallel, along with synchronising conjugate pairs of actions (but only those that distribute over 𝐸 1 and 𝐸 1 , rather than conjugate pairs that are contained in the same component). For this reason, synchronisation is preceded in (19.3) by renaming which distinguishes left and right components of by superscripts 𝐿 and 𝑅 , respectively. The Petri box semantics of CCS’s somewhat distinctive process nil is justified intuitively by the fact that it is a unit with respect to choice. Indeed, (𝑎.nil) + nil and 𝑎.nil are strongly equivalent in CCS, and indeed, we have Net((𝑎.nil) + nil) = Net(𝑎.nil) However, this particular translation of nil depends on CCS’s other idiosyncracies. It depends on the fact that CCS can express prefixing, but no proper sequential composition, such as, for instance, in (𝑎 ∥ 𝑏); (𝑐 ∥ 𝑑). The box Net(𝑎.nil), for instance, does allow an execution of 𝑎 as required, but no proper terminating marking is reachable. Thus, if CCS could express (𝑎.nil).𝑏 (which it cannot), then 𝑏 would not be executable in the translation given above.

19.4.2 B.TOYA In this section, another process algebra called B.TOYA (for “basic toy algebra”) is considered, which contains no nil but full sequential composition. It extends the algebra considered in Section 19.2 by adding a somewhat fancy synchronisation operator (just in order to illustrate general refinement). Let Γ be some alphabet of action names. B.TOYA : 𝐸 ::= 𝑧

𝐸 [𝑦, 𝑦 ′ → 𝑧]

𝐸+𝐸

𝐸; 𝐸

𝐸∥𝐸

(19.4)

where 𝑦, 𝑦 ′ , 𝑧 ∈ Γ. In addition to VS.TOYA, there is a unary synchronisation parametrised by three action names 𝑦, 𝑦 ′ , 𝑧 ∈ Γ. By this operator, pairs 𝑦, 𝑦 ′ will first be synchronised and then deleted. Example 19.20 Figures 19.1 and 19.7 Figure 19.1 illustrates several B.TOYA expressions and their Petri net translations. Figure 19.7 illustrates, in addition, the synchronisation operator. In constructing the nets shown on the left-hand side, the second net of Figure 19.1 is used as intermediate steps just before synchronisation. Synchronisation then creates the central 𝑏 transition and deletes the 𝑏 and 𝑐 transitions it arises from. The same is true for the net on the right-hand side of the figure, starting with the (by now obvious) net for (𝑎; 𝑏) ∥ (𝑐; 𝑑).

19.5 Compositional, operational Petri net semantics 𝑒

443 𝑒

𝑎

𝑒

𝑎

𝑏

𝑏

𝑑

𝑥

Net for ( (𝑎; 𝑏) + (𝑐; 𝑑) ) [𝑏, 𝑐 → 𝑏]

𝑑

𝑥

𝑥

Net for ( (𝑎; 𝑏) ∥ (𝑐; 𝑑) ) [𝑏, 𝑐 → 𝑏]

Fig. 19.7: Illustration of B.TOYA synchronisation.

Note that the net shown on the left-hand side of Figure 19.7 is not weakly live. This does not contradict the requirements exposed in Section 19.3.3, since weak liveness is critical only for operator boxes, not for other nets participating in refinements. The B.TOYA operator boxes are not actually shown explicitly, because they are very analogous to the CCS Boxes of Figure 19.6. For the sake of completeness, here is the relabelling relation for B.TOYA synchronisation: 𝜌 [ 𝑦,𝑦 ′ →𝑧 ] = {({𝑣}, 𝑣) | 𝑣 ∈ Γ \ {𝑦, 𝑦 ′ }} ∪ {({𝑦, 𝑦 ′ }, 𝑧)} It is not hard to check that all of the B.TOYA operator boxes are weakly live. The non-liveness of Net(((𝑎; 𝑏) + (𝑐; 𝑑)) [𝑏, 𝑐 → 𝑏]) reflects the non-liveness of the corresponding process expression. Indeed, the original 𝑏 needs a preceding 𝑎, but the occurrence of such an 𝑎 prevents any occurrence of 𝑐 which would be necessary as participant of the synchronisation with 𝑏.

19.5 Compositional, operational Petri net semantics Traditionally, the operational brand of semantics deals with the behaviour of processalgebraic expressions, rather than with their translation into Petri nets, or any other objects, for that matter. It is therefore called “operational”, rather than “denotational”, and it is called a “structural operational semantics” (abbreviated SOS) if it is not only operational but, in addition, also compositional. SOS is usually defined in a syntax-driven way by the syntax of expressions, just like denotational semantics has been defined earlier. In this section, the focus is on such a semantics for B.TOYA.

444

19 The Box Algebra 1/2: Refinement and Recursion

In analogy to the markings of a Petri net, B.TOYA expressions can be provided with information about their “current states”. This is done by denoting the initial and final states of an expression 𝐸 and its subexpressions by overlining and underlining: 𝐸 and 𝐸 respectively to describe states similar to ◦ 𝑁 and 𝑁 ◦ with respect to their associated Petri net 𝑁. Expressions provided with overlinings and underlinings will be called dynamic. As an example, consider the B.TOYA expression 𝑎; 𝑏 and the following over- and underlinings: 𝑎; 𝑏 𝑎; 𝑏 𝑎; 𝑏 𝑎; 𝑏 𝑎; 𝑏 𝑎; 𝑏

initial state of the entire expression 𝑎; 𝑏 initial state of the first subexpression 𝑎 final state of the first subexpression 𝑎 initial state of the second subexpression 𝑏 final state of the second subexpression 𝑏 final state of the entire expression 𝑎; 𝑏

(19.5)

Clearly, some of these states ought to be identified. For example, both 𝑎; 𝑏 and 𝑏; 𝑎 denote the state after an execution of 𝑎 and before an execution of 𝑏. Let ≡ denote a suitable equivalence relation on dynamic expressions which formalises this identification, thus: 𝑎; 𝑏 ≡ 𝑎; 𝑏, and analogously, 𝑎; 𝑏 ≡ 𝑎; 𝑏 as well as 𝑎; 𝑏 ≡ 𝑎; 𝑏. (In a moment, a general definition of ≡ will be given.) Using this machinery, the behaviour of 𝑎; 𝑏 can be described as follows: 𝑎; 𝑏



𝑎

𝑎; 𝑏

−→

𝑎; 𝑏



𝑏

𝑎; 𝑏

−→

𝑎; 𝑏



𝑎; 𝑏

Informally speaking, 𝐺 ≡ 𝐻 means that 𝐺 and 𝐻 denote the same marked Petri net. In general, ≡ can be read off the operator boxes corresponding to an expression. The following syntax will be used for dynamic expressions: DB.TOYA : 𝐺

::=

𝐸

𝐸 𝐺 [𝑦, 𝑦 ′ → 𝑧] 𝐺+𝐸

𝐸 +𝐺 𝐺; 𝐸

𝐸; 𝐺 𝐺 ∥ 𝐻

where 𝐸, 𝐹 stand for B.TOYA expressions and 𝐺, 𝐻 stand for DB.TOYA expressions. Observe the difference between choice and parallel composition: in the former, only one of the two subexpressions may by dynamic, while in the latter, both have to be. DB.TOYA identifications are defined as follows: (1)

(a) 𝐸 [𝑦, 𝑦 ′ → 𝑧] ≡ 𝐸 [𝑦, 𝑦 ′ → 𝑧] , (b) 𝐸 [𝑦, 𝑦 ′ → 𝑧] ≡ 𝐸 [𝑦, 𝑦 ′ → 𝑧]

(2)

(a) 𝐸 + 𝐹 ≡ 𝐸 + 𝐹 ≡ 𝐸 + 𝐹 , (b) 𝐸 + 𝐹 ≡ 𝐸 + 𝐹 ≡ 𝐸 + 𝐹

(3)

(a) 𝐸; 𝐹 ≡ 𝐸; 𝐹 , (b) 𝐸; 𝐹 ≡ 𝐸; 𝐹 , (c) 𝐸; 𝐹 ≡ 𝐸; 𝐹

(4)

(a) 𝐸 ∥ 𝐹 ≡ 𝐸 ∥ 𝐹 , (b) 𝐸 ∥ 𝐹 ≡ 𝐸 ∥ 𝐹

(19.6)

19.5 Compositional, operational Petri net semantics

445

Line (3) has already been explained on an example. Line (2a) identifies the initial state of a choice with the initial state of one (but not two) of its components; and similarly for the final states in (2b). Line (4a) identifies the initial state of a parallel composition with two (and not only one) of its components; and similarly for the final states in (4b). Line (1) is, hopefully, self-explanatory; an exmple can be found below (Example 19.24). In order to account for the arbitrary nesting of subexpressions, a context rule is introduced. Let 𝐾 = C(•) be a dynamic context, that is to say, by definition, 𝐾 is a dynamic expression whenever some dynamic expression is inserted instead of •. Example 19.21 Illustration of dynamic contexts •; 𝑏 is a dynamic context, and so is •∥•, but neither •; • nor • + • are. The context rule (CR) is defined as follows: (CR)

If 𝐺 ≡ 𝐻, then C(𝐺) ≡ C(𝐻)

(19.7)

Finally, the behaviour of dynamic expressions is defined via a set of step rules. Steps are multisets 𝛾 of actions.4 Sets (rather than single actions) are needed because of synchronisation. In (𝑎 ∥ 𝑏) [𝑎, 𝑏 → 𝑐], for instance, a step {𝑐} should be derivable from the step {𝑎, 𝑏}. Multisets (rather than sets) are needed because of the possible non-injectivity of labelling. Example 19.22 Some steps In (𝑎 ∥ 𝑎), for instance, the step {𝑎, 𝑎} should be possible. Consider also the net for (𝑎 ∥ 𝑏); (𝑐 ∥ 𝑑) on the right-hand side of Figure 19.1. Steps {𝑎, 𝑏}, {𝑐, 𝑑}, {𝑎} (and other singleton steps), and even ∅, should be allowed, but not step {𝑎, 𝑐}. Step rules are given in (19.8), as shown in Table 19.1. Definition 19.23 SOS semantics of DB.TOYA The SOS semantics of B.TOYA (i.e., DB.TOYA) consists of the rules (19.6), (19.7), and (19.8).

4 See also Chapter 1, Definition 1.19.

446

19 The Box Algebra 1/2: Refinement and Recursion {𝑎}

(𝛼) 𝑎 −−→ −− 𝑎 𝛾+𝑘·{𝑦,𝑦 ′ }

𝐺 −−−−−−→ −− 𝐻 , 𝑘 ∈N (A)

𝛾+𝑘·{𝑧}

𝐺 [ 𝑦, 𝑦 ′ → 𝑧 ] −−−−−−→ −− 𝐻 [ 𝑦, 𝑦 ′ → 𝑧 ] 𝛾

𝛾

𝐺 −−→ −− 𝐻 (B) (1)

𝛾

𝐺 + 𝐸 −−→ −− 𝐻 + 𝐸

𝐺 −−→ −− 𝐻 (2)

𝛾 𝛾

𝐺; 𝐸 −−→ −− 𝐻; 𝐸 𝛾

(19.8)

𝛾

𝐺 −−→ −− 𝐻 (C) (1)

𝛾

𝐸𝐺 −−→ −− 𝐸 + 𝐻

𝐺 −−→ −− 𝐻 (2)

𝛾

𝐸; 𝐺 −−→ −− 𝐸; 𝐻

𝛿

𝐺 −−→ −− 𝐺 ′ , 𝐻 −−→ −− 𝐻 ′ (D)

𝛾+ 𝛿

𝐺 ∥ 𝐻 −−→ −− 𝐺 ′ ∥ 𝐻 ′

Table 19.1: Step rules. Rule (𝛼) is the only rule by which “something can happen”. All other rules (A)–(D), as well as rule (CR), extend Rule (𝛼) to all possible contexts allowed by the syntax of DB.TOYA.

Example 19.24 Example: Right-hand side of Figure 19.7 The following deduction shows that ((𝑎; 𝑏) ∥ (𝑐; 𝑑)) [𝑏, 𝑐 → 𝑏] can do an {𝑎} step followed by a {𝑏} step. ((𝑎; 𝑏) ∥ (𝑐; 𝑑)) [𝑏, 𝑐 → 𝑏] ≡ ((𝑎; 𝑏) ∥ (𝑐; 𝑑)) [𝑏, 𝑐 → 𝑏]

( 1a )

≡ ((𝑎; 𝑏) ∥ (𝑐; 𝑑)) [𝑏, 𝑐 → 𝑏]

( 4a, CR )

≡ ((𝑎; 𝑏) ∥ (𝑐; 𝑑)) [𝑏, 𝑐 → 𝑏]

( 2·3a, CR )

{𝑎}

−→ ((𝑎; 𝑏) ∥ (𝑐; 𝑑)) [𝑏, 𝑐 → 𝑏] ≡ ((𝑎; 𝑏) ∥ (𝑐; 𝑑)) [𝑏, 𝑐 → 𝑏]

( 𝛼, C1, D, A ) ( 3b, CR )

{𝑏}

−→ ((𝑎; 𝑏) ∥ (𝑐; 𝑑)) [𝑏, 𝑐 → 𝑏] (2·𝛼,C2,C1,D,A) ··· Line 4 uses rule (D) with 𝛾 = {𝑎} and 𝛿 = ∅, as well as rule (A) with 𝛾 = {𝑎} and 𝑘 = 0. Line 6 uses rule (D) with 𝛾 = {𝑏} and 𝛿 = {𝑐}, as well as rule (A) with 𝛾 = ∅ and 𝑘 = 1.

19.6 Recursion

447

By now, there are two semantics for B.TOYA, a denotational one in terms of Petri nets, and an operational one in terms of step rules. The following theorems explain the connection between them. Theorem 19.25 Consistency Every step sequence derivable for some DB.TOYA expression 𝐸 by means of (19.6, 19.7, 19.8) is also a step sequence of the net 𝑁 = Net(𝐸), starting from ◦ 𝑁. Actually, we did not define step sequences so far in this book. They are just like 𝜁

firing sequences, except that every single firing 𝑀 −→ 𝑀 ′ in such a sequence is effected not just through a single transition 𝜁 = 𝑡, but through a multiset of transitions 𝜁 = {𝑡1 , . . . , 𝑡 𝑧 } (𝑡𝑖 not necessarily distinct) which are concurrently enabled at the marking 𝑀.5 Theorem 19.26 Completeness Let 𝐸 be a B.TOYA expression. Every step sequence of the net 𝑁 = Net(𝐸) with initial marking ◦ 𝑁 can be derived from 𝐸 by means of (19.6, 19.7, 19.8). Actually, we even have isomorphism of labelled transition systems generated by the two semantics. In the box calculus, there is an even more general result: Theorem 19.27 Generic consistency and completeness Whenever an operator box is safe, clean and factorisable, there exists a collection of characteristic equivalence rule (such as (19.6)) and a consistent and complete SOS-semantics (analogous to (19.7, 19.8)). Factorisability is crucial in the proof of this result. Without this property, completeness is lost.

19.6 Recursion Now the finiteness condition will be dropped, but Petri nets will still be expected to be safe. The B.TOYA-Syntax (19.4) is extended by recursion in a similar way as in CCS: 𝐸 [𝑦, 𝑦 ′ → 𝑧] 𝐸 + 𝐹 𝐸; 𝐹 𝐸 ∥ 𝐹 𝑋 df where a defining equation 𝑋 = 𝐸 is associated with 𝑋

R.TOYA : 𝐸 ::= 𝑎

5 For the notion of concurrent enabling, see Definition 8.16.

448

19 The Box Algebra 1/2: Refinement and Recursion df

Example 19.28 Sequence and recursion: 𝑋 = 𝑎; 𝑋 Intuitively, the executions of this expression are expected to consist of sequences of 𝑎’s of arbitrary length. An infinite sequence of 𝑎’s should also (intuitively) be executable. Treating 𝑋 as a name for an action and using the SOS semantics idea of Section 19.5 yields ∅

𝑋 −→ 𝑎; 𝑋



𝑎

𝑎; 𝑋 −→ 𝑎; 𝑋



𝑎; 𝑋

(19.9)

Every step in this deduction but the very first one is actually just like described earlier. In particular, rules (3a), (𝛼) and (3b) of 19.6 and 19.8 have been applied. ∅

The first step, 𝑋 −→ 𝑎; 𝑋, is simply an application of the defining equation for 𝑋 in Example 19.9. In general, recursion is described by the following rule: Rule 19.29 Recursion rewrite rule In a dynamic context, a recursion variable 𝑋 may be replaced by the body 𝐸 df of its defining equation 𝑋 = 𝐸. This rule – along with the others that have already been described – is, by definition, already the full SOS semantics of recursion. Rewriting according to Rule 19.29 ∅

is denoted by an arrow −→ (empty step) since no real activity is involved. What is different from the previous examples is that the structure of an expression is modified by such a step; it is not, therefore, in the list of structural equivalences. Example 19.30 Example 19.28 continued The recursion rule can again be applied to the last expression of (19.9): ∅

𝑎; 𝑋 −→ 𝑎; 𝑎; 𝑋



𝑎

𝑎; (𝑎; 𝑋) −→ 𝑎; (𝑎; 𝑋)



𝑎; (𝑎; 𝑋)

at which point the recursion rewrite rule can again be applied. Clearly, the expected execution sequences can be deduced in this way, creating increasingly long dynamic expressions. Example 19.28 may be contrasted with the following example. df

Example 19.31 Unbounded finite choice: 𝑋 = 𝑎 + (𝑋; 𝑎) Arbitrarily long 𝑎-sequences can be executed from 𝑋 in this expression. The following derivation shows that two 𝑎’s can be executed in sequence:

19.6 Recursion

449 ∅

𝑋 −→ 𝑎 + (𝑋; 𝑎)



2

𝑎 + (𝑋; 𝑎)



−→ 𝑎 + (𝑎 + (𝑋; 𝑎); 𝑎)



𝑎 + ((𝑎 + (𝑋; 𝑎)); 𝑎) (19.10)

𝑎

−→ 𝑎 + ((𝑎 + (𝑋; 𝑎)); 𝑎)



2

𝑎 + ((𝑎 + (𝑋; 𝑎)); 𝑎)

𝑎

−→ 𝑎 + ((𝑎 + (𝑋; 𝑎)); 𝑎) where ≡ 2 abbreviates “two ≡ steps”. 𝑋 can execute 𝑎-sequences of arbitary length but not the infinite sequence 𝑎𝑎𝑎 . . .. The first claim follows directly by generalising (19.10). The second claim follows because in order to create the first 𝑎 of an execution sequence, the 𝑎 df on the left-hand side of 𝑋 = 𝑎 + (𝑋; 𝑎) has to be chosen, after which 𝑋 cannot be rewritten any longer.

df

Example 19.32 “Empty” choice: 𝑋 = 𝑎 + 𝑋 A single 𝑎 can be executed from 𝑋: ∅



𝑋 −→ 𝑎 + 𝑋 ≡ 𝑎 + 𝑋 −→ 𝑎 + (𝑎 + 𝑋) ≡ 𝑎 + (𝑎 + 𝑋) (19.11)

𝑎

−→ 𝑎 + (𝑎 + 𝑋) ≡ 2 𝑎 + (𝑎 + 𝑋) The recursion rule can be applied arbitrarily often (and it has been applied, simply for the purpose of illustrating it, more often than necessary in (19.11)). However, at most one of the 𝑎’s that are created by this can actually be executed. The final state in (19.11) can actually be rewritten from 𝑋, the final state of 𝑋.

df

Example 19.33 “Void” recursion: 𝑋 = 𝑋 with body 𝑋 The SOS semantics of this expression is fairly obvious. 𝑋 can be rewritten to itself as many times as one fancies, but nothing else can happen, in particular no execution of an action. Let us now turn to denotational semantics, that is, to Petri nets associated with recursive expressions. In this respect, the last example leads us to the following principle. Rule 19.34 Principle of economy df A Petri net describing 𝑋 = 𝐸 should not contain unnecessary elements. Instead of allowing any net without behaviour (for example, those that have dead df transitions) as semantics of 𝑋 = 𝑋, we pick a minimal one, which is the stop net consisting only of an 𝑒 place and an 𝑥 place. These two places are needed for net

450

19 The Box Algebra 1/2: Refinement and Recursion

composition and cannot easily be omitted. In fact, stop is the same net as suggested for CCS’s nil expression (see Figure 19.6). df

df

Consider the expressions 𝑋 = 𝑎; 𝑋 and 𝑋 = 𝑎 + (𝑋; 𝑎). Their denotational Petri net semantics can be constructed systematically as follows. df

(i) Construct a net 𝑅 𝐸 for the body of 𝑋 = 𝐸, where 𝑋 is treated as a single action name. This will be considered as a unary operator box, i.e., a function 𝑓 mapping nets to nets. For a given net 𝑁, 𝑓 (𝑁) is found by simultaneously refining all transitions of 𝑅 𝐸 labelled 𝑋 by 𝑁. (ii) Search for a fixpoint of 𝑓 , that is, a net 𝑁 which equals 𝑓 (𝑁) up to isomorphism. df If such a fixpoint exists, it may be called the Petri net semantics of 𝑋 with 𝑋 = 𝐸. df

Example 19.35 Denotational semantics of 𝑋 = 𝑎; 𝑋 Figure 19.8 shows 𝑅 𝑎;𝑋 and an infinite net 𝑁 which is a fixpoint of 𝑅 𝑎;𝑋 , in the sense that if transition 𝑋 of 𝑅 𝑎;𝑋 is refined by 𝑁, the result is isomorphic to 𝑁. In fact, this is even the only fixpoint, given the principle of economy. This follows from the guardedness of this particular recursion, that is, the fact that, in 𝑅 𝑎;𝑋 , the input places of the transition labelled 𝑋 are disjoint with the 𝑒 places of 𝑅 𝑎;𝑋 .

𝑒

𝑎

𝑋

𝑒

𝑎

𝑎

𝑎

...

𝑖

𝑥

𝑥

df

Fig. 19.8: Nets for 𝑋 = 𝑎; 𝑋: body 𝑅 𝑎;𝑋 (l.h.s.) and solution (r.h.s.).

df

Example 19.36 Denotational semantics of 𝑋 = 𝑎 + (𝑋; 𝑎) df The expression 𝑋 = 𝑎 + (𝑋; 𝑎) can be given a Petri net semantics in the same way, as shown in Figure 19.9. Note that indeed, as desired, this net has 𝑎-executions of arbitrary length, but no infinite such behaviour. These examples can be generalised. 1. All R.TOYA expressions have Petri net fixpoints whose labelled transition systems (i.e., reachability graphs) are isomorphic to their SOS semantics. 2. Fixpoints can be approximated iteratively: starting with stop, the 𝑋-labelled transitions are refined successively.

19.7 Bibliographical remarks and further reading

451

𝑒

𝑒

𝑎

𝑎

𝑎

𝑎

𝑎 ...

𝑖 𝑥

𝑎

𝑋

𝑥

𝑎

𝑎

𝑎

df

Fig. 19.9: Nets for 𝑋 = (𝑎 + (𝑋; 𝑎)): body 𝑅 𝑎+(𝑋;𝑎) (l.h.s.) and solution (r.h.s.).

3. For guarded recursions, there is a unique minimal model. Otherwise, the uniqueness of minimal fixpoints cannot be guaranteed, but there are always unique maximal fixpoints. A relevant example, for that matter, is 𝑋 = (𝑎∥𝑏) + 𝑋, but details are beyond the scope of this text.

19.7 Bibliographical remarks and further reading This chapter is based on [BDK01] (Eike Best, Raymond Devillers, and Maciej Koutny) and [BK03] (Eike Best and Maciej Koutny). All proofs can be found there. Neither place multiplication nor transition refinement are inventions of the box algebra. Indeed, as seen in Theorem 5.31, place multiplication occurs already in early proofs about reproduction sequences in marked graphs. In the context of process algebras, place multiplication occurs, for example, in [GM84] (by Ursula Goltz and Alan Mycroft) and in [Old91] (by Ernst-Rüdiger Olderog). The remarks on COSY, CSP and CCS refer to [LC75] (path expressions, the basis of COSY, by Roy H. Campbell and Peter E. Lauer), [JL92] (Concurrent Systems Notation, by Ryszard Janicki and Peter E. Lauer), [Hoa78] (Communicating Sequential Processes, by C.A.R. Hoare), and [Mil80] (Calculus of Communicating Systems, by Robin Milner). CSP has later been developed (broadly speaking, by adding recursion and various semantics while throwing out data), into TCSP (Theoretical CSP) [Hoa85].6 A thorough comparison between path expressions, CCS and TCSP (and other models) can be found in Michael W. Shields’ monograph [Shi97]. The terminology about semantics (denotational, operational) has very early roots. Denotational semantics goes back to Dana S. Scott and Christopher S. Strachey, see e.g. [Sco76] by D. Scott, [Sto79] by Joseph E. Stoy, and [Sco00] (Scott on Strachey). Structural Operational Semantics (SOS) has first been defined for process algebras and programming languages, see [Plo79] and [Plo04] (Gordon D. Plotkin) and [HP79a] (Matthew Hennessy and Gordon D. Plotkin).7 In the classical operational 6 See also http://www.usingcsp.com/cspbook.pdf. 7 Reprinted from https://homepages.inf.ed.ac.uk/gdp/publications/sos_jlap.pdf. This is an extensive and frequently cited treatise on SOS.

452

19 The Box Algebra 1/2: Refinement and Recursion

semantics of CCS, one has: 𝑎.𝑏.nil

𝑎

−→

𝑏.nil

that is, the CCS expression 𝑎.𝑏.nil can be transformed into the CCS expression 𝑏.nil by executing an 𝑎. This differs from the treatment proposed in this chapter, in the sense that the CCS expression is dynamic by itself, rather than being adorned with dynamic markers that obey structural equivalence rules. The box algebra operational semantics (using overlining and underlining to create dynamic expressions) is inspired by the (sub-)markings of a Petri net. Using such an approach, and the concomitant structural equivalences, was first described in [KEB94] (Maciej Koutny, Javier Esparza and Eike Best). Independently, but earlier, structural congruences were introduced in the context of (an extension of) CCS by Robin Milner in [Mil92]. According to [MR05] (Mohammad R. Mousavi and Michel A. Reniers), they are based on the Chemical Abstract Machine [BB90], by Gérard Berry and Gérard Boudol. Such congruences are comparable to in spirit, but differ in their definition from, the structural equivalence ≡ , because (for instance) long CCS expressions may be structurally congruent to small ones. Hardly surprisingly, in one or the other form, the recursion rewrite rule is embodied in many process algebras allowing recursion, for example in another influential early approach called ACP (Algebra of Communicating Processes, see [BW90] by Jos C.M. Baeten and W. Peter Weijland), but also in CCS and in TCSP (not in COSY which has only iteration). Since CCS is Turing-complete (a result that is not entirely easy to prove, see [Gor17] by Roberto Gorrieri), so is R.TOYA. Its place/transition Petri net semantics is, therefore, necessarily infinite.

19.8 Exercises Exercise 19.1 Non-uniqueness of expressions Give two (essentially) different process-algebraic expressions in the B.TOYA language to generate the box shown in Figure 19.10.

𝑒

𝑖

𝑥

𝑖

𝑎

𝑎 𝑎

𝑎 𝑒

𝑎 𝑖

𝑖

Fig. 19.10: A box to Exercise 19.1.

𝑥

19.8 Exercises

453

Exercise 19.2 Process-algebraic expressions Construct the boxes of the following process-algebraic expressions: a) ((𝑎; 𝑏)∥𝑐) + ((𝑑 ∥𝑔); 𝑓 ) b) (𝑎 ∥ stop); (𝑎 + stop) Hint: The stop box has an 𝑒 place, an 𝑥 place, and nothing else. Exercise 19.3 Recursion a) Draw a box for the following recursive expression: 𝑋

def

=

𝑎 + (𝑏 ∥ 𝑋)

To understand the infinite Petri net that arises from this construction, perform three steps: (i) draw the box of 𝑎 + (𝑏 ∥ 𝑋), considering 𝑋 to be a normal transition label; (ii) draw the box of 𝑎 + (𝑏 ∥ (𝑎 + (𝑏 ∥ 𝑋))) (that is, like (1), except that 𝑋 is refined once, by the result of (1)); (iii) draw the box of 𝑎 + ((𝑏 ∥ 𝑎 + (𝑏 ∥ (𝑎 + (𝑏 ∥ 𝑋))))). b) Show that 𝑎𝑏𝑏, 𝑏𝑎𝑏 and 𝑏𝑏𝑎 can be fired in the resulting infinite net (starting from its canonical initial marking), but that 𝑏𝑎𝑎, 𝑎𝑏𝑎 and 𝑎𝑎𝑏 cannot. c) Show that the operational semantics allows 𝑎𝑏𝑏, 𝑏𝑎𝑏 and 𝑏𝑏𝑎, but not 𝑏𝑎𝑎, 𝑎𝑏𝑎 and 𝑎𝑎𝑏, to be executed. d) Consider the marking in which all 𝑥-places are marked as the final state. In that case, what is the language of this box (and thus, of the expression), in the sense of Chapter 2? e) What is the full language, that is, by definition, the prefix language plus all infinite words that can be generated? Solution 19.1 (Exercise 19.1) a) (𝑎||𝑎); 𝑎; (𝑎||𝑎) b) ((𝑎; 𝑏; 𝑎)∥(𝑎; 𝑐; 𝑎)) [𝑏, 𝑐 → 𝑎] Solution 19.2 (Exercise 19.2) See Figure 19.11. Note that the net in b) cannot be fully executed from its standard initial marking. By contrast, the net in a) has fully terminating executions. Solution 19.3 (Exercise 19.3) a): See Figure 19.12. The construction principle is as follows. The first 𝑎 (counting from left to right) takes tokens from all 𝑒-places. The second 𝑎 takes tokens from all but the leftmost 𝑒-places. The third 𝑎 takes tokens from all but the two leftmost 𝑒-places, and so on. The 𝑖’th 𝑏-labelled transition takes one token from the 𝑖’th 𝑒-place. On the 𝑥-side, the net is constructed symmetrically.

454

19 The Box Algebra 1/2: Refinement and Recursion 𝑒

𝑒

𝑒

𝑒

𝑒

a):

𝑒

b):

𝑎

𝑐

𝑔

𝑑

𝑖

𝑖

𝑎

𝑖

𝑖

𝑎

𝑓

𝑏

𝑥

𝑖

𝑥

𝑥

Fig. 19.11: Solution of Exercise 19.2. 𝑒

𝑒

𝑒

𝑒

𝑒

𝑒

... 𝑎

𝑋

𝑏

𝑎

𝑏

𝑎

𝑎

𝑏

𝑏

... ...

𝑥

𝑥

𝑥

𝑥

𝑥

𝑥

Fig. 19.12: Solution of Exercise 19.3(a). On the left-hand side, the (finite) box of the body 𝑎 + (𝑏 ∥ 𝑋) is shown. On the right-hand side, the (infinite) box of the recursive def

expression 𝑋, with 𝑋 = 𝑎 + (𝑏 ∥ 𝑋), is shown.

b): Any two 𝑎-labelled transitions are in conflict with each other. Hence at most one of them can occur. On the other hand, it is possible to execute such an action “arbitrarily deep” within the recursion, and the finitely many 𝑏’s on the higher recursion levels can be executed in arbitrary order (before or after that 𝑎). By the fact that any two 𝑎’s are in conflict with each other, it is not possible to execute a sequence with two or more 𝑎’s. c): We show that 𝑏𝑎𝑏 is firable; the other claims are analogous.

19.8 Exercises ∅

𝑋 −→ 𝑎 + (𝑏∥ 𝑋) ∅

455



2

𝑎 + (𝑏∥ 𝑋)

−→ 𝑎 + (𝑏∥(𝑎 + (𝑏∥ 𝑋)))



2

𝑎 + (𝑏∥(𝑎 + (𝑏∥ 𝑋)))



−→ 𝑎 + (𝑏∥(𝑎 + (𝑏∥(𝑎 + (𝑏∥ 𝑋)))))



𝑎 + (𝑏∥(𝑎 + (𝑏∥(𝑎 + (𝑏∥ 𝑋)))))

𝑏

−→ 𝑎 + (𝑏∥(𝑎 + (𝑏∥(𝑎 + (𝑏∥ 𝑋))))) 𝑎

−→ 𝑎 + (𝑏∥(𝑎 + (𝑏∥(𝑎 + (𝑏∥ 𝑋))))) 𝑏

−→ 𝑎 + (𝑏∥(𝑎 + (𝑏∥(𝑎 + (𝑏∥ 𝑋))))) Sequences with two or more 𝑎’s are not firable because, in order to enable some 𝑎 at some level, all higher-level 𝑎’s are disabled, and no lower level 𝑎 can be enabled any longer. d): The language consists of all finite sequences containing exactly one 𝑎 and arbitrarily many 𝑏’s: (𝑏 ∗ 𝑎)



e): The full language consists of all finite sequences containing at most one 𝑎 and arbitrarily many 𝑏’s. There is also one infinite sequence which is firable: 𝑏 ∗ ∪ (𝑏 ∗

 𝑎)

∪ 𝑏∞

Chapter 20

The Box Algebra 2/2: Iteration and Data

Abstract In a programming language, recursion and iteration are the main techniques for creating repetitive (and potentially infinite) behaviours. In Chapter 19, recursion was treated. The first part of this chapter is a continuation and deals with iteration in the framework of the box algebra. In a programming language, variables and data play an important role. The second part of this chapter describes how data aspects can be included in the translation from a process algebra into place/transition Petri nets.

Contents 20.1 20.2 20.3 20.4 20.5 20.6 20.7 20.8

Introductory remarks . . . . . . . . . . . . Iteration . . . . . . . . . . . . . . . . . . . A programming language . . . . . . . . . Declarations and variables . . . . . . . . . Control flow . . . . . . . . . . . . . . . . Summary on boundedness and finiteness . Bibliographical remarks and further reading Exercises . . . . . . . . . . . . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

457 458 465 466 469 474 474 475

20.1 Introductory remarks Infinite behaviour can be generated by iteration or by recursion. Programming languages usually offer both possibilities. In the previous chapter (Section 19.6), we have added recursion to the basic B.TOYA algebra, creating a Turing-powerful process algebraic model, which implies that neither finite safe, nor even finite unbounded, place/transition Petri nets are sufficient for their semantics. Adding iteration has to © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 E. Best and R. Devillers, Petri Net Primer, Computer Science Foundations and Applied Logic, https://doi.org/10.1007/978-3-031-48278-6_20

457

458

20 The Box Algebra 2/2: Iteration and Data

be investigated closely, since (as we have seen in Figure 19.3 of Section 19.3) this may lead out of the realm of finite safe nets. To motivate the study of iteration construct(s), consider a typical loop in some programming language, such as 𝑠 := 0; 𝑖 := 1; while 𝑖 ≤ 𝑛 do 𝑠 := 𝑠 + 𝑖; 𝑖 := 𝑖 + 1 endwhile |{z} | {z } | {z } 𝐵

Init

(20.1)

𝐶

which adds up the first 𝑛 natural numbers. The initialisation part Init syntactically precedes the while, but logically, it can be viewed as belonging to the loop. Checking the entry condition 𝐵 is part of the loop, as is the execution of the command 𝐶. Implicitly, checking the exit condition ¬𝐵 is also part of the loop. In order to model such a loop, we shall consider a process-algebraic expression [𝐸 1 ∗ 𝐸 2 ∗ 𝐸 3 ] where 𝐸 1 is an initialisation activity, 𝐸 2 is a loop’s body, and 𝐸 3 is a termination activity. Intuitively, this allows the control structure of the loop (20.1) to be translated into   (𝑠 := 0; 𝑖 := 1) ∗ (𝑖 ≤ 𝑛; 𝑠 := 𝑠 + 𝑖; 𝑖 := 𝑖 + 1) ∗ (¬(𝑖 ≤ 𝑛)) | {z } | {z } | {z } Init

𝐶

¬𝐵

where the loop entry condition is treated as a “guard”, i.e., a programming language primitive which can be executed only if it evaluates to true. The loop entry condition appears negated as a termination action, because upon loop termination, testing the falsehood of 𝐵 is the last activity. In a shared variable environment, this can become important in order to take into account, or at least, to keep in mind that a competing process may tamper with the variables occurring in 𝐵. In the B.TOYA context, 𝐸 1 , 𝐸 2 and 𝐸 3 will be allowed (at first) to be arbitrary expressions. This immediately creates two problems: firstly, one may nest loops arbitrarily, and secondly, one may insert a parallel composition at arbitrary positions of the loop, and in particular, within 𝐸 1 . The next section shows how these problems can be dealt with.

20.2 Iteration B.TOYA can be extended by four different iteration constructs: [𝐸 1 ∗ 𝐸 2 ∗ 𝐸 3 ],

[𝐸 1 ∗ 𝐸 2 ⟩, ⟨𝐸 2 ∗ 𝐸 3 ] and ⟨𝐸 2 ⟩ where 𝐸 1 is an initialisation (executed at most once), 𝐸 2 is the repeated part (executed zero or more times), and 𝐸 3 is the termination part (executed at most once). The expression [𝐸 1 ∗ 𝐸 2 ∗ 𝐸 3 ] has all three parts, while in ⟨𝐸 2 ∗ 𝐸 3 ], initialisation is missing. From the four variants, only these two will be considered in the present chapter. Their operator boxes are shown in Figure 20.1. In loose parlance, 𝐸 1 is called the (front) guard of a loop while 𝐸 3 is called the rear guard. Figure 20.2 shows some examples, already anticipating their Petri net semantics.

20.2 Iteration

459 𝑒

𝜌id 𝐸1 𝐸2 𝜌id

𝑖

𝐸2 𝜌id

𝑒

𝜌id 𝐸3

𝜌id 𝐸3

𝑥

𝑥

OB [𝐸1 ∗𝐸2 ∗𝐸3 ]

OB⟨𝐸2 ∗𝐸3 ]

Fig. 20.1: Operator boxes for [𝐸 1 ∗ 𝐸 2 ∗ 𝐸 3 ] (l.h.s.) and ⟨𝐸 2 ∗ 𝐸 3 ] (r.h.s.). 𝑒

𝑑 𝑠1 𝑖

𝑠3

𝑠2 𝑖

𝑠4 𝑖

𝑎

𝑖

𝑏 𝑐

𝑠1 𝑒

𝑠3

𝑠2 𝑒

𝑠4 𝑒

𝑎

𝑏 𝑐

𝑥

Net( [𝑑 ∗ 𝑎 ∥𝑏 ∗ 𝑐] )

𝑒

𝑥

Net( [ ⟨𝑎 ∥𝑏 ∗ 𝑐] )

Fig. 20.2: Nets of [𝑑 ∗ 𝑎∥𝑏 ∗ 𝑐] (l.h.s.) and of ⟨𝑎∥𝑏 ∗ 𝑐] (r.h.s.).

460

20 The Box Algebra 2/2: Iteration and Data

Control flow of the example loop (20.1) can thus be understood in this syntax   as (𝑠 := 0; 𝑖 := 1) (𝑖 ≤ 𝑛; 𝑠 := 𝑠 + 𝑖; 𝑖 := 𝑖 + 1) (¬(𝑖 ≤ 𝑛)) * *

 or as 𝑠 := 0; 𝑖 := 1; (𝑖 ≤ 𝑛; 𝑠 := 𝑠 + 𝑖; 𝑖 := 𝑖 + 1) (¬(𝑖 ≤ 𝑛))

*

The characteristic identifications required by operational semantics are presented in (20.2):1 The repetitive nature of 𝐸 2 is embodied by the central ≡ in lines 2 and 4. [𝐸 1 ∗𝐸 2 ∗𝐸 3 ] ≡ [𝐸 1 ∗𝐸 2 ∗𝐸 3 ] [𝐸 1 ∗𝐸 2 ∗𝐸 3 ] ≡ [𝐸 1 ∗𝐸 2 ∗𝐸 3 ] ≡ [𝐸 1 ∗𝐸 2 ∗𝐸 3 ] ≡ [𝐸 1 ∗𝐸 2 ∗𝐸 3 ] [𝐸 1 ∗𝐸 2 ∗𝐸 3 ] ≡ [𝐸 1 ∗𝐸 2 ∗𝐸 3 ]

(20.2)

⟨𝐸 2 ∗𝐸 3 ] ≡ ⟨𝐸 2 ∗𝐸 3 ] ≡ ⟨𝐸 2 ∗𝐸 3 ] ≡ ⟨𝐸 2 ∗𝐸 3 ] ⟨𝐸 2 ∗𝐸 3 ] ≡ ⟨𝐸 2 ∗𝐸 3 ]

20.2.1 S-invariant analysis of refinements Theorem 19.18 can be extended by analysing what happens to S-invariants during refinement. Since boundedness follows structurally from coveredness by S-invariants, S-invariant coverability is of interest. Let 𝑁 be a net with 𝑛 transitions 𝑡 1 , . . . , 𝑡 𝑛 , whose places are inscribed by 𝑒/𝑖/𝑥. Let 𝑁1′ , . . . , 𝑁 𝑛′ also be 𝑛 𝑒/𝑖/𝑥-inscribed nets. Consider the refined net e = 𝑁 (𝑁 ′ , . . . , 𝑁 𝑛′ ) 𝑁 (20.3) 1 The question to be answered now is whether, from the S-invariant coverability of 𝑁 e and 𝑁𝑖 , something can be inferred about the coverability of 𝑁. Definition 20.1 aggregates For an 𝑒/𝑖/𝑥-inscribed net, an S-aggregate is defined as an S-invariant with natural number entries which assigns 1 to exactly one 𝑒-place and to exactly one 𝑥-place, while all other 𝑒- and 𝑥-places are assigned 0. Intuitively, an S-aggregate describes a path system on which a single token on an 𝑒-place can be transformed into a single token on an 𝑥-place, without needing to “split” tokens on the way. Example 20.2 S-aggregates in some previous examples Each of the operator nets of Figure 20.1 has exactly one S-aggregate, namely assigning 1 to every place. The net 𝑁 ′ = Net(𝑎∥𝑏) of Figure 19.3 has exactly two S-aggregates, namely the left line and the right line. The N-shaped net of Figure 19.5 has precisely three S-aggregates. 1 The context rule (CR) of Section 19.5 applies as well.

20.2 Iteration

461

Most of the nets investigated in this chapter are covered by S-aggregates, in the following sense. Definition 20.3 T-covering and S-covering by S-aggregates An 𝑒/𝑖/𝑥-inscribed net is called T-covered by S-aggregates, if every transition lies on the subnet spanned by some S-aggregate, and S-covered by S-aggregates, if every place lies on some S-aggregate. The most important properties of nets which are covered by S-aggregates in the sense of Definition 20.3 are the following. 1. If a place is covered by an S-aggregate, then it can carry at most one token in any successor marking of ◦ 𝑁 or of 𝑁 ◦ .2 If two places are covered by the same S-aggregate, they cannot be marked simultaneously. 2. If a transition is covered by an S-aggregate, it is never auto-concurrent (i.e., by definition, enabled concurrently with itself, see Definition 1.19). This follows because there is no transition without pre-places, and for other transitions, thanks to their being covered by an S-aggregate, there is always a pre-place with at most 1 token. If two transitions are covered by the same S-aggregate, they can never be activated concurrently. Theorem 20.4 Propagation of T-coverings Suppose 𝑁 is T-covered by S-aggregates and suppose all 𝑁𝑖′ (1 ≤ 𝑖 ≤ 𝑛) are e = 𝑁 (𝑁 ′ , . . . , 𝑁 𝑛′ ) is also T-covered by T-covered by S-aggregates. Then 𝑁 1 S-aggregates. This means that with “normal” operator nets (e.g. those of CCS, or those of B.TOYA) there can never be a transition that is concurrently enabled with itself in some reachable state. The premises of Theorem 20.4 are always fulfilled and cannot be changed by arbitrary refinements or recursion. Nevertheless, one may have non-safe places for iteration. Indeed, the propagation of S-coverings is less straightforward. To see this, reconsider Figures 20.1 and 20.2. The operator box of ⟨𝐸 2 ∗ 𝐸 3 ] (Figure 20.1) and the nets of 𝑎∥𝑏 (Figure 19.3) and 𝑐 (Figure 19.1) are S-covered by S-aggregates, but the net of ⟨𝑎∥𝑏 ∗ 𝑐] (Figure 20.2) is not. Neither 𝑠2 nor 𝑠3 are covered by S-aggregates (but all other places are). This observation does not depend on the choice of iteration, since the same remark is true for the net of [𝑑 ∗ 𝑎∥𝑏 ∗ 𝑐]. Theorem 20.5 S-covering propagation, part 1 Let 𝑁 and all 𝑁𝑖′ (1 ≤ 𝑖 ≤ 𝑛) be S-covered by S-aggregates. Furthermore, suppose that for every 𝑡 𝑖 in 𝑁 which satisfies 𝑡𝑖• ∩ • 𝑡 𝑖 ≠ ∅ (i.e., is part of a

2 To see this, recall that ◦ 𝑁 is defined in such a way that every 𝑒-place carries 1 token and every other place, including all 𝑖-places, carries 0 tokens, and similarly for 𝑁 ◦ .

462

20 The Box Algebra 2/2: Iteration and Data

side loop) and for every pair (𝑠𝑒 , 𝑠 𝑥 ) with 𝑠𝑒 ∈ ◦ 𝑁𝑖′ and 𝑠 𝑥 ∈ 𝑁𝑖′◦ , there is an S-aggregate in 𝑁𝑖′ covering both 𝑠𝑒 and 𝑠 𝑥 . e = 𝑁 (𝑁 ′ , . . . , 𝑁 𝑛′ ) is S-covered by S-aggregates. Then 𝑁 1 This theorem is a static counterpart of Theorem 19.18, since the condition that every pair (𝑠𝑒 , 𝑠 𝑥 ) is S-covered by an S-aggregate implies ex-exclusivity. The theorem does not capture the situation depicted in Figure 20.2, since Net(𝑎∥𝑏) does not satisfy the premise. It contains two pairs (𝑠𝑒 , 𝑠 𝑥 ) that are not covered by a common S-aggregate: the top left 𝑒-place and the bottom right 𝑥-place, as well as the top right 𝑒-place and the bottom left 𝑥-place. The next result will be applicable in such cases. To motivate it, consider the difference between the iteration [𝐸 1 ∗ 𝐸 2 ∗ 𝐸 3 ] which is both front-guarded and rear-guarded, and the iteration ⟨𝐸 2 ∗ 𝐸 3 ] which is only rear-guarded. A difference can be seen on their operator boxes. The box for [𝐸 1 ∗ 𝐸 2 ∗ 𝐸 3 ] is both 𝑒-directed and 𝑥-directed. In particular, there is no loop involving an 𝑒-place or an 𝑥-place. Example 20.6 Locating the source of non-safeness (i) Net([ 𝑑 ′ ∗ ([𝑑 ∗ 𝑎∥𝑏 ∗ 𝑐])∥𝑏 ′ ∗ 𝑐 ′ ]), obtained by nesting the two-way guarded form of iteration, is still 2-bounded and remains so after arbitrarily deeply nestings. (ii) Net(⟨ (⟨𝑎∥𝑏∗𝑐])∥𝑏 ′ ∗𝑐 ′ ]), obtained by nesting the two-way guarded form of iteration, is 3-bounded, but not 2-bounded. After another nesting, it is 4-bounded, but not 3-bounded; and so on. (iii) Theorem 20.4 is true for all of these nestings, that is, none of the nets from (i) and (ii) has any auto-concurrent transitions, regardless of how deep the nestings are.

Definition 20.7 Half-aggregate For an 𝑒/𝑖/𝑥-inscribed net, an S-half-aggregate is an S-invariant having natural multiples of 12 as entries, which assigns exactly one 𝑒-place and exactly one 𝑥-place with 1, while all other 𝑒-places and 𝑥-places are assigned 0. Intuitively, an S-half-aggregate describes a path system on which a single token on an 𝑒-place can be transformed into a single token on an 𝑥-place, while on the way there can be a split into at most two tokens. Example 20.8 Figure 20.2 • Net( [𝑑 ∗ 𝑎∥𝑏 ∗ 𝑐] ) has the following S-half-aggregate: Assign 1 to the 𝑒-place and to the 𝑥-place; assign 0 to places 𝑠1 and 𝑠4 ; assign 12 to places 𝑠2 and 𝑠3 . • Net(⟨𝑎∥𝑏 ∗ 𝑐]) does not have any S-half-aggregate through either 𝑠2 or 𝑠3 . Since these places are 𝑒-places, Definition 20.7 requires at least one them to be assigned the number 1.

20.2 Iteration

463

Definition 20.9 S-half-aggregate covering An 𝑒/𝑖/𝑥-inscribed net is called S-covered by S-half-aggregates, if every place lies on some S-half-aggregate. The salient property of places which are covered by S-half-aggregates is the fact that under every successor marking of ◦ 𝑁 or of 𝑁 ◦ , they can carry at most 2 tokens. Hence nets which are covered by S-half-aggregates are 2-bounded. Theorem 20.10 S-covering propagation, part 2 Let 𝑁 and all 𝑁𝑖′ (1 ≤ 𝑖 ≤ 𝑛) be S-covered by S-aggregates. Further, suppose that all 𝑖-places are S-covered by S-half-aggregates. e = 𝑁 (𝑁 ′ , . . . , 𝑁 𝑛′ ), all 𝑒-places and 𝑥-places are S-covered by Then in 𝑁 1 S-aggregates, and all 𝑖-places are S-covered by S-half-aggregates. This theorem explains why, in the context of [𝐸 1 ∗ 𝐸 2 ∗ 𝐸 3 ]-shaped iterations, where the place involved in the loop is an 𝑖-place, the worst case is a violation of safeness; but 2-boundedness is always guaranteed, even for arbitrary nestings of the loop and/or the parallel operator (and in general, any non-ex-exclusive operator). In the context of ⟨𝐸 2 ∗ 𝐸 3 ]-shaped iterations, however, any arbitary boundedness can be violated by nesting this loop sufficiently deeply. Indeed, even boundedness as such can be violated if recursion is additionally taken into account: 𝑋

with

df

df

𝑋 = ⟨𝑌 ∗ 𝑐] and 𝑌 = 𝑋 ∥𝑏

has an unbounded solution. Theorem 20.4 still holds even in this case, though. When the syntax, say, of B.TOYA is expanded by iteration constructs of the form (20.2), it is possible to incorporate the premises of Theorems 20.5 and 20.10 syntactically. This is not shown here, but the next subsection explains a similar construction for another purpose. It is also possible for the user of such loops to restrict their use suitably, for instance, by avoiding the use of non-ex-exclusive operators (such as the parallel operator) at the beginning of a front-unguarded iteration. Alternatively, the sources of non-safeness considered in this section may be deemed (interesting, perhaps, but also) relatively harmless, compared with Theorem 20.4 which guarantees that transitions are never simultaneously activated with themselves. This has the repercussion that SOS semantics (20.2) and Petri net semantics are always consistent with each other, no matter whether there are non-safe places or not.

20.2.2 Behavioural compositionality Even a safe translation of iteration does not guarantee behavioural compositionality. Behavioural compositionality means that the behaviour of a composite expression is expressed as the composition of the behaviours of its parts. This is the case, for

464

20 The Box Algebra 2/2: Iteration and Data

instance, if the behaviour of a choice 𝐸 1 + 𝐸 2 is always either a behaviour of 𝐸 1 or a behaviour of 𝐸 2 . For the rear-guarded loop, behavioural compositionality would mean that a behaviour of ⟨𝐸 2 ∗ 𝐸 3 ] is a sequence of complete behaviours of 𝐸 2 , followed by a behaviour of either 𝐸 2 or 𝐸 3 . Here, one has to be careful with nestings (again). Consider two examples: ⟨𝑎 ∗ 𝑏] + 𝑐 and ⟨ ⟨𝑎 ∗ 𝑏] ∗ 𝑐 ]

(20.4)

In the first of these expressions, both SOS (20.2) and the Petri net semantics via 20.1 allow the behaviour 𝑎𝑐, even though 𝑎𝑐 is neither behaviour of ⟨𝑎 ∗ 𝑏] nor a behaviour of 𝑐. In the second expression, both SOS and Petri net semantics allow 𝑎𝑐 as a behaviour, even though 𝑎 is not a complete behaviour of ⟨𝑎 ∗ 𝑏]. In both cases, there is no behavioural compositionality. The culprit is, of course, (again) the loop around an 𝑒-place. If only fully guarded iterations of the form [𝐸 1 ∗ 𝐸 2 ∗ 𝐸 3 ] are used, the phenomenon vanishes. If this is perceived as a problem, there are (at least) two solutions. • Unwinding the loop once. This is probably the most commonly used method in process algebra in general. It consists of forcing guardedness by replacing a loop such as ⟨𝑎 ∗ 𝑏] by the loop [𝑎 ∗ 𝑎 ∗ 𝑏], in choice with a stand-alone 𝑏. • Restricting nesting by syntax. This means that the interplay between nesting and loops around 𝑒- (or 𝑥-) places is syntactically governed in such a way that cases such as described in (20.4) cannot arise. This solution is described in the remainder of this section. Consider the following five conditions (where the notation refers to a refinement according to (20.3)). 1. If 𝑡 𝑖 ≠ 𝑡 𝑗 and • 𝑡𝑖 ∩ • 𝑡 𝑗 ≠ ∅ in 𝑁, then 𝑁𝑖′ and 𝑁 ′𝑗 are 𝑒-directed. 2. If 𝑡 𝑖 ≠ 𝑡 𝑗 and 𝑡𝑖• ∩ 𝑡 •𝑗 ≠ ∅ in 𝑁, then 𝑁𝑖′ and 𝑁 ′𝑗 are 𝑥-directed. 3. If 𝑡 𝑖• ∩ • 𝑡 𝑗 ≠ ∅ in 𝑁, then 𝑁𝑖′ is 𝑥-directed or 𝑁 ′𝑗 is 𝑒-directed. 4. If 𝑡 𝑖• ∩ ◦ 𝑁 ≠ ∅ in 𝑁, then 𝑁𝑖′ is 𝑥-directed. 5. If • 𝑡 𝑗 ∩ 𝑁 ◦ ≠ ∅ in 𝑁, then 𝑁 ′𝑗 is 𝑒-directed. Theorem 20.11 Ensuring behavioural compositionality If Conditions 1. to 5. are satisfied, then behavioural compositionality (in the sense described above) is ensured. If only loops of the form [𝐸 1 ∗ 𝐸 2 ∗ 𝐸 3 ] are allowed, Conditions 1. to 5. are automatically satisfied. In order to allow loops of the form ⟨𝐸 2 ∗ 𝐸 3 ] and still ensure Conditions 1. to 5., it is possible to force them to hold by construction, as schematised in Table 20.1. The superscripts wf, 𝑒𝑑𝑖𝑟, 𝑥𝑐𝑙 and 𝑒𝑑𝑖𝑟:𝑥𝑐𝑙 mean, respectively: well-formed; 𝑒directed; ex-exclusive; and both 𝑒-directed and ex-exclusive. This syntax also guarantees the premises of Theorem 20.5, so that all nets constructed by it are safe as well as behaviourally compositional.

20.3 A programming language

465

𝐸 wf

::= 𝑋 wf | 𝐸 wf [𝜌] | 𝐸 wf ∥ 𝐸 wf | ⟨𝐸 𝑒𝑑𝑖𝑟:𝑥𝑐𝑙 ∗ 𝐸 𝑒𝑑𝑖𝑟 ] | 𝐸 𝑥𝑐𝑙 | 𝐸 𝑒𝑑𝑖𝑟

𝐸 𝑒𝑑𝑖𝑟

::= 𝑋 𝑒𝑑𝑖𝑟 | 𝐸 𝑒𝑑𝑖𝑟 [𝜌] | 𝐸 𝑒𝑑𝑖𝑟 ∥ 𝐸 𝑒𝑑𝑖𝑟 | 𝐸 𝑒𝑑𝑖𝑟 + 𝐸 𝑒𝑑𝑖𝑟 | 𝐸 𝑒𝑑𝑖𝑟:𝑥𝑐𝑙

𝐸 𝑥𝑐𝑙

::= 𝑋 𝑥𝑐𝑙 | 𝐸 𝑥𝑐𝑙 [𝜌] | 𝐸 wf ; 𝐸 wf | ⟨𝐸 𝑒𝑑𝑖𝑟:𝑥𝑐𝑙 ∗ 𝐸 𝑒𝑑𝑖𝑟:𝑥𝑐𝑙 ] | 𝐸 𝑒𝑑𝑖𝑟:𝑥𝑐𝑙

𝐸 𝑒𝑑𝑖𝑟:𝑥𝑐𝑙

::= 𝑋 𝑒𝑑𝑖𝑟:𝑥𝑐𝑙 | 𝐸 𝑒𝑑𝑖𝑟:𝑥𝑐𝑙 [𝜌] | 𝐸 𝑒𝑑𝑖𝑟 ; 𝐸 wf | 𝐸 𝑒𝑑𝑖𝑟:𝑥𝑐𝑙 + 𝐸 𝑒𝑑𝑖𝑟:𝑥𝑐𝑙 | 𝑎 df

where 𝑎 is in Γ, 𝜌 is a relabelling, and every variable 𝑋 𝛽 is defined by an equation 𝑋 𝛽 = 𝐸 𝛽 .

Table 20.1: Syntactic forcing of behavioural compositionality.

By this syntax, the repeated part of a loop is in 𝐸 𝑒𝑑𝑖𝑟:𝑥𝑐𝑙 , hence it is ex-exclusive. Then Theorem 20.5 (and then also Theorem 19.18) is applicable, which implies that all produced nets are safe. The repeated part is also 𝑒-directed. Hence Condition 1. holds. As all nets produced by the syntax shown in Table 20.1 (and, say, R.TOYA) are 𝑥-directed, Conditions 2., 3. and 4. also hold true. Condition 5. holds because all operator boxes are 𝑥-directed. Finally, Theorem 20.11 can be applied.

20.3 A programming language Now is the time to depart from atomic actions being simply letters and turn to actions that involve data, such as assignments or (handshake) communications, as in occam or in CSP (Communicating Sequential Processes), another established process algebra and a long-time “rival” of CCS. As this goes some way towards a typical programming language, variable declarations of the form var 𝑥 : “a set of values”; as well as channel declarations of the form chan 𝑐; and blocks of the form begin . . . end shall now be allowed, The syntax for commands is described in Table 20.2.3

Program ::= Block Block

::= begin Body end

Body

::= Decl; Body | Com

Decl

::= var 𝑥 : Set | var 𝑐 : chan

Com

::= Block | skip | stop | | Com1 ; Com2 | 𝐶𝑜𝑚1 ∥Com2 | if GC1 ... GC 𝑚 fi | do GC1 ... GC 𝑚 od

Act

::= 𝑥:=Expr | 𝑐?𝑥 | 𝑐!Expr

GC

::= GC; Com | |

Table 20.2: Syntax of Programs. Expr and BExpr are arithmetical and Boolean expressions, respectively. A syntax for them is assumed to be known. Atomic actions are enclosed in brackets < and >. A guard in the form can be passed (atomically) if the Boolean expression BExpr is evaluated to true, otherwise execution stops just prior to executing it.

3 To concentrate on how variables and channels are treated, recursion shall be ignored.

466

20 The Box Algebra 2/2: Iteration and Data

Example 20.12 A small program with handshake communication (1) (2) (3)

begin var 𝑥, 𝑦 : {0, 1}; chan 𝑐, 𝑑;   ; ∥ ; < 𝑦 := 1−𝑦 > ; end

Here, 𝑥 and 𝑦 are binary variables whereas 𝑐 and 𝑑 are channels. By definition, a channel output command 𝑐!Expr can only occur together with a matching channel input command 𝑐?𝑥 over the same channel, and if both commands occur together, then they have the combined effect of an (atomic) assignment 𝑥 := Expr. This is often referred to as handshake synchronisation. Thus, the overall effect of the program is

< 𝑦 := 0>; < 𝑦 := 1−𝑦 >; After termination, 𝑥 = 𝑦 = 1. Programs will be given a Petri net semantics in two stages. The general idea is to put a data part in parallel with a control part. The data part describes all possible changes of values of variables. The control part picks from this set only those that are mandated by the flow of control. The syntax of Com contains several of the connectives analysed in the previous chapter, and the theory developed there can be re-used.

20.4 Declarations and variables Every variable declaration var 𝑥 : Set will be given a Petri net semantics with as many places as Set has elements,4 plus two more places, an 𝑒-place and an 𝑥-place. The transitions of this net, denoted by 𝑥 𝑎𝑏 , where 𝑎, 𝑏 ∈ 𝑆𝑒𝑡 ∪ {•}, describe all possible value changes, including “no change”, and also including initialisation (modifying the value from “undefined”, represented by •) and de-initialisation (modifying the value to •). The data boxes 𝐵𝑜𝑥(𝑥) of a unary variable var 𝑥 : {0} and a binary variable var 𝑥 : {0, 1} are shown in Figure 20.3. Initialisations and de-initialisations make sense in the context of nested blocks. They also provide data boxes with an 𝑒-place and with an 𝑥-place, making them capable of being composed with other boxes. Moreover, they have an indigenous SOS semantics. For these reasons (having both a Petri net and a consistent SOS semantics), they can be treated as primitives of a process algebra.

4 This means, of course, that we may have to deal with infinite nets.

20.4 Declarations and variables

467

𝑒

𝑒

𝑥•0

𝑥•0

𝑥•1

𝑥01 𝑥=0

𝑥=0

𝑥00

𝑥••

𝑥=1

𝑥00

𝑥11

𝑥••

𝑥10

𝑥0•

𝑥

𝐵𝑜𝑥 (var 𝑥 : {0} )

𝑥0•

𝑥1•

𝑥

𝐵𝑜𝑥 (var 𝑥 : {0, 1} )

Fig. 20.3: Data box 𝐵𝑜𝑥(𝑥) of a unary (l.h.s.) and binary (r.h.s.) variable 𝑥.

Example 20.13 Another small program with nested blocks begin var 𝑥 : {0}; var 𝑦 : {0, 1}; ; begin var 𝑥 : {1} ; < 𝑦 := 𝑥 >; end end Table 20.3 provides a full process-algebraic equivalent of the program of Example 20.13, which will now be explained step by step. First note that the transitions of data boxes are labelled by 𝑥 𝑣1 𝑣2 where 𝑣1 and 𝑣2 are either values in the Type of 𝑥, or •. For brevity, call these labels particles. They come in hatted and unhatted versions, i.e. in conjugate pairs, just as in CCS (Section 19.4.1).5 Generally, transitions will be labelled by multisets of particles.6 Intuitively, these labels correspond to the variables read (or written to) by the transition in question, and if the program is 5 By convention, hatted particles are used for commands while unhatted ones are used for data. 6 This includes data and channel boxes as, e.g., in Figure 20.3. The transition labels shown there are meant to denote one-element sets.

468

20 The Box Algebra 2/2: Iteration and Data [ { 𝑥•• , 𝑥•0 , 𝑥0• , 𝑥00 } :

% declaration of outer 𝑥

( 𝐵𝑜𝑥 (var 𝑥 : {0} ) ∥

% data box of outer 𝑥 % with Type( 𝑥 ) = {0}

( { 𝑦•• , 𝑦•0 , 𝑦•1 , 𝑦0• , 𝑦1• , 𝑦00 , 𝑦01 , 𝑦10 , 𝑦11 } : % declaration of 𝑦 ( (𝐵𝑜𝑥 (var 𝑦 : {0, 1} ) ) ∥

% data box of 𝑦 % with Type( 𝑦) = {0, 1}

( ( ( { 𝑥c c •0 } + { 𝑥 00 } ); [ { 𝑥•• , 𝑥•1 , 𝑥1• , 𝑥11 } :

% < 𝑥 := 0> % declaration of inner 𝑥

(𝐵𝑜𝑥 (var 𝑥 : {1} ) ∥

% data box of inner 𝑥 % with Type( 𝑥 ) = {1}

( ( { 𝑥c c •1 } + { 𝑥 11 } );

% < 𝑥 := 1> ( { 𝑦c , 𝑥 c } + { 𝑦 c , 𝑥 c } + { 𝑦 c , 𝑥 c } ) % < 𝑦 := 𝑥 > note: at this •1 11 01 11 11 11 % point, Type( 𝑥 ) = {1}

); ( { 𝑥c c 1• } + { 𝑥 •• } )

% termination of inner 𝑥

) ] ); ( { 𝑦c c c 0• } + { 𝑦 1• } + { 𝑦 •• } )

% termination of 𝑦

) ) ]; ( { 𝑥c c •• } + { 𝑥 1• } )

% termination of outer 𝑥

) ) ]

Table 20.3: Process-algebraic expression corresponding to Example 20.13.

well-formed, they will all be ∅ in the full translation, because all data flows are represented explicitly. Blocks are translated into scope expressions [ 𝐴 : 𝐸] where 𝐴 is a set of action particles. A scope expression consists of synchronisation followed by restriction, where synchronisation combines hatted and unhatted particles, and restriction gets rid of their origins. The effect of this is illustrated in Figure 20.4. Because successive synchronisation is commutative and associative, and so is restriction, it is possible to generalise them to particle sets. Thus, [ 𝐴 : 𝐸] means that all transitions in 𝐸 are first synchronised and then restricted over all particles in 𝐴. Synchronisation is actually quite similar to CCS’s parallel operation, except that it operates on multisets of particles (of which two are conjugates), rather than on only one particle and its conjugate. Observe that in Table 20.3, there are three scopings, corresponding to the three declarations. Channels are treated in a similar way. The translation (of programs with


the syntax shown in Table 20.2) is essentially completed by using the refinement mechanism of the previous chapter for command connectives (sequence, choice, parallel composition), by putting the data part in parallel with the command part of a block, and by finishing a block by means of de-initialisations.

Fig. 20.4: Illustration of synchronisation sy and restriction rs. In the sy step, two pairs of multisets are synchronised because they carry conjugate a or c labels: {c} is synchronised with {ĉ}, creating an ∅-labelled transition; and {a, b} is synchronised with {â, d}, creating a {b, d}-labelled transition. In the rs step, transitions having an a or a c in their label are erased.
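The sy and rs steps of Figure 20.4 can be mimicked directly on multiset labels. The following much-simplified Python sketch is illustrative only (it synchronises one conjugate pair at a time and does not re-synchronise freshly created labels within the same step); it reproduces the example of the figure:

    from collections import Counter
    from itertools import combinations

    def synchronise(labels, particle):
        """One sy step: fuse every pair of labels carrying conjugate copies of `particle`."""
        new = list(labels)
        for l1, l2 in combinations(labels, 2):
            if (l1[(particle, False)] > 0 and l2[(particle, True)] > 0) or \
               (l1[(particle, True)] > 0 and l2[(particle, False)] > 0):
                fused = l1 + l2
                fused[(particle, False)] -= 1
                fused[(particle, True)] -= 1
                new.append(+fused)              # unary "+" drops non-positive counts
        return new

    def restrict(labels, particle):
        """One rs step: erase every transition whose label still mentions `particle`."""
        return [l for l in labels if l[(particle, False)] == 0 and l[(particle, True)] == 0]

    # the four transition labels on the left-hand side of Figure 20.4
    # (False = unhatted particle, True = hatted particle)
    labels = [Counter({("c", False): 1}),
              Counter({("a", False): 1, ("b", False): 1}),
              Counter({("a", True): 1, ("d", False): 1}),
              Counter({("c", True): 1})]
    for p in ("a", "c"):
        labels = synchronise(labels, p)
    for p in ("a", "c"):
        labels = restrict(labels, p)
    print(labels)   # [Counter({('b', False): 1, ('d', False): 1}), Counter()]

Scoping [A : E] then corresponds to applying synchronise for every particle in A, followed by restrict for every particle in A.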

Example 20.14 Shared variables

    begin var x : {0, 1}; <x := 0> ∥ <x := 1> end

This program explains the parallel operator. Its process-algebraic counterpart is given in Table 20.4, and its Box is depicted in Figure 20.5.

20.5 Control flow

Having explained the principles governing the interplay between declarations and commands, we now briefly describe the other control flow connectives defined in Table 20.2. Channels have associated channel boxes, just as variables have data boxes. A fragment of such a box will be given below. Sequential composition and parallel composition are as defined in the previous chapter for B.TOYA.

    [ {x••, x•0, x•1, x0•, x1•, x00, x01, x10, x11} :    % declaration of x
      ( Box(var x : {0, 1}) ∥                            % data box of x, with Type(x) = {0, 1}
        ( ( ({x̂•0} + {x̂00} + {x̂10})                      % <x := 0>
            ∥
            ({x̂•1} + {x̂01} + {x̂11}) );                   % <x := 1>
          ({x̂0•} + {x̂1•} + {x̂••}) ) ) ]                  % termination of x

Table 20.4: Process-algebraic expression of Example 20.14. For a set A of particles, scoping [A : E] is defined as (E sy A) rs A. The block is closed, and A contains all possible action particles from x's data box. Synchronisation effects their handshakes with matching hatted particles, and restriction then gets rid of all transition labels not equal to ∅.


Fig. 20.5: Box of the expression shown in Table 20.4. The outer two 𝑒-places come from the parallel command, the other one from 𝑥’s data box. Of the 𝑥-places, one stems from the termination command, the other from the data box of 𝑥. Putting a token on each of the 𝑒-places and “playing the token game” traces the possible execution of the program of Example 20.14. Note that the systematic construction yields some dead transitions in this case.


The assignment x:=Expr has already been explained. The channel input c?x and the channel output c!Expr can be treated analogously, because their underlying effect on the variables is just an assignment x:=Expr. They are translated as follows. Suppose that x is a unary variable var x : {0} and c is a channel. Then c!0 ∥ c?x is dealt with by the translation

    c!0 ∥ c?x   ⇝   {ĉ!0} ∥ ( {ĉ?0, x̂•0} + {ĉ?0, x̂00} )                    (20.5)

On the left-hand side of the ∥, intuitively, {ĉ!0} means that the value 0 is written to c. The right-hand side means that (at the same time) x is assigned that value 0 – since it is known that this particular pairing of input/output has the semantics of the assignment x:=0. For a general matching c!Expr ∥ c?x, the expression on the right-hand side of (20.5) needs to be modified in order to express the assignment x:=Expr. Table 20.3 illustrates the idea. In order to effect the desired synchronisation, the channel box has a matching action carrying the multiset {c?0, c!0}. Scoping (i.e., synchronisation followed by restriction) is carried out over c. This leads to labels that describe x's value changes being the only ones that survive the restriction, and input/output being synchronised in the desired way. In Petri net terms, this effect is described in Figure 20.6.


Fig. 20.6: Handshake synchronisation through a channel. The transition from the channel box acts as a "catalyst" to fuse the various possible pairs of matching transitions from (c!0 ∥ c?x). This is one of the benefits of using (multi-)sets as labels. The net on the right-hand side describes x:=0.

An atomic Boolean expression is translated as a choice in which all possible truth assignments are checked. For instance, supposing var x, y : {0, 1} are two binary variables, <x = y> is translated as ({x00, y00} + {x11, y11}),7 and <x = 1> is translated as {x11}. Atomic commands and atomic guarded commands have a similar translation. For instance, with var x : {0, 1}, <x := 1> is translated as ({x01} + {x11}), and <x = 1; x := 0> is translated as {x10}.

7 For the sake of simplicity, the "undefined" value • is ignored from now on. That is, it is assumed that x and y already have defined values before x = y is evaluated.
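To make the pattern of these translations concrete, here is a small illustrative Python sketch (not taken from the book) that tabulates the particle sets for an assignment and for an equality test over binary variables, ignoring the undefined value exactly as in footnote 7:

    # Illustrative only: particle-set translations of simple atomic commands
    # over binary variables; the undefined value "•" is ignored here.
    VALUES = [0, 1]

    def translate_assignment(x, expr):
        """<x := expr> becomes a choice of particles x_{old,new}, one per old value."""
        return [{f"{x}{old}{expr(old)}"} for old in VALUES]

    def translate_equality(x, y):
        """<x = y> becomes a choice over all valuations that make the test true."""
        return [{f"{x}{v}{v}", f"{y}{w}{w}"} for v in VALUES for w in VALUES if v == w]

    print(translate_assignment("x", lambda old: 1))   # [{'x01'}, {'x11'}]  i.e. <x := 1>
    print(translate_equality("x", "y"))               # two choices: {x00, y00} and {x11, y11}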


The if and do commands are treated as follows:8

    if GC1 □ . . . □ GCm fi   ⇝   GC1 + . . . + GCm
    do GC1 □ . . . □ GCm od   ⇝   ⟨ (GC1 + . . . + GCm) ∗ <¬BE1 ∧ . . . ∧ ¬BEm> ]

where the BEj are the Boolean expressions occurring first in GCj. In particular, the front-unguarded form of the process-algebraic iteration expression is used. Nevertheless:

Theorem 20.15 Well-formedness and well-behavedness of program nets
Let P be a program of the form specified in Table 20.2 (Section 20.3). The process-algebraic expression E(P) associated with P is of the form Ewf defined in Table 20.1 in Section 20.2.2. The box (i.e., Petri net) B(P) associated with P is T-restricted, ex-restricted, and x-directed. Moreover, it is plain (i.e., has arc weights 0 or 1), and it is safe and clean under its initial marking. The SOS semantics associated with E(P) is fully consistent with the (behavioural Petri net) semantics of B(P). Moreover, they are consistent with the intuition behind the composition operators expressed by Conditions 1. to 5. of Section 20.2.2. In addition, if P contains no infinite types, then B(P) is finite.

Proof: Essentially by induction over the syntactic forms explained in Table 20.1. One of the critical parts of this proof is loops, where the form ⟨E2 ∗ E3] has been used (which is "more dangerous" than the form [E1 ∗ E2 ∗ E3]). Indeed, a loop may occur right at the beginning of a program, in which case its box would not be e-directed. However, the repeated part of such a loop always starts with an atomic action or an atomic expression, never with an immediate parallel composition. This translates into a choice between transitions (rather than a parallel composition) and does not give rise to non-safeness. 20.15

Example 20.16 Some translations
Finally, Figure 20.7 shows some possible translations. Assume the prior declaration var x, y : {0, 1} and that both variables are initialised, say to 0.a The e- or x-places of these nets are drawn in colours (blue for e, red for x), since these code fragments could occur anywhere in the program, and then the annotations of these places could also be i for "internal". Line 1 is a P(x) on a binary semaphore x, while Line 2 is like the corresponding V(x). Line 3 shows that an empty if command is a (partial) deadlock ("stop and never carry out the next command"). Line 4 explains how a nonatomic guard differs from P(x). Line 5 is a somewhat nonsensical program

8 The symbol □ has been used instead of + in Table 20.2, because it is established in the "guarded command" literature.


explaining nondeterministic choice. Line 6 illustrates the loop; it specifies continual "critical toggling" of y's value. Line 7 shows that an empty do command comes out as a skip command ("do nothing and carry on with the next command").

a So that no undefined values appear in the nets.

Translations of programs involving channel operations look similar, except that the desired synchronisation regime is additionally specified using multiset-labelled transitions.

Fig. 20.7: Sample translations (program and corresponding process-algebraic expression; the accompanying nets are not reproduced here):

    1. if <x = 1; x := 0> fi                          ⇝   {x̂10}
    2. <x := 1>                                       ⇝   {x̂01} + {x̂11}
    3. if fi                                          ⇝   "empty choice"
    4. if <x = 1>; <x := 0> fi                        ⇝   {x̂11}; ({x̂10} + {x̂00})
    5. if <x = 1> ; <x := 0> fi                       ⇝   ({x̂11} + ∅); ({x̂00} + {x̂10})
    6. do <x = 1; x := 0>; <y := 1−y>; <x := 1> od    ⇝   ⟨ {x̂10}; ({ŷ01} + {ŷ10}); ({x̂01} + {x̂11}) ∗ {x̂00} ]
    7. do od                                          ⇝   ⟨ "empty choice" ∗ ∅ ]


20.6 Summary on boundedness and finiteness

For an iteration-free and data-free algebra such as B.TOYA, there are few issues with the finiteness or with the boundedness (or safeness) of its Petri net semantics. Adding recursion immediately leads to infinite Petri nets (or to high-level Petri nets, as described in the next chapter, where infinity is "hidden" in the annotations of places and transitions). The same is true for data with infinite value domains. When only iteration is added to a finite algebra, it appears that one needs to deal with potential unboundedness. In the case of fully guarded iteration, the worst case is 2-boundedness, but even this can be avoided in favour of a safe semantics, either syntactically, or by considering an alternative Petri net semantics which is safe (but leads to larger boxes). Such a modification lies beyond the scope of the present text.

20.7 Bibliographical remarks and further reading Both parts of this chapter are based on [BDK01] (Eike Best, Raymond Devillers, Maciej Koutny). All missing proofs and extensions can be found there. An early variant of the language defined in Definition 20.2 is described in [BH93] (Eike Best, Richard P. Hopkins). The original CSP by C.A.R. Hoare [Hoa78] features handshake (or rendezvous) channel synchronisation and communication, and so does occam, a programming language based on CSP [Ros84] (Andrew W. Roscoe). Virtually all early process algebras, including CCS, ACP, COSY, CSP, and TCSP, are based on handshake synchronisation. Some, such as CCS and the original CSP, also deal with data, and some, such as ACP and COSY, also have iteration. Petri net translations have been devised for other contexts as well. For example, [Dev+03], by Raymond Devillers, Hanna Klaudel, Maciej Koutny and Franck Pommereau, focuses on asynchronous, non-handshake, communication. Modelling sequential composition and choice in combination with parallel composition creates a problem of scale because place multiplication produces a multitude of control flow places. Recent research [KKY22] (by Victor Khomenko, Maciej Koutny and Alex Yakovlev) has found that one can trim down the number of places needed for such compositions considerably. Another problem of scale is created by data and variables, because the translation given in Sections 20.3 to 20.5 produces not just possibly infinite nets, but also many undesirable dead transitions. In order to circumvent this problem, high-level Petri nets can be used. Such nets trade the explicit modelling of places and transitions against more elaborate place and transition annotations; they are briefly described in Chapter 21. “Unwinding a loop once” features prominently in [BW90] (Jos C.M. Baeten and W. Peter Weijland). The idea of putting data next to commands in order to describe nested blocks is described in Robin Milner’s first book [Mil80]. Guarded commands


are from [Dij76] (Edsger W. Dijkstra). Semaphores and the 𝑃 and 𝑉 operations have been introduced in [Dij68b] (E.W. Dijkstra).

20.8 Exercises

Exercise 20.1 Value exchange in CSP
Draw a box for the following CSP (communicating sequential processes) program:

    begin var x, y : {0, 2}; chan c, d;
      ((x := 0; c!x) ∥ (y := 2; d!y) ∥ (c?y; d?x)); skip
    end

Your box should only reflect the control structure and the synchronisation, but not the data structure or the data flow in detail; its places should be annotated by e, i, x, and its transitions should be uninterpreted, i.e., labelled by assignment statements (or by skip); also, there should be no data boxes. There should be three e places, six i places, and one x place. Using the firing sequences of this box, starting from its canonical initial marking, determine which values x and y can have during the execution of the skip command.

Exercise 20.2 Process-algebraic expressions
Construct boxes for the following process-algebraic (parallel) program expressions.
a) begin (((a; skip) ∥ b) + c) end
b) begin var y, z : {1, 2}; chan c; ((z := 1; c!z) + (z := 2; c!z)) ∥ (c?y; a) end
Treat skip, assignments, and a as single transitions.

Exercise 20.3 A simple mutual exclusion algorithm
The following parallel program uses atomic Test-and-Set operations (in contrast to Peterson's algorithm). It is much simpler, and its proof is also simpler.

    var in1, in2 : Boolean (init false);
    repeat                                  repeat
      if <¬in2; in1 := true> fi;              if <¬in1; in2 := true> fi;
      "critical section 1";                   "critical section 2";
      <in1 := false>                          <in2 := false>
    end repeat                              end repeat

a) Draw a Petri net corresponding to this program. Concentrate on the essential part (rather than giving a full translation as in the chapter on iteration and data). Use Peterson's algorithm as a template. Your net should have eight places and four transitions.


b) We want to prove the mutual exclusion property. First, compute all minimal semipositive S-invariants. Argue compellingly that these S-invariants are not sufficient for a mutual exclusion proof.
c) Use the trap method in order to prove mutual exclusion. Also give an independent (but corresponding) proof by program invariants. Argue precisely why your invariant(s) are true initially and stay true over any atomic action. Hint: The solutions are easy, but you should be careful that your arguments are completely "watertight".
d) Show that the property P : "The system satisfies the mutual exclusion property" is co-linear, by finding an appropriate matrix A and an appropriate vector b such that ¬P(M) ⇔ A·M ≥ b.
e) Specify the inequality system E + ¬P + "a well-chosen initially marked trap". Does it have a natural solution? Does this provide a (third) proof of the mutual exclusion property?
f) Show that the property Q : "The system is deadlock-free" is co-linear, by finding an appropriate matrix A and an appropriate vector b such that ¬Q(M) ⇔ A·M ≥ b.
g) Specify the inequality system E + ¬Q (not using any initially marked trap). Does it have a natural solution? Does this prove deadlock-freeness?

Solution 20.1 (Exercise 20.1)
See Figure 20.8. The following firing sequences yield all possible values of x and y during skip, namely either x = y = 2 or x = y = 0.
• (x := 0)(y := x)(y := 2)(x := y) skip yields x = 2 ∧ y = 2
• (x := 0)(y := 2)(y := x)(x := y) skip yields x = 0 ∧ y = 0
• (y := 2)(x := 0)(y := x)(x := y) skip yields x = 0 ∧ y = 0

Solution 20.2 (Exercise 20.2)
See Figure 20.9.

Solution 20.3 (Exercise 20.3)
a): See Figure 20.10.
b): The minimal semipositive S-invariants are:

    I1 = (1 1 0 0 0 0 0 0)^T
    I2 = (0 0 1 1 0 0 0 0)^T
    I3 = (0 0 0 0 1 1 0 0)^T
    I4 = (0 0 0 0 0 0 1 1)^T

They do not suffice for a mutual exclusion proof: if the two side conditions are removed from the net, these S-invariants remain the same, but mutual exclusion is no longer guaranteed.


Fig. 20.8: Solution of Exercise 20.1.


Fig. 20.9: Solution of Exercise 20.2. Unannotated places are 𝑖 places. The first (second) 𝑥 place corresponds to the first (second) 𝑒 place.


c): Consider the trap Q = {s3, s5}. Proof of mutual exclusion by contradiction: If both places s4 and s6 have tokens, then s3 and s5 are token-free, by the S-invariant properties of I2 and I3, and by the fact that I2 and I3 carry exactly one token each. But this is impossible, since Q is not token-free to start with, and remains non-empty throughout since it is a trap.

The two S-invariants I2 and I3 that played a role in the net-based proof correspond to the fact that Boolean variables have exactly one of two possible values. This fact is usually taken for granted in purely program-based proofs. Independently of the translation into a Petri net, we may consider the program invariant

    Q ≡ (¬in1 ∨ ¬in2)

Q is true initially, since both in1 and in2 are false. It stays true over the atomic action <¬in2; in1 := true> since, while in1 may become true by this action, the guard ¬in2 ensures that Q remains true. (It is here that the atomicity of this action is crucial.) Q also stays true over <in1 := false>, since setting in1 to false makes Q, at most, even "truer". Of course, this program invariant corresponds exactly to the trap Q.

d) to g): Analogous to Exercise 7.1.

Fig. 20.10: Solution of Exercise 20.3(a).

Chapter 21

High-level Petri Nets

Abstract This chapter offers a cursory, mostly informal, tour around three of the many existing high-level Petri models, with an emphasis on their place/transition Petri net semantics.

Contents
21.1 Introductory remarks, and an example
21.2 Coloured and multirelational Petri nets
21.3 Algebraic Petri nets
21.4 M-nets
21.5 Bibliographical remarks and further reading
21.6 Exercises

21.1 Introductory remarks, and an example

When place/transition Petri nets are used for substantial case studies, they tend to grow and become large. Often, however, they are built from uniform components that are parametrised in some way. This is why, fairly early on (starting in the 1970s), methods and calculi have been developed that allow one to describe and to construct large Petri nets in succinct ways. Such methods include coloured nets, multirelational nets, predicate/transition nets, algebraic nets, and M-nets (multilabel nets). These Petri net classes and their numerous variants are known by the generic name of high-level nets. High-level Petri nets can often be seen as compact representations of underlying place/transition nets. But additionally, they allow one to use techniques from a different field in order to specify and analyse a system. For example, algebraic nets


allow the user to apply methods from algebraic specifications. Some models also feature true extensions, incorporating, for instance, time, probabilities, or priorities. Coloured nets and multirelational nets are, perhaps, the most straightforward way of “compactifying” place/transition Petri nets. They are directly based on set and multiset theory. Section 21.2 introduces coloured and multirelational nets and describes their relationship to place/transition nets (which, in contrast to high-level, shall now be called elementary). Section 21.3 introduces algebraic nets and their coloured Petri net semantics. This includes, indirectly, also their elementary net semantics. Section 21.4 describes M-nets, also giving their coloured net (and thus, by transitivity, their place/transition net) semantics. M-nets are closely related to the process algebra described in the previous chapters.

21.2 Coloured and multirelational Petri nets

Let us describe the principle behind coloured nets by an example, the five philosophers model due to E.W. Dijkstra. This model was invented in order to study questions of deadlock-freeness and fairness, but here, we will be concerned with formalising it in terms of elementary and high-level Petri nets. The basic setup is that five philosophers are imagined to be sitting around a table and that a fork is located between any neighbouring pair of them. Only five forks are available in total. However, a philosopher needs both his left-hand fork and his right-hand fork for eating. Hence during the (finite) time a philosopher is eating, neither his left-hand neighbour nor his right-hand neighbour can eat; they have to wait for the necessary forks to be put back on the table. This system can be represented by the safe place/transition net shown in Figure 21.1. The five forks are modelled by the places g_i (0 ≤ i ≤ 4). Initially, each of these places carries a token, meaning that the corresponding fork is on the table (and available for being picked up). When philosopher number i wants to eat, he picks up both necessary forks simultaneously by means of the transition b_i. As soon as b_i is executed, the tokens on places g_(i−1) mod 5 and g_i disappear, and a new token is produced on place p_i. This represents the fact that "philosopher no. i is eating". When philosopher i finishes eating, he or she puts back the two used forks on the table by means of transition e_i. We can imagine other models, for instance one in which all (or some) philosophers do not grab their forks simultaneously but sequentially, one after the other. If this is done carelessly, and if a philosopher refuses to undo any fork-grabbing without subsequent eating, then there may be deadlocks (this could be avoided, however, if at least one philosopher is left-handed while at least one is right-handed). In the simpler solution depicted in Figure 21.1, such deadlocks are impossible.1

1 There could be a conspiracy by two not directly neighbouring philosophers in order to starve the one sitting between them (perhaps because he adheres to a different belief). Such a situation is very much akin to 1-unfairness which has been investigated in Chapter 8. However, we will neglect all such possibilities in the present section, since our main aim will be to see how the philosophers – in whichever solution, and with whatever behaviour, as long as it is regular – can be modelled as a high-level Petri net.


The net has an obvious regular structure, which has already been exploited by using modulo calculations (mod 5). If we wanted to model 555 instead of 5 philosophers in the same way, it is clear how the net ought to look like, and we would presumably not be patient (or stupid) enough to actually draw it. It stands to reason to seek for a method of “folding” such nets systematically into smaller (and uniform) representations. This is what coloured nets can achieve.


Fig. 21.1: Place/transition net model of five philosophers (no. 0 to 4).

Informally, the calculus of coloured Petri nets allows not just one kind of token ("black tokens") on any given place, but tokens of any arbitrary colour. It even allows arbitrarily many tokens of any given colour ("red", "green", "yellow", and so on) on a single place. In general, every place s has an associated colour set ι(s), also called its set of values, or its type, which specifies the allowed colours of tokens on that place. In an elementary place/transition net, there is only a single colour (namely, "black") for every place s, i.e., |ι(s)| = 1. Similarly, a transition t is allowed to fire (or, be executed) in any one of a specific set ι(t) of modes. In an elementary place/transition net, there is only a single mode for every transition t, i.e., |ι(t)| = 1.



Example 21.1 A coloured Petri net (Figure 21.2) The coloured net shown in Figure 21.2 has two places 𝑟 1 , 𝑟 2 and a transition 𝑢. On 𝑟 1 , tokens of two colours denoted by 1 and 2 are allowed. Initially, 𝑟 1 carries one token 1 and no token 2. The tokens on 𝑟 2 are also allowed to be of one of two colours, denoted by 2 and 3. Initially, 𝑟 2 carries two tokens 2 and one token 3. The transition 𝑢 can fire in one of two modes, denoted by 𝑥 and 𝑦. If 𝑢 is executed in mode 𝑥, a token 1 is taken away from 𝑟 1 (since this is what the inscription {(1, 𝑥)} on the arc from 𝑟 1 to 𝑢 demands) and a token 3 is added to place 𝑟 2 (because of the inscription {(𝑥, 3)} on the arc from 𝑢 to 𝑟 2 ). If 𝑢 fires in mode 𝑦, then two tokens 2 are taken away from 𝑟 2 and a token 2 is added to place 𝑟 1 . Observe that when firing in mode 𝑥, transition 𝑢 takes no token away from 𝑟 2 , even though there is an arc from 𝑟 2 to 𝑢 (but in mode 𝑦). Similarly, in mode 𝑦, 𝑢 does not need any token on place 𝑟 1 .


Fig. 21.2: A coloured net N (left-hand side) and its incidence matrix C (right-hand side). See below for the definition of C.

Multirelations can render this idea in precise terms. Multirelations are to relations what multisets are to sets. For multirelational nets, it turns out to be useful to generalise multisets by allowing their elements to be contained zero times, or more than zero times, but, if necessary, also less than zero times.2 For instance, the multiset μ written informally as μ = {1, 1, 1, 2, 4, 4, 7} can be viewed as a function from some ground set, say X = {1, 2, 3, 4, 5, 6, 7, 8, 9}, to N yielding, for every digit in X, the number of times it occurs in μ; thus, μ(1) = 3, μ(2) = 1, μ(3) = 0, μ(4) = 2, and so forth. For the purpose of the present section, the following definition generalises this concept from N to Z, with X being an arbitrary ground set.

2 This generalisation allows linear algebra to be applied smoothly, in particular through the definition of an incidence matrix C as in Figure 21.2.


Definition 21.2 (Generalised) multisets Let 𝑋 be an arbitrary set. A multiset over 𝑋, or an 𝑋-multiset, is a function 𝜇: 𝑋 → Z which associates an integer number 𝜇(𝑥) with every element in 𝑥 ∈ 𝑋. The set 𝑋 is called the basis of 𝜇. An 𝑋-multiset 𝜇 is finite if the set {𝑥 ∈ 𝑋 | 𝜇(𝑥) ≠ 0} is finite. Further, 𝜇 is a set if its codomain is {0, 1}. The last part of this definition clearly makes sense, because there is an immediate unique correspondence between sets (as defined ordinarily) and multisets with codomain {0, 1}. If 𝑋 has some structure (e.g., being a Cartesian product), then we get variants of this definition. In particular, we shall be interested in multirelations. Definition 21.3 (Generalised) multirelations Let 𝑋 and 𝑌 be two arbitrary sets. An (𝑋, 𝑌 )-multirelation is an (𝑋 × 𝑌 )multiset, that is, a function 𝜌 which associates an integer number to every pair (𝑥, 𝑦) ∈ 𝑋 × 𝑌 . Note that the special case 𝑋 = 𝑌 is allowed in Definition 21.3. Another way of viewing an 𝑋-multiset is as an integer vector with index positions 𝑋. (Sometimes they are even called 𝑋-vectors.) Similarly, (𝑋, 𝑌 )-multirelations can be viewed as integer-valued matrices whose rows are indexed by 𝑋 and whose columns are indexed by 𝑌 . (Sometimes they are called (𝑋, 𝑌 )-matrices, for that reason.) Let 1 be any singleton set, for instance 1 = {•}. Any 1-multiset can be identified with an integer number, and conversely, every integer can be identified with a 1-multiset. In a similar way, every 𝑋-multiset 𝜇 can be viewed as an (𝑋, 1)-multirelation 𝜌 𝜇 , and conversely, every (𝑋, 1)-multirelation 𝜌 can be viewed as an 𝑋-multiset 𝜇𝜌 . We may identify 𝜇 and 𝜌 𝜇 . It helps to imagine 𝜇 being represented as a column vector in this way. The corresponding row vector is 𝜇T , which will be identified with the (1, 𝑋)-multirelation 𝜎𝜇 which corresponds in a similar way to 𝜇. Let 𝜌 be an (𝑋, 𝑌 )-multirelation. The transposed of 𝜌 is denoted by 𝜌 T and defined as an (𝑌 , 𝑋)-multirelation satisfying 𝜌 T (𝑦, 𝑥) = 𝜌(𝑥, 𝑦) for 𝑥 ∈ 𝑋 and 𝑦 ∈ 𝑌 . We have (𝜌 T ) T = 𝜌. The following operations and predicates can be defined on multisets. Assume that 𝜇 and 𝜈 are two 𝑌 -multisets. • 𝜇 is a submultiset of 𝜈 (denoted by 𝜇≤𝜈) if ∀𝑦 ∈ 𝑌 : 𝜇(𝑦) ≤ 𝜈(𝑦). • The multiset sum 𝜇+𝜈 and the multiset difference 𝜇−𝜈 are respectively defined as (𝜇+𝜈)(𝑦) = 𝜇(𝑦) + 𝜈(𝑦) and (𝜇−𝜈)(𝑦) = 𝜇(𝑦) − 𝜈(𝑦), for 𝑦 ∈ 𝑌 . • For an integer number 𝑘 ∈ Z, the scalar product 𝑘 · 𝜇 is defined as (𝑘 · 𝜇)(𝑦) = 𝑘 · (𝜇(𝑦)), for 𝑦 ∈ 𝑌 .


• The empty Y-multiset, denoted by ∅_Y (or just by ∅ if the basis is known from the context), is defined by ∅(y) = 0 for y ∈ Y.
• The (additive) inverse of μ is defined as (−μ) = (∅−μ).
• μ is semipositive if ∅ ≤ μ.

All notions are consistent with their counterparts in Z; to see this, it suffices to apply the definitions in the special case |Y| = 1. (This justifies the fact that we use the same symbols.) On the other hand, if μ and ν are sets, then ≤ turns out to be the same as ⊆. The matrix product is defined for finite multirelations, as follows. Let ρ be a finite (X, Y)-multirelation and let σ be a finite (Y, Z)-multirelation. Then ρ⊗σ is a finite (X, Z)-multirelation defined by the formula

    (ρ⊗σ)(x, z) = Σ_{y∈Y} ρ(x, y)·σ(y, z)    for x ∈ X and z ∈ Z          (21.1)

In this formula, · and Σ denote the product and the sum in Z, respectively. For two Y-multisets μ and ν, the products μ^T⊗ν and μ⊗ν^T are also well-defined. The first is a scalar (more formally: a (1, 1)-matrix, which can simply be identified with an integer), while the second is a (Y, Y)-matrix. Specific finite multisets can often be represented, in the same way as sets, by means of curly brackets. For instance, {x, x, y} denotes an X-multiset μ with μ(x) = 2, μ(y) = 1, and μ(z) = 0 for all z ∈ X \ {x, y}. Non-semipositive multisets can conveniently be represented by linear combinations. For instance, −{x} + {y} − {x} denotes the multiset μ with μ(x) = −2, μ(y) = 1, and μ(z) = 0 for all z ∈ X \ {x, y}.
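As a minimal illustration (not part of the formal development), generalised multisets and the product (21.1) can be coded directly with integer-valued dictionaries:

    # Illustrative sketch: Z-valued multisets/multirelations as plain dictionaries.
    from collections import defaultdict

    def add(mu, nu):
        out = defaultdict(int)
        for m in (mu, nu):
            for k, v in m.items():
                out[k] += v
        return dict(out)

    def scalar(k, mu):
        return {key: k * v for key, v in mu.items()}

    def matmul(rho, sigma):
        """(rho ⊗ sigma)(x, z) = sum over y of rho(x, y) * sigma(y, z), as in (21.1)."""
        out = defaultdict(int)
        for (x, y1), v in rho.items():
            for (y2, z), w in sigma.items():
                if y1 == y2:
                    out[(x, z)] += v * w
        return dict(out)

    # {x, x, y} - {x} + 3·{z} as a generalised multiset:
    mu = add(add({"x": 2, "y": 1}, scalar(-1, {"x": 1})), scalar(3, {"z": 1}))
    print(mu)                  # {'x': 1, 'y': 1, 'z': 3}

    # an (X,Y)- and a (Y,Z)-multirelation and their product:
    rho   = {("a", "p"): 2, ("b", "p"): -1}
    sigma = {("p", "u"): 3}
    print(matmul(rho, sigma))  # {('a', 'u'): 6, ('b', 'u'): -3}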

21.2.1 A multirelational calculus for coloured nets We now construct a hierarchy of multirelations on which the definition of coloured Petri nets will be based. The following hierarchy will be used: • Lowest (elementary) level 0: A set VAL whose elements are “coloured tokens” such as 0, 1, 2, 3 etc., or also such as • (the “black token”), and a set MOD whose elements are transition firing modes such as 𝑥, 𝑦 etc.3 • Level 1: Multirelations whose bases are sets (of objects of level 0). • Level 2: Multirelations whose bases are objects of level 1 (i.e., of multirelations whose bases are sets).

3 It helps to compare this with Figure 21.2. That figure uses colours 1, 2, 3 and modes x, y.


For giving names (rather than types) to places and transitions, we need two further disjoint sets, S and T. The set S contains place names (like 𝑟 1 and 𝑟 2 in Figure 21.2), and the set T contains transition names (like 𝑢 in Figure 21.2). To every place name 𝑟 ∈ S, a colour set (or also called a value set) 𝜄(𝑟) – called its type – is associated, and to every transition name 𝑡 ∈ T, a mode set 𝜄(𝑡) – also called its type – is associated. For example, in Figure 21.2, 𝜄(𝑟 1 ) = {1, 2}, 𝜄(𝑟 2 ) = {2, 3}, and 𝜄(𝑢) = {𝑥, 𝑦}. We can easily extend types 𝜄 to objects which are composed of names. In particular, the type of a pair (𝑎, 𝑏) ∈ (S ∪ T) × (S ∪ T) is 𝜄(𝑎, 𝑏) = 𝜄(𝑎) × 𝜄(𝑏); the type of a Ð subset 𝐴 ⊆ (S ∪ T) is {𝜄(𝑎)|𝑎 ∈ 𝐴}; and the type of a Cartesian product 𝐴 × 𝐵 is the set 𝜄( 𝐴) × 𝜄(𝐵). Definition 21.4 𝐴-multisets Let 𝐴 be a set which is composed of names. A function 𝛼 with domain 𝐴 is an 𝐴-multiset if it associates a 𝜄(𝑎)-multiset to every 𝑎 ∈ 𝐴. As above, we can then speak of 𝐴-vectors, ( 𝐴, 𝐵)-matrices, transposed matrices, etc. All operations (viz. addition, subtraction, comparisons, the empty multiset etc.) can be defined not just on 𝑋-multisets, but after a further step involving 𝜄 also on 𝐴-multisets. For instance, suppose that 𝛼1 and 𝛼2 are two 𝐴-multisets. Then 𝛼1 +𝛼2 is another 𝐴-multiset which is defined by (𝛼1 + 𝛼2 )(𝑎) = 𝛼1 (𝑎) + 𝛼2 (𝑎) for all 𝑎 ∈ 𝐴 On the right-hand side of this formula, we find the operation which has been defined previously for 𝑋-multisets. By Definition 21.4, both 𝛼1 (𝑎) and 𝛼2 (𝑎) are 𝜄(𝑎)multisets, and therefore, the sum is well-defined. We show how the matrix product (Formula (21.1)) can be extended to 𝐴-multisets. Assume that 𝐴, 𝐵, 𝐶 are three sets and that 𝛼 is an ( 𝐴, 𝐵)-matrix and 𝛽 is a (𝐵, 𝐶)matrix. Then 𝛼⊗𝛽 is an ( 𝐴, 𝐶)-Matrix which is defined by ∑︁ (𝛼⊗𝛽)(𝑎, 𝑐) = 𝛼(𝑎, 𝑏)⊗𝛽(𝑏, 𝑐) for 𝑎 ∈ 𝐴 and 𝑐 ∈ 𝐶 (21.2) 𝑏∈ 𝐵

The types on the right-hand side of this formula fit together, since Σ is a sum of multisets and ⊗ is a product of multisets. The latter is (well-)defined by Formula (21.1), since α(a, b) is a (ι(a), ι(b))-multirelation and β(b, c) is a (ι(b), ι(c))-multirelation.

Lemma 21.5 Algebraic properties
In the class of multisets, + is associative and commutative; the scalar product satisfies the equation k·(l·α) = (k·l)·α; ⊗ is associative and distributive over + as well as over −; and ∅ is a unit with regard to + as well as a null with respect to ⊗. Moreover, (F⊗G)^T = G^T⊗F^T is valid.


Proof: All claims can be reduced easily to the analogous properties in the ring of integers. For example, the equation (𝐹⊗𝐺) T = 𝐺 T ⊗𝐹 T is, ultimately, a consequence of the commutativity of the product in Z. 21.5 Definition 21.6 Coloured Petri nets A coloured net is a triple (𝑆, 𝑇, Φ) where • 𝑆, the set of places, is a subset of S; • 𝑇, the set of transitions, is a subset of T; • Φ, the flow relation, is a semipositive (𝑆 × 𝑇) ∪ (𝑇 × 𝑆)-multirelation. Note that in order to understand Φ “being a semipositive (𝑆 × 𝑇) ∪ (𝑇 × 𝑆)multirelation”, we need to apply Definition 21.4 to 𝐴 = (𝑆 × 𝑇) ∪ (𝑇 × 𝑆), considering multirelations as multisets. By the fact that every 𝑠 ∈ 𝑆 and every 𝑡 ∈ 𝑇 has an asociated type, the same is true for every element in (𝑆 × 𝑇) ∪ (𝑇 × 𝑆). Thanks to the disjointness of 𝑆 and 𝑇, Φ may be written additively as Φ = (Φ| 𝑆×𝑇 ) + (Φ| 𝑇 ×𝑆 ). The first summand is a (𝑆, 𝑇)-matrix (and has previously – in place/transition nets – been called the “backward matrix”), while the second summand is a (𝑇, 𝑆)-matrix (and has previously been called the “forward matrix”). The incidence matrix is defined as 𝐶 = −Φ| 𝑆×𝑇 + (Φ| 𝑇 ×𝑆 ) T . Formally, 𝐶 is a multirelation-valued multirelation (a “matrix of matrices”). A marking 𝑀 of 𝑁 is a semipositive 𝑆-multiset (as before, in fact). Example 21.7 Example 21.1 continued We explain the formalism by means of the colored net shown previously in Figure 21.2. This net is written formally as 𝑁 = (𝑆, 𝑇, Φ), where 𝑆 = {𝑟 1 , 𝑟 2 } with 𝜄(𝑟 1 ) = {1, 2} and 𝜄(𝑟 2 ) = {2, 3} 𝑇 = {𝑢} with 𝜄(𝑢) = {𝑥, 𝑦}  {𝑟 1 , 𝑟 2 } × {𝑢} ∪ {𝑢} × {𝑟 1 , 𝑟 2 } → { 𝜇 | 𝜇 : 𝑋 → Z     with 𝑋 ⊆ ((𝜄(𝑟 1 ) ∪ 𝜄(𝑟 2 )) × 𝜄(𝑢) ∪ 𝜄(𝑢) × (𝜄(𝑟 1 ) ∪ 𝜄(𝑟 2 ))) }       (𝑟 1 , 𝑢) ↦→ {(1, 𝑥)} Φ :  (𝑢, 𝑟 ) ↦→ {(𝑦, 2)}  1    (𝑟 , 𝑢) ↦→ {(2, 𝑦), (2, 𝑦)}  2    (𝑢, 𝑟 ) ↦→ {(𝑥, 3)} 2  Thus, Φ is a {𝑟 1 , 𝑟 2 } × {𝑢} ∪ {𝑢} × {𝑟 1 , 𝑟 2 }-multirelation. 𝐶 is the {𝑟 1 , 𝑟 2 } × {𝑢}-multirelation


    C : {r1, r2} × {u} → {μ | μ : ((ι(r1) ∪ ι(r2)) × ι(u)) → Z}
        (r1, u) ↦→ the multiset with (1, x) ↦→ −1, (2, y) ↦→ +1, and (w, z) ↦→ 0 for all other (w, z) ∈ ι(r1) × ι(u)
        (r2, u) ↦→ the multiset with (3, x) ↦→ +1, (2, y) ↦→ −2, and (w, z) ↦→ 0 for all other (w, z) ∈ ι(r2) × ι(u)

and the initial marking M0 is the {r1, r2}-multiset

    M0 : r1 ↦→ (1 ↦→ 1, 2 ↦→ 0)
         r2 ↦→ (2 ↦→ 2, 3 ↦→ 1)

In Figure 21.3, C is represented as a matrix of matrices in the following way: C is a 2 × 1-matrix, while every entry of C is a 2 × 2-matrix. The Magic Roundabout in Swindon has some similarity (it is a "circle of circles", instead of a "matrix of matrices"). The picture is taken from https://en.wikipedia.org/wiki/Magic_Roundabout_(Swindon).


Fig. 21.3: Matrix of matrices (l.h.s.), Swindon roundabout (r.h.s.).

The behaviour of an initially marked net is defined as before, but extended to a multirelational approach. The firing rule consists of two parts: first, it defines under which circumstances a transition is enabled, and secondly, it specifies what happens if an enabled transition is executed. Let N = (S, T, Φ) be a coloured net, let M be a marking of N, let t be a transition in T, and let m be a mode in ι(t). We say that M enables (or activates) t in mode m iff

    M ≥ (Φ|S×T) ⊗ (t, m)

where (t, m) is the T-vector defined by

    (t, m) : T → {μ | μ : ι(T) → Z},   t ↦→ {m}   and   u ↦→ ∅_ι(u) for u ≠ t


That is, (t, m) yields m at index t and 0 at all other indices. If M activates t in mode m, then t can be fired in mode m at M, yielding a successor marking M′ defined by the following transition rule:

    M′ = M − (Φ|S×T) ⊗ (t, m) + (Φ|T×S)^T ⊗ (t, m) = M + C_N ⊗ (t, m)

Other concepts such as a sequence of transitions, or a reachability graph, can be derived straightforwardly (and almost exactly as before) from the transition rule.

Example 21.8 Continuing Examples 21.1 and 21.7
In the marking shown in Figure 21.2, transition u is activated both in mode x and in mode y. If u fires in mode x, the token 1 on place r1 is erased and a token 3 is added to place r2. After this occurrence, u is still activated in mode y (even though r1 is now token-empty). If u fires in mode y, two tokens 2 are subtracted from place r2 and a token 2 is added to place r1.
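The enabling condition and the transition rule can be made concrete on the net of Figure 21.2. The following Python sketch is illustrative only; it hard-codes the two backward and two forward arcs of that net (the single transition u is identified by its mode):

    # Illustrative sketch of the coloured firing rule, for the net of Figure 21.2.
    from collections import Counter

    # backward part Phi|SxT: tokens consumed, per (place, mode)
    pre  = {("r1", "x"): Counter({1: 1}),    # arc inscription {(1, x)}
            ("r2", "y"): Counter({2: 2})}    # arc inscription {(2, y), (2, y)}
    # forward part Phi|TxS: tokens produced, per (place, mode)
    post = {("r2", "x"): Counter({3: 1}),    # arc inscription {(x, 3)}
            ("r1", "y"): Counter({2: 1})}    # arc inscription {(y, 2)}

    def enabled(M, mode):
        return all(M[p][v] >= n
                   for (p, m), mu in pre.items() if m == mode
                   for v, n in mu.items())

    def fire(M, mode):
        assert enabled(M, mode)
        M2 = {p: Counter(mu) for p, mu in M.items()}
        for (p, m), mu in pre.items():
            if m == mode:
                M2[p] -= mu
        for (p, m), mu in post.items():
            if m == mode:
                M2[p] += mu
        return M2

    M0 = {"r1": Counter({1: 1}), "r2": Counter({2: 2, 3: 1})}
    print(enabled(M0, "x"), enabled(M0, "y"))   # True True
    M1 = fire(M0, "x")
    print(M1)   # r1 is now empty, r2 carries 2, 2, 3, 3  (cf. Example 21.8)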

Remark 21.9 Significance of arcs in a coloured net Example 21.8 demonstrates that in a coloured net, the presence of an arc between a place and a transition does not necessarily indicate an actual flow of tokens if the transition fires. A transition can be understood as an abstraction comprising many possible firings affecting some, but not necessarily all, of the places it is connected to.

Remark 21.10 Locality of behaviour If the value set of a place is replaced by another one of the same cardinality, and the multirelations on the arcs incident to that place are renamed consistently, then nothing changes with respect to the firings of the net, except that different kinds of token flow through it. For instance, if we renamed the type {1, 2} of place 𝑟 1 shown in Figure 21.2 into {1, 4}, and simultaneously replaced the pair (𝑦, 2) on the arc from 𝑢 to 𝑟 1 by (𝑦, 4), then an equivalent net would be obtained (except that all the tokens on place 𝑟 1 that were previously “coloured” 2 are now coloured 4). Note that all tokens 2 on 𝑟 2 still have the colour 2. In this sense, the behaviour of a net is invariant against consistent renaming in the immediate neighbourhood of a place. Indeed, it is possible to rename all the place types – and similarly, all transition mode sets – in such a way that they are mutually disjoint.a a

In practice, this is a typical requirement for coloured nets.


21.2.2 Coloured philosophers

Figure 21.4 represents a "folding" of the five philosophers shown in Figure 21.1 into a coloured net. We compress the five fork places g0, ..., g4 into a single place g with value set ι(g) = {g0, ..., g4}. Similarly, the "eating" places are replaced by a single place with value set {p0, ..., p4}; the "start eating" transitions are replaced by a single transition with mode set {b0, ..., b4}; and the "end eating" transitions are replaced by a single transition with mode set {e0, ..., e4}. We get two places and two transitions with five values and five modes, respectively. The ρ arcs always move two tokens while the σ arcs always move only one token. Now it is very easy to model 555 instead of 5 philosophers. All that needs to be done is to replace 4 by 554 and 5 by 555 in Figure 21.4, and to indicate the large number of tokens on g by a small comment, such as g0, g1, ..., g554. In fact, there are other possibilities of folding the philosophers net, for instance by allowing only a single place and a single transition, and coding the entire system into the colours, modes, and arcs.

Fig. 21.4: Five coloured (and thus parametrised) philosophers. Place g with ι(g) = {g0, ..., g4} (initially carrying the five tokens g0, ..., g4), place p with ι(p) = {p0, ..., p4}, transition b with ι(b) = {b0, ..., b4}, and transition e with ι(e) = {e0, ..., e4}. Arc weights:
    ρ1 = Σ_{i=0..4} {(g_(i−1) mod 5, b_i), (g_i, b_i)}
    ρ2 = Σ_{i=0..4} {(e_i, g_(i−1) mod 5), (e_i, g_i)}
    σ1 = Σ_{i=0..4} {(b_i, p_i)}
    σ2 = Σ_{i=0..4} {(p_i, e_i)}
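The folding is parametric in the number of philosophers, so the whole description can be generated for any n. A minimal sketch follows (illustrative only; plain sets suffice here because every arc-weight multiplicity is 1):

    # Illustrative sketch: the folded philosophers net of Figure 21.4 for any n.
    def philosophers(n):
        iota = {"g": {f"g{i}" for i in range(n)},    # fork values
                "p": {f"p{i}" for i in range(n)},    # eating values
                "b": {f"b{i}" for i in range(n)},    # "start eating" modes
                "e": {f"e{i}" for i in range(n)}}    # "end eating" modes
        rho1 = {(f"g{(i - 1) % n}", f"b{i}") for i in range(n)} | \
               {(f"g{i}", f"b{i}") for i in range(n)}            # forks taken by b_i
        rho2 = {(f"e{i}", f"g{(i - 1) % n}") for i in range(n)} | \
               {(f"e{i}", f"g{i}") for i in range(n)}            # forks returned by e_i
        sigma1 = {(f"b{i}", f"p{i}") for i in range(n)}          # start eating
        sigma2 = {(f"p{i}", f"e{i}") for i in range(n)}          # stop eating
        m0 = {"g": iota["g"], "p": set()}                        # all forks on the table
        return iota, (rho1, rho2, sigma1, sigma2), m0

    iota, arcs, m0 = philosophers(555)     # 555 philosophers, same description
    print(len(iota["g"]), len(arcs[0]))    # 555 1110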



Fig. 21.5: Elementary net semantics of the net shown in Figure 21.2. The original net: left-hand side. Step (i): middle. Step (ii): right-hand side.

21.2.3 Elementary net semantics

A place/transition Petri net can be associated to a coloured net in two steps: (i) "unfold" the coloured net by splitting a ι(s)-typed place s into |ι(s)| singleton-typed places, and split transitions in the same way; (ii) turn arc weights into cardinalities and paint all coloured tokens black. Figure 21.5 demonstrates this for the coloured net N shown in Figure 21.2. N is reproduced on the left-hand side of Figure 21.5. The middle of the figure shows what N looks like after Step (i). The right-hand side shows what N looks like after Step (ii).
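Steps (i) and (ii) are mechanical. The following sketch (illustrative only) unfolds the coloured net of Figure 21.2 in exactly this way, producing the place/transition net shown on the right of Figure 21.5:

    # Illustrative sketch: unfold a coloured net into a place/transition net.
    # Places become (place, value) pairs, transitions become (transition, mode)
    # pairs, and arc inscriptions become plain multiplicities.
    from collections import Counter

    iota_S = {"r1": {1, 2}, "r2": {2, 3}}
    iota_T = {"u": {"x", "y"}}
    # flow relation Phi: multiset of (value, mode) / (mode, value) pairs per arc
    phi = {("r1", "u"): Counter({(1, "x"): 1}),
           ("r2", "u"): Counter({(2, "y"): 2}),
           ("u", "r2"): Counter({("x", 3): 1}),
           ("u", "r1"): Counter({("y", 2): 1})}

    places      = {(s, v) for s, vals in iota_S.items() for v in vals}
    transitions = {(t, m) for t, modes in iota_T.items() for m in modes}

    weight = Counter()
    for (a, b), mu in phi.items():
        for pair, n in mu.items():
            if a in iota_S:                       # arc from a place to a transition
                v, m = pair
                weight[((a, v), (b, m))] += n
            else:                                 # arc from a transition to a place
                m, v = pair
                weight[((a, m), (b, v))] += n

    print(len(places), len(transitions))          # 4 2
    print(weight[(("r2", 2), ("u", "y"))])        # 2  (the double arc of Figure 21.5)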

21.3 Algebraic Petri nets Algebraic specifications can be used in order to create a specific type of highlevel Petri nets in which the types of places and transitions are no longer sets, but correspond to the constants and the variables of an underlying algebra. We present the definition in two steps. First, we explain algebraic specifications and their models. Secondly, we define algebraic nets based on an algebraic specification, and we explain how they can be translated into coloured nets using a model of such a specification.

21.3.1 Algebraic specifications The idea behind algebraic specifications is much related to the idea behind abstract data types. For example, for a stack data type, one would like to allow specific operations such as push and pop, as well as specific tests such as an emptiness test and, perhaps, an overflow test. No access to the “belly” of a stack should be allowed. But it may be reasonable to postulate algebraic laws for such operations, for instance equations such as pop(push(𝑎)) = 𝑎.


For an ordinary array, there is a similar set of operations one would like to allow. In fact, array data types will be our first running example. Definition 21.11 Signature A signature is a pair Σ = (S, Ω) where S is a set of sorts and Ω is a family of sets of operation symbols: Ω = {Ω𝑣, 𝜎 | 𝑣 ∈ S ∗ , 𝜎 ∈ S} Every symbol in Ω 𝜎1 ... 𝜎𝑛 , 𝜎 stands for an operation having exactly 𝑛 arguments, so that the 𝑖 th argument has sort 𝜎𝑖 (1 ≤ 𝑖 ≤ 𝑛) and the application of the operation produces exactly one value of the sort 𝜎. The number of arguments, i.e., the number 𝑛, is called the arity of the operation. A special case are nullary operation symbols: if 𝑣 = 𝜀, then 𝑛 = 0, and every symbol in Ω 𝜀, 𝜎 stands for a constant of the sort 𝜎.

Example 21.12 A sample signature
Let a signature Σ be defined by the sorts S = {array, index, entry} and the operation symbols Ω = {new, set, get}:

    Sorts array, index, entry;
    Ops   new : → array;                       % arity 0
          set : array, index, entry → array;   % arity 3
          get : array, index → entry;          % arity 2

A signature Σ can have plenty of models. A model is some concrete mathematical structure conforming to Σ. Such structures are called Σ-algebras. Definition 21.13 Σ-algebra Given a signature Σ, a Σ-algebra A = (SA , ΩA ) consists of • a family SA = {𝜎A | 𝜎 ∈ S}, where 𝜎A is a set for every sort 𝜎 ∈ S; • and a family ΩA = {𝜔A | 𝑣 ∈ S ∗ , 𝜎 ∈ S, 𝜔 ∈ Ω𝑣, 𝜎 }, where 𝜔A : 𝜎1,A × . . .×𝜎𝑛,A → 𝜎A is a function for every operation symbol 𝜔 ∈ Ω 𝜎1 ... 𝜎𝑛 , 𝜎 . In the special case 𝑛 = 0, 𝜔A is an element of 𝜎A .

Example 21.14 Example 21.12 continued Let A be an array with two index positions 0, 1 and four entries (i.e., possible array values) ♣, ♠, ♥ and ♦, by specifying the following sets and operations on them:

    indexA = {0, 1}
    entryA = {♣, ♠, ♥, ♦}
    arrayA = {♣, ♠, ♥, ♦} × {♣, ♠, ♥, ♦}
    newA   = (♣, ♣)
    setA   : ((x1, x2), y, z) ↦→ (z, x2) if y = 0,  (x1, z) if y = 1
    getA   : ((x1, x2), y)    ↦→ x1 if y = 0,  x2 if y = 1

To see that A is indeed a (S, Ω)-algebra for the signature (S, Ω) defined in Example 21.12, we need to check whether the ground sets conform with the sorts, and whether the concrete operations conform with the abstract ones given in the signature. In particular, the respective arities must fit. It is easy to check that this is indeed the case in our example. Let us introduce Σ-variables X as consisting of a family of variable sets X = {X 𝜎 | 𝜎 ∈ S} where every X 𝜎 is a set of variable names of the sort 𝜎. Example 21.15 A variable declaration For instance, Vars

i,j:index; x,y:entry; a:array;

declares two variables of sort index, two variables of sort entry, and one variable of sort array. In this case, X = {Xindex , Xentry , Xarray } where Xindex = {𝑖, 𝑗 }, Xentry = {𝑥, 𝑦}, and Xarray = {𝑎}.

Definition 21.16 Terms, assignments, and evaluations Let TΩ, 𝜎 (X), the set of terms of sort 𝜎 over X , be defined as the smallest set having the following properties: • X 𝜎 and Ω 𝜀, 𝜎 are subsets of TΩ, 𝜎 (X); • if 𝜔 ∈ Ω 𝜎1 ... 𝜎𝑛 , 𝜎 and 𝑇𝑖 ∈ TΩ, 𝜎𝑖 (X) (for 𝑖 = 1, . . . , 𝑛) are in TΩ, 𝜎 (X), then so is 𝜔(𝑇1 , . . . , 𝑇𝑛 ). Let X be a set of Σ-variables. An assignment of X is a family 𝛼 = {𝛼 𝜎 | 𝜎 ∈ S} of functions 𝛼 𝜎 : X 𝜎 → 𝜎A . The evaluation of the terms in TΩ, 𝜎 (X) under an assignment 𝛼 is a family 𝛼♯ = {𝛼♯𝜎 | 𝜎 ∈ S}

of functions

𝛼♯𝜎 : TΩ, 𝜎 (X) → 𝜎A

which is inductively defined as follows: • 𝛼♯𝜎 (𝑥) = 𝛼 𝜎 (𝑥) for 𝑥 ∈ X; • 𝛼♯𝜎 (𝜔) = 𝜔A for 𝜔 ∈ Ω 𝜀, 𝜎 ; • 𝛼♯𝜎 (𝜔(𝑇1 , . . . , 𝑇𝑛 )) = 𝜔A (𝛼♯𝜎1 (𝑇1 ), . . . , 𝛼♯𝜎𝑛 (𝑇𝑛 )) for 𝜔 ∈ Ω 𝜎1 ... 𝜎𝑛 , 𝜎 .


In general, a signature Σ has many Σ-algebras. Often it is desirable to restrict the set of candidates. One way of achieving such a restriction is by means of an equational theory, by which a set of equations is postulated that must be satisfied for an algebra to be an interesting model in the given context. Let X be a family of Σ-variables and let 𝜎 ∈ S be a sort. A Σ-equation of sort 𝜎 is an ordered pair [𝐿, 𝑅] of terms 𝐿, 𝑅 ∈ TΩ, 𝜎 (X). Example 21.17 Sample equations In our running example, we may consider the following equations: Eqs get(set(a,i,x),i) = x; % L = get(set(a,i,x),i), R = x set(set(a,i,x),i,y) = set(a,i,y); The first equation indicates that an operation get reads an entry that has previously been set at the same index. The second equations indicates that after setting the entry at any given index twice, the second (rather than the first) value counts. An equation [𝐿, 𝑅] is valid in a Σ-algebra 𝐴, if in 𝐴, 𝛼♯𝜎 (𝐿) = 𝛼♯𝜎 (𝑅) for all assignments 𝛼 of variables in X. That is, we have an implicit universal quantification over the set of variables occurring (freely) in 𝐿 and/or 𝑅. Summarising, Definition 21.18 Algebraic specification; model An algebraic specification is a triple 𝐷 = (S, Ω, E), consisting of a signature Σ = (S, Ω) and a set of Σ-equations E. A model of 𝐷 is a Σ-algebra 𝐴 in which all equations of E are valid. The set of models of 𝐷 is also called the abstract data type defined by 𝐷.

Example 21.19 An algebraic specification and one of its models The Σ-algebra A defined in Example 21.14 is a model of the equational algebraic specification defined in Examples 21.12, 21.15 and 21.17. In order to show this, we still need to show that all equations are valid in A. It is not hard to see that this is indeed the case. The choice of equations is, in some sense, arbitrary. In general, it is inspired by the widespread use of equational axioms in the specifications of data types such as semi-groups, groups and rings.
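For a specification as small as this one, being a model can even be checked by brute force. The sketch below is illustrative only (the card suits are written as strings); it implements the algebra A of Example 21.14 and verifies the two equations of Example 21.17 over all assignments:

    # Illustrative sketch: the Sigma-algebra A of Example 21.14, plus a
    # brute-force check of the equations of Example 21.17.
    from itertools import product

    INDEX = [0, 1]
    ENTRY = ["club", "spade", "heart", "diamond"]   # stand-ins for the four suits
    ARRAY = list(product(ENTRY, repeat=2))

    def new():
        return ("club", "club")

    def set_(a, i, x):
        return (x, a[1]) if i == 0 else (a[0], x)

    def get(a, i):
        return a[0] if i == 0 else a[1]

    # Eq 1: get(set(a,i,x), i) = x      Eq 2: set(set(a,i,x), i, y) = set(a,i,y)
    ok = all(get(set_(a, i, x), i) == x and
             set_(set_(a, i, x), i, y) == set_(a, i, y)
             for a in ARRAY for i in INDEX for x in ENTRY for y in ENTRY)
    print(ok)   # True: A is a model of the specification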


21.3.2 Algebraic nets and their place/transition net semantics Definition 21.20 Algebraic Petri nets Let a multiterm be defined as a multiset of terms. A tuple 𝑁 = (S, Ω, E, 𝑆, 𝑇, 𝐹, 𝜓, 𝜉, 𝜆, 𝑀0 ) is an algebraic Petri net if • (S, Ω, E) is an algebraic specification. • (𝑆, 𝑇, 𝐹) is a plain place/transition net. • 𝜓 is a sort assignment function 𝜓 : 𝑆 → S. • 𝜉 is a variable declaration, that is, a function assigning a family 𝜉 (𝑡) of finitely many (S, Ω)-variables to every transition 𝑡 ∈ 𝑇. • 𝜆 is an arc inscription, that is, a function assigning a finite multiterm 𝜆(𝑠, 𝑡) (respectively, 𝜆(𝑡, 𝑠)) over TΩ, 𝜎 (𝑠) (𝜉 (𝑡)) to every arc (𝑠, 𝑡) (respectively, (𝑡, 𝑠)) in 𝐹. • 𝑀0 is an initial marking assigning a finite multiterm over TΩ, 𝜎 (𝑠) (∅) to every 𝑠 ∈ 𝑆.a The terms in TΩ, 𝜎 (𝑠) ( ∅ ) have no free variables and can therefore be evaluated directly, without referring to any variable assignment. a

Example 21.21 An algebraic net
We consider a playing-card oriented specification and a Petri net which is based on it. See Figure 21.6. This net does not yet have an initial marking.

    Sorts suit;
    Ops   club, spade, heart, diamond : → suit;
          nxt : suit → suit;
    Eqs   nxt(club) = spade;  nxt(spade) = heart;  nxt(heart) = diamond;  nxt(diamond) = club;

For instance, the term 1·nxt(x)+2·nxt(nxt(x)) written next to the arc from 𝑡 1 to 𝑠2 is a shorthand notation for the multiset { nxt(x) , nxt(nxt(x)) , nxt(nxt(x)) } of terms. We would like to define the semantics of an algebraic net in terms of a coloured net, because these kinds of net are already well-known from the previous section (and are a bit closer to elementary nets than algebraic ones). However, we will not be able to give a straight translation from an algebraic net into a coloured net, since we do not know the “extent” of the sets and operations involved in the algebraic specification. In order to know this extent, we need to fix a model.

Fig. 21.6: An algebraic net N. Both places s1 and s2 have sort suit; transition t1 carries the declaration var x: suit, and transition t2 the declaration var y: suit. The arc inscriptions are: s1 → t1: 1·nxt(x);  t1 → s2: 1·nxt(x) + 2·nxt(nxt(x));  s2 → t2: 1·y + 2·nxt(y);  t2 → s1: 1·y.

Thus, let A be any model of the specification underlying a given algebraic net 𝑁 with place set 𝑆 and transition set 𝑇. We shall associate the following coloured net 𝑐𝑜𝑙 (𝑁, A) to 𝑁 and A: 𝑐𝑜𝑙 (𝑁, A) has the same places 𝑆 and transitions 𝑇 as 𝑁. 𝑐𝑜𝑙 (𝑁, A) has new arcs 𝜌 and a new initial marking 𝑀0′ , as follows: • For 𝑠 ∈ 𝑆: 𝜄(𝑠) = 𝜓(𝑠)A . • For 𝑡 ∈ 𝑇: 𝜄(𝑡) = set of assignments of 𝜉 (𝑡) in A. • For (𝑠, 𝑡) ∈ 𝐹: 𝜌(𝑠, 𝑡) : (𝑖1 , 𝑖2 ) ↦→ number of occurrences of 𝑖 1 , provided the multiterm 𝜆(𝑠, 𝑡) is evaluated under the assignment 𝑖2 . • For (𝑡, 𝑠) ∈ 𝐹: 𝜌(𝑡, 𝑠) : (𝑖1 , 𝑖2 ) ↦→ number of occurrences of 𝑖 2 , provided the multiterm 𝜆(𝑡, 𝑠) is evaluated under the assignment 𝑖1 . • For all 𝑠 ∈ 𝑆: 𝑀0′ (𝑠) : 𝑖 ↦→ number of occurrences of 𝑖.4 Thus we now have a coloured Petri net semantics – and hence, indirectly, also an elementary net semantics – for any algebraic Petri net, provided we also have a model of it. Figure 21.7 shows an example, but before describing it in more detail, we specify the model that was used in getting it. In the theory of algebraic specification, it can often be proved that amongst all the models of a specification, there is a distinguished one called initial algebra. If it exists, then it can be proved to be unique (up to isomorphism). Moreover, it also enjoys some category-theoretical properties which are roughly stating that it is the “most detailed” amongst all models. Informally, the initial algebra can be constructed (if possible) by the following method. • “No junk”: We collect in our ground set all ground terms, that is, all terms that can be evaluated directly (independently of any assignments). • “No confusion”: We consider only the equations from E and their consequences, but no further constraints or equations.

4 Here, we need no assignment, since by definition, there are no free variables.
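The arc multirelations of col(N, A) are thus obtained by evaluating the arc multiterms under every mode. As an illustration only, the following sketch computes ρ(t1, s2) for the net of Figure 21.6 under the four-element suit algebra used below (Example 21.22); nxt is a hypothetical helper standing in for nxtA:

    # Illustrative sketch: one arc multirelation of col(N, A) for Figure 21.6.
    from collections import Counter

    SUIT = ["club", "spade", "heart", "diamond"]
    def nxt(s):
        return SUIT[(SUIT.index(s) + 1) % 4]

    # multiterm on the arc from t1 to s2:  1·nxt(x) + 2·nxt(nxt(x))
    def lam_t1_s2(x):
        return Counter({nxt(x): 1}) + Counter({nxt(nxt(x)): 2})

    rho_t1_s2 = {(mode, value): n
                 for mode in SUIT                  # iota(t1) = assignments of x
                 for value, n in lam_t1_s2(mode).items()}
    print(rho_t1_s2[("club", "spade")], rho_t1_s2[("club", "heart")])   # 1 2

The two printed multiplicities correspond to the entries (club, spade), (club, heart), (club, heart) mentioned in Example 21.22.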


Example 21.22 An initial algebra
The algebraic net N shown in Figure 21.6 (more precisely, its underlying algebraic specification) has an initial algebra A with a four-element so-called universe suitA = {club, spade, heart, diamond} and the induced (by nxt) operation nxtA. We can now use the algorithm given above in order to assign a coloured net N′ to N, together with its initial algebra A. The result is shown in Figure 21.7. For instance, we may initially put a token spade (being an element of the universe suitA) onto the place s1. Then transition t1 can fire in mode club.a After firing t1 in this mode, s2 contains three tokens, one of colour spade and two of colour heart.b Subsequently, t2 is activated in mode spade.c After firing t2, we again have a token spade on place s1,d and it is then the only token left in the net. In the algebraic net N (cf. Figure 21.6), we may also play the token game. We can put a token spade onto place s1, but this is now a directly evaluable term, rather than a colour. After that, the firing sequence t1 t2 is enabled in N (but we need to interpret it without resorting to colours).

a Because of the entry (spade, club) in the arc annotation from s1 to t1.
b Because of the entries (club, spade), (club, heart), (club, heart) in the arc annotation from t1 to s2.
c Because of the entries (spade, spade), (heart, spade), (heart, spade) in the arc annotation from s2 to t2.
d Because of the entry (spade, spade) in the arc annotation from t2 to s1.

21.4 M-nets In the previous section, transition inscriptions were variables while arc inscriptions consisted of terms. In the M-net calculus, the places still carry values sets, i.e., have types as before. However, the roles of arc and transition inscriptions will be interchanged. Now transitions will carry (multisets of) terms which can be evaluated to true or to false, while arcs will carry (multisets of) variables. The idea is that a transition can fire if there is an assignment of the variables on incoming arcs (using tokens from input places) such that the transition’s term evaluates to true. Example 21.23 Figure 21.8 For instance, consider the M-net in Figure 21.8. The arcs on the left-hand side are inscribed by multisets of variables, namely {𝑥, 𝑦} (top) and {𝑦, 𝑦} (bottom). The transition 𝑡 is inscribed by a multiset of terms, which in this case consists only of the single term 𝑥 > 1 ∧ 𝑦 ≠ 2. In order to fire 𝑡, we



Fig. 21.7: A coloured net 𝑁 ′ associated with the net 𝑁 in Figure 21.6, using the initial algebra. Abbreviations: c=club, s=spade, h=heart, d=diamond.

seek assignments of x and y (using tokens present on the incoming places) by which this term is evaluated to true. One possibility is [x ↦→ 2, y ↦→ 1], for which there are sufficiently many tokens, namely 1 and 2, on s1. This assignment yields true for the term x > 1 ∧ y ≠ 2, and hence allows t to fire. The follower marking is shown on the right-hand side of the figure. Place s2 now carries two 1 tokens created by the multiset {y, y} on the arc from t to s2, in addition to the token • already there. Note that the assignment [x ↦→ 3, y ↦→ 1] would also turn the term to true, but there is no token 3 on s1, hence t is presently not firable in this way. Also, consider the assignment [x ↦→ 1, y ↦→ 1]; place s1 carries suitable tokens for this to be realised (two instances of 1 are necessary), but the term evaluates to false, and hence t is not firable. On the other hand, the assignment [x ↦→ 2, y ↦→ •] would make t firable, leading to a successor marking which is different from the one shown in the figure (namely: three tokens 1 on s1 and three black tokens on s2).

We can transform an M-net into an "equivalent" coloured net, and hence also into an "equivalent" elementary net, as soon as all types of all variables occurring on the arcs are known. The place value sets can be taken over directly. The mode set of a transition consists of variable assignments (within their types) which cause all the transition's terms to be evaluated to true. The M-net calculus consists of M-nets as just described, together with a mechanism for composing such nets with each other. The main idea for such a mechanism
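The enabling check just described amounts to a search for a suitable binding of the arc variables. The following small Python sketch (not the book's algorithm, purely illustrative) does this for the situation of Figure 21.8:

    # Illustrative sketch of M-net enabling: find bindings of the arc variables
    # to tokens on the input place such that the guard evaluates to true.
    from collections import Counter
    from itertools import permutations

    s1 = Counter({"•": 1, 2: 1, 1: 3})        # tokens on s1 ("•" = undefined value)
    arc_vars = ["x", "y"]                     # inscription {x, y} on the arc s1 -> t

    def guard(b):                             # the term  x > 1  and  y != 2
        return isinstance(b["x"], int) and b["x"] > 1 and b["y"] != 2

    def enabling_bindings():
        tokens = list(s1.elements())
        seen = set()
        for combo in permutations(tokens, len(arc_vars)):   # pick distinct token copies
            binding = dict(zip(arc_vars, combo))
            key = tuple(sorted(binding.items(), key=str))
            if key not in seen and guard(binding):
                seen.add(key)
                yield binding

    for b in enabling_bindings():
        print(b)      # e.g. {'x': 2, 'y': '•'} and {'x': 2, 'y': 1}

Note that [x ↦→ 3, y ↦→ 1] is never produced, simply because no token 3 is available on s1, exactly as observed above.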

[Figure 21.8 (contents): on both sides, 𝜄(𝑠1) = 𝜄(𝑠2) = {•, 1, 2, 3}; the arc from 𝑠1 to 𝑡 is inscribed {𝑥, 𝑦}, the arc from 𝑡 to 𝑠2 is inscribed {𝑦, 𝑦}, and 𝑡 is inscribed {𝑥 > 1 ∧ 𝑦 ≠ 2}. On the left, 𝑠1 carries the tokens •, 2, 1, 1, 1 and 𝑠2 carries •; on the right, after firing 𝑡 with [𝑥 ↦ 2, 𝑦 ↦ 1], 𝑠1 carries •, 1, 1 and 𝑠2 carries •, 1, 1.]

Fig. 21.8: Transition firing in an M-net: 𝑡 can fire with 𝑥 bound to 2 and 𝑦 bound to 1. Tokens {1, 2} are removed from 𝑠1 and two tokens 1 are added to 𝑠2 (due to 𝑦 being bound to 1).
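As announced above, here is a minimal Python sketch of this firing rule. It is an illustrative encoding only (the token • is written as "dot", and the names enabled and fire are ad hoc); it enumerates the enabling assignments at the left-hand marking of Figure 21.8 and computes the follower marking for [𝑥 ↦ 2, 𝑦 ↦ 1].

from collections import Counter
from itertools import product

TYPE = {"dot", 1, 2, 3}                       # place type {•, 1, 2, 3}
s1 = Counter({"dot": 1, 2: 1, 1: 3})          # left-hand marking of s1: • 2 1 1 1
s2 = Counter({"dot": 1})                      # left-hand marking of s2: •

in_vars  = ["x", "y"]                         # arc s1 -> t
out_vars = ["y", "y"]                         # arc t  -> s2

def guard(a):                                 # transition inscription {x > 1 and y != 2}
    return a["x"] != "dot" and a["x"] > 1 and a["y"] != 2

def enabled(a):
    need = Counter(a[v] for v in in_vars)
    return guard(a) and all(s1[c] >= n for c, n in need.items())

def fire(a):
    need = Counter(a[v] for v in in_vars)
    add  = Counter(a[v] for v in out_vars)
    return s1 - need, s2 + add

for x, y in product(TYPE, repeat=2):          # all candidate assignments
    if enabled({"x": x, "y": y}):
        print("enabled:", {"x": x, "y": y})   # e.g. x=2, y=1 and x=2, y=dot
print(fire({"x": 2, "y": 1}))                 # s1: • 1 1 ;  s2: • 1 1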

The main idea for such a mechanism conforms with the theory developed in Chapters 19 and 20. In this theory, places are distinguished by being initial ones (indicated by an additional inscription e), internal ones (indicated by i), or final ones (indicated by x). These additional inscriptions serve as interfaces for the desired place-based composition operations. Also, transitions are distinguished by additional – slightly more complex – inscriptions, which serve for composing nets at their transitions. By these additional inscriptions, an M-net is turned into a module which can be used as a building block inside a larger (M-)net. Thus, a place and a transition of an M-net conforming to the M-net calculus carry inscriptions of the form (𝛼 | 𝛽), where 𝛼 is the interface information and 𝛽 is the type (or mode) information.

Example 21.24 Figure 21.9
Figure 21.9 shows the idea behind the translation of a program block

begin var 𝑥 : {0, 1} init 0; ⟨(𝑥, 𝑦) := (𝑦, 𝑥)⟩ end

(21.3)

into an M-net (as shown on the right-hand side of the figure). The M-net describing the program variable 𝑥 (top part of the figure) allows the value of 𝑥 to be changed arbitrarily within its type (which is {0, 1}), where the original value is 𝑥𝑖, while the new value is 𝑥𝑜 (variable initialisations and terminations have been omitted for the sake of clarity). The transition 𝑡2 in the M-net shown to the left of the figure describes the program command ⟨(𝑥, 𝑦) := (𝑦, 𝑥)⟩. It specifies that the new values of 𝑥 and 𝑦 are the old values of 𝑦 and 𝑥, respectively, that is, it describes the intended semantics of the program assignment (𝑥, 𝑦) := (𝑦, 𝑥). By definition of the M-net calculus, transitions can be synchronised via conjugated inscriptions, where conjugation is indicated by a “hat” symbol, as in 𝑋̂(. . .).


For example, 𝑡1 and 𝑡2 can be synchronised via 𝑋(. . .) and 𝑋̂(. . .). This also, by definition, entails the unification of the variables inside the brackets (. . .), as well as the disappearance of 𝑋(. . .) and 𝑋̂(. . .) from the new transition’s (i.e. 𝑡3’s) inscription. The net on the right-hand side has a synchronisable symbol 𝑌 which could, for instance, be used for a declaration of the program variable 𝑦 inside a block enclosing the program fragment (21.3). In this example, the place inscriptions e, i and x were not relevant, and they were omitted altogether for the net describing the variable 𝑥. These inscriptions are only used for non-initialised variables, or if the block (21.3) is found inside a sequence of commands, or inside an alternative command or a loop. Chapter 19 discusses various examples on an elementary level.
The M-net on the top of Figure 21.9 can be transformed into an equivalent coloured net, because the types of the (M-net) variables 𝑥𝑖 and 𝑥𝑜 are known, namely {0, 1}. By allowing 𝑥𝑖 and 𝑥𝑜 to range over {0, 1}, we get four combinations. Hence the associated coloured net (not shown here) has four transitions, which are inscribed by 𝑋(0, 0), 𝑋(0, 1), 𝑋(1, 0), and 𝑋(1, 1). By contrast, in the M-net shown on the left-hand side of the figure, we do not know any of the exact types of the variables. Hence we cannot transform this net into a coloured net. On the right-hand side, there are two variables (namely 𝑥𝑖 and 𝑥𝑜) whose types (i.e., {0, 1} for both) are known, and two others (namely 𝑦𝑖 and 𝑦𝑜) whose types are unknown. We can transform it partially into a coloured net if we want to, again getting four transitions, which are inscribed as follows:

𝑌(0, 0) if 𝑥𝑖 = 0, 𝑥𝑜 = 0
𝑌(1, 0) if 𝑥𝑖 = 0, 𝑥𝑜 = 1
𝑌(0, 1) if 𝑥𝑖 = 1, 𝑥𝑜 = 0
𝑌(1, 1) if 𝑥𝑖 = 1, 𝑥𝑜 = 1

It can be seen that the semantics of the program fragment (21.3) is consistently preserved by these (partial) transformations of M-nets into coloured nets.
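As a quick cross-check of this enumeration, the following tiny Python sketch (purely illustrative) derives the four residual inscriptions from the guard 𝑥𝑜 = 𝑦𝑖 ∧ 𝑦𝑜 = 𝑥𝑖 of 𝑡3 by letting the known-type variables range over {0, 1}.

from itertools import product

# The guard x_o = y_i and y_o = x_i fixes the arguments of the
# remaining synchronisation symbol Y once x_i and x_o are chosen.
for xi, xo in product([0, 1], repeat=2):
    yi, yo = xo, xi
    print(f"x_i={xi}, x_o={xo}  ->  transition inscribed Y({yi}, {yo})")
# prints Y(0,0), Y(1,0), Y(0,1), Y(1,1), matching the table above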

21.5 Bibliographical remarks and further reading

Coloured Petri nets are due to Kurt Jensen [Jen81; Jen92]. The recent versions of coloured nets allow great freedom in annotating the elements of a net, and there is a tool for dealing with such nets [JKW07] (Kurt Jensen, Lars M. Kristensen, Lisa Wells); see also http://cpntools.org (last checked October 2022).

[Figure 21.9 (contents): transition inscriptions (𝑋(𝑥𝑖, 𝑥𝑜) | ∅) at 𝑡1, ({𝑋̂(𝑥𝑖, 𝑥𝑜), 𝑌(𝑦𝑖, 𝑦𝑜)} | {𝑥𝑜 = 𝑦𝑖 ∧ 𝑦𝑜 = 𝑥𝑖}) at 𝑡2, and ({𝑌(𝑦𝑖, 𝑦𝑜)} | {𝑥𝑜 = 𝑦𝑖 ∧ 𝑦𝑜 = 𝑥𝑖}) at 𝑡3; arc inscriptions {𝑥𝑖} and {𝑥𝑜}; place inscriptions (i | {0, 1}) (the data place, initially holding 0), (e | {•}), and (x | {•}).]

Fig. 21.9: M-nets describing var 𝑥 : {0, 1} init 0 (top), ⟨(𝑥, 𝑦) := (𝑦, 𝑥)⟩ (l.h.s.), and their combination in the block (21.3) (r.h.s.). Transition 𝑡3 is the synchronisation of 𝑡1 and 𝑡2 over the conjugated symbols 𝑋(. . .) and 𝑋̂(. . .).

Present-day coloured nets go way beyond the coloured nets of Section 21.2, where just a bare-bones approach based on [BT97] (Eike Best, Thomas Thielke) was described, since – in keeping with the overall theme of this text – we are interested in a place/transition net semantics. This bare-bones approach is very much akin to Wolfgang Reisig’s earlier multirelation nets [Rei85]. The idea of describing the incidence matrix as a “matrix of matrices” also occurs in [Jen81] (Kurt Jensen) and in [SR87] by Einar Smith and Wolfgang Reisig. The five philosophers problem is from [Dij68b] (Edsger W. Dijkstra).
Predicate/transition nets (not treated in this chapter) have been introduced by Hartmann J. Genrich and Kurt Lautenbach in [GL79]. In the early Seventies, coloured nets and predicate/transition nets were scientific “rivals”. Predicate/transition nets are closely related both to coloured nets and to M-nets.
Algebraic nets are due to Jacques Vautherin [Vau86]. Our presentation follows Karsten Wolf née Schmidt [Sch95; Sch96]. For algebraic specifications as such, see, for instance, the books by Donald Sannella and Andrzej Tarlecki [ST12] and by Hartmut Ehrig and Bernd Mahr [EM85]. The algebraic card game net of Section 21.3 is from [Sch96].
M-nets and the idea to use unification for synchronisation and communication originate from [KP95] (Hanna Klaudel and Elisabeth Pelz) and [Fra95] (Wojciech Fraczak). M-nets are tailored to a compositional Petri net semantics of programming languages [Bes+95] (by the same authors, with Eike Best, Hans Fleischhack, and Richard P. Hopkins). This semantics is similar to the semantics described in Chapters 19 and 20. Unification is a well-known concept used in Logics and in Automated Reasoning (see https://en.wikipedia.org/wiki/Unification_(computer_science)).


Apart from the three high-level Petri net models described in the present chapter, a plethora of other models and extensions can be found in the literature. Typically, they are tailored to fit specific application areas. To mention but one example, GSPN (Generalized Stochastic Petri Nets) and their tool have had a very successful career. This model is based on the paper [MCB84] (by Marco Ajmone Marsan, Gianni Conte and Gianfranco Balbo) and is being maintained and promoted, amongst others, by Susanna Donatelli and Giuliana Franceschinis. See [Baa+09] (by the two aforementioned ladies, together with Souheib Baarir, Marco Beccuti, Davide Cerotti, and Massimiliano De Pierro).

21.6 Exercises

Exercise 21.1 Top-level coloured net
Define a coloured Petri net with exactly one place and one transition which describes the five philosophers. Specify the place inscription, the transition inscription, all arc inscriptions, and the incidence matrix.

Exercise 21.2 Matrix of matrices, and relational calculus
This exercise refers back to Section 6.2 (especially to Definition 6.3). Every strongly connected free-choice Petri net can be viewed as the unfolded version of a coloured net (in the sense of Section 21.2.3) by considering all presets as coloured places and all conflict clusters as coloured transitions, and the original places and transitions as their colours.
a) Consider the free-choice net of Figure 21.10 and sketch the incidence matrix of the coloured net for which 𝑁 is elementary in the sense of Section 21.2.3. Is it a coincidence that this is a square matrix?
b) A coloured T-vector can be defined, analogously to a (coloured) marking (see Section 21.2.1), as a 𝑇-multiset. The vector (1 0 1 0 0)ᵀ is a T-invariant of the net 𝑁 displayed in Figure 21.10. Give the corresponding T-vector in the coloured analogue of 𝑁 whose incidence matrix has been calculated in Part a) of the exercise. Also explain how the product of the incidence matrix and this T-vector yields zero.

Exercise 21.3 Constructing high-level nets
Devise a meaningful
a) coloured Petri net,
b) algebraic Petri net,
having the Petri net shown in Figure 21.11 as elementary semantics. Try to minimise the graphs underlying your nets, while still reflecting the idea that they describe processes sharing resources.

[Figure 21.10 (contents): a free-choice net with places 𝑠1, 𝑠2, 𝑠3 and transitions 𝑎, 𝑏, 𝑐, 𝑑, 𝑒.]

Fig. 21.10: A free-choice Petri net 𝑁 (cf. Figure 21.3).

[Figure 21.11 (contents): a place/transition net with places 𝑠1, . . . , 𝑠8 and 𝑟1, 𝑟2, and transitions 𝑡1, . . . , 𝑡8.]

Fig. 21.11: A place/transition Petri net (cf. Figure 21.3).

Solution 21.1 (Exercise 21.1)
See Figure 21.12.

Solution 21.2 (Exercise 21.2)
For a), the incidence matrix of the coloured net is the “matrix of matrices” shown below; for b), see the coloured T-vector discussed next to it. The matrix multiplication follows the usual rule; the inner matrices are multiplied with each other, also according to matrix multiplication.

             {𝑎, 𝑒}          {𝑏, 𝑐, 𝑑}
{𝑠1, 𝑠2}   ( 1  1 )        ( −1 −1 −1 )
           ( 1  1 )        ( −1 −1 −1 )
{𝑠3}       ( −1 −1 )       (  1  1  1 )

The coloured T-vector corresponding to (1 0 1 0 0)ᵀ assigns (1 0)ᵀ to the coloured transition {𝑎, 𝑒} (i.e., 𝑎 once, 𝑒 not at all) and (0 1 0)ᵀ to {𝑏, 𝑐, 𝑑} (i.e., 𝑐 once). Multiplying the incidence matrix blockwise with this vector gives, for the row {𝑠1, 𝑠2}, (1 1)ᵀ + (−1 −1)ᵀ = (0 0)ᵀ, and for the row {𝑠3}, (−1) + (1) = (0); hence the product is indeed the zero vector.
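The blockwise product can also be checked mechanically. The following short numpy sketch is illustrative only (the dictionary encoding of the block matrix and the names C and J are ad hoc); it reproduces exactly the computation spelled out above.

import numpy as np

# Rows are the coloured places {s1,s2} and {s3}; columns are the
# coloured transitions {a,e} and {b,c,d}; each entry is an inner matrix.
C = {
    ("s1s2", "ae"):  np.array([[ 1,  1], [ 1,  1]]),
    ("s1s2", "bcd"): np.array([[-1, -1, -1], [-1, -1, -1]]),
    ("s3",   "ae"):  np.array([[-1, -1]]),
    ("s3",   "bcd"): np.array([[ 1,  1,  1]]),
}
J = {"ae":  np.array([1, 0]),      # a once, e not at all
     "bcd": np.array([0, 1, 0])}   # c once, b and d not at all

for p in ("s1s2", "s3"):
    total = sum(C[p, t] @ J[t] for t in ("ae", "bcd"))
    print(p, total)                # prints the zero vectors [0 0] and [0]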


𝜄(𝑠) = {𝑔0, 𝑔1, 𝑔2, 𝑔3, 𝑔4, 𝑝0, 𝑝1, 𝑝2, 𝑝3, 𝑝4}
𝜄(𝑡) = {𝑏0, 𝑏1, 𝑏2, 𝑏3, 𝑏4, 𝑒0, 𝑒1, 𝑒2, 𝑒3, 𝑒4}

The arc from 𝑠 to 𝑡 is inscribed by 𝑥 and the arc from 𝑡 to 𝑠 by 𝑦, where

𝑥 = { (𝑔0, 𝑏0), (𝑔4, 𝑏0), (𝑔1, 𝑏1), (𝑔0, 𝑏1), (𝑔2, 𝑏2), (𝑔1, 𝑏2), (𝑔3, 𝑏3), (𝑔2, 𝑏3), (𝑔4, 𝑏4), (𝑔3, 𝑏4), (𝑝0, 𝑒0), (𝑝1, 𝑒1), (𝑝2, 𝑒2), (𝑝3, 𝑒3), (𝑝4, 𝑒4) }
𝑦 = { (𝑒0, 𝑔0), (𝑒0, 𝑔4), (𝑒1, 𝑔1), (𝑒1, 𝑔0), (𝑒2, 𝑔2), (𝑒2, 𝑔1), (𝑒3, 𝑔3), (𝑒3, 𝑔2), (𝑒4, 𝑔4), (𝑒4, 𝑔3), (𝑏0, 𝑝0), (𝑏1, 𝑝1), (𝑏2, 𝑝2), (𝑏3, 𝑝3), (𝑏4, 𝑝4) }

and the incidence matrix is the 1×1 matrix (row 𝑠, column 𝑡)

𝐶 = ( −𝑥 + 𝑦 ).

Fig. 21.12: Solution of Exercise 21.1.
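A small Python sketch of this one-place, one-transition net follows; it is illustrative only (the reading of 𝑔𝑖 as “fork 𝑖 available”, 𝑝𝑖 as “philosopher 𝑖 eating”, 𝑏𝑖/𝑒𝑖 as “begin/end eating”, and the helper fire are assumptions of the sketch). It fires the modes 𝑏0 and then 𝑒0 from the marking in which all forks are available.

from collections import Counter

N = 5
x = [(f"g{i}", f"b{i}") for i in range(N)] + \
    [(f"g{(i - 1) % N}", f"b{i}") for i in range(N)] + \
    [(f"p{i}", f"e{i}") for i in range(N)]            # arc s -> t: (colour, mode)
y = [(f"e{i}", f"g{i}") for i in range(N)] + \
    [(f"e{i}", f"g{(i - 1) % N}") for i in range(N)] + \
    [(f"b{i}", f"p{i}") for i in range(N)]            # arc t -> s: (mode, colour)

def fire(marking, mode):
    need = Counter(c for c, m in x if m == mode)
    assert all(marking[c] >= n for c, n in need.items()), "mode not enabled"
    return marking - need + Counter(c for m, c in y if m == mode)

m0 = Counter(f"g{i}" for i in range(N))   # all five forks available
m1 = fire(m0, "b0")                       # philosopher 0 takes forks g0 and g4
m2 = fire(m1, "e0")                       # ...and returns them
print(m1, m2 == m0)                       # m2 equals the initial marking again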

It is not coincidental that the incidence matrix is square. There are as many coloured places as there are coloured transitions: the latter is the number of conflict clusters, which equals, by Lemma 6.5, by the free-choice property, and by transition-regularity, the number of transition presets.

Solution 21.3 (Exercise 21.3)
For possible solutions, see Figures 21.13 and 21.14.

[Figure 21.13 (contents): four places with colour set {left, right} and one place with colour set {𝑟1, 𝑟2}; coloured transitions {𝑡1, 𝑡5}, {𝑡2, 𝑡6}, {𝑡3, 𝑡7}, {𝑡4, 𝑡8}; arc inscriptions 𝑥1, 𝑦1, . . . , 𝑥4, 𝑦4 together with { (𝑟1, 𝑡1), (𝑟2, 𝑡5) }, { (𝑟2, 𝑡2), (𝑟1, 𝑡6) }, and { (𝑡3, 𝑟1), (𝑡3, 𝑟2), (𝑡7, 𝑟1), (𝑡7, 𝑟2) }, where 𝑥𝑖 = { (left, 𝑡𝑖), (right, 𝑡𝑖+4) } and 𝑦𝑖 = { (𝑡𝑖, left), (𝑡𝑖+4, right) } for 1 ≤ 𝑖 ≤ 4.]

Fig. 21.13: A solution of Exercise 21.3(a).

[Figure 21.14 (contents): an algebraic net with places of sort site, a shared place of sort resource initially marked 1·r1 + 1·r2, transitions declared with var x: site, arcs between the site places and the transitions inscribed 1·x, and arcs involving the resource place inscribed grab1(x) and grab2(x). The underlying specification is:]

Sorts   site, resource;
Ops     left, right : → site;
        r1, r2 : → resource;
        grab1, grab2 : site → resource;
Eqs     grab1(left) = r1     grab1(right) = r2
        grab2(left) = r2     grab2(right) = r1

Fig. 21.14: A solution of Exercise 21.3(b).
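To see how this specification drives the behaviour, here is a minimal Python sketch of its initial algebra (illustrative only; the function names mirror the operation symbols, and the remark about removing forks is an assumed reading of the net).

site = {"left", "right"}          # universe of sort site
resource = {"r1", "r2"}           # universe of sort resource

def grab1(s):                     # grab1(left) = r1, grab1(right) = r2
    return {"left": "r1", "right": "r2"}[s]

def grab2(s):                     # grab2(left) = r2, grab2(right) = r1
    return {"left": "r2", "right": "r1"}[s]

# Evaluating the arc terms in a mode x = left suggests that the first
# "grab" would remove resource grab1(left) = r1 and the second r2,
# while for x = right the roles of r1 and r2 are swapped.
for s in sorted(site):
    print(s, grab1(s), grab2(s))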


Literature

[ABO09]

[Age74]

[AK77]

[Arn94]

[Baa+09]

[BB90]

[BBD15]

[BBD97]

[BD09]

[BD14a]

Krzysztof R. Apt, Frank S. de Boer, and Ernst-Rüdiger Olderog. Verification of Sequential and Concurrent Programs. Texts in Computer Science. Springer, 2009. isbn: 978-1-84882-744-8. doi: 10.1007/9781 - 84882 - 745 - 5. url: https : / / doi . org / 10 . 1007 / 978 - 1 84882-745-5. Tilak Agerwala. An analysis of controlling agents for asynchronous processes. Hopkins Computer Research Report 35. Johns Hopkins University, Aug. 1974. url: https://www.osti.gov/servlets/purl/ 4249057. Toshiro Araki and Tadao Kasami. “Decidable Problems on the Strong Connectivity of Petri Net Reachability Sets”. In: Theor. Comput. Sci. 4.1 (1977), pp. 99–119. doi: 10.1016/0304- 3975(77)90059- 7. url: https://doi.org/10.1016/0304-3975(77)90059-7. André Arnold. Finite transition systems - semantics of communicating systems. Prentice Hall international series in computer science. Prentice Hall, 1994. isbn: 978-0-13-092990-7. Souheib Baarir, Marco Beccuti, Davide Cerotti, Massimiliano De Pierro, Susanna Donatelli, and Giuliana Franceschinis. “The GreatSPN tool: recent enhancements”. In: SIGMETRICS Perform. Evaluation Rev. 36.4 (2009), pp. 4–9. doi: 10.1145/1530873.1530876. url: https://doi.org/10.1145/1530873.1530876. Gérard Berry and Gérard Boudol. “The Chemical Abstract Machine”. In: Conference Record of the Seventeenth Annual ACM Symposium on Principles of Programming Languages, San Francisco, California, USA, January 1990. Ed. by Frances E. Allen. ACM Press, 1990, pp. 81– 94. doi: 10 . 1145 / 96709 . 96717. url: https : / / doi . org / 10 . 1145/96709.96717. Éric Badouel, Luca Bernardinello, and Philippe Darondeau. Petri Net Synthesis. Texts in Theoretical Computer Science. An EATCS Series. Springer, 2015. isbn: 978-3-662-47966-7. doi: 10 . 1007 / 978 - 3 662- 47967- 4. url: https://doi.org/10.1007/978- 3- 66247967-4. Éric Badouel, Luca Bernardinello, and Philippe Darondeau. “The Synthesis Problem for Elementary Net Systems is NP-Complete”. In: Theor. Comput. Sci. 186.1-2 (1997), pp. 107–134. doi: 10 . 1016 / S0304 3975(96)00219- 8. url: https://doi.org/10.1016/S03043975(96)00219-8. Eike Best and Philippe Darondeau. “A decomposition theorem for finite persistent transition systems”. In: Acta Informatica 46.3 (2009), pp. 237–254. doi: 10.1007/s00236- 009- 0095- 6. url: https: //doi.org/10.1007/s00236-009-0095-6. Eike Best and Raymond R. Devillers. “Characterisation of the State Spaces of Live and Bounded Marked Graph Petri Nets”. In: Language



[BD14b]

[BD15a]

[BD15b]

[BD15c]

[BD16]

[BD17]

[BD18]

[BD87]


and Automata Theory and Applications - 8th International Conference, LATA 2014, Madrid, Spain, March 10-14, 2014. Proceedings. Ed. by Adrian-Horia Dediu, Carlos Martín-Vide, José Luis Sierra-Rodríguez, and Bianca Truthe. Vol. 8370. Lecture Notes in Computer Science. Springer, 2014, pp. 161–172. doi: 10.1007/978- 3- 319- 049212\_13. url: https://doi.org/10.1007/978- 3- 319- 049212\_13. Eike Best and Raymond R. Devillers. “Synthesis of Persistent Systems”. In: Application and Theory of Petri Nets and Concurrency - 35th International Conference, PETRI NETS 2014, Tunis, Tunisia, June 23-27, 2014. Proceedings. Ed. by Gianfranco Ciardo and Ekkart Kindler. Vol. 8489. Lecture Notes in Computer Science. Springer, 2014, pp. 111–129. doi: 10.1007/978-3-319-07734-5\_7. url: https: //doi.org/10.1007/978-3-319-07734-5\_7. Eike Best and Raymond R. Devillers. “State space axioms for Tsystems”. In: Acta Informatica 52.2-3 (2015), pp. 133–152. doi: 10. 1007/s00236- 015- 0219- 0. url: https://doi.org/10.1007/ s00236-015-0219-0. Eike Best and Raymond R. Devillers. “Synthesis and reengineering of persistent systems”. In: Acta Informatica 52.1 (2015), pp. 35–60. doi: 10 . 1007 / s00236 - 014 - 0209 - 7. url: https : / / doi . org / 10 . 1007/s00236-014-0209-7. Eike Best and Raymond R. Devillers. “Synthesis of Live and Bounded Persistent Systems”. In: Fundam. Informaticae 140.1 (2015), pp. 39– 59. doi: 10.3233/FI- 2015- 1244. url: https://doi.org/10. 3233/FI-2015-1244. Eike Best and Raymond R. Devillers. “The Power of Prime Cycles”. In: Application and Theory of Petri Nets and Concurrency - 37th International Conference, PETRI NETS 2016, Toruń, Poland, June 19-24, 2016. Proceedings. Ed. by Fabrice Kordon and Daniel Moldt. Vol. 9698. Lecture Notes in Computer Science. Springer, 2016, pp. 59–78. doi: 10.1007/978-3-319-39086-4\_5. url: https://doi.org/10. 1007/978-3-319-39086-4\_5. Eike Best and Raymond R. Devillers. “Characterisation of the state spaces of marked graph Petri nets”. In: Inf. Comput. 253 (2017), pp. 399–410. doi: 10 . 1016 / j . ic . 2016 . 06 . 006. url: https : //doi.org/10.1016/j.ic.2016.06.006. Eike Best and Raymond R. Devillers. “Pre-synthesis of Petri nets based on prime cycles and distance paths”. In: Sci. Comput. Program. 157 (2018), pp. 41–55. doi: 10 . 1016 / j . scico . 2017 . 07 . 005. url: https://doi.org/10.1016/j.scico.2017.07.005. Eike Best and Raymond R. Devillers. “Sequential and Concurrent Behaviour in Petri Net Theory”. In: Theor. Comput. Sci. 55.1 (1987), pp. 87–136. doi: 10.1016/0304-3975(87)90090-9. url: https: //doi.org/10.1016/0304-3975(87)90090-9.


[BD96]

[BDE20]

[BDE92]

[BDK01]

[BDS17]

[BDS18]

[BE16]

[BE91]


Éric Badouel and Philippe Darondeau. “Theory of Regions”. In: Lectures on Petri Nets I: Basic Models, Advances in Petri Nets, the volumes are based on the Advanced Course on Petri Nets, held in Dagstuhl, September 1996. Ed. by Wolfgang Reisig and Grzegorz Rozenberg. Vol. 1491. Lecture Notes in Computer Science. Springer, 1996, pp. 529–586. doi: 10 . 1007 / 3 - 540 - 65306 - 6 \ _22. url: https://doi.org/10.1007/3-540-65306-6\_22. Eike Best, Raymond R. Devillers, and Evgeny Erofeev. “A New Property of Choice-Free Petri Net Systems”. In: Application and Theory of Petri Nets and Concurrency - 41st International Conference, PETRI NETS 2020, Paris, France, June 24-25, 2020, Proceedings. 2020, pp. 89–108. doi: 10.1007/978-3-030-51831-8\_5. url: https: //doi.org/10.1007/978-3-030-51831-8\_5. Eike Best, Jörg Desel, and Javier Esparza. “Traps Characterize Home States in Free Choice Systems”. In: Theor. Comput. Sci. 101.2 (1992), pp. 161–176. doi: 10.1016/0304-3975(92)90048-K. url: https: //doi.org/10.1016/0304-3975(92)90048-K. Eike Best, Raymond R. Devillers, and Maciej Koutny. Petri net algebra. Monographs in Theoretical Computer Science. An EATCS Series. Springer, 2001. isbn: 978-3-540-67398-9. doi: 10 . 1007 / 978 - 3 662- 04457- 5. url: https://doi.org/10.1007/978- 3- 66204457-5. Eike Best, Raymond R. Devillers, and Uli Schlachter. “A GraphTheoretical Characterisation of State Separation”. In: SOFSEM 2017: Theory and Practice of Computer Science - 43rd International Conference on Current Trends in Theory and Practice of Computer Science, Limerick, Ireland, January 16-20, 2017, Proceedings. Ed. by Bernhard Steffen, Christel Baier, Mark van den Brand, Johann Eder, Mike Hinchey, and Tiziana Margaria. Vol. 10139. Lecture Notes in Computer Science. Springer, 2017, pp. 163–175. doi: 10.1007/978- 3- 31951963- 0\_13. url: https://doi.org/10.1007/978- 3- 31951963-0\_13. Eike Best, Raymond R. Devillers, and Uli Schlachter. “Bounded choicefree Petri net synthesis: algorithmic issues”. In: Acta Informatica 55.7 (2018), pp. 575–611. doi: 10 . 1007 / s00236 - 017 - 0310 - 9. url: https://doi.org/10.1007/s00236-017-0310-9. Eike Best and Javier Esparza. “Existence of home states in Petri nets is decidable”. In: Inf. Process. Lett. 116.6 (2016), pp. 423–427. doi: 10 . 1016 / j . ipl . 2016 . 01 . 011. url: https : / / doi . org / 10 . 1016/j.ipl.2016.01.011. Eike Best and Javier Esparza. “Model Checking of Persistent Petri Nets”. In: Computer Science Logic, 5th Workshop, CSL ’91, Berne, Switzerland, October 7-11, 1991, Proceedings. Ed. by Egon Börger, Gerhard Jäger, Hans Kleine Büning, and Michael M. Richter. Vol. 626. Lecture Notes in Computer Science. Springer, 1991, pp. 35–52. doi:


[Ber+08]

[Bes+20]

[Bes84]

[Bes86]

[Bes+95]

[Bes96]

[BF97]

[BH93]


10 . 1007 / BFb0023756. url: https : / / doi . org / 10 . 1007 / BFb0023756. Mark de Berg, Otfried Cheong, Marc J. van Kreveld, and Mark H. Overmars. Computational geometry: algorithms and applications, 3rd Edition. Springer, 2008. isbn: 9783540779735. url: https://www. worldcat.org/oclc/227584184. Eike Best, Raymond R. Devillers, Evgeny Erofeev, and Harro Wimmel. “Target-oriented Petri Net Synthesis”. In: Fundam. Informaticae 175.14 (2020), pp. 97–122. doi: 10.3233/FI- 2020- 1949. url: https: //doi.org/10.3233/FI-2020-1949. Eike Best. “Fairness and Conspiracies”. In: Inf. Process. Lett. 18.4 (1984), 215–220, Erratum IPL Vol. 19, p.162. doi: 10.1016/00200190(84 ) 90114 - 5. url: https : / / doi . org / 10 . 1016 / 0020 0190(84)90114-5. Eike Best. “Structure Theory of Petri Nets: the Free Choice Hiatus”. In: Petri Nets: Central Models and Their Properties, Advances in Petri Nets 1986, Part I, Proceedings of an Advanced Course, Bad Honnef, Germany, 8-19 September 1986. Ed. by Wilfried Brauer, Wolfgang Reisig, and Grzegorz Rozenberg. Vol. 254. Lecture Notes in Computer Science. Springer, 1986, pp. 168–205. doi: 10.1007/BFb0046840. url: https://doi.org/10.1007/BFb0046840. Eike Best, Hans Fleischhack, Wojciech Fraczak, Richard P. Hopkins, Hanna Klaudel, and Elisabeth Pelz. “A Class of Composable High Level Petri Nets with an Application to the Semantics of B(PN)2 ”. In: Application and Theory of Petri Nets 1995, 16th International Conference, Turin, Italy, June 26-30, 1995, Proceedings. Ed. by Giorgio De Michelis and Michel Diaz. Vol. 935. Lecture Notes in Computer Science. Springer, 1995, pp. 103–120. doi: 10.1007/3-540-60029-9\_36. url: https://doi.org/10.1007/3-540-60029-9\_36. Eike Best. Semantics of sequential and parallel programs. Prentice Hall International series in computer science. Prentice Hall, 1996. isbn: 9780-13-460643-9. Zakaria Bouziane and Alain Finkel. “Cyclic Petri net reachability sets are semi-linear effectively constructible”. In: Second International Workshop on Verification of Infinite State Systems, Infinity 1997, Bologna, Italy, July 11-12, 1997. Ed. by Faron Moller. Vol. 9. Electronic Notes in Theoretical Computer Science. Elsevier, 1997, pp. 15– 24. doi: 10.1016/S1571-0661(05)80423-2. url: https://doi. org/10.1016/S1571-0661(05)80423-2. Eike Best and Richard P. Hopkins. “B(PN)2 - a Basic Petri Net Programming Notation”. In: PARLE ’93, Parallel Architectures and Languages Europe, 5th International PARLE Conference, Munich, Germany, June 14-17, 1993, Proceedings. Ed. by Arndt Bode, Mike Reeve, and Gottfried Wolf. Vol. 694. Lecture Notes in Computer Science. Springer,


[BK03]

[BM84]

[BO09]

[BS15]

[BS83]

[BT97]


1993, pp. 379–390. doi: 10 . 1007 / 3 - 540 - 56891 - 3 \ _30. url: https://doi.org/10.1007/3-540-56891-3\_30. Eike Best and Maciej Koutny. “Process Algebra: A Petri-Net-Oriented Tutorial”. In: Lectures on Concurrency and Petri Nets, Advances in Petri Nets [This tutorial volume originates from the 4th Advanced Course on Petri Nets, ACPN 2003, held in Eichstätt, Germany in September 2003. In addition to lectures given at ACPN 2003, additional chapters have been commissioned]. Ed. by Jörg Desel, Wolfgang Reisig, and Grzegorz Rozenberg. Vol. 3098. Lecture Notes in Computer Science. Springer, 2003, pp. 180–209. doi: 10.1007/978-3-540-27755-2\_5. url: https://doi.org/10.1007/978-3-540-27755-2\_5. Eike Best and Agathe Merceron. “Frozen tokens and D-continuity: A study in relating system properties to process properties”. In: Advances in Petri Nets 1984, European Workshop on Applications and Theory in Petri Nets, covers the last two years which include the workshop 1983 in Toulouse and the workshop 1984 in Aarhus, selected papers. Ed. by Grzegorz Rozenberg, Hartmann J. Genrich, and Gérard Roucairol. Vol. 188. Lecture Notes in Computer Science. Springer, 1984, pp. 48– 61. doi: 10.1007/3-540-15204-0\_4. url: https://doi.org/ 10.1007/3-540-15204-0\_4. Kamila Barylska and Edward Ochmanski. “Levels of Persistency in Place/Transition Nets”. In: Fundam. Informaticae 93.1-3 (2009), pp. 33–43. doi: 10.3233/FI-2009-0086. url: https://doi.org/ 10.3233/FI-2009-0086. Eike Best and Uli Schlachter. “Analysis of Petri Nets and Transition Systems”. In: Proceedings 8th Interaction and Concurrency Experience, ICE 2015, Grenoble, France, 4-5th June 2015. Ed. by Sophia Knight, Ivan Lanese, Alberto Lluch-Lafuente, and Hugo Torres Vieira. Vol. 189. EPTCS. 2015, pp. 53–67. doi: 10.4204/EPTCS.189.6. url: https://doi.org/10.4204/EPTCS.189.6. Eike Best and M. W. Shields. “Some Equivalence Results for Free Choice Nets and Simple Nets and on the Periodicity of Live Free Choice Nets”. In: CAAP’83, Trees in Algebra and Programming, 8th Colloquium, L’Aquila, Italy, March 9-11, 1983, Proceedings. Ed. by Giorgio Ausiello and Marco Protasi. Vol. 159. Lecture Notes in Computer Science. Springer, 1983, pp. 141–154. doi: 10.1007/3- 54012727-5\_7. url: https://doi.org/10.1007/3-540-127275\_7. Eike Best and Thomas Thielke. “Orthogonal Transformations for Coloured Petri Nets”. In: Application and Theory of Petri Nets 1997, 18th International Conference, ICATPN ’97, Toulouse, France, June 23-27, 1997, Proceedings. Ed. by Pierre Azéma and Gianfranco Balbo. Vol. 1248. Lecture Notes in Computer Science. Springer, 1997, pp. 447–466. doi: 10.1007/3- 540- 63139- 9\_50. url: https: //doi.org/10.1007/3-540-63139-9\_50.


[BV84]

[BW90]

[CC77]

[CCS91]

[CHN13]

[CHR95]

[Chu36]

[Cla+00]


Eike Best and Klaus Voss. “Free Choice Systems Have Home States”. In: Acta Informatica 21 (1984), pp. 89–100. doi: 10.1007/ BF00289141. url: https://doi.org/10.1007/BF00289141. Jos C. M. Baeten and W. P. Weijland. Process algebra. Vol. 18. Cambridge tracts in theoretical computer science. Cambridge University Press, 1990. isbn: 978-0-521-40043-5. Patrick Cousot and Radhia Cousot. “Abstract Interpretation: A Unified Lattice Model for Static Analysis of Programs by Construction or Approximation of Fixpoints”. In: Conference Record of the Fourth ACM Symposium on Principles of Programming Languages, Los Angeles, California, USA, January 1977. Ed. by Robert M. Graham, Michael A. Harrison, and Ravi Sethi. ACM, 1977, pp. 238–252. doi: 10.1145/512950.512973. url: https://doi.org/10.1145/ 512950.512973. Javier Campos, Giovanni Chiola, and Manuel Silva Suárez. “Properties and Performance Bounds for Closed Free Choice Synchronized Monoclass Queueing Networks”. In: IEEE Transactions on Automatic Control (Special Issue on Multi-Dimensional Queueing Systems) 36.12 (1991), pp. 1368 –1381. url: http : / / webdiis . unizar . es / ~jcampos/wordpress/wp- content/plugins/papercite/pdf/ ccs-tac-91.pdf. Jürgen Christ, Jochen Hoenicke, and Alexander Nutz. “Proof Tree Preserving Interpolation”. In: Tools and Algorithms for the Construction and Analysis of Systems - 19th International Conference, TACAS 2013, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2013, Rome, Italy, March 16-24, 2013. Proceedings. Ed. by Nir Piterman and Scott A. Smolka. Vol. 7795. Lecture Notes in Computer Science. Springer, 2013, pp. 124–138. doi: 10.1007/978-3-642-36742-7\_9. url: https://doi.org/10. 1007/978-3-642-36742-7\_9. Ludmila Cherkasova, Rodney R. Howell, and Louis E. Rosier. “Bounded Self-Stabilizing Petri Nets”. In: Acta Informatica 32.3 (1995), pp. 189–207. doi: 10 . 1007 / BF01178259. url: https : //doi.org/10.1007/BF01178259. Alonzo Church. “An Unsolvable Problem of Elementary Number Theory”. In: American Journal of Mathematics 58.2 (Apr. 1936), pp. 345– 363. Edmund M. Clarke, Orna Grumberg, Somesh Jha, Yuan Lu, and Helmut Veith. “Counterexample-Guided Abstraction Refinement”. In: Computer Aided Verification, 12th International Conference, CAV 2000, Chicago, IL, USA, July 15-19, 2000, Proceedings. Ed. by E. Allen Emerson and A. Prasad Sistla. Vol. 1855. Lecture Notes in Computer Science. Springer, 2000, pp. 154–169. doi: 10.1007/10722167\_15. url: https://doi.org/10.1007/10722167\_15.


[CM75]

[CO21]

[Com+71]

[Com72] [Cor+00]

[Cor+97]

[CR95]

[Cze+21]

[Cze22]


Stefano Crespi-Reghizzi and Dino Mandrioli. “A Decidability Theorem for a Class of Vector-Addition Systems”. In: Inf. Process. Lett. 3.3 (1975), pp. 78–80. doi: 10.1016/0020- 0190(75)90020- 4. url: https://doi.org/10.1016/0020-0190(75)90020-4. Wojciech Czerwinski and Lukasz Orlikowski. “Reachability in Vector Addition Systems is Ackermann-complete”. In: 62nd IEEE Annual Symposium on Foundations of Computer Science, FOCS 2021, Denver, CO, USA, February 7-10, 2022. IEEE, 2021, pp. 1229–1240. doi: 10.1109/FOCS52979.2021.00120. url: https://doi.org/10. 1109/FOCS52979.2021.00120. Fred G. Commoner, Anatol W. Holt, Shimon Even, and Amir Pnueli. “Marked Directed Graphs”. In: J. Comput. Syst. Sci. 5.5 (1971), pp. 511–523. doi: 10.1016/S0022-0000(71)80013-2. url: https: //doi.org/10.1016/S0022-0000(71)80013-2. Fred G. Commoner. Deadlocks in Petri Nets. Technical Report CA7206-2311. Applied Data Research, Wakefield, Mass., June 1972. Jordi Cortadella, Michael Kishinevsky, Alex Kondratyev, Luciano Lavagno, and Alexandre Yakovlev. “Hardware and Petri Nets: Application to Asynchronous Circuit Design”. In: Application and Theory of Petri Nets 2000, 21st International Conference, ICATPN 2000, Aarhus, Denmark, June 26-30, 2000, Proceeding. 2000, pp. 1–15. doi: 10 . 1007/3-540-44988-4\_1. url: https://doi.org/10.1007/3540-44988-4\_1. J. Cortadella, M. Kishinevsky, A. Kondratyev, L. Lavagno, and A. Yakovlev. “Petrify: a tool for manipulating concurrent specifications and synthesis of asynchronous controllers”. In: IEICE Transactions on Information and Systems E80-D.3 (Mar. 1997), pp. 315–325. Piotr Chrzastowski-Wachtel and Marek Raczunas. “Orbits, half-frozen tokens and the liveness of weighted circuits”. In: Proceedings of the International Workshop on Structures in Concurrency Theory, STRICT 1995, Berlin, Germany, May 11-13, 1995. Ed. by Jörg Desel. Workshops in Computing. Springer, 1995, pp. 116–128. doi: 10.1007/9781-4471-3078-9\_8. url: https://doi.org/10.1007/978-14471-3078-9\_8. Wojciech Czerwinski, Slawomir Lasota, Ranko Lazic, Jérôme Leroux, and Filip Mazowiecki. “The Reachability Problem for Petri Nets Is Not Elementary”. In: J. ACM 68.1 (2021), 7:1–7:28. doi: 10.1145/ 3422822. url: https://doi.org/10.1145/3422822. Wojciech Czerwinski. “Involved VASS Zoo (Invited Talk)”. In: 33rd International Conference on Concurrency Theory, CONCUR 2022, September 12-16, 2022, Warsaw, Poland. Ed. by Bartek Klin, Slawomir Lasota, and Anca Muscholl. Vol. 243. LIPIcs. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2022, 5:1–5:13. doi: 10 . 4230 / LIPIcs . CONCUR . 2022 . 5. url: https : / / doi . org / 10 . 4230 / LIPIcs.CONCUR.2022.5.


[Dar03]

[DE95]

[Des92a]

[Des92b]

[Dev+03]

[Dev16]

[Dev18]

[Dev19]

[Dev21]


Philippe Darondeau. “Unbounded Petri Net Synthesis”. In: Lectures on Concurrency and Petri Nets, Advances in Petri Nets [This tutorial volume originates from the 4th Advanced Course on Petri Nets, ACPN 2003, held in Eichstätt, Germany in September 2003. In addition to lectures given at ACPN 2003, additional chapters have been commissioned]. Ed. by Jörg Desel, Wolfgang Reisig, and Grzegorz Rozenberg. Vol. 3098. Lecture Notes in Computer Science. Springer, 2003, pp. 413–438. doi: 10.1007/978-3-540-27755-2\_11. url: https://doi.org/10.1007/978-3-540-27755-2\_11. Jörg Desel and Javier Esparza. Free Choice Petri Nets. Vol. 40. Cambridge Tracts in Theoretical Computer Science. Cambridge University Press, 1995. Jörg Desel. “A Proof of the Rank Theorem for Extended Free Choice Nets”. In: Application and Theory of Petri Nets 1992, 13th International Conference, Sheffield, UK, June 22-26, 1992, Proceedings. Ed. by Kurt Jensen. Vol. 616. Lecture Notes in Computer Science. Springer, 1992, pp. 134–153. doi: 10 . 1007 / 3 - 540 - 55676 - 1 \ _8. url: https : //doi.org/10.1007/3-540-55676-1\_8. Jörg Desel. Struktur und Analyse von Free-Choice-Petrinetzen. DUV Informatik. Deutscher Universitätsverlag, 1992. isbn: 978-3-82442030-8. url: https://d-nb.info/920461611. Raymond R. Devillers, Hanna Klaudel, Maciej Koutny, and Franck Pommereau. “Asynchronous Box Calculus”. In: Fundam. Informaticae 54.4 (2003), pp. 295–344. url: http://content.iospress.com/ articles/fundamenta-informaticae/fi54-4-01. Raymond R. Devillers. “Products of Transition Systems and Additions of Petri Nets”. In: 16th International Conference on Application of Concurrency to System Design, ACSD 2016, Torun, Poland, June 19-24, 2016. Ed. by Jörg Desel and Alex Yakovlev. IEEE Computer Society, 2016, pp. 65–73. doi: 10.1109/ACSD.2016.10. url: https://doi. org/10.1109/ACSD.2016.10. Raymond R. Devillers. “Factorisation of transition systems”. In: Acta Informatica 55.4 (2018), pp. 339–362. doi: 10.1007/s00236-0170300-y. url: https://doi.org/10.1007/s00236-017-0300-y. Raymond R. Devillers. “Articulation of Transition Systems and Its Application to Petri Net Synthesis”. In: Application and Theory of Petri Nets and Concurrency - 40th International Conference, PETRI NETS 2019, Aachen, Germany, June 23-28, 2019, Proceedings. 2019, pp. 113–126. doi: 10.1007/978-3-030-21571-2\_8. url: https: //doi.org/10.1007/978-3-030-21571-2\_8. Raymond R. Devillers. “Articulations and Products of Transition Systems and their Applications to Petri Net Synthesis”. In: Fundam. Informaticae 183.1-2 (2021), pp. 1–31. doi: 10.3233/FI-2021-2080. url: https://doi.org/10.3233/FI-2021-2080.


[Dev23]

[Die12] [Dij68a]

[Dij68b]

[Dij76]

[DR96]

[EH00]

[EH01]

[EH08]

[EM00]


Raymond R. Devillers. “On the Reversibility of Circular Conservative Petri Nets”. In: Application and Theory of Petri Nets and Concurrency - 44th International Conference, PETRI NETS 2023, Lisbon, Portugal, June 25-30, 2023, Proceedings. Ed. by Luís Gomes and Robert Lorenz. Vol. 13929. Lecture Notes in Computer Science. Springer, 2023, pp. 307–323. doi: 10.1007/978-3-031-33620-1\_17. url: https://doi.org/10.1007/978-3-031-33620-1\_17. Reinhard Diestel. Graph Theory, 4th Edition. Vol. 173. Graduate texts in mathematics. Springer, 2012. isbn: 978-3-642-14278-9. Edsger W. Dijkstra. “The Structure of "THE"-Multiprogramming System”. In: Commun. ACM 11.5 (1968), pp. 341–346. doi: 10.1145/ 363095 . 363143. url: https : / / doi . org / 10 . 1145 / 363095 . 363143. E.W. Dijkstra. “Co-operating sequential processes”. In: Proc. Programming Languages: Summer School held in Villard-le-Lans, 1966. Ed. by F. Genuys. 1968, pp. 43–112. Edsger W. Dijkstra. A Discipline of Programming. Prentice-Hall, 1976. isbn: 013215871X. url: https : / / www . worldcat . org / oclc / 01958445. Jörg Desel and Wolfgang Reisig. “The Synthesis Problem of Petri Nets”. In: Acta Informatica 33.4 (1996), pp. 297–315. doi: 10.1007/ s002360050046. url: https : / / doi . org / 10 . 1007 / s002360 050046. Javier Esparza and Keijo Heljanko. “A New Unfolding Approach to LTL Model Checking”. In: Automata, Languages and Programming, 27th International Colloquium, ICALP 2000, Geneva, Switzerland, July 9-15, 2000, Proceedings. Ed. by Ugo Montanari, José D. P. Rolim, and Emo Welzl. Vol. 1853. Lecture Notes in Computer Science. Springer, 2000, pp. 475–486. doi: 10 . 1007 / 3 - 540 - 45022 - X \ _40. url: https://doi.org/10.1007/3-540-45022-X\_40. Javier Esparza and Keijo Heljanko. “Implementing LTL Model Checking with Net Unfoldings”. In: Model Checking Software, 8th International SPIN Workshop, Toronto, Canada, May 19-20, 2001, Proceedings. Ed. by Matthew B. Dwyer. Vol. 2057. Lecture Notes in Computer Science. Springer, 2001, pp. 37–56. doi: 10.1007/3- 540- 451390\_4. url: https://doi.org/10.1007/3-540-45139-0\_4. Javier Esparza and Keijo Heljanko. Unfoldings - A Partial-Order Approach to Model Checking. Monographs in Theoretical Computer Science. An EATCS Series. Springer, 2008. isbn: 978-3-540-77425-9. doi: 10.1007/978-3-540-77426-6. url: https://doi.org/10. 1007/978-3-540-77426-6. Javier Esparza and Stephan Melzer. “Verification of Safety Properties Using Integer Programming: Beyond the State Equation”. In: Formal Methods Syst. Des. 16.2 (2000), pp. 159–189. doi: 10 . 1023 /


[EM85]

[Eng91]

[ER90]

[Ero18]

[ERV96]

[ERW21]

[ERW22]

[ES01]


A : 1008743212620. url: https : / / doi . org / 10 . 1023 / A : 1008743212620. Hartmut Ehrig and Bernd Mahr. Fundamentals of Algebraic Specification 1: Equations and Initial Semantics. Vol. 6. EATCS Monographs on Theoretical Computer Science. Springer, 1985. isbn: 3-540-13718-1. doi: 10.1007/978-3-642-69962-7. url: https://doi.org/10. 1007/978-3-642-69962-7. Joost Engelfriet. “Branching Processes of Petri Nets”. In: Acta Informatica 28.6 (1991), pp. 575–591. doi: 10.1007/BF01463946. url: https://doi.org/10.1007/BF01463946. Andrzej Ehrenfeucht and Grzegorz Rozenberg. “Partial (Set) 2Structures. Part II: State Spaces of Concurrent Systems”. In: Acta Informatica 27.4 (1990), pp. 343–368. doi: 10.1007/BF00264612. url: https://doi.org/10.1007/BF00264612. Evgeny Erofeev. “Characterisation of a Class of Petri Net Solvable Transition Systems”. PhD thesis. Carl von Ossietzky Universität Oldenburg, 2018. Javier Esparza, Stefan Römer, and Walter Vogler. “An Improvement of McMillan’s Unfolding Algorithm”. In: Tools and Algorithms for Construction and Analysis of Systems, Second International Workshop, TACAS ’96, Passau, Germany, March 27-29, 1996, Proceedings. Ed. by Tiziana Margaria and Bernhard Steffen. Vol. 1055. Lecture Notes in Computer Science. Springer, 1996, pp. 87–106. doi: 10.1007/3540- 61042- 1\_40. url: https://doi.org/10.1007/3- 54061042-1\_40. Javier Esparza, Mikhail A. Raskin, and Christoph Welzel. “Computing Parameterized Invariants of Parameterized Petri Nets”. In: Application and Theory of Petri Nets and Concurrency - 42nd International Conference, PETRI NETS 2021, Virtual Event, June 23-25, 2021, Proceedings. Ed. by Didier Buchs and Josep Carmona. Vol. 12734. Lecture Notes in Computer Science. Springer, 2021, pp. 141–163. doi: 10.1007/9783-030-76983-3\_8. url: https://doi.org/10.1007/978-3030-76983-3\_8. Javier Esparza, Mikhail A. Raskin, and Christoph Welzel. “Regular Model Checking Upside-Down: An Invariant-Based Approach”. In: 33rd International Conference on Concurrency Theory, CONCUR 2022, September 12-16, 2022, Warsaw, Poland. Ed. by Bartek Klin, Slawomir Lasota, and Anca Muscholl. Vol. 243. LIPIcs. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2022, 23:1–23:19. doi: 10.4230/ LIPIcs.CONCUR.2022.23. url: https://doi.org/10.4230/ LIPIcs.CONCUR.2022.23. Javier Esparza and Claus Schröter. “Unfolding Based Algorithms for the Reachability Problem”. In: Fundam. Informaticae 47.3-4 (2001), pp. 231–245. url: http://content.iospress.com/articles/ fundamenta-informaticae/fi47-3-4-05.


[Esp94]

[Esp96]

[Esp98]

[Far02] [FEJ89]

[FHK20]

[Fin91]

[Fle92]

[Fra86]


Javier Esparza. “Model Checking Using Net Unfoldings”. In: Sci. Comput. Program. 23.2-3 (1994), pp. 151–195. doi: 10 . 1016 / 0167 6423(94 ) 00019 - 0. url: https : / / doi . org / 10 . 1016 / 0167 6423(94)00019-0. Javier Esparza. “Decidability and Complexity of Petri Net Problems An Introduction”. In: Lectures on Petri Nets I: Basic Models, Advances in Petri Nets, the volumes are based on the Advanced Course on Petri Nets, held in Dagstuhl, September 1996. Ed. by Wolfgang Reisig and Grzegorz Rozenberg. Vol. 1491. Lecture Notes in Computer Science. Springer, 1996, pp. 374–428. doi: 10.1007/3-540-65306-6\_20. url: https://doi.org/10.1007/3-540-65306-6\_20. Javier Esparza. “Reachability in Live and Safe Free-Choice Petri Nets is NP-Complete”. In: Theor. Comput. Sci. 198.1-2 (1998), pp. 211–224. doi: 10.1016/S0304-3975(97)00235-1. url: https://doi.org/ 10.1016/S0304-3975(97)00235-1. Julius Farkas. “Theorie der einfachen Ungleichungen.” In: Journal für die Reine und Angewandte Mathematik 124 (1902), pp. 1–24. Davide de Frutos Escrig and Colette Johnen. Decidability of home space property. Tech. rep. LRI-503. Univ. de Paris-Sud, Centre d’Orsay: Laboratoire de Recherche en Informatique, July 1989. Alain Finkel, Serge Haddad, and Igor Khmelnitsky. “Minimal Coverability Tree Construction Made Complete and Efficient”. In: Foundations of Software Science and Computation Structures - 23rd International Conference, FOSSACS 2020, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2020, Dublin, Ireland, April 25-30, 2020, Proceedings. Ed. by Jean Goubault-Larrecq and Barbara König. Vol. 12077. Lecture Notes in Computer Science. Springer, 2020, pp. 237–256. doi: 10.1007/978- 3- 030- 452315\_13. url: https://doi.org/10.1007/978- 3- 030- 452315\_13. Alain Finkel. “The Minimal Coverability Graph for Petri Nets”. In: Advances in Petri Nets 1993, Papers from the 12th International Conference on Applications and Theory of Petri Nets, Gjern, Denmark, June 1991. 1991, pp. 210–243. doi: 10.1007/3-540-56689-9\_45. url: https://doi.org/10.1007/3-540-56689-9\_45. Hans Fleischhack. “P-Superfairness in Nets”. In: Application and Theory of Petri Nets 1992, 13th International Conference, Sheffield, UK, June 22-26, 1992, Proceedings. Ed. by Kurt Jensen. Vol. 616. Lecture Notes in Computer Science. Springer, 1992, pp. 193–204. doi: 10.1007/3- 540- 55676- 1\_11. url: https://doi.org/10. 1007/3-540-55676-1\_11. Nissim Francez. Fairness. Texts and Monographs in Computer Science. Springer, 1986. isbn: 978-3-540-96235-9. doi: 10 . 1007 / 978 - 1 4612-4886-6. url: https://doi.org/10.1007/978-1-46124886-6.


[Fra95]

[Fur76] [GB+82]

[GGS09]

[GGS13]

[GH19]

[GHM03]

[GL73] [GL79]

[Gla05]


Wojciech Fraczak. “Multi-Action Process Algebra”. In: Algorithms, Concurrency and Knowledge: 1995 Asian Computing Science Conference, ACSC ’95, Pathumthani, Thailand, December 11-13, 1995, Proceedings. Ed. by Kanchana Kanchanasut and Jean-Jacques Lévy. Vol. 1023. Lecture Notes in Computer Science. Springer, 1995, pp. 126–140. doi: 10.1007/3- 540- 60688- 2\_40. url: https: //doi.org/10.1007/3-540-60688-2\_40. Frederick C. Furtek. The Logic of Systems. Technical Report 170. MIT, Laboratory Computer Science, June 1976. G.W. Brams (nom collectif de G. Berthelot, G. Roucairol, Ch. André, G. Memmi, J. Sifakis, R. Valette, and G. Vidal-Naquet). Réseaux de Petri: Théorie et Pratique. Vol. Tome 1: Theorie et Analyse, Tome 2: Modelisation et Applications. Masson, Paris, Sept. 1982. Rob J. van Glabbeek, Ursula Goltz, and Jens-Wolfhard Schicke. “Symmetric and Asymmetric Asynchronous Interaction”. In: Electron. Notes Theor. Comput. Sci. 229.3 (2009), pp. 77–95. doi: 10.1016/j.entcs. 2009.06.040. url: https://doi.org/10.1016/j.entcs.2009. 06.040. Rob J. van Glabbeek, Ursula Goltz, and Jens-Wolfhard SchickeUffmann. “On Characterising Distributability”. In: Log. Methods Comput. Sci. 9.3 (2013). doi: 10.2168/LMCS-9(3:17)2013. url: https: //doi.org/10.2168/LMCS-9(3:17)2013. Rob van Glabbeek and Peter Höfner. “Progress, Justness, and Fairness”. In: ACM Comput. Surv. 52.4 (2019), 69:1–69:38. doi: 10 . 1145 / 3329125. url: https://doi.org/10.1145/3329125. Bruno Gaujal, Stefan Haar, and Jean Mairesse. “Blocking a transition in a free choice net and what it tells about its throughput”. In: J. Comput. Syst. Sci. 66.3 (2003), pp. 515–548. doi: 10.1016/S00220000(03)00039- 4. url: https://doi.org/10.1016/S00220000(03)00039-4. Hartmann J. Genrich and Kurt Lautenbach. “Synchronisationsgraphen”. In: Acta Informatica 2 (1973), pp. 143–161. Hartmann J. Genrich and Kurt Lautenbach. “The Analysis of Distributed Systems by Means of Predicate ? Transition-Nets”. In: Semantics of Concurrent Computation, Proceedings of the International Symposium, Evian, France, July 2-4, 1979. Ed. by Gilles Kahn. Vol. 70. Lecture Notes in Computer Science. Springer, 1979, pp. 123–147. doi: 10.1007/BFb0022467. url: https://doi.org/10.1007/ BFb0022467. Rob J. van Glabbeek. “The Individual and Collective Token Interpretations of Petri Nets”. In: CONCUR 2005 - Concurrency Theory, 16th International Conference, CONCUR 2005, San Francisco, CA, USA, August 23-26, 2005, Proceedings. Ed. by Martín Abadi and Luca de Alfaro. Vol. 3653. Lecture Notes in Computer Science. Springer,


[GM84]

[Göd31]

[Gor17]

[Gor73]

[Gra80]

[Gra97]

[GRB07]

[GS66]

[GT84]

[GW93]


2005, pp. 323–337. doi: 10 . 1007 / 11539452 \ _26. url: https : //doi.org/10.1007/11539452\_26. Ursula Goltz and Alan Mycroft. “On the Relationship of CCS and Petri Nets”. In: Automata, Languages and Programming, 11th Colloquium, Antwerp, Belgium, July 16-20, 1984, Proceedings. Ed. by Jan Paredaens. Vol. 172. Lecture Notes in Computer Science. Springer, 1984, pp. 196–208. doi: 10 . 1007 / 3 - 540 - 13345 - 3 \ _18. url: https://doi.org/10.1007/3-540-13345-3\_18. Kurt Gödel. “Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme”. In: Monatshefte Math. Phys. 38 (1031), pp. 173–198. Roberto Gorrieri. “CCS(25, 12) is Turing-complete”. In: Fundam. Informaticae 154.1-4 (2017), pp. 145–166. doi: 10.3233/FI- 20171557. url: https://doi.org/10.3233/FI-2017-1557. Paul A. Gordan. “Über die Auflösung linearer Gleichungen mit reellen Coefficienten”. In: Mathematische Annalen, Berlin 6 (1873), pp. 23– 28. Jan Grabowski. “The Decidability of Persistence for Vector Addition Systems”. In: Inf. Process. Lett. 11.1 (1980), pp. 20–23. doi: 10 . 1016 / 0020 - 0190(80 ) 90026 - 5. url: https : / / doi . org / 10 . 1016/0020-0190(80)90026-5. Burkhard Graves. “Computing Reachability Properties Hidden in Finite Net Unfoldings”. In: Foundations of Software Technology and Theoretical Computer Science, 17th Conference, Kharagpur, India, December 18-20, 1997, Proceedings. Ed. by S. Ramesh and G. Sivakumar. Vol. 1346. Lecture Notes in Computer Science. Springer, 1997, pp. 327–341. doi: 10.1007/BFb0058040. url: https://doi.org/ 10.1007/BFb0058040. Gilles Geeraerts, Jean-François Raskin, and Laurent Van Begin. “On the Efficient Computation of the Minimal Coverability Set for Petri Nets”. In: Automated Technology for Verification and Analysis, 5th International Symposium, ATVA 2007, Tokyo, Japan, October 22-25, 2007, Proceedings. 2007, pp. 98–113. doi: 10.1007/978- 3- 54075596- 8\_9. url: https://doi.org/10.1007/978- 3- 54075596-8\_9. Seymour Ginsburg and Edwin Henry Spanier. “Semigroups, Presburger formulas, and languages”. In: Pacific Journal of Mathematics 16.2 (1966), pp. 285–296. url: https : / / msp . org / pjm / 1966 / 16 2/p09.xhtml. Hartmann J. Genrich and P. S. Thiagarajan. “A Theory of Bipolar Synchronization Schemes”. In: Theor. Comput. Sci. 30 (1984), pp. 241– 318. doi: 10.1016/0304-3975(84)90137-3. url: https://doi. org/10.1016/0304-3975(84)90137-3. Patrice Godefroid and Pierre Wolper. “Using Partial Orders for the Efficient Verification of Deadlock Freedom and Safety Properties”. In:


Formal Methods Syst. Des. 2.2 (1993), pp. 149–164. doi: 10.1007/ BF01383879. url: https://doi.org/10.1007/BF01383879. [GY80] Abraham Ginzburg and Michael Yoeli. “Vector Addition Systems and Regular Languages”. In: J. Comput. Syst. Sci. 20.3 (1980), pp. 277– 284. doi: 10.1016/0022-0000(80)90009-4. url: https://doi. org/10.1016/0022-0000(80)90009-4. [Hac72] Michel H.Th. Hack. “Analysis of Production Schemata by Petri Nets”. MA thesis. Massachussetts Institute of Technology, MAC TR-94, 19721974. [Hac75a] Michel H.Th. Hack. Decidability Questions for Petri Nets. Outstanding Dissertations in the Computer Sciences. Garland Publishing, New York, 1975. isbn: 0-8240-4424-X. [Hac75b] Michel H.Th. Hack. “Petri Net Languages”. In: Computation Structures Group Memo 124 (1975). [HC70] Anatol W. Holt and Fred G. Commoner. “Events and Conditions: Introduction”. In: Concurrent Systems and Parallel Computation. MIT. Project MAC, Dec. 1970. [HD18] Thomas Hujsa and Raymond R. Devillers. “On Deadlockability, Liveness and Reversibility in Subclasses of Weighted Petri Nets”. In: Fundam. Informaticae 161.4 (2018), pp. 383–421. doi: 10 . 3233 / FI 2018-1708. url: https://doi.org/10.3233/FI-2018-1708. [HJM03] Thomas A. Henzinger, Ranjit Jhala, and Rupak Majumdar. “Counterexample-Guided Control”. In: Automata, Languages and Programming, 30th International Colloquium, ICALP 2003, Eindhoven, The Netherlands, June 30 - July 4, 2003. Proceedings. Ed. by Jos C. M. Baeten, Jan Karel Lenstra, Joachim Parrow, and Gerhard J. Woeginger. Vol. 2719. Lecture Notes in Computer Science. Springer, 2003, pp. 886–902. doi: 10.1007/3- 540- 45061- 0\_69. url: https: //doi.org/10.1007/3-540-45061-0\_69. [HKT95] P. W. Hoogers, H. C. M. Kleijn, and P. S. Thiagarajan. “A Trace Semantics for Petri Nets”. In: Inf. Comput. 117.1 (1995), pp. 98–114. doi: 10.1006/inco.1995.1032. url: https://doi.org/10.1006/ inco.1995.1032. [HM98] Jeng S. Huang and Tadao Murata. “A Constructive Method for Finding Legal Transition Sequences in Petri Nets”. In: J. Circuits Syst. Comput. 8.1 (1998), pp. 189–222. doi: 10.1142/S0218126698-000080. url: https://doi.org/10.1142/S0218126698000080. [HMW10] Peter Habermehl, Roland Meyer, and Harro Wimmel. “The Downward Closure of Petri Net Languages”. In: Automata, Languages and Programming, 37th International Colloquium, ICALP 2010, Bordeaux, France, July 6-10, 2010, Proceedings, Part II. Ed. by Samson Abramsky, Cyril Gavoille, Claude Kirchner, Friedhelm Meyer auf der Heide, and Paul G. Spirakis. Vol. 6199. Lecture Notes in Computer Science. Springer, 2010, pp. 466–477. doi: 10.1007/978- 3- 642- 14162-


[Hoa74]

[Hoa78]

[Hoa85] [Hol68]

[HP79a]

[HP79b]

[HR89]

[HU79]

[Jan17]

[Jan+22]


1\_39. url: https://doi.org/10.1007/978- 3- 642- 141621\_39. C. A. R. Hoare. “Monitors: An Operating System Structuring Concept”. In: Commun. ACM 17.10 (1974), pp. 549–557. doi: 10.1145/355620. 361161. url: https://doi.org/10.1145/355620.361161. C. A. R. Hoare. “Communicating Sequential Processes”. In: Commun. ACM 21.8 (1978), pp. 666–677. doi: 10.1145/359576.359585. url: https://doi.org/10.1145/359576.359585. C. A. R. Hoare. Communicating Sequential Processes. Prentice-Hall, 1985. isbn: 0-13-153271-5. Anatol W. Holt. Final Report of the Information System Theory Project. Technical Report RADC-TR-68-305. Griffiss Air Force Base: Rome Air Development Center, 1968. Matthew Hennessy and Gordon D. Plotkin. “Full Abstraction for a Simple Parallel Programming Language”. In: Mathematical Foundations of Computer Science 1979, Proceedings, 8th Symposium, Olomouc, Czechoslovakia, September 3-7, 1979. Ed. by Jirí Becvár. Vol. 74. Lecture Notes in Computer Science. Springer, 1979, pp. 108–120. doi: 10.1007/3-540-09526-8\_8. url: https://doi.org/10.1007/ 3-540-09526-8\_8. John E. Hopcroft and Jean-Jacques Pansiot. “On the Reachability Problem for 5-Dimensional Vector Addition Systems”. In: Theor. Comput. Sci. 8 (1979), pp. 135–159. doi: 10.1016/0304-3975(79)90041-0. url: https://doi.org/10.1016/0304-3975(79)90041-0. Rodney R. Howell and Louis E. Rosier. “Problems Concerning Fairness and Temporal Logic for Conflict-Free Petri Nets”. In: Theor. Comput. Sci. 64.3 (1989), pp. 305–329. doi: 10.1016/0304-3975(89)900534. url: https://doi.org/10.1016/0304-3975(89)90053-4. John E. Hopcroft and Jeffrey D. Ullman. Introduction to Automata Theory, Languages and Computation. Addison-Wesley, 1979. isbn: 0-201-02988-X. Petr Jancar. “Deciding Structural Liveness of Petri Nets”. In: SOFSEM 2017: Theory and Practice of Computer Science - 43rd International Conference on Current Trends in Theory and Practice of Computer Science, Limerick, Ireland, January 16-20, 2017, Proceedings. Ed. by Bernhard Steffen, Christel Baier, Mark van den Brand, Johann Eder, Mike Hinchey, and Tiziana Margaria. Vol. 10139. Lecture Notes in Computer Science. Springer, 2017, pp. 91–102. doi: 10.1007/9783-319-51963-0\_8. url: https://doi.org/10.1007/978-3319-51963-0\_8. Ryszard Janicki, Jetty Kleijn, Maciej Koutny, and Lukasz Mikulski. Paradigms of Concurrency - Observations, Behaviours, and Systems - a Petri Net View. Vol. 1020. Studies in Computational Intelligence. Springer, 2022. isbn: 978-3-662-64819-3. doi: 10 . 1007 / 978 - 3 -


[Jan86]

[Jen81]

[Jen92]

[JK95]

[JKW07]

[JL22]

[JL92]

[JLL77]

[JP19]


662- 64821- 6. url: https://doi.org/10.1007/978- 3- 66264821-6. Matthias Jantzen. “Language Theory of Petri Nets”. In: Petri Nets: Central Models and Their Properties, Advances in Petri Nets 1986, Part I, Proceedings of an Advanced Course, Bad Honnef, Germany, 8-19 September 1986. Ed. by Wilfried Brauer, Wolfgang Reisig, and Grzegorz Rozenberg. Vol. 254. Lecture Notes in Computer Science. Springer, 1986, pp. 397–412. doi: 10 . 1007 / BFb0046847. url: https://doi.org/10.1007/BFb0046847. Kurt Jensen. “Coloured Petri Nets and the Invariant-Method”. In: Theor. Comput. Sci. 14 (1981), pp. 317–336. doi: 10.1016/0304-3975(81) 90049 - 9. url: https : / / doi . org / 10 . 1016 / 0304 - 3975(81 ) 90049-9. Kurt Jensen. Coloured Petri Nets - Basic Concepts, Analysis Methods and Practical Use - Volume 1. EATCS Monographs on Theoretical Computer Science. Springer, 1992. isbn: 978-3-662-06291-3. doi: 10. 1007/978-3-662-06289-0. url: https://doi.org/10.1007/ 978-3-662-06289-0. Ryszard Janicki and Maciej Koutny. “Semantics of Inhibitor Nets”. In: Inf. Comput. 123.1 (1995), pp. 1–16. doi: 10.1006/inco.1995.1153. url: https://doi.org/10.1006/inco.1995.1153. Kurt Jensen, Lars Michael Kristensen, and Lisa Wells. “Coloured Petri Nets and CPN Tools for modelling and validation of concurrent systems”. In: Int. J. Softw. Tools Technol. Transf. 9.3-4 (2007), pp. 213– 254. doi: 10.1007/s10009-007-0038-x. url: https://doi.org/ 10.1007/s10009-007-0038-x. Petr Jancar and Jérôme Leroux. “Semilinear Home-space is Decidable for Petri Nets”. In: CoRR abs/2207.02697 (2022). doi: 10 . 48550 / arXiv.2207.02697. arXiv: 2207.02697. url: https://doi.org/ 10.48550/arXiv.2207.02697. Ryszard Janicki and Peter E. Lauer. Specification and Analysis of Concurrent Systems - The COSY Approach. EATCS Monographs on Theoretical Computer Science. Springer, 1992. isbn: 978-3-642-77339-6. doi: 10.1007/978-3-642-77337-2. url: https://doi.org/10. 1007/978-3-642-77337-2. Neil D. Jones, Lawrence H. Landweber, and Y. Edmund Lien. “Complexity of Some Problems in Petri Nets”. In: Theor. Comput. Sci. 4.3 (1977), pp. 277–299. doi: 10.1016/0304-3975(77)90014-7. url: https://doi.org/10.1016/0304-3975(77)90014-7. Petr Jancar and David Purser. “Structural liveness of Petri nets is ExpSpace-hard and decidable”. In: Acta Informatica 56.6 (2019), pp. 537–552. doi: 10.1007/s00236- 019- 00338- 6. url: https: //doi.org/10.1007/s00236-019-00338-6.


[Kar84]

[KEB94]

[Kel74]

[Kel76]

[Kho+05]

[KKV03]

[KKY22]

[KM69]


Narendra Karmarkar. “A new polynomial-time algorithm for linear programming”. In: Comb. 4.4 (1984), pp. 373–396. doi: 10.1007/ BF02579150. url: https://doi.org/10.1007/BF02579150. Maciej Koutny, Javier Esparza, and Eike Best. “Operational Semantics for the Petri Box Calculus”. In: CONCUR ’94, Concurrency Theory, 5th International Conference, Uppsala, Sweden, August 22-25, 1994, Proceedings. Ed. by Bengt Jonsson and Joachim Parrow. Vol. 836. Lecture Notes in Computer Science. Springer, 1994, pp. 210–225. doi: 10.1007/978-3-540-48654-1\_19. url: https://doi.org/10. 1007/978-3-540-48654-1\_19. Robert M. Keller. “A Fundamental Theorem of Asynchronous Parallel Computation”. In: Parallel Processing, Proceedings of the Sagamore Computer Conference, Sagamore, Adirondack Mountains, NY, USA, August 20-23, 1974. Ed. by Tse-Yun Feng. Vol. 24. Lecture Notes in Computer Science. Springer, 1974, pp. 102–112. doi: 10.1007/3540-07135-0\_113. url: https://doi.org/10.1007/3-54007135-0\_113. Robert M. Keller. “Formal Verification of Parallel Programs”. In: Commun. ACM 19.7 (1976), pp. 371–384. doi: 10.1145/360248.360251. url: https://doi.org/10.1145/360248.360251. Victor Khomenko, Alex Kondratyev, Maciej Koutny, and Walter Vogler. “Merged Processes - A New Condensed Representation of Petri Net Behaviour”. In: CONCUR 2005 - Concurrency Theory, 16th International Conference, CONCUR 2005, San Francisco, CA, USA, August 23-26, 2005, Proceedings. Ed. by Martín Abadi and Luca de Alfaro. Vol. 3653. Lecture Notes in Computer Science. Springer, 2005, pp. 338–352. doi: 10.1007/11539452\_27. url: https://doi. org/10.1007/11539452\_27. Victor Khomenko, Maciej Koutny, and Walter Vogler. “Canonical prefixes of Petri net unfoldings”. In: Acta Informatica 40.2 (2003), pp. 95– 118. doi: 10.1007/s00236-003-0122-y. url: https://doi.org/ 10.1007/s00236-003-0122-y. Victor Khomenko, Maciej Koutny, and Alex Yakovlev. “Slimming down Petri Boxes: Compact Petri Net Models of Control Flows”. In: 33rd International Conference on Concurrency Theory, CONCUR 2022, September 12-16, 2022, Warsaw, Poland. Ed. by Bartek Klin, Slawomir Lasota, and Anca Muscholl. Vol. 243. LIPIcs. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2022, 8:1–8:16. doi: 10 . 4230 / LIPIcs . CONCUR . 2022 . 8. url: https : / / doi . org / 10 . 4230 / LIPIcs.CONCUR.2022.8. Richard M. Karp and Raymond E. Miller. “Parallel Program Schemata”. In: J. Comput. Syst. Sci. 3.2 (1969), pp. 147–195. doi: 10 . 1016/S0022- 0000(69)80011- 5. url: https://doi.org/10. 1016/S0022-0000(69)80011-5.

522

[Kos82]

[KP95]

[KW97]

[Lam00]

[Lam77]

[Lam92]

[Lamst]

[Lau73]

[LC75]

[Ler12]

LITERATURE

S. Rao Kosaraju. “Decidability of Reachability in Vector Addition Systems (Preliminary Version)”. In: Proceedings of the 14th Annual ACM Symposium on Theory of Computing, May 5-7, 1982, San Francisco, California, USA. Ed. by Harry R. Lewis, Barbara B. Simons, Walter A. Burkhard, and Lawrence H. Landweber. ACM, 1982, pp. 267–281. doi: 10.1145/800070.802201. url: https://doi.org/10.1145/ 800070.802201. Hanna Klaudel and Elisabeth Pelz. “Communication as Unification in the Petri Box Calculus”. In: Fundamentals of Computation Theory, 10th International Symposium, FCT ’95, Dresden, Germany, August 22-25, 1995, Proceedings. Ed. by Horst Reichel. Vol. 965. Lecture Notes in Computer Science. Springer, 1995, pp. 303–312. doi: 10.1007/3540- 60249- 6\_62. url: https://doi.org/10.1007/3- 54060249-6\_62. Ekkart Kindler and Rolf Walter. “Mutex Needs Fairness”. In: Inf. Process. Lett. 62.1 (1997), pp. 31–39. doi: 10.1016/S0020-0190(97) 00033- 1. url: https://doi.org/10.1016/S0020- 0190(97) 00033-1. Leslie Lamport. “Fairness and hyperfairness”. In: Distributed Comput. 13.4 (2000), pp. 239–245. doi: 10.1007/PL00008921. url: https: //doi.org/10.1007/PL00008921. Leslie Lamport. “Proving the Correctness of Multiprocess Programs”. In: IEEE Trans. Software Eng. 3.2 (1977), pp. 125–143. doi: 10.1109/ TSE.1977.229904. url: https://doi.org/10.1109/TSE.1977. 229904. Jean-Luc Lambert. “A Structure to Decide Reachability in Petri Nets”. In: Theor. Comput. Sci. 99.1 (1992), pp. 79–104. doi: 10.1016/03043975(92 ) 90173 - D. url: https : / / doi . org / 10 . 1016 / 0304 3975(92)90173-D. Leslie Lamport. A note on liveness terminology. http://research. microsoft.com/en-us/um/people/lamport/pubs/pubs.html, entry 23. Last accessed September 2022. Kurt Lautenbach. Exakte Bedingungen der Lebendigkeit für eine Klasse von Petri-Netzen. GMD-Report 82. Gesellschaft für Mathematik und Datenverarbeitung, 1973. Peter E. Lauer and Roy H. Campbell. “A Description of Path Expressions by Petri Nets”. In: Conference Record of the Second ACM Symposium on Principles of Programming Languages, Palo Alto, California, USA, January 1975. Ed. by Robert M. Graham, Michael A. Harrison, and John C. Reynolds. ACM Press, 1975, pp. 95–105. doi: 10.1145/512976.512987. url: https://doi.org/10.1145/ 512976.512987. Jérôme Leroux. “Vector Addition Systems Reachability Problem (A Simpler Solution)”. In: Turing-100 - The Alan Turing Centenary, Manchester, UK, June 22-25, 2012. Ed. by Andrei Voronkov. Vol. 10.

LITERATURE

[Lip76] [Liu22]

[LPS81]

[LR78]

[LR94]

[LS14]

[LS15]

[May80]

[May81a]

523

EPiC Series in Computing. EasyChair, 2012, pp. 214–228. doi: 10. 29007/bnx2. url: https://doi.org/10.29007/bnx2. R.J. Lipton. The Reachability Problem Requires Exponential Space. Research Report 62. Yale University, 1976. Guanjun Liu. Petri Nets - Theoretical Models and Analysis Methods for Concurrent Systems. Springer, 2022. isbn: 978-981-19-6308-7. doi: 10.1007/978- 981- 19- 6309- 4. url: https://doi.org/10. 1007/978-981-19-6309-4. Daniel Lehmann, Amir Pnueli, and Jonathan Stavi. “Impartiality, Justice and Fairness: The Ethics of Concurrent Termination”. In: Automata, Languages and Programming, 8th Colloquium, Acre (Akko), Israel, July 13-17, 1981, Proceedings. Ed. by Shimon Even and Oded Kariv. Vol. 115. Lecture Notes in Computer Science. Springer, 1981, pp. 264–277. doi: 10.1007/3- 540- 10843- 2\_22. url: https: //doi.org/10.1007/3-540-10843-2\_22. Lawrence H. Landweber and Edward L. Robertson. “Properties of Conflict-Free and Persistent Petri Nets”. In: J. ACM 25.3 (1978), pp. 352–364. doi: 10.1145/322077.322079. url: https://doi. org/10.1145/322077.322079. Kurt Lautenbach and Hanno Ridder. “Liveness in Bounded Petri Nets Which Are Covered by T-Invariants”. In: Application and Theory of Petri Nets 1994, 15th International Conference, Zaragoza, Spain, June 20-24, 1994, Proceedings. Ed. by Robert Valette. Vol. 815. Lecture Notes in Computer Science. Springer, 1994, pp. 358–375. doi: 10. 1007/3-540-58152-9\_20. url: https://doi.org/10.1007/3540-58152-9\_20. Jérôme Leroux and Philippe Schnoebelen. “On Functions Weakly Computable by Petri Nets and Vector Addition Systems”. In: Reachability Problems - 8th International Workshop, RP 2014, Oxford, UK, September 22-24, 2014. Proceedings. Ed. by Joël Ouaknine, Igor Potapov, and James Worrell. Vol. 8762. Lecture Notes in Computer Science. Springer, 2014, pp. 190–202. doi: 10.1007/978- 3- 319- 114392\_15. url: https://doi.org/10.1007/978- 3- 319- 114392\_15. Jérôme Leroux and Sylvain Schmitz. “Demystifying Reachability in Vector Addition Systems”. In: 30th Annual ACM/IEEE Symposium on Logic in Computer Science, LICS 2015, Kyoto, Japan, July 6-10, 2015. IEEE Computer Society, 2015, pp. 56–67. doi: 10.1109/LICS.2015. 16. url: https://doi.org/10.1109/LICS.2015.16. Ernst W. Mayr. “Ein Algorithmus für das allgemeine Erreichbarkeitsproblem bei Petrinetzen und damit zusammenhängende Probleme”. PhD thesis. Technische Universität München, 1980. Ernst W. Mayr. An effective representation of the reachability set of persistent Petri nets. Technical Report MIT/LCS/TM-188. Massachussetts Institute of Technology, Jan. 1981.

524

[May81b]

[May84]

[Maz84]

[MCB84]

[McM92]

[ME96]

[Mel98]

[Mil80]

[Mil89] [Mil92]

[Min67]

LITERATURE

Ernst W. Mayr. “Persistence of Vector Replacement Systems is Decidable”. In: Acta Informatica 15 (1981), pp. 309–318. doi: 10.1007/ BF00289268. url: https://doi.org/10.1007/BF00289268. Ernst W. Mayr. “An Algorithm for the General Petri Net Reachability Problem”. In: SIAM J. Comput. 13.3 (1984), pp. 441–460. doi: 10. 1137/0213029. url: https://doi.org/10.1137/0213029. Antoni W. Mazurkiewicz. “Traces, Histories, Graphs: Instances of a Process Monoid”. In: Mathematical Foundations of Computer Science 1984, Praha, Czechoslovakia, September 3-7, 1984, Proceedings. Ed. by Michal Chytil and Václav Koubek. Vol. 176. Lecture Notes in Computer Science. Springer, 1984, pp. 115–133. doi: 10.1007/ BFb0030293. url: https://doi.org/10.1007/BFb0030293. Marco Ajmone Marsan, Gianni Conte, and Gianfranco Balbo. “A Class of Generalized Stochastic Petri Nets for the Performance Evaluation of Multiprocessor Systems”. In: ACM Trans. Comput. Syst. 2.2 (1984), pp. 93–122. doi: 10.1145/190.191. url: https://doi.org/10. 1145/190.191. Kenneth L. McMillan. “Using Unfoldings to Avoid the State Explosion Problem in the Verification of Asynchronous Circuits”. In: Computer Aided Verification, Fourth International Workshop, CAV ’92, Montreal, Canada, June 29 - July 1, 1992, Proceedings. Ed. by Gregor von Bochmann and David K. Probst. Vol. 663. Lecture Notes in Computer Science. Springer, 1992, pp. 164–177. doi: 10.1007/3-540-564969\_14. url: https://doi.org/10.1007/3-540-56496-9\_14. Stephan Melzer and Javier Esparza. “Checking System Properties via Integer Programming”. In: Programming Languages and Systems - ESOP’96, 6th European Symposium on Programming, Linköping, Sweden, April 22-24, 1996, Proceedings. Ed. by Hanne Riis Nielson. Vol. 1058. Lecture Notes in Computer Science. Sprin-ger, 1996, pp. 250–264. doi: 10.1007/3- 540- 61055- 3\_41. url: https: //doi.org/10.1007/3-540-61055-3\_41. Stephan Melzer. Verifikation verteilter Systeme mittels linearer- und Constraint-Programmierung. Utz, 1998. isbn: 978-3-89675-394-6. url: https://d-nb.info/954322681. Robin Milner. A Calculus of Communicating Systems. Vol. 92. Lecture Notes in Computer Science. Springer, 1980. isbn: 3-540-10235-3. doi: 10.1007/3-540-10235-3. url: https://doi.org/10.1007/3540-10235-3. Robin Milner. Communication and concurrency. PHI Series in computer science. Prentice Hall, 1989. isbn: 978-0-13-115007-2. Robin Milner. “Functions as Processes”. In: Math. Struct. Comput. Sci. 2.2 (1992), pp. 119–141. doi: 10.1017/S0960129500001407. url: https://doi.org/10.1017/S0960129500001407. M. Minsky. Computation – Finite and Infinite Machines. Englewood Cliffs, N. J.: Prentice-Hall, Inc., 1967.

LITERATURE

[MKG97]

[MKS09]

[MPW92]

[MR05]

[MR79]

[Mül80]

[Mur89]

[NPW79]

525

Jeff Magee, Jeff Kramer, and Dimitra Giannakopoulou. “Analysing the behaviour of distributed software architectures: a case study”. In: 6th IEEE Workshop on Future Trends of Distributed Computer Systems (FTDCS ’97), 29-31 October 1997, Tunis, Tunisia, Proceedings. IEEE Computer Society, 1997, pp. 240–247. doi: 10.1109/FTDCS.1997. 644733. url: https://doi.org/10.1109/FTDCS.1997.644733. Roland Meyer, Victor Khomenko, and Tim Strazny. “A Practical Approach to Verification of Mobile Systems Using Net Unfoldings”. In: Fundam. Informaticae 94.3-4 (2009), pp. 439–471. doi: 10.3233/FI2009-138. url: https://doi.org/10.3233/FI-2009-138. Robin Milner, Joachim Parrow, and David Walker. “A Calculus of Mobile Processes”. In: Inf. Comput. 100.1 (1992), pp. 1–77. doi: 10. 1016 / 0890 - 5401(92 ) 90008 - 4. url: https : / / doi . org / 10 . 1016/0890-5401(92)90008-4. Mohammad Reza Mousavi and Michel A. Reniers. “Congruence for Structural Congruences”. In: Foundations of Software Science and Computational Structures, 8th International Conference, FOSSACS 2005, Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2005, Edinburgh, UK, April 4-8, 2005, Proceedings. Ed. by Vladimiro Sassone. Vol. 3441. Lecture Notes in Computer Science. Springer, 2005, pp. 47–62. doi: 10.1007/978-3540-31982-5\_3. url: https://doi.org/10.1007/978-3-54031982-5\_3. Gérard Memmi and Gérard Roucairol. “Linear Algebra in Net Theory”. In: Net Theory and Applications, Proceedings of the Advanced Course on General Net Theory of Processes and Systems, Hamburg, Germany, October 8-19, 1979. Ed. by Wilfried Brauer. Vol. 84. Lecture Notes in Computer Science. Springer, 1979, pp. 213–223. doi: 10.1007/3540- 10001- 6\_24. url: https://doi.org/10.1007/3- 54010001-6\_24. Horst Müller. “Decidability of Reachability in Persistent Vector Replacement Systems”. In: Mathematical Foundations of Computer Science 1980 (MFCS’80), Proceedings of the 9th Symposium, Rydzyna, Poland, September 1-5, 1980. Ed. by Piotr Dembinski. Vol. 88. Lecture Notes in Computer Science. Springer, 1980, 426–438. doi: 10.1007/ BFb0022522. url: https://doi.org/10.1007/BFb0022522. Tadao Murata. “Petri nets: Properties, analysis and applications”. In: Proc. IEEE 77.4 (1989), pp. 541–580. doi: 10.1109/5.24143. url: https://doi.org/10.1109/5.24143. Mogens Nielsen, Gordon D. Plotkin, and Glynn Winskel. “Petri Nets, Event Structures and Domains”. In: Semantics of Concurrent Computation, Proceedings of the International Symposium, Evian, France, July 2-4, 1979. Ed. by Gilles Kahn. Vol. 70. Lecture Notes in Computer Science. Springer, 1979, pp. 266–284. doi: 10.1007/BFB0022474. url: https://doi.org/10.1007/BFb0022474.

526

[NRT92]

[NSW93]

[OG76]

[Old91]

[Par61]

[Par66]

[Pat72] [Pel87]

[Pel93]

[Pel98]

LITERATURE

Mogens Nielsen, Grzegorz Rozenberg, and P. S. Thiagarajan. “Elementary Transition Systems”. In: Theor. Comput. Sci. 96.1 (1992), pp. 3–33. doi: 10.1016/0304- 3975(92)90180- N. url: https: //doi.org/10.1016/0304-3975(92)90180-N. Mogens Nielsen, Vladimiro Sassone, and Glynn Winskel. “Relationships Between Models of Concurrency”. In: A Decade of Concurrency, Reflections and Perspectives, REX School/Symposium, Noordwijkerhout, The Netherlands, June 1-4, 1993, Proceedings. Ed. by J. W. de Bakker, Willem P. de Roever, and Grzegorz Rozenberg. Vol. 803. Lecture Notes in Computer Science. Springer, 1993, pp. 425–476. doi: 10.1007/3- 540- 58043- 3\_25. url: https://doi.org/10. 1007/3-540-58043-3\_25. Susan S. Owicki and David Gries. “Verifying Properties of Parallel Programs: An Axiomatic Approach”. In: Commun. ACM 19.5 (1976), pp. 279–285. doi: 10.1145/360051.360224. url: https://doi. org/10.1145/360051.360224. Ernst-Rüdiger Olderog. Nets, Terms and Formulas. Vol. Cambridge Tracts in Theoretical Computer Science. Cambridge University Press, 1991. Rohit Jivanlal Parikh. “Language-generating devices”. In: vol. Quarterly Progress Report. 60. Laboratory of Electronics, Massachussetts Institute of Technology, 1961, pp. 199–212. Rohit Jivanlal Parikh. “On Context-Free Languages”. In: J. ACM 13.4 (1966), pp. 570–581. doi: 10.1145/321356.321364. url: https: //doi.org/10.1145/321356.321364. Suhas Patil. Circuit Implementation of Petri Nets. Computation Structures Group: Memo 73, project MAC MIT MIT, 1972. Elisabeth Pelz. “Closure Properties of Deterministic Petri Nets”. In: STACS 87, 4th Annual Symposium on Theoretical Aspects of Computer Science, Passau, Germany, February 19-21, 1987, Proceedings. Ed. by Franz-Josef Brandenburg, Guy Vidal-Naquet, and Martin Wirsing. Vol. 247. Lecture Notes in Computer Science. Springer, 1987, pp. 371– 382. doi: 10 . 1007 / BFb0039620. url: https : / / doi . org / 10 . 1007/BFb0039620. Doron A. Peled. “All from One, One for All: on Model Checking Using Representatives”. In: Computer Aided Verification, 5th International Conference, CAV ’93, Elounda, Greece, June 28 - July 1, 1993, Proceedings. Ed. by Costas Courcoubetis. Vol. 697. Lecture Notes in Computer Science. Springer, 1993, pp. 409–423. doi: 10.1007/3540- 56922- 7\_34. url: https://doi.org/10.1007/3- 54056922-7\_34. Doron A. Peled. “Ten Years of Partial Order Reduction”. In: Computer Aided Verification, 10th International Conference, CAV ’98, Vancouver, BC, Canada, June 28 - July 2, 1998, Proceedings. Ed. by Alan J. Hu and Moshe Y. Vardi. Vol. 1427. Lecture Notes in Computer Science.

LITERATURE

[Pet62]

[Pet67]

[Pet77]

[Pet81]

[Plo04] [Plo79]

[Pra86]

[Pre29]

[PV16]

[PW03] [Ray86] [Rei08]

527

Springer, 1998, pp. 17–28. doi: 10.1007/BFb0028727. url: https: //doi.org/10.1007/BFb0028727. C. A. Petri. “Fundamentals of a Theory of Asynchronous Information Flow”. In: Information Processing, Proceedings of the 2nd IFIP Congress 1962, Munich, Germany, August 27 - September 1, 1962. North-Holland, 1962, pp. 386–390. C.A. Petri. “Grundsätzliches zur Beschreibung diskreter Prozesse”. In: 3. Colloquium über Automatentheorie, Birkhäuser Verlag, Basel (1967). James L. Peterson. “Petri Nets”. In: ACM Comput. Surv. 9.3 (1977), pp. 223–252. doi: 10.1145/356698.356702. url: https://doi. org/10.1145/356698.356702. Gary L. Peterson. “Myths About the Mutual Exclusion Problem”. In: Inf. Process. Lett. 12.3 (1981), pp. 115–116. doi: 10.1016/00200190(81 ) 90106 - X. url: https : / / doi . org / 10 . 1016 / 0020 0190(81)90106-X. Gordon D. Plotkin. “A structural approach to operational semantics”. In: J. Log. Algebraic Methods Program. 60-61 (2004), pp. 17–139. Gordon D. Plotkin. “Dijkstras Predicate Transformers & Smyth’s Power Domaine”. In: Abstract Software Specifications, 1979 Copenhagen Winter School, January 22 - February 2, 1979, Proceedings. Ed. by Dines Bjørner. Vol. 86. Lecture Notes in Computer Science. Springer, 1979, pp. 527–553. doi: 10 . 1007 / 3 - 540 - 10007 - 5 \ _48. url: https://doi.org/10.1007/3-540-10007-5\_48. Vaughan R. Pratt. “Modeling concurrency with partial orders”. In: Int. J. Parallel Program. 15.1 (1986), pp. 33–71. doi: 10 . 1007 / BF01379149. url: https://doi.org/10.1007/BF01379149. Mojżesz Presburger. “Über der Vollständigkeit eines gewissen Systems der Arithmetik ganzer Zahlen, in welchen die Addition als einzige Operation hervortritt”. In: Comptes Rendus Premier Congrès des Mathématicienes des Pays Slaves, Varsovie 1929 / Sprawozdanie z I Kongresu matematyków krajów słowiańskich. Ed. by F. Leja. Vol. 395. 1929, pp. 92–101. Artturi Piipponen and Antti Valmari. “Constructing Minimal Coverability Sets”. In: Fundam. Informaticae 143.3-4 (2016), pp. 393–414. doi: 10.3233/FI-2016-1319. url: https://doi.org/10.3233/ FI-2016-1319. Lutz Priese and Harro Wimmel. Petri-Netze. ISBN 3-540-44289-8. Springer-Verlag, 2003. Michel Raynal. Algorithms for Mutual Exclusion. ISBN 13: 9780946536047. North Oxford Academic, 1986. Klaus Reinhardt. “Reachability in Petri Nets with Inhibitor Arcs”. In: Proceedings of the Second Workshop on Reachability Problems in Computational Models, RP 2008, Liverpool, UK, September 15-17, 2008. Ed. by Vesa Halava and Igor Potapov. Vol. 223. Electronic Notes

528

[Rei13]

[Rei85]

[Rei96]

[Ros84]

[Rot02]

[RS19]

[RTS98]

[Sau03]

LITERATURE

in Theoretical Computer Science. Elsevier, 2008, pp. 239–264. doi: 10.1016/j.entcs.2008.12.042. url: https://doi.org/10. 1016/j.entcs.2008.12.042. Wolfgang Reisig. Understanding Petri Nets - Modeling Techniques, Analysis Methods, Case Studies. Springer, 2013. isbn: 978-3-64233277-7. doi: 10 . 1007 / 978 - 3 - 642 - 33278 - 4. url: https : //doi.org/10.1007/978-3-642-33278-4. Wolfgang Reisig. Petri Nets: An Introduction. Vol. 4. EATCS Monographs on Theoretical Computer Science. Springer, 1985. isbn: 3-54013723-8. doi: 10.1007/978-3-642-69968-9. url: https://doi. org/10.1007/978-3-642-69968-9. Wolfgang Reisig. “Interleaved progress, concurrent progress, and local progress”. In: Partial Order Methods in Verification, Proceedings of a DIMACS Workshop, Princeton, New Jersey, USA, July 24-26, 1996. Ed. by Doron A. Peled, Vaughan R. Pratt, and Gerard J. Holzmann. Vol. 29. DIMACS Series in Discrete Mathematics and Theoretical Computer Science. DIMACS/AMS, 1996, pp. 99–115. doi: 10.1090/dimacs/ 029/06. url: https://doi.org/10.1090/dimacs/029/06. A. W. Roscoe. “Denotational Semantics for occam”. In: Seminar on Concurrency, Carnegie-Mellon University, Pittsburg, PA, USA, July 9-11, 1984. Ed. by Stephen D. Brookes, A. W. Roscoe, and Glynn Winskel. Vol. 197. Lecture Notes in Computer Science. Springer, 1984, pp. 306–329. doi: 10.1007/3- 540- 15670- 4\_15. url: https: //doi.org/10.1007/3-540-15670-4\_15. Jan Rothe. “A Syntactical Approach to Weak (Bi-)Simulation for Coalgebras”. In: Coalgebraic Methods in Computer Science, CMCS 2002, Satellite Event of ETAPS 2002, Grenoble, France, April 6-7, 2002. Ed. by Lawrence S. Moss. Vol. 65. Electronic Notes in Theoretical Computer Science 1. Elsevier, 2002, pp. 270–285. doi: 10.1016/S15710661(04)80368- 2. url: https://doi.org/10.1016/S15710661(04)80368-2. Pierre-Alain Reynier and Frédéric Servais. “On the Computation of the Minimal Coverability Set of Petri Nets”. In: Reachability Problems 13th International Conference, RP 2019, Brussels, Belgium, September 11-13, 2019, Proceedings. 2019, pp. 164–177. doi: 10.1007/978-3030- 30806- 3\_13. url: https://doi.org/10.1007/978- 3030-30806-3\_13. Laura Recalde, Enrique Teruel, and Manuel Silva Suárez. “On Linear Algebraic Techniques for Liveness Analysis of P/T Systems”. In: J. Circuits Syst. Comput. 8.1 (1998), pp. 223–265. doi: 10 . 1142/S0218126698000092. url: https://doi.org/10.1142/ S0218126698000092. Nathalie Sauer. “Marking Optimization of Weighted Marked Graphs”. In: Discret. Event Dyn. Syst. 13.3 (2003), pp. 245–262. doi: 10 .

LITERATURE

[Sch16]

[Sch18a]

[Sch18b]

[Sch95]

[Sch96]

[Sch99]

[Sco00]

[Sco76]

[SE05]

529

1023/A:1024055724914. url: https://doi.org/10.1023/A: 1024055724914. Uli Schlachter. “Petri Net Synthesis for Restricted Classes of Nets”. In: Application and Theory of Petri Nets and Concurrency - 37th International Conference, PETRI NETS 2016, Toruń, Poland, June 19-24, 2016. Proceedings. Ed. by Fabrice Kordon and Daniel Moldt. Vol. 9698. Lecture Notes in Computer Science. Springer, 2016, pp. 79–97. doi: 10.1007/978-3-319-39086-4\_6. url: https://doi.org/10. 1007/978-3-319-39086-4\_6. Uli Schlachter. “Over-Approximative Petri Net Synthesis for Restricted Subclasses of Nets”. In: Language and Automata Theory and Applications - 12th International Conference, LATA 2018, Ramat Gan, Israel, April 9-11, 2018, Proceedings. Ed. by Shmuel Tomi Klein, Carlos Martín-Vide, and Dana Shapira. Vol. 10792. Lecture Notes in Computer Science. Springer, 2018, pp. 296–307. doi: 10.1007/978- 3319- 77313- 1\_23. url: https://doi.org/10.1007/978- 3319-77313-1\_23. Uli Schlachter. “Petri net synthesis and modal specifications”. PhD thesis. University of Oldenburg, Germany, 2018. url: http://oops. uni-oldenburg.de/3755. Karsten Schmidt. “On the Computation of Place Invariants for Algebraic Petri Nets”. In: Proceedings of the International Workshop on Structures in Concurrency Theory, STRICT 1995, Berlin, Germany, May 11-13, 1995. Ed. by Jörg Desel. Workshops in Computing. Springer, 1995, pp. 310–325. doi: 10.1007/978- 1- 4471- 30789\_21. url: https://doi.org/10.1007/978- 1- 4471- 30789\_21. Karsten Schmidt. “Symbolische Analysemethoden für algebraische Petrinetze”. PhD thesis. Humboldt University of Berlin, Unter den Linden, Germany, 1996. isbn: 3-929470-54-3. url: https : / / d nb.info/947595880. Alexander Schrijver. Theory of linear and integer programming. WileyInterscience series in discrete mathematics and optimization. Wiley, 1999. isbn: 978-0-471-98232-6. Dana S. Scott. “Some Reflections on Strachey and His Work”. In: High. Order Symb. Comput. 13.1/2 (2000), pp. 103–114. doi: 10 . 1023/A:1010018211714. url: https://doi.org/10.1023/A: 1010018211714. Dana S. Scott. “Data Types as Lattices”. In: SIAM J. Comput. 5.3 (1976), pp. 522–587. doi: 10.1137/0205037. url: https://doi. org/10.1137/0205037. Stefan Schwoon and Javier Esparza. “A Note on On-the-Fly Verification Algorithms”. In: Tools and Algorithms for the Construction and Analysis of Systems, 11th International Conference, TACAS 2005, Held as Part of the Joint European Conferences on Theory and Practice of Soft-

530

[Shi97] [SK04]

[SL76]

[Smi15]

[Sok+20]

[SR87]

[SSE03]

[ST12]

[Sta78]

LITERATURE

ware, ETAPS 2005, Edinburgh, UK, April 4-8, 2005, Proceedings. Ed. by Nicolas Halbwachs and Lenore D. Zuck. Vol. 3440. Lecture Notes in Computer Science. Springer, 2005, pp. 174–190. doi: 10.1007/9783-540-31980-1\_12. url: https://doi.org/10.1007/978-3540-31980-1\_12. Michael W. Shields. Semantics of parallelism - non-interleaving representation of behaviour. Springer, 1997. isbn: 978-3-540-76059-7. Claus Schröter and Victor Khomenko. “Parallel LTL-X Model Checking of High-Level Petri Nets Based on Unfoldings”. In: Computer Aided Verification, 16th International Conference, CAV 2004, Boston, MA, USA, July 13-17, 2004, Proceedings. Ed. by Rajeev Alur and Doron A. Peled. Vol. 3114. Lecture Notes in Computer Science. Springer, 2004, pp. 109–121. doi: 10.1007/978-3-540-27813-9\_9. url: https://doi.org/10.1007/978-3-540-27813-9\_9. Jayme L. Szwarcfiter and Peter E. Lauer. “A search strategy for the elementary cycles of a directed graph”. In: BIT Numerical Mathematics 16 (1976), pp. 192–204. Einar Smith. Carl Adam Petri - Life and Science. Springer, 2015. isbn: 978-3-662-48092-2. doi: 10 . 1007 / 978 - 3 - 662 - 48093 - 9. url: https://doi.org/10.1007/978-3-662-48093-9. Danil Sokolov, Victor Khomenko, Andrey Mokhov, Vladimir Dubikhin, David Lloyd, and Alex Yakovlev. “Automating the Design of Asynchronous Logic Control for AMS Electronics”. In: IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 39.5 (2020), pp. 952–965. doi: 10.1109/TCAD.2019.2907905. url: https://doi.org/10. 1109/TCAD.2019.2907905. Einar Smith and Wolfgang Reisig. “The Semantics of a Net is a Net – an Exercise in General Net Theory”. In: Advances in Petri Nets 87. Ed. by G. Rozenberg. Vol. ISBN 978-3-540-47743-3. Lecture Notes in Computer Science. Springer-Verlag, 1987, pp. 461–480. Claus Schröter, Stefan Schwoon, and Javier Esparza. “The ModelChecking Kit”. In: Applications and Theory of Petri Nets 2003, 24th International Conference, ICATPN 2003, Eindhoven, The Netherlands, June 23-27, 2003, Proceedings. Ed. by Wil M. P. van der Aalst and Eike Best. Vol. 2679. Lecture Notes in Computer Science. Springer, 2003, pp. 463–472. doi: 10 . 1007 / 3 - 540 - 44919 - 1 \ _29. url: https://doi.org/10.1007/3-540-44919-1\_29. Donald Sannella and Andrzej Tarlecki. Foundations of Algebraic Specification and Formal Software Development. Monographs in Theoretical Computer Science. An EATCS Series. Springer, 2012. isbn: 978-3642-17335-6. doi: 10.1007/978- 3- 642- 17336- 3. url: https: //doi.org/10.1007/978-3-642-17336-3. Peter H. Starke. “Free Petri Net Languages”. In: Mathematical Foundations of Computer Science 1978, Proceedings, 7th Symposium, Zakopane, Poland, September 4-8, 1978. Ed. by Józef Winkowski. Vol. 64.

LITERATURE

[Sta90] [Sto79]

[SW17]

[SW18]

[TCS97]

[Ter+92]

[Tre19]

531

Lecture Notes in Computer Science. Springer, 1978, 506–515. doi: 10. 1007/3-540-08921-7\_99. url: https://doi.org/10.1007/3540-08921-7\_99. Peter H. Starke. Analyse von Petri-Netz-Modellen. Leitfäden und Monographien der Informatik. Teubner, 1990. Joseph E. Stoy. “Foundations of Denotational Semantics”. In: Abstract Software Specifications, 1979 Copenhagen Winter School, January 22 February 2, 1979, Proceedings. Ed. by Dines Bjørner. Vol. 86. Lecture Notes in Computer Science. Springer, 1979, pp. 43–99. doi: 10.1007/ 3-540-10007-5\_35. url: https://doi.org/10.1007/3-54010007-5\_35. Uli Schlachter and Harro Wimmel. “k-Bounded Petri Net Synthesis from Modal Transition Systems”. In: 28th International Conference on Concurrency Theory, CONCUR 2017, September 5-8, 2017, Berlin, Germany. Ed. by Roland Meyer and Uwe Nestmann. Vol. 85. LIPIcs. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2017, 6:1–6:15. doi: 10.4230/LIPIcs.CONCUR.2017.6. url: https://doi.org/ 10.4230/LIPIcs.CONCUR.2017.6. Uli Schlachter and Harro Wimmel. “A Geometric Characterisation of Event/State Separation”. In: Application and Theory of Petri Nets and Concurrency - 39th International Conference, PETRI NETS 2018, Bratislava, Slovakia, June 24-29, 2018, Proceedings. Ed. by Victor Khomenko and Olivier H. Roux. Vol. 10877. Lecture Notes in Computer Science. Springer, 2018, pp. 99–116. doi: 10.1007/978- 3319-91268-4\_6. url: https://doi.org/10.1007/978-3-31991268-4\_6. Enrique Teruel, José Manuel Colom, and Manuel Silva Suárez. “Choice-free Petri nets: a model for deterministic concurrent systems with bulk services and arrivals”. In: IEEE Trans. Syst. Man Cybern. Part A 27.1 (1997), pp. 73–83. doi: 10.1109/3468.553226. url: https://doi.org/10.1109/3468.553226. Enrique Teruel, Piotr Chrząstowski-Wachtel, José Manuel Colom, and Manuel Silva Suárez. “On Weighted T-Systems”. In: Application and Theory of Petri Nets 1992, 13th International Conference, Sheffield, UK, June 22-26, 1992, Proceedings. Ed. by Kurt Jensen. Vol. 616. Lecture Notes in Computer Science. Springer, 1992, pp. 348–367. doi: 10.1007/3- 540- 55676- 1\_20. url: https://doi.org/10. 1007/3-540-55676-1\_20. Ronny Tredup. “Synthesis of Structurally Restricted b-bounded Petri Nets: Complexity Results”. In: Reachability Problems - 13th International Conference, RP 2019, Brussels, Belgium, September 11-13, 2019, Proceedings. 2019, pp. 202–217. doi: 10.1007/978-3-03030806- 3\_16. url: https://doi.org/10.1007/978- 3- 03030806-3\_16.

532

[Tre21]

[TS93]

[TS96]

[Vau86]

[VH11]

[Vil38]

[Vog92]

[VV81]

[Weh10a]

[Weh10b]

LITERATURE

Ronny Tredup. “Edge, Event and State Removal: The Complexity of Some Basic Techniques that Make Transition Systems Petri Net Implementable”. In: Application and Theory of Petri Nets and Concurrency 42nd International Conference, PETRI NETS 2021, Virtual Event, June 23-25, 2021, Proceedings. 2021, pp. 253–273. doi: 10.1007/978-3030- 76983- 3\_13. url: https://doi.org/10.1007/978- 3030-76983-3\_13. Enrique Teruel and Manuel Silva Suárez. “Liveness and Home States in Equal Conflict Systems”. In: Application and Theory of Petri Nets 1993, 14th International Conference, Chicago, Illinois, USA, June 2125, 1993, Proceedings. 1993, pp. 415–432. doi: 10.1007/3- 54056863-8\_59. url: https://doi.org/10.1007/3-540-568638\_59. Enrique Teruel and Manuel Silva Suárez. “Structure Theory of Equal Conflict Systems”. In: Theor. Comput. Sci. 153.1&2 (1996), pp. 271– 300. doi: 10.1016/0304-3975(95)00124-7. url: https://doi. org/10.1016/0304-3975(95)00124-7. Jacques Vautherin. “Parallel systems specitications with coloured Petri nets and algebraic specifications”. In: Advances in Petri Nets 1987, covers the 7th European Workshop on Applications and Theory of Petri Nets, Oxford, UK, June 1986. Ed. by Grzegorz Rozenberg. Vol. 266. Lecture Notes in Computer Science. Springer, 1986, pp. 293–308. doi: 10.1007/3- 540- 18086- 9\_31. url: https://doi.org/10. 1007/3-540-18086-9\_31. Antti Valmari and Henri Hansen. “Can Stubborn Sets Be Optimal?” In: Fundam. Informaticae 113.3-4 (2011), pp. 377–397. doi: 10.3233/ FI-2011-613. url: https://doi.org/10.3233/FI-2011-613. Jean André Ville. “Sur la théorie générale des jeux ou intervient l’habileté des joueurs”. In: Traité du Calcul des Probabilités et de ses Applications IV, 2 (1938). Ed. by E. Borel et al., pp. 105–113. Walter Vogler. “Partial words versus processes: a short comparison”. In: Advances in Petri Nets 1992, The DEMON Project. Ed. by Grzegorz Rozenberg. Vol. 609. Lecture Notes in Computer Science. Springer, 1992, pp. 292–303. doi: 10 . 1007 / 3 - 540 - 55610 - 9 \ _176. url: https://doi.org/10.1007/3-540-55610-9\_176. Rüdiger Valk and Guy Vidal-Naquet. “Petri Nets and Regular Languages”. In: J. Comput. Syst. Sci. 23.3 (1981), pp. 299–325. doi: 10 . 1016 / 0022 - 0000(81 ) 90067 - 2. url: https : / / doi . org / 10.1016/0022-0000(81)90067-2. Joachim Wehler. “Free-Choice Petri Nets without Frozen Tokens, and Bipolar Synchronization Systems”. In: Fundam. Informaticae 98.2-3 (2010), pp. 283–320. doi: 10 . 3233 / FI - 2010 - 228. url: https : //doi.org/10.3233/FI-2010-228. Joachim Wehler. “Simplified proof of the blocking theorem for freechoice Petri nets”. In: J. Comput. Syst. Sci. 76.7 (2010), pp. 532–537.

LITERATURE

[Wei15]

[Wim04]

[Wim08] [Win86]

[Yam81]

[Yam84]

[YY03]

533

doi: 10.1016/j.jcss.2009.10.001. url: https://doi.org/10. 1016/j.jcss.2009.10.001. Jeremias Weihmann. “Generalized Petri Nets: Algorithms and Complexity”. PhD thesis. Fakultät für Informatik, Technische Universität München, Mar. 2015. Harro Wimmel. “Infinity of Intermediate States Is Decidable for Petri Nets”. In: Applications and Theory of Petri Nets 2004, 25th International Conference, ICATPN 2004, Bologna, Italy, June 21-25, 2004, Proceedings. Ed. by Jordi Cortadella and Wolfgang Reisig. Vol. 3099. Lecture Notes in Computer Science. Springer, 2004, pp. 426–434. doi: 10.1007/978-3-540-27793-4\_24. url: https://doi.org/10. 1007/978-3-540-27793-4\_24. Harro Wimmel. Entscheidbarkeit bei Petrinetzen. Textbook. ISBN: 978-3-540-85471-5. Springer-Verlag, 2008. Glynn Winskel. “Event Structures”. In: Petri Nets: Central Models and Their Properties, Advances in Petri Nets 1986, Part II, Proceedings of an Advanced Course, Bad Honnef, Germany, 8-19 September 1986. Ed. by Wilfried Brauer, Wolfgang Reisig, and Grzegorz Rozenberg. Vol. 255. Lecture Notes in Computer Science. Springer, 1986, pp. 325– 392. doi: 10.1007/3-540-17906-2\_31. url: https://doi.org/ 10.1007/3-540-17906-2\_31. Hideki Yamasaki. “On Weak Persistency of Petri Nets”. In: Inf. Process. Lett. 13.3 (1981), pp. 94–97. doi: 10.1016/0020-0190(81)901174. url: https://doi.org/10.1016/0020-0190(81)90117-4. Hideki Yamasaki. “Normal Petri Nets”. In: Theor. Comput. Sci. 31 (1984), pp. 307–315. doi: 10.1016/0304-3975(84)90038-0. url: https://doi.org/10.1016/0304-3975(84)90038-0. Hsu-Chun Yen and Lien-Po Yu. “Petri Nets with Simple Circuits”. In: Computing and Combinatorics, 9th Annual International Conference, COCOON 2003, Big Sky, MT, USA, July 25-28, 2003, Proceedings. Ed. by Tandy J. Warnow and Binhai Zhu. Vol. 2697. Lecture Notes in Computer Science. Springer, 2003, pp. 149–158. doi: 10.1007/3540- 45071- 8\_17. url: https://doi.org/10.1007/3- 54045071-8\_17.

Index

𝐴𝑛(𝑥) (Ackermann function), 239
𝐶 (incidence matrix), 72
[𝑀⟩, [𝑁⟩ (reachability set), 8
𝑇∞ (set of infinite words), 50
#𝑡(𝜏) (occurrence count), 10
Σ∗ (set of words), 18
Σ+ (set of nonempty words), 18
←−𝑇, 262
𝜀 (empty sequence), 8
𝜀 (empty word), 18
−𝑡→ (firable; firing), 7
Φ (flow relation, coloured net), 486
∞, 9, 44, 50
⟨𝑞⟩ (in an lts), 305
| (restriction of a function), 5
𝜔 (in a generalised marking), 52
±𝑇, 299
𝑎 ↔ 𝑏 (for labels 𝑎, 𝑏 of an lts), 307
∞-fair, 188
Δ (distance in an lts), 329, 346
⊕ (sum of lts), 302
⊗ (multiplication of lts), 301
⪯𝑥 (in an lts), 329
𝑞𝑥 (maximal state), 331
𝑟𝑥 (minimal state), 331
|= (models in a modal logic), 357
♦ (possibly, modal logic), 356
□ (always, modal logic), 357
≡ (permutation equivalence), 192
⊑ (permutation comparability), 192
≡𝑞 (permutation equivalence, lts), 286
≡ (equivalence of expressions), 444
≺ (causality relation), 206
# (conflict relation), 206
co (concurrency relation), 206
•𝑥 (preset), 3
𝑥• (postset), 3
◦𝑁 (start marking of a box), 437
𝑁◦ (end marking of a box), 437
∃𝑖∞ (∃ infinitely many), 187
∀𝑖∞ (∀ but finitely many), 187
∃ (Presburger logic), 378
∀ (Presburger logic), 378
(shuffle), 24
(big shuffle), 25
≤ (less or equal, for vectors), 52
< (less, for vectors), 52
≪ (strictly less, for vectors), 52
≤𝜔, 400
𝑥T (transposition), 5





abstract data type, 493 AC-net, 179 Ackermann function, 239 weakly PN-computable, 239 ACP, 452, 474 activated, 7 acyclic, 206, 291 adequate order (O-net), 210 adjacency set (in an lts), 308 algebraic Petri net, 490, 494 arc inscription, 494 initial marking, 494 multiterm, 494 variable declaration, 494 algebraic specification, 493 abstract data type, 493 arity, 491 constant, 491 initial algebra, 495 model, 493 multiterm, 494 nullary, 491


536 operation symbol, 491 signature, 491 sorts, 491 allocation, 101 cycle-free, 101 firing according to, 101 alphabet, 18 alternation principle, 77 ample sets, 173 annotation, 429 apt, 266, 297 arc, 2 multiple, 4 weight, 90 arithmetic, 379 arity, 491 arrow, 2 articulation, 308 associativity, 310 commutative associativity, 310 commutativity, 310 sequence, 310 asymmetric choice net, 179, 197 alternative definitions, 183 asymmetric choice systems, 151 atomic action VS.TOYA, 429 auto-concurrent, 11, 461 automaton finite, 1, 21 auxiliary variables, 170 B, 3 backward deterministic, 10, 266, 280, 299 backward matrix, 3 backward path (in an lts), 262 backward persistent, 280, 304, 322 basic toy algebra, 442 behaviour of a labelled transition system, 252 behavioural compositionality, 463 big shuffle, 25 bisimulation, 274 bit, 160 block, 465 bounded, 9, 49 criterion for live FC-systems, 111 criterion for live S-systems, 114 criterion for live T-systems, 115 structurally, 70, 78 branching process, 208 B.TOYA, 442 completeness, 447 consistency, 447

Index dynamic expression, 444 syntax, 442 calculus of communicating systems, 428 canonical prefix (of an O-net), 212 causal net, 360 causality, 2, 7 causality relation, 206 CCS, 428, 440, 452, 474 choice, 440 conjugate, 440 parallel composition, 440 prefixing, 440 relabelling, 440 renaming, 441 restriction, 440 channel declaration, 465 characterisation algebraic, 33 choice composition VS.TOYA, 429 choice place, 282 choice-free, 117 choice-free Petri net, 282 Chomsky hierarchy, 20 classes of Petri nets, 284 clean, 438 closed formula, 378 closure, 33 w.r.t. fine homomorphisms, 35 w.r.t. intersection, 36 w.r.t. inverse homomorphisms, 35 w.r.t. shuffle, 35 w.r.t. union, 36 cluster conflict, 135 place, 121 transition, 118 co-set (of an O-net), 209 coloured Petri net, 480, 486 flow relation, 486 incidence matrix, 486 marking, 486 transition activated in mode 𝑚, 487 fires in mode 𝑚, 488 commands, 465 communicating sequential processes, 428 commutativity, 9 concurrently enabled transitions, 12 complementary place, 195 complete lattice, 363 complexity

Index coverability graph, 243 general liveness problem, 63 liveness and boundedness of FC-nets, 143 non-liveness of FC-systems, 107 reachability graph, 243 well-formedness of FC-nets, 142 compositional semantics, 440 computable strongly, 230 weakly, 234 concurrency, 2, 7 concurrent systems notation, 428 concurrently activated, 11 enabled, 11, 191 executable, 11 firable, 11 condition (of an O-net), 204 configuration (of an O-net), 209 conflict, 11 conflict cluster, 135, 184 conflict relation (O-net), 206 confluent, 11 connected, 45 connected labels (in an lts), 307 connectivity matrix, 72 conservative, 85 consistent, 85 conspiracy, 187, 480 constant (algebraic specification), 491 constant symbol (predicate logic), 378 context rule, 445 COSY, 428, 452, 474 counter machine, 122, 198 Cov, 56, 398 coverability graph, 56, 398 controlled, 398 finiteness, 58, 400 properties, 61 size, 243 coverability tree, 53 covering by S-components, 113 by T-components, 116 critical section, 177, 216, 341 cross product of places, 110, 431 of transitions, 36 CSP, 428, 452, 465, 474 cut (of an O-net), 209 cutting context (O-net), 210 cycle, 253 small, 291, 342 cycle (in a graph), 44

537 cyclic, 50, 394 𝐷 (Dyck language), 23 DB.TOYA, 444

expression, 444 step rules, 445 dead, 50 deadlock, 50, 100, 311, 341 deadlock-free, 50, 100 denotational semantics, 440 determinism full, 10 general, 10 deterministic, 10, 53, 280, 299, 322 diamond property, 282 general, 303 Dickson’s corollary, 58, 382 Dickson’s lemma, 57, 78, 382 distance (between states of an lts), 329, 346 distributability, 197 divergence, 181 divide-and-conquer strategy, 298 dominate, 385 dominate (vector), 52 dual, 145 dual net, 84 duality principle, 77 duality theorem, 146 dynamic basic toy algebra, 444 dynamic context, 445 dynamic expression, 445 𝑒-directed, 434, 464 edge, 2 edge (of an lts), 252 effect, 28, 36, 156, 255, 382, 384, 400 function, 255 elementary (single-place) net, 26 elementary (single-place) subnet, 29 elementary high-level net, 490 elementary path (in a graph), 44 empty sequence, 8 empty word, 18 enabled, 7, 8 enabled path, 252 end state, 18 𝑒-place, 429, 460, 463 ESSP, 257 event (of an O-net), 204 event/state separation problem, 257 event/state separation property, 257 evolution of a labelled transition system, 252 ex-exclusive, 438, 463, 464

538 ex-restricted, 434 executable, 7 execution sequence, 8 extended free choice, 96 F, 3 factorisable (Petri box), 438 factorisation (of an lts), 303 fair, fairness, 186 Farkas’ lemma, 77, 418 FC-net, 96 fc-net, 96 FC-systems exact bounds, 113 final state, 18 finite automaton, 21 finite delay property, 187, 193 finitely preceded, 207 firable; firing, 7, 8 firing lemma, 75 firing rule, 7 firing sequence, 8 fair, 188 repetitive, 80 unfair, 188 weakly fair, 191 weakly unfair, 191 first order predicate logic, 378 fixpoint, 450 fold (high-level net), 481 forward deterministic, 10 forward matrix, 3 forward persistent, 304 free Petri net, 19 free-choice, 96 full determinism, 10 function, 2 restriction, 5 general determinism, 10 general diamond property (of an lts), 303 generalised marking, 52 graph, 44 arc, 44 arrow, 44 connected, 322 covered by cycles, 45 cycle, 44 edge, 44 inscription, 44 label, 44 node, 44 path, 44 elementary, 44

Index length, 44 simple, 44 strongly connected, 45 strongly connected component, 45 terminal, 51 subgraph, 44 undirected, 44 vertex, 44 weakly connected, 45, 322 weakly connected component, 45 graph minor theorem, 85 guarded recursion, 450 ℎ −1 , 26 ℎ −1 ( 𝑁𝐷 ), 27 handshake synchronisation, 466, 474 Hasse diagram, 184 high-level Petri net, 479 mode of a transition, 481 type of a place, 481 ℎℓ , 18 home marking, 148 home state, 148, 289 homomorphism, 18, 274 fine, 18, 35 inverse, 26, 35 non-erasing, 18 hurdle, 400 I-arc, 194 I-net, 194 implementation of Petri nets, 340 incidence matrix, 72, 486 infinite behaviour, 181 infinite firing sequence, 9 infinite path, 44 infinite word, 50 inhibitor arc, 194 inhibitor net, 194 initial algebra, 495 initial state (of a Petri net), 4 initial state (of an lts), 252 inscription graph, 44 O-net, 208 intersection, 36 invariant, 48, 80 inductive, 158 minimal, 80 properties, 81 semipositive, 80 support, 80 inverse homomorphism, 26 𝑖-place, 429, 463

Index isolated, 3 isomorphism, 334, 447 lts, 254 iteration, 458 join-free, 121 König’s lemma, 58, 210 𝑘-activates, 188 Keller’s theorem, 287, 304, 384 𝑘-fair, 188 L, 19 L 1 , 19, 34 label graph, 44 of an lts, 252 label separation (in an lts), 303 label-disjoint sequences, 254, 290 labelled transition system see lts, 252 labelling, 18 language, 18 𝑎 𝑛 𝑏 𝑛 𝑐 𝑛 , 19 {𝑏𝑎, 𝜀 }, 34 context-sensitive, 32 𝐷, 23 Dyck, 23 free nets, 30 𝐿 ( 𝑁𝐷 ), 23 𝐿 ( 𝑁𝐷 (𝑖, 𝑗 ) ), 24, 25 of 𝑁 𝑓 , 30 of a Petri net, 19 of arbitrary nets, 31 of elementary nets, 28 of nets with 1 final state, 31 regular, 1, 21 𝑤𝑤𝑅 , 21 language classes of Petri nets, 19 language equivalence lts, 254 lattice, 363 lemma Dickson’s, 57, 78 Farkas’, 77 König’s, 58, 210 length path, 44 linear inequality system homogeneous, 298 non-homogeneous, 298 linear set, 377 base, 377

539 offset, 377 periods, 377 linearisation, 198, 203, 219 literal (modal logic), 356 live, 50 criterion for FC-systems, 98 criterion for S-systems, 108 criterion for T-systems, 109 singly, 50 strongly, 50 structurally, 70 weakly, 50 liveness property, 177 𝐿 ( 𝑁 ), 19 𝐿 ( 𝑁𝐷 ), 23 𝐿 ( 𝑁𝐷 (𝑖, 𝑗 ) ), 24, 25 local configuration (of an O-net), 209 local determinism, 9 loop, 3, 74, 91, 181 loop net, 147 lts, 252 acyclic, 291 adjacency set, 308 articulation, 308 backward deterministic, 280 backward path, 262 backward persistent, 280, 322 behaviour, 252 connected labels, 307 cycle, 253 small, 291, 342 deterministic, 280, 322 diamond property, 282 distance, 329, 346 effect function, 255 event/state separation problem, 257 event/state separation property, 257 evolution, 252 factorisation, 303 finite, 254 general diamond property, 303 home state, 289 isomorphism, 254 Keller’s theorem, 287 label separation, 303 label-disjoint, 290 language equivalence, 254 minimal overapproximation, 270 Parikh vector, 253 Parikh-disjoint, 290 Parikh-equivalent, 290 path short, 324, 342 permutation, 286

540 persistent, 280, 322 product, 300 region, 255 reverse, 323 reversible, 280, 322 sequentialising state, 332 short path, 324, 342 small cycle, 291, 342 solution, by a Petri net, 254 state separation problem, 257 state separation property, 257 strongly cycle-consistent, 299 support, 253 T-vector, 253 totally reachable, 322 transposition, 286 weak periodicity, 280 M-net, 496 calculus, 497 conjugate labels, 500 synchronisation, 500 M-structure, 179 𝑀0 , 4 Mark(C), 210 mapping, 2 marked (place set), 95 marked graph, 94 marking, 4 coloured Petri net, 486 generalised, 52 marking equation, 75, 216 marking inequality, 76 matrix transposed, 5, 77 maximality of firing sequences, 193 maximum in the coverability graph, 402 memorylessness, 9 𝑀 𝑓 , 18 minimal, 80 vector, 381 minimal T-invariant, 342 modal logic, 356 mode of a transition, 481 model (algebraic specification), 493 models (𝑆4 ), 357 monitor, 342 monotonicity of firing, 9, 53 multigraph, 44 multilabel Petri net, 496 multiple arc, 4 multirelation, 483 (𝑋, 𝑌 )-, 483

Index matrix product, 484 transposed, 483 multirelational Petri net, 496 multiset 𝑋-, 483 𝐴-, 485 additive inverse, 484 basis, 483 difference, 483 empty, 484 finite, 483 generalised, 483 of places, 4 of transitions, 11 scalar product, 483 semipositive, 484 submultiset, 483 sum, 483 multiterm, 494 mutual exclusion, 167, 177 names places, 485 transitions, 485 𝑁𝐷 , 22 𝑁𝐷 (𝑖, 𝑗 ), 23 𝑁 𝑓 , 19, 29 nondecreasing sequence, 382 𝑁𝑠 , 29 nullary, 491 number of occurrences, 10 number theory, 379 O-net Mark(𝐶 ), 210 adequate order, 210 causality relation, 206 co-set, 209 condition, 204, 207 configuration, 209 local, 209 conflict-consistent, 207 cut, 209 cut-off event, 211 cutting context, 210 event, 204, 207 finitely preceded, 207 inscription, 208 maximal unfolding, 208 merge-free, 207 prefix, 208 canonical, 212 relevant event, 211 O-net (occurrence net), 207

Index occam, 465, 474 occurrence count vector, 10 offset, 377 𝜔-coordinate, 53 𝜔-place, 53 operation symbol, 378, 491 operations union, 36 big shuffle, 25 closure, 33 fine homomorphism, 18, 35 homomorphism, 18, 33 intersection, 33, 36 inverse homomorphism, 26, 33, 35 non-erasing homomorphism, 18 shuffle, 24, 33, 35 union, 33 operator box, 437 operator net, 437 overapproximation (of an lts) minimal, 270 overapproximation (of the state space), 159 parallel composition VS.TOYA, 429 parallelism, 2 Parikh vector, 10, 253 realisable, 109 Parikh-disjoint, 290 Parikh-equivalent, 290 partial order reduction, 173 partial order semantics, 12, 173, 198, 222 particles, 467 path enabled, 252 in an lts, 252 short, 324, 342 path (in a graph), 44 period, 377 permutation, 192, 286 comparability, 192 equivalence, 192 persistence, 51 persistence problem, 387 persistent, 11, 280, 322, 377 Peterson’s algorithm, 168 Petri box 𝑁 ◦ , 437 ◦ 𝑁 , 437 annotation, 429 𝑒-place, 429 𝑖-place, 429 𝑥-place, 429 behavioural compositionality, 463

541 block, 466 calculus, 437 clean, 438 command, 466 data, 466 𝑒-directed, 434, 464 𝑒-place, 429, 460, 463 𝑇-restricted, 434 ex-exclusive, 438, 464 ex-restricted, 434 factorisable, 438 fixpoint, 450 half-aggregate, 462 𝑖-place, 429, 463 iteration, 458 loop body 𝐸2 , 458 front guard 𝐸1 , 458 initialisation 𝐸1 , 458 rear guard 𝐸3 , 458 termination 𝐸3 , 458 operator box, 437 operator net, 437 particles, 467 refinement, 431 simultaneous, 431, 433 relabelling, 431 relation, 435 S-aggregate, 460 S-covered by, 461 T-covered by, 461 S-half-aggregate S-covered by, 463 safe, 438 scope expression, 468 weakly live, 438 𝑥-directed, 434 𝑥-place, 429, 460, 463 Petri net, 2, 4 𝑥•, 3 •𝑥, 3 AC, 179 acyclic, 206 algebraic, 490, 494 arc, 2 weight, 90 asymmetric choice, 179 B, 3 backward persistent, 281 bounded, 49, 322 causal, 360 classes, 284 coloured, 480, 486 concurrency relation, 206

542 concurrently enabled, 191 conflict cluster, 135 conflict relation, 206 connected, 322 conservative, 85 consistent, 85 cyclic, 50, 394 dead, 50 deadlock-free, 50, 100 dual, 84, 145 elementary (single-place), 26 elementary (single-place) subnet, 29 F, 3 FC, 96 free, 19 free version, 19 free-choice, 96 ℎ −1 ( 𝑁𝐷 ), 27 high-level, 479 home marking, 148 home state, 148 incidence matrix, 72 initial, 4 labelled, 18 language, 19 live, 50, 322 loop, 91 M-net, 496 M-structure, 179 marked graph, 94 marking, 4 marking equation, 75 marking inequality, 76 maximality of firing sequences, 193 multirelational, 496 𝑁𝐷 , 22 𝑁𝐷 (𝑖, 𝑗 ), 23 𝑁𝑠 , 29 permutation, 192 comparability, 192 equivalence, 192 persistence problem, 387 persistent, 281 place, 2 plain, 3, 90, 134 postset, 3 predicate/transition, 500 preset, 3 pure, 3 rank, 134 reachability problem, 47, 410 reachability set, 8 reduction, 147 regulation circuit, 139

Index representation, 3 reverse, 145 reverse-dual, 145 reversible, 50, 281 S-component, 110 S-net, 92 S-system, 92 safe, 49 semiflow, 85 set of presets, 135 simple, 183 simulation, 181 singly live, 50 siphon, 98 proper, 98 trivial, 98 solves an lts, 254 state equation, 75 strong permutation, 191 strongly live, 50 structurally bounded, 70 structurally live, 70 sum, 35, 302 system, 70 T-component, 110 T-net, 92 T-system, 92 transition, 2 transposition, 191 trap, 98 proper, 98 trivial, 98 unbounded, 49 unlabelled, 19 weak permutation, 191 weakly connected, 322 weakly live, 50 weakly persistent, 389 well-formed, 70, 82 with final states, 18 with inhibitor arcs, 194 Petri net classes, 284 Petri net languages, 17 algebraic characterisation, 37 place, 2 complementary, 195 isolated, 91 type, 481 place cluster, 121 place invariant, see S-invariant place multiplication, 110, 431 place set marked, 95 token-empty, 95

Index token-free, 95 plain, 3, 90, 134, 433 PN, see Petri net PN-computable strongly, 230 weakly, 234 PN-computer strong, 230 weak, 234 positive vector, 52 postset, 3 potential reachability, 77 pre-coverability graph, 405 pre-synthesis, 298 predicate symbol, 378 predicate/transition Petri net, 500 prefix of a word, 23 of an O-net, 208 prefix-closed, 19 Presburger arithmetic, 379 Presburger formulae, 378 Presburger set, 380 preset, 3 prime (relating to T-vectors), 345 principle of economy, 449 process algebra context rule, 445 dynamic context, 445 dynamic expression, 445 guarded recursion, 450 principle of economy, 449 recursion, 447 refinement, 431 relabelling, 431 step rule, 445 product (of lts), 300 progress property, 187, 193 projection theorem, 406 property co-linear, 164 does not hold in a state, 158 holds at a marking, 158 holds generally, 158 holds in a state, 158 is invariant, 158 of a net, 158 of a system, 158 predicate, 159 (PS), 82 (PT), 82 pure, 3 rank of a matrix, 134

543 rank of a Petri net, 134 𝑅𝐺, 46 reachability graph, 46 semilinear, 384 size, 243 reachability problem, 47, 410 lower bound, 411 reachability set, 8 realisable Parikh vector, 109 realisable T-invariant, 348 realisable T-vector, 77 recursion, 447 recursive toy algebra, 447 reduced asymmetric choice, 183 reduction, 147 refinement, 431 exchangeability, 433 re-usability, 431 simultaneous, 431, 433 region (of an lts), 255 regular expression, 376 regulation circuit, 91, 139 relabelling, 431 relation, 435 rendezvous, 474 repetitive, 80 residue (of T-vectors), 346 residue (of words), 285 restricted lts, 329 restriction, 5 reverse, 145 reverse graph, 415 reverse lts, 323 reverse net, 415 reverse word, 21, 416 reverse-dual, 145 reversible, 50, 280, 322 RP, 47, 410 R.TOYA, 447 expression, 447 S-component, 110 covering, 113 strongly connected, 110 S-invariant, 48, 80 cover, 82 properties, 81 S-net, 92 S-subvariant, 48 S-survariant, 48 S-system, 92 S-vector, 52 Σ-algebra, 491 assignment, 492

544 equational theory, 493 evaluation, 492 terms, 492 valid, 493 variables, 492 safe, 49, 438 safety property, 177 SAT, 360 satisfies, 357 𝑆4 , 357 scope expression, 468 seed, 405 seed-transition problem, 411 seed-transition sequence, 408 self-loop, 3 self-stabilisation, 122 semantics compositional, 443 denotational, 443 operational, 443 semaphor, 167, 342 semiflow, 85 semilinear, 375, 377 not, 391 semilinear set, 375, 377 bases, 377 extended Parikh vector, 382 nondecreasing sequence, 382 offsets, 377 periods, 377 semipositive vector, 52 sentence, 378 sequential composition VS.TOYA, 429 sequentialising state, 332 serialisation, 342 set of presets, 135 short path, 324, 342 shuffle, 24, 35, 198 side condition, 3 signature, 491 simple (loop in a net), 3 simple path (in a graph), 44 simple Petri net, 183, 197 simulation, 181, 274 singly live, 50 siphon, 98 proper, 98 token-empty, 100 trivial, 98 small cycle, 291, 342, 345 sorts, 491 SOS, 443 spanning tree, 264

Index SSP, 257 stop box, 449 start marking, 4 starting state, 4 starvation, 341 state, 4 end, 18 final, 18 state equation, 75 state separation problem, 257 state separation property, 257 state set, 4 state space, 8, 158 state space explosion, 173 state space reduction, 173 step, 11, 445 rule, 445 step semantics, 12, 222 strong permutation, 191 strongly computable addition +, 231 binary decoding 𝑖𝑛𝑡𝑟 , 231 constant 𝑐, 231 multiplication by a rational constant ∗𝑐, 231 not: multiplication, 232 not: squaring, 232 subtraction −⊥ , 231 strongly connected component terminal, 51 strongly cycle-consistent, 299 strongly live, see live strongly Petri net computable, 230 strongly PN-computable not Turing powerful, 239 structural boundedness, 70, 78 structural liveness, 70 structural operational semantics, 443 stubborn sets, 173 submultiset, 483 subnet, 6 elementary, 29 subvariant, 48 sum (of Petri nets), 35, 302 support, 80, 116, 253, 415 survariant, 48 symmetric confusion, 197 system, 5, 70 𝑇-restricted, 434 T-component, 110 activated, 150 covering, 116 strongly connected, 110 T-invariant, 80

Index cover, 82 in a coloured net, 501 minimal, 292, 342 properties, 81 realisable, 292, 348 transition-disjoint, 292 T-net, 92 T-system, 92 T-vector, 10, 52, 253 realisable, 77 residue, 346 TCSP, 452, 474 temporal logic, 356 theory, 379 arithmetic, 379 Presburger arithmetic, 379 reals with addition, 379 token, 4 token game, 8 token-empty (place set), 95 token-free (place set), 95 totally reachable, 322 transition, 2 activated in mode 𝑚, 487 dead, 50 fires in mode 𝑚, 488 isolated, 91 mode, 481 transition cluster, 118 transition invariant, see T-invariant transition rule, 7 transition sequence, 8 transition system, 252 edge, 252 initial state, 252 label, 252 transition-regular, 136, 205 transposed matrix, 5, 77 transposed multirelation, 483 transposition (in a net), 191 transposition (in an lts), 286, 304 transposition lemma, 77 transposition principle, 77 trap, 98 proper, 98 trivial, 98 truth domain, 380 two-way uniform chain forward, 299 general, 299 type of a place, 481

545 ultimately periodic word, 375, 394 unbounded, 9, 49 underapproximation (of the state space), 159 unfairness, 186 unfold (high-level net), 490 unfolding (O-net), 204, 208 maximal, 208 unification, 499 union, 36 variable declaration, 465 variant, 197 VAS (vector addition system), 13, 122 VASS (vector addition system with states), 243 vector dominate, 52 less, 52 less or equal, 52 positive, 52 S-vector, 52 semipositive, 52 strictly less, 52 T-vector, 52 vector addition system, 13, 122 with states, 243 VS.TOYA, 429 weak periodicity, 280 weak permutation, 191, 286 weakly computable, 234 addition, 235 constant, 235 multiplication, 235 only monotonic functions, 238 polynomials, 236 weakly live, see live, 438 weakly periodic, 291 weakly persistent, 389 weakly PN-computable Ackermann function, 239 not Turing powerful, 239 well-formed, 70 (WF), 82 word, 18 prefix, 23 residue, 285 ultimately periodic, 375, 394 𝑥-directed, 434 𝑥-place, 429, 460, 463