DEPENDABLE SOFTWARE SYSTEMS ENGINEERING
NATO Science for Peace and Security Series

This Series presents the results of scientific meetings supported under the NATO Programme: Science for Peace and Security (SPS). The NATO SPS Programme supports meetings in the following Key Priority areas: (1) Defence Against Terrorism; (2) Countering other Threats to Security; and (3) NATO, Partner and Mediterranean Dialogue Country Priorities. The types of meeting supported are generally "Advanced Study Institutes" and "Advanced Research Workshops". The NATO SPS Series collects together the results of these meetings. The meetings are co-organized by scientists from NATO countries and scientists from NATO's "Partner" or "Mediterranean Dialogue" countries. The observations and recommendations made at the meetings, as well as the contents of the volumes in the Series, reflect those of participants and contributors only; they should not necessarily be regarded as reflecting NATO views or policy.

Advanced Study Institutes (ASI) are high-level tutorial courses to convey the latest developments in a subject to an advanced-level audience. Advanced Research Workshops (ARW) are expert meetings where an intense but informal exchange of views at the frontiers of a subject aims at identifying directions for future action.

Following a transformation of the programme in 2006 the Series has been re-named and reorganised. Recent volumes on topics not related to security, which result from meetings supported under the programme earlier, may be found in the NATO Science Series. The Series is published by IOS Press, Amsterdam, and Springer Science and Business Media, Dordrecht, in cooperation with NATO Emerging Security Challenges Division.

Sub-Series
A. Chemistry and Biology (Springer Science and Business Media)
B. Physics and Biophysics (Springer Science and Business Media)
C. Environmental Security (Springer Science and Business Media)
D. Information and Communication Security (IOS Press)
E. Human and Societal Dynamics (IOS Press)

http://www.nato.int/science
http://www.springer.com
http://www.iospress.nl

Sub-Series D: Information and Communication Security – Vol. 45
ISSN 1874-6268 (print)
ISSN 1879-8292 (online)
Dependable Software Systems Engineering
Edited by
Javier Esparza Technische Universität München, Germany
Orna Grumberg Technion, Israel
and
Salomon Sickert Technische Universität München, Germany
Amsterdam • Berlin • Washington, DC Published in cooperation with NATO Emerging Security Challenges Division
Proceedings of the NATO Advanced Study Institute (ASI) on Verification and Synthesis of Correct and Secure Systems Marktoberdorf, Germany 4–16 August 2015
© 2016 The authors and IOS Press. All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without prior written permission from the publisher. ISBN 978-1-61499-626-2 (print) ISBN 978-1-61499-627-9 (online) Library of Congress Control Number: 2016935536 Publisher IOS Press BV Nieuwe Hemweg 6B 1013 BG Amsterdam Netherlands fax: +31 20 687 0019 e-mail: [email protected] Distributor in the USA and Canada IOS Press, Inc. 4502 Rachael Manor Drive Fairfax, VA 22032 USA fax: +1 703 323 3668 e-mail: [email protected]
LEGAL NOTICE The author(s) of this publication is/are solely responsible for its content. This publication does not reflect the opinion of the publisher. The publisher cannot be held liable for any loss or damage that may occur because of this publication. PRINTED IN THE NETHERLANDS
Preface

In recent years, average citizens have become daily users of internet banking, social networks, and cloud services. Preventing malfunctions of these services and protecting the integrity of private data from cyber-attacks has become one of the main challenges of society at large. Computer science is in charge not only of developing new technologies and systems to face this challenge, but also of developing their scientific foundations. Approaches and technologies developed over the past years have dramatically improved the quality of software. The computer science community is attacking the problem described above by developing verification and synthesis tools that mechanize more and more tasks in the design of safe and secure programs. Moreover, progress is being made in understanding the fundamentals, for example, in testing (efficient safety checks of single program executions), in model checking (more expensive, complete checks of software models), in synthesis (development of software that is correct by construction), and in information-flow analysis (detection of information leaks). During the last decades computer scientists have developed, and continue to develop, a theory of formal verification: a body of methodologies, algorithms, and software tools for finding bugs or security hazards in existing software, or for ensuring, with the rigor of mathematical proof, that they do not exist. The 2015 edition of the NATO Advanced Study Institute "Marktoberdorf Summer School", under the title Verification and Synthesis of Correct and Secure Systems, offered its participants a comprehensive view of the state of the art in this area.

In her tutorial Probabilistic Model Checking, Christel Baier introduced the principles of discrete-time, finite-state Markovian models and their quantitative analysis against temporal logic specifications.
She presented the basic principles of the automata-based approach to model-checking probabilistic temporal logics, techniques for tackling the state-explosion problem, and algorithms for dealing with fairness assumptions and for computing conditional probabilities and quantiles.

Nikolaj Bjørner lectured on algorithms for solving satisfiability problems modulo theories (SMT). In his tutorial SMT Solvers: Foundations and Applications he also introduced a large collection of application scenarios, including program verification, network analysis, symbolic model checking, test-case generation, and white-box fuzzing.

Security protocols should guarantee private communication over untrusted networks but, due to mistakes, are often subject to attacks. In her lectures on Models and Techniques for Analyzing Security Protocols, Véronique Cortier described and discussed decision techniques based on formal methods to automatically verify properties of security protocols such as authentication or confidentiality.

In his lecture series on Parameterized Verification of Crowds of Anonymous Processes, Javier Esparza explained that systems composed of a finite but possibly arbitrary number of identical components occur everywhere, from hardware design to distributed applications. Parameterized verification is the task of verifying the correctness of such systems regardless of the number of their components. He analyzed the decidability and complexity of parameterized verification when all processes execute the same code and have no identities.
More than fifty years after its introduction by Alonzo Church, the synthesis problem is still one of the most intriguing challenges in the theory of reactive systems. The series of lectures by Bernd Finkbeiner on Synthesis of Reactive Systems presented an overview of the automata- and game-theoretic foundations, explored the spectrum of logics and algorithms for the synthesis of reactive systems, and discussed the ideas behind recent efficient approaches such as bounded synthesis.

Over the last two decades, significant progress has been made on broadening the scope of model checking from finite-state abstractions to actual software implementations. In his lectures on Software Model Checking, Patrice Godefroid presented his work on combining model checking and testing techniques, and reported on their remarkable success at Microsoft.

Model checking is a technique for automating high-quality assurance of software. Despite its success, it still suffers from the state-explosion problem, which refers to the worst-case exponential growth of the state space of a program with the number of concurrent components. Orna Grumberg's tutorial on Compositional Model Checking presented techniques to break up the global verification of a program into a collection of more manageable, local verification problems about its individual components.

Sumit Gulwani lectured on Programming by Examples (PBE), a collection of techniques for synthesizing programs in an underlying domain-specific language from example-based specifications. He explained how PBE has the potential to revolutionize end-user programming by enabling end users, most of whom are non-programmers, to create scripts for automating repetitive tasks.

Developing reliable concurrent programs is a difficult and specialized task, requiring highly skilled engineers, most of whose efforts are spent on the testing and validation phases.
As a result, there is a strong economic and strategic incentive for software companies to automate parts of the verification process. Daniel Kröning lectured on Verification of Concurrent Software and described tools that automatically check that all possible behaviors of a concurrent program satisfy a given specification.

Two-player games can be used to mathematically describe non-terminating interactions between a system and its environment. In his tutorial Two-Player Zero Sum Games Played on Graphs: Omega Regular and Quantitative Objectives, Jean-François Raskin introduced algorithms and complexity results for the most relevant classes of two-player games, including reachability, Büchi, and parity games, as well as games with quantitative measures, such as energy and mean-payoff games.

In his course on Software Security by Information Flow Control, David Sands lectured on security by design, the long-term research goal of developing methods for the design of secure systems that treat security requirements as first-class citizens. In the course he introduced his work on security and privacy policies for data manipulated by software.

Helmut Seidl presented a series of lectures on Equivalence – Combinatorics, Algebra, Proofs. He studied the problem of proving that two programs are equivalent, where the notion of equivalence is relativized w.r.t. the observable behavior of the code. He presented a collection of decidability and complexity results on different aspects of the problem.

The title of Eran Yahav's series of lectures was Analysis and Synthesis with 'Big Code'. He considered semantic representations based on symbolic automata, tracelets, numerical abstractions, and textual descriptions, and notions of code similarity based on
these representations. He presented a number of prediction techniques, including statistical language models, order Markov models, and other distance-based and model-based sequence classification techniques.

The 2015 Marktoberdorf Summer School of the Advanced Study Institute was a forum for challenging, exciting, and intense learning, and for the discussion and development of cutting-edge ideas. It was a very productive and beneficial event, at both the professional and social levels. We thank all lecturers, the staff of the organization, and our hosts in Marktoberdorf. Special thanks go to Dr. Katharina Spies, Maximilian Irlbeck and Florian Schulz for their great support. The Marktoberdorf Summer School was an event of the Advanced Study Institute of the NATO Science for Peace and Security Programme. We thank all authorities that made it possible.

The Editors
Contents

Preface (v)
Probabilistic Model Checking, Christel Baier (1)
SMT Solvers: Foundations and Applications, Nikolaj Bjørner (24)
Formal Models for Analyzing Security Protocols: Some Lecture Notes, Véronique Cortier and Steve Kremer (33)
Parameterized Verification of Crowds of Anonymous Processes, Javier Esparza (59)
Synthesis of Reactive Systems, Bernd Finkbeiner (72)
Between Testing and Verification: Dynamic Software Model Checking, Patrice Godefroid (99)
Learning-Based Compositional Model Checking of Behavioral UML Systems, Orna Grumberg and Yael Meller (117)
Programming by Examples (and Its Applications in Data Wrangling), Sumit Gulwani (137)
Verification of Concurrent Software, Daniel Kroening (159)
A Tutorial on Mean-Payoff and Energy Games, Jean-François Raskin (179)
Equivalence – Combinatorics, Algebra, Proofs, Helmut Seidl, Sebastian Maneth, Gregor Kemper and Joost Engelfriet (202)
Analysis and Synthesis with "Big Code", Eran Yahav (244)
Subject Index (283)
Author Index (285)
Dependable Software Systems Engineering J. Esparza et al. (Eds.) IOS Press, 2016 © 2016 The authors and IOS Press. All rights reserved. doi:10.3233/978-1-61499-627-9-1
Probabilistic Model Checking

Christel BAIER
TU Dresden, Department for Computer Science
E-mail: [email protected]
Abstract. Probabilistic model checking is a fully automated method for the quantitative analysis of systems against temporal logic specifications. This article provides an overview of the main model-checking concepts for finite-state discrete-time Markov chains. Keywords. model checking, discrete-time Markov chains, probabilistic computation tree logic
1. Introduction

Model checking [CGP00] is a prominent fully automatic approach for the formal verification of systems, represented by an operational model formalizing the system states and the possible transitions between them. The desired system properties are typically specified as formulas of some temporal logic. The task of a model checker is to check whether the model meets the temporal conditions of the specifications. For branching-time temporal formulas (e.g., formulas of computation tree logic CTL [CES86]), the standard model-checking procedure treats the subformulas in a recursive manner and computes their satisfaction sets by analyzing the graph structure of the system model. In the case of linear temporal logic (LTL) specifications, the standard model-checking approach relies on a representation of the negation of the specification by a nondeterministic automaton over infinite words, briefly called an ω-automaton. The verification task can then be reduced to an emptiness test for the product of the automaton and the system model [WVS83,VW86]. These techniques, together with sophisticated methods for tackling the state-explosion problem, have been implemented in several tools and successfully applied to various types of systems. For further details we refer to text books [MP92,MP95,CGP00,BBF+01,Pel01,Sch04,BK08] and survey articles [Eme90,CK96,MOSS99] on temporal logics, model checking and verification methods. Several extensions of traditional model-checking techniques have been developed to treat models with annotations for timing constraints, costs or stochastic phenomena. This article addresses the latter.
The major task of probabilistic model checking is to analyze systems represented by (some kind of) Markovian model and to compute the probabilities for reachability events or other temporal conditions, or the expectations of random variables (e.g. formalizing the expected costs until reaching a goal). The most prominent models supported by probabilistic model checkers are Markov chains and Markov decision processes (MDPs). While Markov chains are fully probabilistic, MDPs can represent both probabilistic and nondeterministic behaviors. The verification of MDPs typically relies on a worst-case or best-case analysis where the task is to compute maximal or minimal probabilities and expectations, where the extrema are taken by ranging over all (possibly history-dependent) resolutions of the nondeterminism.

In the context of branching-time logics for specifying systems modeled by discrete-time Markov chains, probabilistic computation tree logic (PCTL) [HJ94] has been introduced as a CTL-variant with a probability operator specifying lower or upper probability bounds for CTL-like path formulas with next, unbounded and step-bounded until as basic temporal modalities. Thus, PCTL-formulas are Boolean combinations of state formulas of the form PJ(ϕ) where ϕ is a path formula and J is an interval with rational bounds in [0, 1]. An analogous branching-time logic for MDPs has been introduced by Bianco and de Alfaro [BdA95]. The intuitive meaning of the state formula, e.g., P≥p(ϕ) (where p is a rational number with 0 ≤ p ≤ 1) is that ϕ holds with probability at least p for all resolutions of the nondeterminism. The PCTL model-checking procedure follows the standard schema for CTL-like branching-time logics and recursively computes the satisfaction sets of state subformulas. The treatment of formulas with the probability operator mainly relies on matrix-vector operations and linear equation systems for Markov chains, and on linear programs for Markov decision processes. This yields a polynomial time bound for the PCTL model-checking problem for PCTL formulas without step-bounded path formulas.

1 The author is supported by the German Research Foundation within the collaborative research center CRC 912 HAEC, the excellence initiative through the cluster of excellence cfaed and the support-the-best program of the institutional strategy of the TU Dresden, as well as the research training groups QuantLA and RoSI.
The typical task of the quantitative analysis of a Markovian model M against an LTL formula ϕ is to compute the probability of the paths in M satisfying ϕ (for Markov chains), resp. the maximal or minimal probability for ϕ when ranging over all schedulers, i.e., all resolutions of the nondeterminism in M (for MDPs). The standard approach of defining the product by the synchronous parallel composition of the ω-automaton and the system model fails in the probabilistic setting when the system model has probabilistic choices while the automaton behaves nondeterministically. The simplest way out is to switch from nondeterministic to deterministic ω-automata. In this case one can rely on a straightforward definition of the product of a deterministic automaton and a Markov chain resp. MDP. This yields a probabilistic model that can be analyzed using efficient graph algorithms and linear equation systems (for Markov chains) or linear programs (for MDPs); see e.g. [dA97a,BK08]. This article provides some details of the model-checking techniques sketched above for finite-state discrete-time Markov chains. Section 2 presents some basic concepts for Markov chains. The logic PCTL and the treatment of ω-regular specifications will be presented in Sections 3 and 4. We conclude in Section 5 with a brief summary of the model-checking techniques for Markov decision processes and weighted Markovian models.
2. Markov chains

We give here a brief introduction to the foundations for the analysis of discrete-time Markov chains. Several concepts presented here are taken from [BK08]. For more details
we refer to standard text books on Markov chains (e.g. [Kul95,Hav98]) and articles on probabilistic model checking (see e.g. the literature mentioned in [BK08]).

2.1. Basic concepts

Discrete-time Markov chains, briefly called Markov chains, can be seen as transition systems where each state s is augmented with a probability distribution specifying the transition probabilities.

Definition 2.1.1. A Markov chain is a tuple M = (S, P, ι) where S is a countable, nonempty set of states, P : S × S → [0, 1] ∩ ℚ is the transition probability function such that for all states s:

  ∑_{s′∈S} P(s, s′) = 1,

and ι : S → [0, 1] ∩ ℚ is the initial distribution, satisfying ∑_{s∈S} ι(s) = 1.
The transition probability function P specifies for each state s the probability P(s, s′) of moving from s to s′ in one step, i.e., by a single transition. The constraint imposed on P ensures that P(s, ·) is a distribution for each state s. The requirement that the transition probabilities P(s, s′) and the initial probabilities ι(s) are rational is irrelevant from a mathematical point of view, but crucial for computational reasons. M is called finite if S is finite. For finite M, the size of M, denoted size(M), is the number of states plus the number of pairs (s, s′) ∈ S × S with P(s, s′) > 0. If ι is a Dirac distribution, i.e., ι(s) = 1 for some state s ∈ S, then s is called the initial state of M and we simply write (S, P, s) rather than (S, P, ι). For a state s and T ⊆ S, let P(s, T) denote the probability of moving from s to some state t ∈ T in a single step. That is,

  P(s, T) = ∑_{t∈T} P(s, t).
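As a minimal illustration (ours, not from the article), a finite Markov chain with rational transition probabilities can be represented directly from Definition 2.1.1; the class and method names (`MarkovChain`, `prob`) are our own, and the transition probabilities are those of the lossy-channel example discussed below:

```python
from fractions import Fraction

class MarkovChain:
    """Finite discrete-time Markov chain M = (S, P, iota)."""
    def __init__(self, states, P, iota):
        self.states = list(states)
        self.P = P          # dict: (s, t) -> Fraction, omitted pairs have prob. 0
        self.iota = iota    # dict: s -> Fraction
        # each row P(s, .) and the initial distribution must sum to 1
        for s in self.states:
            assert sum(P.get((s, t), Fraction(0)) for t in self.states) == 1
        assert sum(iota.get(s, Fraction(0)) for s in self.states) == 1

    def prob(self, s, T):
        """P(s, T): probability of moving from s into the set T in one step."""
        return sum(self.P.get((s, t), Fraction(0)) for t in T)

# Lossy-channel protocol of Figure 1
S = ["start", "try", "lost", "delivered"]
P = {("start", "try"): Fraction(1),
     ("try", "lost"): Fraction(1, 10),
     ("try", "delivered"): Fraction(9, 10),
     ("lost", "try"): Fraction(1),
     ("delivered", "start"): Fraction(1)}
M = MarkovChain(S, P, {"start": Fraction(1)})
print(M.prob("try", {"lost", "delivered"}))  # 1
```

Using exact `Fraction` arithmetic mirrors the requirement that all probabilities are rational.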
In the sequel, we often identify the transition probability function P : S × S → [0, 1] with the matrix (P(s, t))_{s,t∈S}. The row P(s, ·) for state s in this matrix contains the probabilities of moving from s to its successors, while the column P(·, s) for state s specifies the probabilities of entering state s from any other state. Similarly, the initial distribution ι is often viewed as a vector (ι(s))_{s∈S}. Paths in M are defined as infinite state sequences π = s0 s1 s2 … ∈ S^ω such that P(si, si+1) > 0 for all i ≥ 0. For a path π in M, inf(π) denotes the set of states that are visited infinitely often in π. For finite Markov chains, inf(π) is nonempty for all paths π. Let InfPaths(M) denote the set of paths in M, and FinPaths(M) the set of finite path fragments s0 s1 … sn where n ≥ 0 and P(si, si+1) > 0 for 0 ≤ i < n. InfPaths(s) denotes the set of all paths in M that start in state s. Similarly, FinPaths(s) denotes the set of all finite path fragments s0 s1 … sn such that s0 = s. Markov chains are depicted by their underlying digraph where edges are equipped with the transition probabilities in ]0, 1].

Example 2.1.2. The Markov chain depicted in Figure 1 models a simple communication protocol operating with a lossy channel. In the state start, a message is generated that
is sent off along the channel in its unique successor state try. The message is lost with probability 1/10, in which case the message will be sent off again, until it is eventually delivered. As soon as the message has been delivered correctly, the system returns to its initial state. Using the enumeration start, try, lost, delivered for the states, the transition
Figure 1. Markov chain for a simple communication protocol.
probability function P, viewed as a 4×4 matrix, and the initial distribution ι_init, viewed as a column vector, are

      | 0   1    0     0    |              | 1 |
  P = | 0   0   1/10  9/10  |     ι_init = | 0 |
      | 0   1    0     0    |              | 0 |
      | 1   0    0     0    |              | 0 |
An example of a path is π = (start try lost try lost try delivered)^ω. Along this path each message has to be retransmitted two times before delivery. It follows that inf(π) = S. For T = {lost, delivered}, we have P(try, T) = 1.

Example 2.1.3. Knuth and Yao [KY76] introduced a simple algorithm for simulating the behavior of a standard six-sided die by a fair coin. The algorithm can be modeled by the Markov chain depicted in Figure 2. The algorithm starts in the initial state s0, i.e., we have ι(s0) = 1 and ι(s) = 0 for all states s ≠ s0. It performs a sequence of coin-tossing experiments until reaching one of the possible outcomes of the die (states 1, 2, 3, 4, 5, and 6 at the bottom). If the outcome of the coin-tossing experiment at some inner node is heads, the left branch determines the next state. Likewise, if the outcome is tails, the right branch determines the next state.

Reasoning about quantitative properties requires a formalization of the probabilities for sets of paths. This formalization is based on standard concepts of measure and probability theory, see e.g. [Fel50]. We consider the σ-algebra over the infinite paths in M that is generated by the cylinder sets spanned by the finite path fragments:

  Cyl(s0 s1 … sn) = {π ∈ InfPaths(M) | s0 s1 … sn is a prefix of π}

Classical concepts of probability theory ensure the existence of a unique probability measure Pr^M (or, briefly, Pr) on this σ-algebra where the probabilities for the cylinder sets are given by
Figure 2. Markov chain for simulating a die by a fair coin.
  Pr^M(Cyl(s0 … sn)) = ι(s0) · ∏_{0≤i<n} P(si, si+1)

2.2. Reachability probabilities

For a set B ⊆ S of states, let xs = Pr(s |= ♦B) denote the probability of eventually reaching B from state s, so 0 ≤ xs ≤ 1. Let S0 denote the set of states s ∈ S with xs = 0 and S1 the set of states s with xs = 1. Then:

  s ∈ S0   iff   s ⊭ ∃♦B

Clearly, xs = 1 if s ∈ B. However, there might be more states satisfying xs = 1. Namely:

  s ∈ S1   iff   s ⊭ ∃(¬B) U S0

where U is the standard until operator. That is, s ∈ S1 if and only if there is no infinite path s0 s1 s2 … starting in s0 = s such that sn ∈ S0 for some n ∈ ℕ and sk ∈ S \ B for 0 ≤ k < n. Thus, S0 and S1 are efficiently computable using standard model-checking techniques for transition systems. Let S? = S \ (S0 ∪ S1). Then, s ∈ S? iff 0 < xs < 1, and in this case:

  xs = ∑_{t∈S?} P(s, t) · xt + P(s, S1).

This equation states that either S1 is reached within one step or by a path fragment s0 s1 … sn with s0 = s, sn ∈ S1 and s1, …, sn−1 ∈ S?. For the vector x = (xs)_{s∈S?}, we have
x = A·x + b, where the matrix A = (P(s, t))_{s,t∈S?} contains the transition probabilities for the states in S?, and the vector b = (bs)_{s∈S?} contains the probabilities of reaching S1 from s within one step, i.e., bs = P(s, S1). The above equation system can be rewritten as the (non-homogeneous) linear equation system (I − A)·x = b where I is the identity matrix of size |S?| × |S?|. The above technique yields the following two-phase algorithm to compute reachability probabilities in finite Markov chains: first, perform a graph analysis to compute the sets S0, S1 and S?; then generate the matrix A and the vector b, and solve the linear equation system (I − A)·x = b. Indeed, thanks to the fact that s |= ∃♦B for the states s ∈ S?, the matrix I − A is non-singular, and thus the linear equation system (I − A)·x = b has a unique solution.

Example 2.2.1. We revisit the Markov chain for simulating a die by a fair coin shown in Figure 2 and verify that the probability to reach the outcome 6 is 1/6. That is, the task is to show that Pr^M(s0 |= ♦B) = 1/6 where B is the singleton {6}. We first observe that S1 = {6}, S? = {s0, s4,5,6, s′4,5,6} and S0 consists of all other states. Thus, the equation system to be solved has the form

  x_{s0}      = 1/2 · x_{s4,5,6}
  x_{s4,5,6}  = 1/2 · x_{s′4,5,6}
  x_{s′4,5,6} = 1/2 · x_{s4,5,6} + 1/2

These equations can be rewritten as

  | 1    −1/2    0   |         | 0   |
  | 0     1    −1/2  | · x  =  | 0   |
  | 0   −1/2    1    |         | 1/2 |

which yields the (unique) solution x_{s0} = 1/6, x_{s4,5,6} = 1/3 and x_{s′4,5,6} = 2/3. Thus, we obtain the expected result Pr^M(s0 |= ♦6) = x_{s0} = 1/6.

Example 2.2.2. Consider again the simple communication protocol in Figure 1 and the event ♦B for B = {delivered}. As all states can reach the state delivered, we get S0 = S? = ∅ and S1 consists of all four states. Thus, Pr^M(s |= ♦delivered) = 1 for all states s. That is, the event of eventually reaching the state delivered is almost sure for any state. This example illustrates that graph algorithms are sufficient to verify that a reachability condition holds almost surely if S? is empty.

The probability to be in a B-state after exactly n steps can be obtained by multiplying the n-th power of the probability matrix P with the initial distribution vector from the left and the characteristic vector for B from the right. Thus:
  Pr^M(♦^{=n} B) = ∑_{s∈S} ι(s) · y_s^n

where the values y_s^i = Pr(s |= ♦^{=i} B) for s ∈ S and i = 1, …, n are obtained by the equations:

  y_s^0 = 1 if s ∈ B,   y_s^0 = 0 if s ∉ B,   y_s^i = ∑_{s′∈S} P(s, s′) · y_{s′}^{i−1}

Likewise, the probabilities x_s^n = Pr(s |= ♦^{≤n} B) for reaching B from state s within n or fewer steps are computable by the equations x_s^n = 1 if s ∈ B and n ∈ ℕ, and for all states s ∈ S \ B and n ≥ 1:

  x_s^0 = 0   and   x_s^n = ∑_{s′∈S} P(s, s′) · x_{s′}^{n−1}
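As a small illustration of ours (not part of the article), iterating the step-bounded recursion for growing n converges to the unbounded reachability probabilities of Example 2.2.1. The state names s0, s456 and s456p (for the primed copy) are our own labels, and transitions leading into S0 are simply omitted since S0-states contribute probability 0:

```python
# Transition probabilities of the die Markov chain restricted to the
# states relevant for B = {6} (Example 2.2.1): s0, s456, s456p.
P = {
    "s0":    {"s456": 0.5},          # remaining mass goes to S0
    "s456":  {"s456p": 0.5},         # remaining mass goes to S0
    "s456p": {"s456": 0.5, "6": 0.5},
    "6":     {"6": 1.0},             # absorbing B-state
}

def bounded_reach(P, B, n):
    """x_s^n = Pr(s |= eventually<=n B), via the recursion of Section 2.2."""
    x = {s: (1.0 if s in B else 0.0) for s in P}
    for _ in range(n):
        x = {s: 1.0 if s in B else
                sum(p * x[t] for t, p in P[s].items())
             for s in P}
    return x

x = bounded_reach(P, {"6"}, 200)   # large n approximates unbounded reachability
print(round(x["s0"], 6))  # 0.166667, i.e. 1/6
```

The same values are obtained exactly by solving the 3×3 linear system (I − A)·x = b of Example 2.2.1.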
2.3. Repeated reachability and other long-run properties

A central property of finite Markov chains is that almost all paths eventually enter a bottom strongly connected component (BSCC) C and visit each state of C infinitely often.² Formally, if M = (S, P, ι) is a finite Markov chain and C1, …, Ck are the BSCCs of M (viewed as a directed graph) then:

  1 = Pr^M(♦BSCC) = ∑_{i=1}^{k} Pr^M(♦Ci)

where BSCC = C1 ∪ … ∪ Ck. Moreover, for each i ∈ {1, …, k}, all states of Ci will be visited infinitely often almost surely, provided that Ci has been entered once:

  Pr^M( ♦Ci → ⋀_{s∈Ci} □♦s ) = 1

This observation yields the key to compute the probabilities for long-run properties that only depend on the states that are visited infinitely often. For example, to compute the probability for a repeated reachability event □♦B (asserting that a state in B will be visited infinitely often) we first apply standard algorithms to compute the BSCCs of M and then compute the probability to reach one of the BSCCs Ci that contain at least one B-state:

  Pr^M(□♦B) = Pr^M(♦D)   where   D = ⋃_{i∈IB} Ci

and where IB denotes the set of indices i ∈ {1, …, k} such that Ci ∩ B ≠ ∅.

² A strongly connected component C is called bottom if there is no edge s → s′ emanating from some state s ∈ C with s′ ∉ C. Thus, s |= ∀□C for all states s ∈ C.

Thus, repeated reachability probabilities are computable via a reduction to the task of computing
reachability probabilities. The same holds for all other events that only depend on the set of states that are visited infinitely often. This, for instance, applies to strong fairness conditions

  ⋀_{1≤j≤k} (□♦Kj → □♦Lj)

where we need to compute the probability to reach one of the BSCCs C such that for each j ∈ {1, …, k}, either C ∩ Kj = ∅ or C ∩ Lj ≠ ∅ (or both). Another example are complements of strong fairness conditions. These have the form

  ⋁_{1≤j≤k} (□♦Kj ∧ ♦□¬Lj)

The probabilities for such events agree with the probability to reach a BSCC C such that C ∩ Kj ≠ ∅ and C ∩ Lj = ∅ for some index j.

3. Probabilistic Computation Tree Logic

Probabilistic computation tree logic (PCTL, for short) is a branching-time temporal logic that was first introduced for Markov chains [HJ94] and later extended in various directions, e.g., for Markov reward models [AHK03] and Markov decision processes [BdA95,dA97a,BK98]. We consider here only the basic variant for Markov chains. PCTL can be seen as a variant of the logic CTL where the path quantifiers ∃ and ∀ are replaced with a probability operator to quantify the paths where a path property holds. That is, PCTL formulas are Boolean combinations of state formulas of the form PJ(ϕ) where ϕ is a path formula and J is an interval of [0, 1] specifying a lower bound and/or upper bound on the probability. The path formula ϕ imposes a condition on the set of paths. Intuitively, the formula PJ(ϕ) holds in state s if the probability for the set of paths satisfying ϕ and starting in s meets the lower and upper bounds given by J. PCTL path formulas ϕ are defined using the basic modalities ◯ (next) and U (until) and a step-bounded until operator. Other temporal modalities, such as eventually ♦ and always □, can be derived.

3.1. Syntax and semantics of PCTL

To formalize state properties, we deal with a fixed, finite set AP of atomic propositions. These can be seen as basic properties that states of a Markov chain might have.

Definition 3.1.1. The syntax of PCTL state and path formulas over AP is defined as follows. PCTL state formulas, briefly called PCTL formulas, are formed according to the following grammar:

  Φ ::= true | a | Φ1 ∧ Φ2 | ¬Φ | PJ(ϕ)

where a ∈ AP, ϕ is a path formula and J ⊆ [0, 1] is an interval with rational bounds. PCTL path formulas are formed according to the following grammar:
  ϕ ::= ◯Φ | Φ1 U Φ2 | Φ1 U≤n Φ2

where Φ, Φ1, and Φ2 are PCTL state formulas and n ∈ ℕ.

The propositional logic fragment built by atomic propositions a ∈ AP and the operators ∧ (conjunction) and ¬ (negation) has its standard meaning. For example, a state s satisfies an atomic proposition a if a ∈ L(s). Likewise, a state s satisfies a conjunctive formula Φ1 ∧ Φ2 iff it satisfies both conjuncts. Other Boolean connectives, such as disjunction ∨ or implication →, can be derived as usual, e.g., Φ1 ∨ Φ2 is obtained by ¬(¬Φ1 ∧ ¬Φ2). The temporal operators ◯ (next) and U (until) and its step-bounded variant U≤n are required to be immediately preceded by P. We often use shorthand notations P⋈p(ϕ) with the obvious meaning. Here, ϕ is a path formula, p is a rational threshold in [0, 1] and ⋈ ∈ {<, ≤, ≥, >} a comparison operator. For example, P≤0.5(ϕ) stands for P[0,0.5](ϕ). PCTL formulas with the trivial probability bounds 0 or 1 are said to specify qualitative properties. Typical examples are P=1(ϕ), which stands for P[1,1](ϕ) and asserts that the path property ϕ holds almost surely, and P>0(ϕ), which stands for P]0,1](ϕ) and asserts that ϕ holds with positive probability.

The path formulas ◯Φ and Φ1 U Φ2 have the same meaning as in CTL. While ◯Φ asserts that Φ holds in the next state, Φ1 U Φ2 asserts that a Φ2-state will be reached along a finite path fragment where each, but possibly the last, state satisfies Φ1. The path formula Φ1 U≤n Φ2 is the step-bounded variant of Φ1 U Φ2. It asserts that the event specified by Φ2 will hold within at most n steps, while Φ1 holds in all states that are visited before a Φ2-state has been reached. The eventually operator ♦ can be derived as usual by ♦Φ = true U Φ. Similarly, for step-bounded eventually we have:

  ♦≤n Φ = true U≤n Φ

A path satisfies ♦≤n Φ if it reaches a Φ-state along a prefix of length at most n.
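As an illustration of ours (not from the article), the probability of a step-bounded eventually formula can be evaluated with the x_s^n recursion of Section 2.2. The sketch below checks whether state try of the lossy-channel chain of Figure 1 satisfies P≥0.99(♦≤3 delivered); the function name `prob_bounded_eventually` is our own:

```python
# Transition probabilities of the communication protocol (Figure 1)
P = {
    "start":     {"try": 1.0},
    "try":       {"lost": 0.1, "delivered": 0.9},
    "lost":      {"try": 1.0},
    "delivered": {"start": 1.0},
}

def prob_bounded_eventually(P, B, s, n):
    """Pr(s |= eventually<=n B), via the x_s^n recursion (B-states stay at 1)."""
    x = {t: (1.0 if t in B else 0.0) for t in P}
    for _ in range(n):
        x = {t: 1.0 if t in B else
                sum(p * x[u] for u, p in P[t].items())
             for t in P}
    return x[s]

p = prob_bounded_eventually(P, {"delivered"}, "try", 3)
print(round(p, 6))  # 0.99
print(p >= 0.99)    # True: 'try' satisfies P>=0.99(eventually<=3 delivered)
```

The value 0.99 arises as 9/10 (delivered directly) plus 1/10 · 9/10 (lost once, then delivered), which is why the threshold 0.99 in the example formula below is met exactly.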
Although the always operator □ cannot be directly derived at the path formula level, it can be obtained using the duality of eventually and always (as in CTL and LTL) and the duality of lower and upper probability bounds. The latter means that an event E holds with probability at least p if and only if its complement Ē = InfPaths(M) \ E holds with probability at most 1−p. This yields:

P≥p(□Φ) = P≤1−p(♦¬Φ)

The always operator in combination with strict or upper probability bounds can be defined in an analogous way.

Example 3.1.2. We consider again the Markov chain modeling an algorithm for simulating a six-sided die by a fair coin as described in Example 2.1.3. The property that each of the six possible outcomes of the die occurs with probability 1/6 can be formalized by the PCTL formula

⋀_{1≤i≤6} P=1/6(♦ i)
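As a sanity check, the reachability probabilities behind this formula can be computed directly. The chain below follows the standard Knuth–Yao construction for simulating a die by a fair coin (the state names s0–s6 are our own); the probabilities Pr(s0 |= ♦ i) are obtained by solving the linear equation system for reachability, here with exact rational arithmetic.

```python
from fractions import Fraction

half = Fraction(1, 2)
# Knuth-Yao die: internal states s0..s6, absorbing outcome states "1".."6".
P = {
    "s0": {"s1": half, "s2": half},
    "s1": {"s3": half, "s4": half},
    "s2": {"s5": half, "s6": half},
    "s3": {"1": half, "2": half},
    "s4": {"3": half, "s1": half},   # tails loops back to s1
    "s5": {"4": half, "5": half},
    "s6": {"6": half, "s2": half},   # tails loops back to s2
}

def reach_prob(P, goal):
    """Pr(s |= eventually goal) for every state s, by solving the linear
    system x_s = [s = goal] + sum_t P(s, t) * x_t with exact rational
    Gauss-Jordan elimination (absorbing non-goal states get x_s = 0)."""
    states = sorted(set(P) | {t for succ in P.values() for t in succ})
    n = len(states)
    idx = {s: i for i, s in enumerate(states)}
    A = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    b = [Fraction(0)] * n
    for s in states:
        if s == goal:
            b[idx[s]] = Fraction(1)        # x_goal = 1
            continue
        for t, p in P.get(s, {}).items():  # absorbing states: empty row
            A[idx[s]][idx[t]] -= p
    for col in range(n):                   # Gauss-Jordan elimination
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        inv = 1 / A[col][col]
        A[col] = [a * inv for a in A[col]]
        b[col] *= inv
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [a - f * c for a, c in zip(A[r], A[col])]
                b[r] -= f * b[col]
    return {s: b[idx[s]] for s in states}

for outcome in "123456":
    assert reach_prob(P, outcome)["s0"] == Fraction(1, 6)
```

Working with Fraction avoids rounding issues: the exact solution 1/6 for each outcome confirms that the chain satisfies the PCTL formula of Example 3.1.2.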
The following PCTL formula might serve as a specification for a communication protocol using an unreliable channel that might lose messages:

P=1(♦delivered) ∧ P=1(□( try_to_send → P≥0.99(♦≤3 delivered) ))

The above PCTL formula asserts that almost surely some message will be delivered and that almost surely, for any attempt to send a message, the message will be delivered within three or fewer steps with probability at least 0.99.

We now turn to the formal semantics of PCTL formulas. PCTL state and path formulas are interpreted over the states and paths of a Markov chain M, respectively. More precisely, we deal here with state-labeled Markov chains that are augmented with a labeling function that declares which atomic propositions hold in each state.

Definition 3.1.3. A state-labeled Markov chain is a tuple M = (S, P, ι, AP, L) where (S, P, ι) is a Markov chain, AP a finite set of atomic propositions and L : S → 2^AP a labeling function that assigns to each state s the set L(s) of atomic propositions a ∈ AP that hold in state s.

Given an infinite path π = s0 s1 s2 ... in M, its trace is the infinite word trace(π) = L(s0) L(s1) L(s2) ... over the alphabet 2^AP. For the state formulas, the satisfaction relation |= is a relation between states in M and state formulas. Likewise, the satisfaction relation |= for path formulas is a relation between infinite paths in M and path formulas.

Definition 3.1.4. Let M = (S, P, ι, AP, L) be a state-labeled Markov chain as above. The satisfaction relation |= is defined for state formulas by:

s |= a       iff a ∈ L(s)
s |= ¬Φ      iff s ⊭ Φ
s |= Φ ∧ Ψ   iff s |= Φ and s |= Ψ
s |= PJ(ϕ)   iff Pr(s |= ϕ) ∈ J

Here, Pr(s |= ϕ) = Prs{π ∈ InfPaths(s) | π |= ϕ}. Let SatM(Φ), or briefly Sat(Φ), denote the set of all states s that satisfy Φ, i.e.:

Sat(Φ) = { s ∈ S | s |= Φ }

Given a path π = s0 s1 s2 ... in M, the satisfaction relation for PCTL path formulas is defined as follows:

π |= ◯Φ        iff s1 |= Φ
π |= Φ U Ψ     iff there exists j ∈ N s.t. sj |= Ψ and ∀ 0 ≤ k < j. sk |= Φ
π |= Φ U≤n Ψ   iff there exists j ≤ n s.t. sj |= Ψ and ∀ 0 ≤ k < j. sk |= Φ

Thus, π |= Φ U Ψ iff π |= Φ U≤n Ψ for some n ∈ N. It can easily be shown that {π ∈ InfPaths(s) | π |= ϕ} is indeed measurable for all PCTL path formulas ϕ. This yields that the satisfaction relation for path formulas is indeed well-defined.
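The step-bounded until semantics only inspects a prefix of length n+1 of the path. A small helper makes this concrete; it is illustrative only, with the (already decided) satisfaction of Φ and Ψ supplied as predicates on label sets.

```python
def holds_until(path_labels, sat_left, sat_right, bound=None):
    """Check pi |= Phi U<=n Psi on a path given as a sequence of label
    sets; sat_left / sat_right decide the satisfaction of Phi and Psi per
    state. With bound=None the whole given prefix is inspected."""
    n = len(path_labels) - 1 if bound is None else bound
    for j, labels in enumerate(path_labels):
        if j > n:
            break
        if sat_right(labels):     # s_j |= Psi: the until is fulfilled at j
            return True
        if not sat_left(labels):  # Phi fails before a Psi-state is reached
            return False
    return False

# try_to_send U<=3 delivered on a path delivering the message at step 2:
path = [{"try_to_send"}, {"try_to_send"}, {"delivered"}, set()]
assert holds_until(path, lambda L: "try_to_send" in L,
                   lambda L: "delivered" in L, bound=3)
```

Note that, matching the semantics above, Φ is not required to hold in the state sj where Ψ first holds; the helper therefore tests sat_right before sat_left.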
3.2. PCTL model checking

The PCTL model-checking problem is the following: given a finite state-labeled Markov chain M and a PCTL state formula Φ, compute the satisfaction set Sat(Φ). As for CTL model checking, the basic procedure is to compute Sat(Φ) recursively using a bottom-up traversal of the parse tree of Φ. The nodes of the parse tree represent the subformulas of Φ. For each node of the parse tree, i.e., for each subformula Ψ of Φ, the set Sat(Ψ) is determined. For the propositional logic fragment of PCTL this is performed in exactly the same way as for CTL. The most interesting part is the treatment of subformulas of the form Ψ = PJ(ϕ). In order to determine whether s ∈ Sat(Ψ), the probability Pr(s |= ϕ) for the event specified by ϕ needs to be established. Then:

Sat(PJ(ϕ)) = { s ∈ S | Pr(s |= ϕ) ∈ J }

For the next operator, we can rely on the equation:

Pr(s |= ◯Ψ) = Σ_{s′ ∈ Sat(Ψ)} P(s, s′)
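A minimal sketch of these computations with a dictionary-based transition matrix (state names illustrative): prob_next multiplies P with the characteristic vector of Sat(Ψ), and prob_bounded_until iterates such matrix-vector products to handle the step-bounded until operator.

```python
def prob_next(P, sat_psi):
    """Pr(s |= O Psi): one matrix-vector product of P with the
    characteristic vector of Sat(Psi)."""
    return {s: sum(p for t, p in succ.items() if t in sat_psi)
            for s, succ in P.items()}

def prob_bounded_until(P, sat_phi, sat_psi, n):
    """Pr(s |= Phi U<=n Psi) by n matrix-vector multiplications:
    start with the characteristic vector of Sat(Psi) and iterate
    x(s) := 1 on Sat(Psi), (P x)(s) on Sat(Phi) \\ Sat(Psi), 0 elsewhere."""
    x = {s: 1.0 if s in sat_psi else 0.0 for s in P}
    for _ in range(n):
        x = {s: 1.0 if s in sat_psi
             else sum(p * x[t] for t, p in P[s].items()) if s in sat_phi
             else 0.0
             for s in P}
    return x

# A 3-state chain where s2 is the goal: from s0, move to s1 or stay.
P = {"s0": {"s0": 0.5, "s1": 0.5},
     "s1": {"s2": 1.0},
     "s2": {"s2": 1.0}}
assert prob_next(P, {"s2"})["s1"] == 1.0
assert prob_bounded_until(P, {"s0", "s1"}, {"s2"}, n=2)["s0"] == 0.5
```

Each iteration costs one pass over the transition matrix, so the overall effort for U≤n is n matrix-vector products, matching the complexity discussion below.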
In matrix-vector notation, the vector (Pr(s |= ◯Ψ))s∈S is thus obtained by multiplying the matrix P with the characteristic vector of Sat(Ψ), i.e., the bit vector (bs)s∈S where bs = 1 if and only if s ∈ Sat(Ψ). The probability Pr(s |= ϕ) for until formulas ϕ = Φ U Ψ relies on a mild variant of the technique explained in Section 2.2, using a graph-based pre-analysis to determine the sets S0 and S1 of states s where Pr(s |= ϕ) is 0 or 1, respectively, and a linear equation system for all other states. For the bounded until operator U≤n, the vector (Pr(s |= ϕ))s∈S can be obtained by O(n) matrix-vector multiplications. In both cases, the dimension of the involved matrix is bounded by N × N where N = |S| is the number of states of M. Thus, for a finite Markov chain M and a PCTL formula Φ, the PCTL model-checking problem M |= Φ can be solved in time O(poly(size(M)) · nmax · |Φ|), where nmax is the maximal step bound that appears in a subformula Ψ1 U≤n Ψ2 of Φ and nmax = 1 if Φ does not contain a step-bounded until operator. Hence, the model-checking problem for PCTL without step-bounded path formulas is solvable in polynomial time.

4. Computing probabilities for ω-regular properties

We now address the problem of computing the probability for the infinite paths in a state-labeled Markov chain M that satisfy a given ω-regular property E. That is, E is a language of infinite words over 2^AP and the task is to compute PrM(E) = PrM{π ∈ InfPaths(M) | trace(π) ∈ E}.
We suppose that the property E is given by a deterministic Rabin automaton (DRA). If the property E is specified by an LTL formula ϕ, known algorithms (e.g. [Saf88]) can be applied to generate a DRA from ϕ.

A deterministic Rabin automaton (DRA) is a tuple A = (Q, Σ, δ, q0, Acc) where Q is a finite set of states, Σ an alphabet, δ : Q × Σ → Q the transition function, q0 ∈ Q the starting state, and Acc ⊆ 2^Q × 2^Q. Thus, the acceptance condition Acc of a Rabin automaton is given by a set of pairs of state sets:

Acc = {(Li, Ki) | 0 < i ≤ k} with Li, Ki ⊆ Q.

The meaning of the above Rabin condition Acc can be formalized by the following LTL formula:

accept(A) = ⋀_{1≤i≤k} (♦□¬Li ∧ □♦Ki)

The run for σ = A0 A1 A2 ... ∈ Σ^ω is the unique infinite sequence q0 q1 q2 ... of states in A where the first state is the initial state q0 of A and q_{i+1} = δ(qi, Ai) for all i ≥ 0. The run q0 q1 q2 ... is accepting if it satisfies the above LTL formula accept(A), i.e., if there exists at least one Rabin pair (Li, Ki) ∈ Acc such that qn ∉ Li for almost all indices n ∈ N (i.e., there is some n0 ∈ N with qn ∉ Li for all n ≥ n0) and qn ∈ Ki for infinitely many indices n ∈ N. The accepted language of A consists of all infinite words σ over the alphabet Σ whose unique run in A meets the acceptance condition. That is:

Lω(A) = { σ ∈ Σ^ω | the run for σ in A is accepting }
In the context of DRA as a specification formalism for state-labeled Markov chains, the alphabet Σ equals 2^AP. For example, consider the DRA in Figure 3 over the alphabet 2^AP = {{a}, ∅} with the acceptance condition Acc = {({q0}, {q1})}. Here, we use symbolic notations: a stands for {a} and ¬a for ∅.

Figure 3. A deterministic Rabin automaton for ♦□a.

The accepted language of the DRA shown in Figure 3 is the set of infinite words whose runs end with a suffix that never visits q0, and thus stays forever in state q1. These are exactly the words A0 A1 ... over 2^AP satisfying ♦□a.

Let M = (S, P, ι, AP, L) be a finite Markov chain and let A = (Q, 2^AP, δ, q0, Acc) be a DRA. The probability of M to generate traces in Lω(A) can be computed on the basis of a product construction. The product M ⊗ A is the Markov chain:

M ⊗ A = (S × Q, P′, ι′)
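To make the Figure 3 automaton concrete, the sketch below encodes it in Python (letters represented as booleans "does a hold?"; the encoding is our own) and decides acceptance for ultimately periodic words u v^ω. This suffices for illustration because the run of a deterministic automaton on such a word is itself eventually periodic.

```python
# DRA for "eventually always a": reading a moves to q1, reading ¬a to q0.
delta = {
    ("q0", True): "q1", ("q0", False): "q0",
    ("q1", True): "q1", ("q1", False): "q0",
}
acc = [({"q0"}, {"q1"})]   # one Rabin pair (L, K): avoid q0, hit q1 forever

def accepts(u, v):
    """Decide acceptance of the ultimately periodic word u v^omega, where
    u and v are lists of booleans. After reading u, repeatedly reading the
    block v must eventually revisit a state; the states traversed on that
    cycle are exactly those visited infinitely often by the run."""
    q = "q0"
    for a in u:
        q = delta[(q, a)]
    seen, order = {}, []
    while q not in seen:               # find the recurring state after
        seen[q] = len(order)           # whole repetitions of v
        order.append(q)
        for a in v:
            q = delta[(q, a)]
    inf = set()                        # states visited infinitely often
    for start in order[seen[q]:]:
        p = start
        inf.add(p)
        for a in v:
            p = delta[(p, a)]
            inf.add(p)
    return any(not (inf & L) and (inf & K) for L, K in acc)

assert accepts([False, True], [True])   # (¬a) a a a ...  satisfies the property
assert not accepts([], [True, False])   # a ¬a a ¬a ...   violates it
```

The final line implements the Rabin condition literally: some pair (L, K) must be avoided forever (no infinitely recurring L-state) while some K-state recurs infinitely often.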
where the transition probabilities in M ⊗ A are given by:

P′(⟨s, q⟩, ⟨s′, q′⟩) = P(s, s′) if q′ = δ(q, L(s′)), and 0 otherwise.

The initial distribution ι′ is defined by ι′(⟨s, qs⟩) = ι(s) where qs = δ(q0, L(s)), and ι′(⟨s, q⟩) = 0 in all other cases. Since A is deterministic, M ⊗ A can be viewed as the unfolding of M where the automaton component q of the states ⟨s, q⟩ in M ⊗ A records the current state in A for the path fragment taken so far. More precisely, for each (finite or infinite) path fragment π = s0 s1 s2 ... in M there exists a unique run q0 q1 q2 ... in A for trace(π) = L(s0) L(s1) L(s2) ... and

π+ = ⟨s0, q1⟩ ⟨s1, q2⟩ ⟨s2, q3⟩ ...

is a path fragment in M ⊗ A. Vice versa, every path fragment in M ⊗ A which starts in state ⟨s, qs⟩ arises from the combination of a path fragment in M and a corresponding run in A. Note that the DRA A does not affect the probabilities. That is, for each measurable set Π of paths in M and state s,

Pr^M_s(Π) = Pr^{M⊗A}_{⟨s,qs⟩}(Π+)

where Π+ = {π+ | π ∈ Π} and where the superscripts M and M ⊗ A indicate the underlying Markov chain. (Recall that qs = δ(q0, L(s)).) In particular, if Π is the set of paths that start in s and where trace(π) is accepted by A, then Π+ is the set of paths in M ⊗ A that start in ⟨s, qs⟩ and satisfy A's acceptance condition. Hence:

PrM(A) = Pr_{M⊗A}(accept(A))

where PrM(A) is a shorthand notation for PrM(Lω(A)). Thus, the task to determine the probability for an ω-regular property specified by a DRA is reducible to computing the probability for a Rabin condition in the product Markov chain. For the latter, we can rely on the techniques presented in Section 2.3 and compute the probability to reach a BSCC C of M ⊗ A satisfying A's acceptance condition. Formally:

PrM(A) = Pr_{M⊗A}(♦ accBSCC)

where accBSCC denotes the union of all BSCCs C of M ⊗ A such that C ∩ (S × Li) = ∅ and C ∩ (S × Ki) ≠ ∅ for some Rabin pair (Li, Ki) in A's acceptance condition. The overall time complexity of this procedure is polynomial in the sizes of M and A.

When starting with an LTL formula ϕ, the size of A can be double exponential in the length of ϕ. Indeed, there are examples illustrating that the double-exponential blow-up for the generation of DRA from given LTL formulas can be unavoidable. From a complexity-theoretic point of view, the double-exponential time complexity for computing the probability of an LTL formula in a Markov chain is not optimal. The almost-sure model-checking problem for LTL specifications interpreted over Markov chains is known to be PSPACE-complete [Var85], and several (single) exponential-time approaches have been suggested for computing the probabilities for an LTL formula in Markov chains.

Courcoubetis and Yannakakis [CY95] present an automata-less approach that recursively eliminates the temporal modalities from a given LTL formula and replaces them with fresh atomic propositions, while modifying the Markov chain. The rough idea for eliminating a temporal modality representing a subformula ϕ is to duplicate each state s in the Markov chain in order to distinguish the cases where ϕ resp. ¬ϕ holds from state s. To treat arbitrary ω-regular specifications, this approach has been adapted in [CY95] for extended LTL (ETL) formulas, which are Boolean combinations of formulas given by NBA where the transition labels are atomic propositions or ETL formulas. To replace NBA formulas with fresh atomic propositions, [CY95] uses deterministic transition tables for the NBA resulting from an ordinary powerset construction.

The second approach, by Couvreur, Saheb and Sutre [CSS03], uses separated Büchi automata, i.e., nondeterministic Büchi automata where the languages of the states are pairwise disjoint. Thus, separated automata are strongly unambiguous in the sense that each word has at most one accepting run when all states are declared initial. The approach of [CSS03] relies on the observation that the classical closure algorithm for the generation of NBA from LTL formulas [WVS83,VW86] guarantees the separation property.

The third approach, presented by Bustan, Rubin and Vardi [BRV04], starts with a Markov chain M and an ω-regular property represented by a weak alternating automaton A. The automaton A is converted first into a so-called full automaton that captures the languages of subsets of states in A and a reverse-deterministic transition system TA where the states are subsets of the state space of A. The extended version of [BRV04] shows that the product of M and TA yields a Markov chain that refines M and can be used to encode the probability for M to generate a path accepted by A via a linear equation system.
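Returning to the DRA-based procedure of this section, the product construction and the search for accepting BSCCs can be sketched as follows (naive quadratic reachability checks instead of a linear-time SCC algorithm; all names are our own). The remaining step, the reachability probability Pr(♦ accBSCC), is then computed as in Section 2.2.

```python
def build_product(P, label, delta, q0):
    """Product M (x) A restricted to states reachable from the initial
    states <s, delta(q0, L(s))>; the step <s,q> -> <s', delta(q, L(s'))>
    carries probability P(s, s')."""
    frontier = [(s, delta[(q0, label[s])]) for s in P]
    prod = {}
    while frontier:
        s, q = frontier.pop()
        if (s, q) in prod:
            continue
        succ = {}
        for t, p in P[s].items():
            tgt = (t, delta[(q, label[t])])
            succ[tgt] = succ.get(tgt, 0.0) + p
            frontier.append(tgt)
        prod[(s, q)] = succ
    return prod

def reachable(graph, u):
    seen, stack = set(), [u]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(graph[v])
    return seen

def accepting_bsccs(prod, acc):
    """BSCCs C with C /\\ (S x L_i) empty and C /\\ (S x K_i) nonempty for
    some Rabin pair (L_i, K_i). A state u lies in a BSCC iff every state
    reachable from u can reach u back; the BSCC is then reachable(u)."""
    graph = {x: set(prod[x]) for x in prod}
    found, result = set(), []
    for u in graph:
        if u in found:
            continue
        r = reachable(graph, u)
        if all(u in reachable(graph, v) for v in r):
            found |= r
            qs = {q for (_, q) in r}        # automaton components of C
            if any(not (qs & L) and (qs & K) for L, K in acc):
                result.append(r)
    return result

# Reusing the Figure 3 DRA (letters as booleans) on a chain that almost
# surely ends up in its a-labeled state s1:
delta = {("q0", True): "q1", ("q0", False): "q0",
         ("q1", True): "q1", ("q1", False): "q0"}
acc = [({"q0"}, {"q1"})]
P = {"s0": {"s0": 0.5, "s1": 0.5}, "s1": {"s1": 1.0}}
label = {"s0": False, "s1": True}

good = accepting_bsccs(build_product(P, label, delta, "q0"), acc)
assert good == [{("s1", "q1")}]
```

Here the single accepting BSCC {⟨s1, q1⟩} is reached from ⟨s0, q0⟩ with probability 1, so the chain satisfies ♦□a almost surely.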
5. Conclusion

The previous sections gave a brief introduction to the field of model checking for finite-state discrete-time Markov chains. These techniques have been extended in various directions. Thanks to powerful tools with integrated sophisticated techniques to tackle the state-explosion problem, the applications are manifold, including randomized coordination algorithms for parallel systems, communication and multimedia protocols, security protocols and biological systems. We conclude this section with a brief summary of the probabilistic model-checking techniques that have been established for discrete-time probabilistic models with nondeterminism, weight functions and parametric transition probabilities, as well as the available tool support.

Probabilistic models with nondeterminism. While Markov chains are purely probabilistic, Markov decision processes (MDPs) combine the main features of transition systems and Markov chains, resulting in a model with nondeterministic and probabilistic branchings. This combination yields a powerful model that is widely used for representing, e.g., distributed systems with randomized behaviors. The foundations of the quantitative analysis of MDPs against finite-horizon (e.g., step-bounded reachability) and infinite-horizon (e.g., unbounded reachability or limit-average) properties using linear or dynamic programming techniques were settled long ago and can be found in textbooks, see e.g. [How90,Put94,FV96]. The focus of the pioneering work on probabilistic model checking was on qualitative properties in MDP-like models, where the task is to prove that a path property holds almost surely (i.e., with probability 1) under all schedulers. Hart, Sharir and Pnueli [HSP83] addressed the almost-sure termination problem. Tableau-based techniques have been presented by Pnueli and Zuck [PZ86a,PZ86b] for the qualitative analysis of MDP-like models and LTL formulas. Automata-based approaches and complexity-theoretic problems for LTL specifications and both Markov chains and MDPs have been addressed by Vardi and Wolper [Var85,VW86] as well as Courcoubetis and Yannakakis [CY95].

The problem to compute the maximal or minimal probabilities for a given LTL formula and MDP has been addressed in the context of probabilistic model-checking algorithms for the star-variant PCTL* of probabilistic computation tree logic by de Alfaro [dA97a] and Baier and Kwiatkowska [BK98] (see also [BK08,BGC09a]). The automata-based approach for computing the maximal probability Pr^max_M(ϕ) for an ω-regular property ϕ in an MDP M relies on the following steps. The first step is to generate a deterministic ω-automaton A for the property ϕ. Various algorithms have been proposed for this purpose, with Safra's determinization procedure [Saf88] being the most prominent one. However, this step is known to be expensive (double exponential in the worst case) and many authors have suggested improvements and alternative constructions, see e.g. [Pit06,Sch09,MS10,BBKS13,FKVW13,EK14]. Second, one builds a product MDP M ⊗ A and applies graph algorithms to analyze its end components (i.e., strongly connected sub-MDPs) and to derive a set of "good states" in the product such that the maximal probability for M to satisfy ϕ equals the maximal probability for reaching such good states in M ⊗ A. The latter is computable in time polynomial in the sizes of M and A using linear programming techniques.

If ϕ stands for an undesired property, then the computation of Pr^max_M(ϕ) can be seen as a worst-case analysis. Likewise, the task to compute Pr^min_M(ϕ) can be understood as a best-case analysis and relates to the synthesis problem where the task is to find a scheduler minimizing the probability for the bad behaviors. Obviously, Pr^min_M(ϕ) = 1 − Pr^max_M(¬ϕ). However, switching from ϕ to ¬ϕ can be expensive if ϕ is given as an automaton, while it is "for free" on the logical level.

The computation of maximal end components (MECs) in an MDP can be carried out in quadratic time by an iterative computation of the strongly connected components in sub-MDPs [CY95,dA97a]. More advanced algorithms for the computation of MEC decompositions with lower time complexity have been developed by Chatterjee et al. [CHJS13,CH14]. The realization of the task to extract the good states by analyzing the end components depends on the acceptance condition of A. If A is a DBA, then the good states can be defined as all states in the product MDP M ⊗ A that belong to some maximal end component containing at least one state ⟨s, q⟩ where q is a final state of A. For more complex acceptance conditions, the good states are those states in MECs containing at least one (possibly non-maximal) end component E satisfying the acceptance condition of A, i.e., all infinite paths in M ⊗ A whose limit consists of the states and actions of E meet the acceptance condition of A. The good states for a Rabin condition are then computable in time polynomial in the sizes of M and A by applying MEC decomposition algorithms iteratively to sub-MDPs of M ⊗ A [dA97a]. This approach has been adapted for Streett and fairness conditions [BGC09b] and for generalized Rabin conditions [CGK13].
The exponential blow-up for the translation of LTL formulas into nondeterministic Büchi automata, together with the exponential blow-up for the determinization of NBA, yields a double-exponential upper bound for the analysis of finite-state Markovian models against LTL specifications. Given the 2EXPTIME-completeness result for checking whether an LTL formula holds almost surely in an MDP [CY95], this approach is optimal from a complexity-theoretic point of view for MDPs.

Weighted Markovian models. Traditional approaches for MDPs with one or more weight functions for the states or transitions, briefly called weighted MDPs, focus on computing the maximal or minimal expected accumulated weight within a fixed number of steps ("finite horizon"), the optimal expected total or discounted weight ("infinite horizon"), or optimal limit-average weights (mean payoff). These approaches mainly use linear and dynamic programming techniques, see e.g. [How90,Put94,FV96]. De Alfaro proposed several extensions of PCTL by introducing long-run average operators [dA97a,dA98] for single-weighted MDPs and expectation operators for reasoning about minimal or maximal costs [dA97b,dA99] in MDPs with a single non-negative or non-positive weight function. Andova et al. [AHK03] introduced an extension of PCTL with reward-bounded until operators, called PRCTL, and presented corresponding model-checking algorithms for discrete Markov chains with one or more reward functions. An analogous branching-time logic, called continuous stochastic reward logic (CSRL), for continuous-time Markov chains with impulse rewards for transitions and reward rates for the states has been introduced in [BCH+10]. CSRL also contains a steady-state operator, which was originally introduced for the continuous counterpart of PCTL (called CSL) [BHHK03] and can be used to specify long-run frequencies and expected long-run rewards in discrete- or continuous-time Markov chains.
Mixtures of PRCTL and CSRL, as well as CTL*-like combinations of PRCTL/CSRL with LTL-like path formulas and other extensions of these weighted branching-time logics for Markov chains or MDPs, have been studied by several authors; we mention here just a few. Laroussinie and Sproston [LS05] consider a PCTL-like logic with expectation operators for a timed variant of MDPs where the duration of transitions is specified by integer intervals. The recent paper [BKKW14] studies decidability and complexity-theoretic aspects of LTL extended by accumulation operators for Markov chains and MDPs with multiple integer-weight functions. Algorithms for computing quantiles and percentile queries in Markov chains and MDPs with non-negative weights have been presented in [UB13,BDD+14,RRS15]. Complexity-theoretic aspects of quantiles and related cost problems in Markovian models have been discussed by Haase and Kiefer [HK15]. A probabilistic variant of timed automata with a continuous semantics and timed PCTL has been presented in [KNSS02] and later extended by operators for expected rewards [JKNT09,NPS13]. The limitations of model-checking procedures for probabilistic timed automata with rewards become clear from the undecidability result established in [BCJ09] for reasoning about reward-bounded reachability probabilities. Operators for expected rewards until reaching a goal, and long-run objectives, for an extension of MDPs by transitions with stochastic delays, called Markov (reward) automata, have been studied by Guck et al. [GTH+14,GHH+14].

Parametric probabilistic model checking. Various forms of parametric Markovian models have been addressed in the literature. Some approaches use Markovian models with intervals (or, more generally, convex constraints) for the transition probabilities, viewed as a family of models where the typical verification task is to establish probability bounds for all family members, e.g., [SVA06,CSH08,WTM12,BLW13,PLSS13]. The focus here is on parametric Markov chains where the transition probabilities are specified by expressions over parameters. Algorithms for computing closed-form solutions in terms of rational functions for the probabilities of reachability conditions, based on a variant of the well-known state-elimination procedure for finite automata, have been first considered by Daws [Daw04]. Refinements of this approach and an extension for expected accumulated rewards have been developed by Hahn et al. [HHZ11b]. These techniques have been adapted for MDPs with parametric transition probabilities [HHZ11a].

Tool support. Most prominent is the tool PRISM [KNP04,KNP11] with engines for LTL and PCTL interpreted over discrete- and continuous-time Markov chains and MDPs with reward functions, and several sophisticated techniques to tackle the state-explosion problem, such as purely and semi-symbolic methods with multi-terminal binary decision diagrams, symmetry reduction and game-based abstraction. PRISM supports the computation of (maximal and minimal) probabilities for step-bounded reachability properties, several types of expectation operators (accumulated reward until reaching a goal under the premise that the goal will be reached almost surely, accumulated rewards within a fixed number of steps, and instantaneous rewards) for both Markov chains and MDPs, as well as a steady-state operator for Markov chains.
Refinements of PRISM's symbolic engines and computation schemes for (maximal and minimal) probabilities for reward-bounded reachability properties and corresponding quantiles, as well as algorithms for the computation of conditional probabilities for ω-regular path properties in discrete Markov chains and MDPs, have been implemented recently [BDD+14,BKKM14,KBC+16]. The tool MRMC [KZH+11] supports model checking for discrete- and continuous-time Markov chains with rewards and the branching-time logics PRCTL and CSRL. Model checking for the latter is also supported by the tool MARCIE [HRS13] for generalized stochastic Petri nets. Implementations of the model-checking algorithms for parametric Markovian models are provided by the tools PARAM [HHZ11b,HHZ11a] and Prophecy [DJJ+15], which, among others, provide engines for calculating the expected accumulated reward until reaching a goal in parametric models.

Besides the implementation of the DRA-based approach in the probabilistic model checker PRISM [KNP11,PRI] with non-symbolic, symbolic and hybrid engines, there have been other implementations of the automata-based analysis of MDPs. The DRA-based approach for the analysis of Markovian models has also been implemented in the probabilistic model checker iscasMC [HLS+14]. To increase the performance of Safra's determinization procedure, it generates over- and under-approximations of the language of the given NBA using ordinary subset constructions. These can be used to classify the bottom strongly connected components of a Markov chain into accepting, rejecting and inconclusive ones. Only for the inconclusive ones are breakpoint constructions required. This approach has been adapted for MDPs as described in [HLS+15]. The tool ProbDiVinE [BBC+08b,BBC+08a] is a parallel model checker for MDPs and LTL specifications. Algorithms for the analysis of MDPs with strong or weak fairness constraints have been presented in [BGC09b].
These techniques yield the foundations of a MEC-decomposition technique required for computing the maximal probabilities for properties specified by a deterministic Rabin or Streett automaton in MDPs with fairness constraints. PRISM, in some of its engines, supports a limited notion of fairness for MDPs as well, requiring that a nondeterministic choice may not be ignored forever [BK98]. PRISM-games [CFK+13] is an extension of PRISM for stochastic games and a probabilistic alternating-time logic with expected reward operators. Furthermore, there are special-purpose tools for controller synthesis in game structures with mean-payoff objectives, such as QUASY [CHJS11] and MultiGain [BCFK15], and interactive theorem-proving approaches for the analysis of probabilistic systems against PCTL specifications, see e.g. [HN12a,HN12b].
References [AHK03]
Suzana Andova, Holger Hermanns, and Joost-Pieter Katoen. Discrete-time rewards modelchecked. In First International Workshop on Formal Modeling and Analysis of Timed Systems Workshop (FORMATS), volume 2791 of Lecture Notes in Computer Science, pages 88–104, 2003. [BBC+ 08a] Jiri Barnat, Lubos Brim, Ivana Cern´a, Milan Ceska, and Jana Tumova. Local quantitative LTL model checking. In 13th International Wokshop on Formal Methods for Industrial Critical Systems (FMICS), volume 5596 of Lecture Notes in Computer Science, pages 53–68, 2008. [BBC+ 08b] Jiri Barnat, Lubos Brim, Ivana Cern´a, Milan Ceska, and Jana Tumova. ProbDiVinE-MC: multicore LTL model checker for probabilistic systems. In 5th International Conference on the Quantitative Evaluation of Systems (QEST), pages 77–78. IEEE Computer Society, 2008. Beatrice B´erard, Michel Bidoit, Alain Finkel, Francois Laroussinie, Antoine Petit, Laure [BBF+ 01] Petrucci, Philippe Schnoebelen, and Pierre McKenzie. Systems and Software Verification: Model-Checking Techniques and Tools. Springer-Verlag, 2001. [BBKS13] Tom´as Babiak, Frantisek Blahoudek, Mojm´ır Kret´ınsk´y, and Jan Strejcek. Effective translation of LTL to deterministic Rabin automata: Beyond the (F,G)-fragment. In 11th International Symposium on Automated Technology for Verification and Analysis (ATVA), volume 8172 of Lecture Notes in Computer Science, pages 24–39, 2013. [BCFK15] Tom´as Br´azdil, Krishnendu Chatterjee, Vojtˇech Forejt, and Anton´ın Kucera. Multigain: A controller synthesis tool for MDPs with multiple mean-payoff objectives. In 21st International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS), volume 9035 of Lecture Notes in Computer Science, pages 181–187, 2015. [BCH+ 10] Christel Baier, Lucia Cloth, Boudewijn R. Haverkort, Holger Hermanns, and Joost-Pieter Katoen. Performability assessment by model checking of Markov reward models. Formal Methods in System Design, 36(1):1–36, 2010. 
[BCJ09] Jasper Berendsen, Taolue Chen, and David N. Jansen. Undecidability of cost-bounded reachability in priced probabilistic timed automata. In 6th Annual Conference on Theory and Applications of Models of Computation (TAMC), volume 5532 of Lecture Notes in Computer Science, pages 128–137, 2009. [BdA95] Andrea Bianco and Luca de Alfaro. Model checking of probabilistic and non-deterministic systems. In International Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS), volume 1026 of Lecture Notes in Computer Science, pages 499– 513, 1995. [BDD+ 14] Christel Baier, Marcus Daum, Clemens Dubslaff, Joachim Klein, and Sascha Kl¨uppelholz. Energy-utility quantiles. In 6th NASA Formal Methods Symposium (NFM), volume 8430 of Lecture Notes in Computer Science, pages 285–299, 2014. [BGC09a] Christel Baier, Marcus Gr¨oßer, and Frank Ciesinski. Model checking linear-time properties of probabilistic systems. In Manfred Droste, Werner Kuich, and Heiko Vogler, editors, Handbook of weighted automata, Monographs in Theoretical Computer Science, chapter 13, pages 519– 570. Springer, 2009. [BGC09b] Christel Baier, Marcus Gr¨oßer, and Frank Ciesinski. Quantitative analysis under fairness constraints. In 7th International Symposium on Automated Technology for Verification and Analysis (ATVA), volume 5799 of Lecture Notes in Computer Science, pages 135–150, 2009.
20 [BHHK03]
[BK98] [BK08] [BKKM14]
[BKKW14]
[BLW13]
[BRV04]
[CES86] [CFK+ 13]
[CGK13]
[CGP00] [CH14]
[CHJS11]
[CHJS13]
[CK96] [CSH08]
[CSS03]
[CY95] [dA97a] [dA97b]
C. Baier / Probabilistic Model Checking
The author(s) of this publication is/are solely responsible for its content. This publication does not reflect the opinion of the publisher. The publisher cannot be held liable for any loss or damage that may occur because of this publication.
Dependable Software Systems Engineering J. Esparza et al. (Eds.) IOS Press, 2016 © 2016 The authors and IOS Press. All rights reserved. doi:10.3233/978-1-61499-627-9-24
SMT Solvers: Foundations and Applications
Nikolaj Bjørner
e-mail: [email protected]
Microsoft Research, Redmond

Abstract. Satisfiability Modulo Theories (SMT) solvers are used in many modern program verification, analysis and testing tools. They owe their scale and efficiency to advances in the search algorithms underlying modern SAT solvers and first-order theorem provers, and they owe their versatility in software development applications to specialized algorithms supporting theories, such as numbers and algebraic data-types, that are relevant for software engineering. This lecture introduces the algorithmic principles of SMT solving, taking as its basis modern SAT solvers and their integration with specialized theory solvers and quantifier reasoning. We detail some of the algorithms used for the main theories in current SMT solvers and survey newer theories and approaches to integrating solvers. The lectures also outline application scenarios where SMT solvers have found use, including program verification, network analysis, symbolic model checking, test-case generation, and white-box fuzzing. These lecture notes highlight the main algorithm used in modern SAT and SMT solvers, the Conflict Directed Clause Learning (CDCL) algorithm, and cast it as a process of searching for a model and building a partial refutation proof in tandem. This theme is common to many of the algorithms used in theory solvers.

Keywords. SMT, SAT, Theorem Proving, Program Analysis
1. Introduction
These lecture notes cover selected topics discussed in the Marktoberdorf lectures on Satisfiability Modulo Theories, which took place August 10–14, 2015. Most of the material is already available in slide format and is suitable for self-study. These notes therefore complement the slides with additional detail that they do not cover. Furthermore, there is already a good selection of tutorials and lecture notes on SMT available. Instead of repeating material from these sources, these notes highlight results that are not covered elsewhere. In particular, we highlight a dichotomy between models and proofs as manifested in modern conflict directed clause learning search algorithms. Our treatment of the CDCL algorithm is one instance of a common trait shared among various algorithms used for deciding theories [15].

1.1. Online Material
The lectures are partially based on the functionality present in the SMT solver Z3 [7]. Z3 implements a state-of-the-art SMT solver. The source code for Z3 is available from
N. Bjørner / SMT Solvers: Foundations and Applications
http://github.com/z3prover/z3
Most relevant pointers to reading material are available from:
http://github.com/z3prover/z3/Documents
You can use Z3 online from http://rise4fun.com/z3, where you can also follow links to an associated online interactive tutorial.

1.2. Topics
The lectures cover the following topics:
SAT solving basics: Elements of modern SAT solvers: unit propagation, Conflict Directed Clause Learning, variable heuristics, indexing data-structures, pre- and inprocessing.
Theory Combination Techniques: Nelson-Oppen combination, model-based theory combination.
Theories - Algorithms and Integration: Arithmetic, uninterpreted functions, bit-vectors, arrays, algebraic data-types, strings and sequences, collections.
Quantifier Reasoning: E-matching based quantifier instantiation, model based quantifier instantiation, quantifier elimination, quantifier satisfiability, superposition.
Horn Clauses Modulo Theories: Top-down and bottom-up search methods, PDR/IC3 solvers for Horn clauses, symbolic Datalog engines.

1.3. Background Reading
Let us highlight a small selection of articles that are specifically prepared as background material.
• [11] Boolean Satisfiability: From Theoretical Hardness to Practical Success.
• [8] Satisfiability Modulo Theories: Introduction & Applications.¹
• [6] Tractability and Modern Satisfiability Modulo Theories Solvers.²
• Applications of SMT Solvers.³
• [3] Horn Clause Solvers for Program Verification.⁴
2. An Introduction to SMT with Z3
Modern SMT solvers are often a good fit for program analysis tools: they handle the domains found in programs directly. The selected examples are intended to show instances where sub-tasks are reduced to SMT/Z3.

¹ http://dl.acm.org/citation.cfm?id=1995394
² http://research.microsoft.com/en-us/people/nbjorner/tractability.pdf
³ http://research.microsoft.com/en-US/people/nbjorner/smt-application-chapter.pdf
⁴ http://research.microsoft.com/en-US/people/nbjorner/yurifest.pdf
2.1. SAT in a nutshell
Propositional logic is given by a vocabulary P of propositional atoms. Formulas are formed from propositional atoms using the logical connectives negation, conjunction, disjunction and equivalence. An example propositional formula is given below:

    (Tie ∨ Shirt) ∧ (¬Tie ∨ ¬Shirt) ∧ (¬Tie ∨ Shirt)    (1)
The formula is satisfiable; it holds, for instance, in the model where Tie is false and Shirt is true.

2.2. SMT in a nutshell
SMT formulas include first-order logical formulas. Furthermore, selected symbols are interpreted modulo fixed background theories. For example, the standard theory of arithmetic is assumed as a background theory when presenting an SMT formula. No explicit axiomatization of arithmetic is required when presenting an SMT formula with arithmetic to an SMT solver. SMT solvers implement specialized algorithms to handle background theories. An example SMT formula is as follows:

    x + 2 = y ∧ f(select(store(a, x, 3), y − 2)) = f(y − x + 1)    (2)
Besides the theory of arithmetic, it uses the function symbols select and store, which are characterized by the McCarthy [13] array axioms:

    select(store(a, i, v), i) = v                        (3)
    i = j ∨ select(store(a, i, v), j) = select(a, j)     (4)
The function f is given without interpretation. We say that f is uninterpreted. The only properties of f that an SMT solver has to ensure are the congruence axiom and the properties of equality:

    x = x                      (5)
    x = y → y = x              (6)
    x = y ∧ y = z → x = z      (7)
    x = y → f(x) = f(y)        (8)
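Before turning to the algorithms, note that formula (1) from Section 2.1 is small enough to check exhaustively. The following Python sketch (deliberately not using an SMT solver) enumerates all four assignments:

```python
from itertools import product

def formula(tie, shirt):
    # (Tie or Shirt) and (not Tie or not Shirt) and (not Tie or Shirt)
    return (tie or shirt) and (not tie or not shirt) and (not tie or shirt)

models = [(tie, shirt)
          for tie, shirt in product([False, True], repeat=2)
          if formula(tie, shirt)]
print(models)  # [(False, True)]: Tie false, Shirt true, as claimed in Section 2.1
```

Enumeration is exponential in the number of atoms, which is exactly why the CDCL search described next matters for non-toy inputs.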
3. Algorithmic underpinnings of SAT/SMT In this section we summarize the DPLL(T) [14] calculus used as an underpinning of contemporary SMT solvers. It is based on conflict directed clause learning [12] (CDCL) used in modern SAT solvers.
N. Bjørner / SMT Solvers: Foundations and Applications
27
3.1. A Dichotomy of Models and Proofs
We use a, b, c to range over propositional variables and ℓ to range over literals (a literal is a propositional variable or the negation of one). We write ¬a for the negation of the propositional variable a and ¬ℓ for the negation of the literal ℓ. We use C, D, E for clauses (disjunctions of literals), F for a set of clauses, and M to range over sets or sequences of literals.

Example 1 Suppose C is the clause a ∨ b ∨ c and D is the clause a ∨ c. Then we can use F as shorthand for the set of clauses C, D. An example model M satisfying F is a ∧ b ∧ c. In contrast, the model ¬a ∧ ¬b ∧ ¬c does not satisfy F.

Definition 1 (Model forcing) We say M |= F if for every clause C in F there is a literal in the intersection of M and C (a literal in M that satisfies C). In other words, M forces F.

Definition 2 (Derivation F ⊢Π C) A resolution proof Π from a set of hypotheses F with conclusion C is a directed acyclic graph (DAG) whose leaves are labeled by clauses from F. The DAG Π contains at least one node labeled by C. Each internal node has in-degree 2. An internal node is labeled by a clause C and a literal ℓ such that its two ancestors are labeled by clauses ℓ ∨ D and ¬ℓ ∨ E, and C equals D ∨ E up to removal of repeated literals (this process is known as factoring). We allow Π to contain multiple roots and multiple disconnected components. The only requirement is that C has a justification obtained by following its parents.

Definition 3 (de Morgan Complements) If C is a clause, then ¬C is the conjunction of the negated literals from C. If M is a conjunction of literals, then ¬M is the clause consisting of the negated literals from M.

Example 2 For the clause C : a ∨ b ∨ c we have ¬C = ¬a ∧ ¬b ∧ ¬c. For the model M = a ∧ b ∧ c we have the clause ¬M = ¬a ∨ ¬b ∨ ¬c.

Theorem 1 (A Parametric Dichotomy of Models and Proofs) For every M, exactly one of the two conditions holds:
1. M′ |= F for some M′ with M ⊆ M′.
2. F ⊢Π ¬M′ for some M′ with M′ ⊆ M and some proof Π.
When M = ∅ we have the special case:

Corollary 1 (A Dichotomy of Models and Proofs) Exactly one of the following conditions holds: ∃M . M |= F, or ∃Π . F ⊢Π ∅, where ∅ is the empty clause.

The dichotomy is realized by CDCL as it prunes models using consequences of F:

Corollary 2 If F ⊢Π C, then for every M with ¬C ⊆ M it is the case that M ⊭ F.

Similarly, consequences of F are inferred based on searching for models:

Corollary 3 If M |= ¬F, i.e., ¬C ⊆ M for some C ∈ F, then for every consequence D derived from C using literals forced in M it is the case that ¬D ⊆ M, and there is no M′ ⊇ ¬D such that M′ |= F.
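A single internal node of a resolution DAG (Definition 2) can be sketched in code. The integer encoding below is hypothetical (positive integers for atoms, negation by sign flip, as in DIMACS); representing clauses as sets makes factoring implicit:

```python
def resolve(c1, c2, lit):
    """Resolve clause c1 (containing lit) with clause c2 (containing -lit).
    The resolvent merges the remaining literals; frozensets remove repeated
    literals automatically, i.e. they perform factoring."""
    assert lit in c1 and -lit in c2
    return (c1 - {lit}) | (c2 - {-lit})

# Deriving the empty clause from {a}, {-a or b}, {-b}, with a = 1, b = 2:
c = resolve(frozenset({1}), frozenset({-1, 2}), 1)   # resolvent {b}
empty = resolve(c, frozenset({-2}), 2)               # the empty clause: unsat
print(sorted(c), sorted(empty))  # [2] []
```

Deriving the empty clause realizes the F ⊢Π ∅ side of Corollary 1.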
3.1.1. CDCL as a cooperative process
The purpose of this section is to present CDCL as a cooperative process that simultaneously refines a partial model M and a partial proof Π while establishing which of the cases of Theorem 1 holds. A model M is represented as a sequence of literals, where each literal is either of the form ℓ^d or ℓ^{C∨ℓ}: annotated with a d to indicate that the literal is a decision literal, or with a clause C ∨ ℓ that justifies the assignment because the complements of the literals in C occur in the prefix before ℓ. Abstract CDCL comprises rules that transform states of the form M || F and M || F || C. The latter state satisfies the invariant:

Invariant 1 (Conflict state) For the state M || F || C it is the case that ¬C ⊆ M and F ⊢ C.

The following invariant holds for justified literals:

Invariant 2 (Propagation) For the states M || F and M || F || D, whenever M = M1 ℓ^{C∨ℓ} M2, then ¬C ⊆ M1 and F ⊢Π C ∨ ℓ.

The transitions are summarized in Figure 1. We describe the steps in more detail as follows:

Initialize =⇒ ε || F. The assignment is initially empty.

Decide M || F =⇒ M ℓ^d || F when ℓ, ¬ℓ ∉ M. If M can be extended to M′ |= F, then M′ contains either ℓ or ¬ℓ.

Propagate M || F, C ∨ ℓ =⇒ M ℓ^{C∨ℓ} || F, C ∨ ℓ if ¬C ⊆ M. The rule preserves extensibility of M: the new state can be extended to a full model of F if and only if the old state can.

Sat M || F =⇒ sat if every clause in F contains a literal in M.

Unsat M || F || ∅ =⇒ unsat.

Conflict M || F =⇒ M || F || C if C ∈ F and ¬C ⊆ M. The new state satisfies case (2) of Theorem 1 and is the starting point for conflict resolution. Conflict resolution uses this state to identify a prefix of M with minimal decision literals such that case (2) is no longer forced.

Resolve M || F || C ∨ ¬ℓ =⇒ M || F || C ∨ D if ℓ^{D∨ℓ} occurs in M. The step maintains the invariant that ¬(C ∨ D) ⊆ M and that F ⊢ C ∨ D. The weight of the explanation for the conflict also decreases, because ¬ℓ is replaced by literals that occur below it.
Backjump M ℓ^d M′ || F || C ∨ ℓ′ =⇒ M ℓ′^{C∨ℓ′} || F if ¬C ⊆ M and ℓ^d is a decision literal. The backjumping strategy is called First Unique Implication Point (FUIP) when the number of decision literals in M is minimal. The FUIP strategy has the property that the level of the literals in the explanation is minimized. Why is FUIP better? Other strategies also produce explanations of why the decisions led to a conflict, but they include more backtracking points that are potentially also in conflict with the current conflict; the assignment to ℓ′ would be re-learned over these backtracking points. The FUIP strategy also produces the smallest prefix of M that is not known to include the complement of a clause derivable from F, while including as many consequences of unit propagation as possible. In terms of the dichotomy, the literals in the conclusion C ∨ ℓ′ use the smallest prefix of the partial model where the current proof DAG does not establish case (2) of Theorem 1.
Figure 1 collects the transition rules of the abstract DPLL(T) procedure:

    Initialize                     =⇒  ε || F                      F is a set of clauses
    Decide       M || F            =⇒  M ℓ^d || F                  if ℓ or ¬ℓ occurs in F, ℓ unassigned in M
    Propagate    M || F, C ∨ ℓ     =⇒  M ℓ^{C∨ℓ} || F, C ∨ ℓ       if ¬C ⊆ M, ℓ unassigned in M
    Conflict     M || F, C         =⇒  M || F, C || C              if C is false under M
    Resolve      M || F || C ∨ ¬ℓ  =⇒  M || F || C ∨ C′            if ℓ^{C′∨ℓ} ∈ M
    Backjump     M ℓ0^d M′ || F || C ∨ ℓ  =⇒  M ℓ^{C∨ℓ} || F, C ∨ ℓ   if ¬C ⊆ M, ¬ℓ ∈ ℓ0^d M′
    Restart      M || F            =⇒  ε || F
    Unsat        M || F || ∅       =⇒  unsat
    Sat          M || F            =⇒  M                           if M is T-consistent and a model for F
    T-Propagate  M || F            =⇒  M ℓ^{C∨ℓ} || F              if T |= C ∨ ℓ, ¬C ⊆ M, ℓ unassigned in M
    T-Conflict   M || F            =⇒  M || F || C                 if ¬C ⊆ M, T |= C

Figure 1. Abstract DPLL(T) Procedure
Learn M || F || C =⇒ M || F, C || C adds C to the set of clauses F. Clause learning exposes a new leaf in the partial proof Π. Without learning, CDCL would only be able to build tree resolution proofs. (Recall that we add learned clauses when annotating the forced literal during backjumping.)

Forget M || F, C =⇒ M || F if C is a learned clause. Removing learned clauses that are no longer useful for pruning the current search space is critical for scaling CDCL search. The SAT solver Glucose [1], based on MiniSAT [9], is known for implementing good heuristics for pruning learned clauses. Z3's SAT solver implements some of these heuristics.

Restart M || F =⇒ ε || F. Restarting is important for allowing the search to explore different search spaces. The SAT solver Lingeling [2] implements frequent restarts, as it relies on the current assignment stack to implement fast re-propagation of learned clauses.
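To make the transition rules concrete, here is a deliberately simplified solver in Python. It implements Decide, Propagate, conflict detection and Sat, but backtracks chronologically and learns no clauses, so it is plain DPLL rather than full CDCL:

```python
def dpll(clauses, assignment=()):
    """Clauses are lists of nonzero integer literals (negation = sign flip).
    Returns a satisfying tuple of literals, or None if unsatisfiable."""
    m = set(assignment)
    changed = True
    while changed:                      # Propagate to a fixpoint
        changed = False
        for c in clauses:
            if any(l in m for l in c):
                continue                # clause already satisfied by m
            undecided = [l for l in c if -l not in m]
            if not undecided:
                return None             # Conflict: c is false under m
            if len(undecided) == 1:
                m.add(undecided[0])     # Propagate the unit literal
                changed = True
    atoms = {abs(l) for c in clauses for l in c}
    free = [a for a in atoms if a not in m and -a not in m]
    if not free:
        return tuple(sorted(m, key=abs))  # Sat: every clause holds
    a = free[0]                           # Decide: try both polarities
    return dpll(clauses, tuple(m) + (a,)) or dpll(clauses, tuple(m) + (-a,))

# Formula (1) from Section 2.1 with Tie = 1, Shirt = 2:
print(dpll([[1, 2], [-1, -2], [-1, 2]]))  # (-1, 2): Tie false, Shirt true
```

A CDCL solver additionally turns each conflict into a learned clause via Resolve and jumps back non-chronologically via Backjump, rather than re-exploring both polarities of every decision.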
3.1.2. A related dichotomy: Farkas' Lemma
Farkas' lemma states that precisely one of the two conditions holds:

    (∃x . Ax = b ∧ x ≥ 0)  or  (∃y . yA ≥ 0 ∧ yb < 0)

The dichotomy is exposed by Dual Simplex: it either finds a feasible assignment x or establishes a linear combination y of the rows that certifies infeasibility. Simplex exploits this dichotomy during pivoting (regard pivoting as resolution over rings) when narrowing the search for a feasible solution or finding a linear combination of rows that establishes infeasibility. Analogously to Dual Simplex, CDCL is concerned with finding an arbitrary feasible assignment to a formula F. Models for a propositional set of clauses are similar to the satisfying assignment x of a set of linear inequalities, but there is no simple vector y that corresponds to the certificate of unsatisfiability. Instead, a resolution proof corresponds to a potentially larger DAG structure whose leaves are labeled by clauses from F.

3.1.3. Dichotomies elsewhere
A thesis is that many modern decision procedure algorithms exploit related dichotomies. They use models to guide the search for proofs and use proofs to guide the search for models. This theme is present in abstraction-refinement schemes such as counterexample-guided abstraction refinement for model checking, model-based quantifier instantiation, and model-based theory combination, and it is exploited in Property Directed Reachability as well as in algorithms for integer linear solving and for satisfiability of polynomial inequalities.
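Returning to the Farkas dichotomy of Section 3.1.2: a certificate y is cheap to verify. A small Python check with made-up numbers (the system x1 + x2 = 1, x1 + x2 = 3 is clearly infeasible, and y = (1, -1) certifies it):

```python
A = [[1, 1],
     [1, 1]]
b = [1, 3]
y = [1, -1]  # hypothetical Farkas certificate

# yA >= 0 together with yb < 0 shows that Ax = b, x >= 0 has no solution.
yA = [sum(y[i] * A[i][j] for i in range(len(y))) for j in range(len(A[0]))]
yb = sum(y[i] * b[i] for i in range(len(y)))
print(yA, yb)  # [0, 0] -2
```

The certificate plays the role that a resolution proof plays for clauses, except that here it is a single vector rather than a DAG.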
4. Theories, Solvers and Applications
SMT solvers support a large range of theories. The theory of uninterpreted functions and the theory of arithmetic are the most widely used; other theories include the theories of bit-vectors, floating-point arithmetic, arrays, algebraic data-types, sequences (strings), regular expressions, sets and multi-sets. These theories support a range of applications in program analysis, testing, synthesis and verification. The lectures highlight an application of Z3 and the theory of bit-vectors for analyzing policies in Microsoft Azure. In this application, we translate firewall policies into logical formulas over bit-vectors. The application uses Z3 to check equivalence of policies and to check contracts of policies (e.g., is some specified set of traffic patterns permitted or disallowed by the policy?). The encoding into the theory of bit-vectors makes it possible to answer such queries for entire sets of IP headers (in contrast to testing each combination in turn). Furthermore, the application can extract a symbolic difference between policies and between queries and policies, which compactly enumerates a set of solutions. More information on the application is available from [10,4].
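The idea of reasoning about entire sets of IP headers can be illustrated, without an SMT solver, by treating IPv4 addresses as 32-bit vectors. The sketch below (hypothetical rules, not the Azure tool's encoding) decides whether one CIDR block contains another using only bit-masks; a bit-vector SMT encoding generalizes this by replacing the concrete integers with symbolic 32-bit variables:

```python
def cidr(rule):
    """Parse 'a.b.c.d/n' into (network, mask) as 32-bit integers."""
    addr, n = rule.split('/')
    ip = 0
    for part in addr.split('.'):
        ip = (ip << 8) | int(part)
    mask = (0xFFFFFFFF << (32 - int(n))) & 0xFFFFFFFF
    return ip & mask, mask

def contains(outer, inner):
    """True iff every address matched by `inner` is matched by `outer`:
    the outer prefix must be no longer than the inner one, and both
    must agree on the outer prefix bits."""
    (net_o, m_o), (net_i, m_i) = cidr(outer), cidr(inner)
    return (m_i & m_o) == m_o and (net_i & m_o) == net_o

print(contains("10.0.0.0/8", "10.1.2.0/24"))   # True
print(contains("10.1.0.0/16", "10.2.0.0/16"))  # False
```

With symbolic headers, the same checks become bit-vector constraints, and the solver can answer containment and equivalence questions for all 2^32 addresses at once.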
5. Topics: Quantifiers, Horn Clauses, Optimization Z3 implements several extensions to solving satisfiability of quantifier-free formulas. A prominent extension is handling satisfiability of quantified formulas. Z3 uses several
approaches for instantiating quantifiers. These include instantiating quantifiers based on patterns, based on partial interpretations (of the quantifier-free part of a formula), and by quantifier elimination. Horn clauses form a special class of quantified formulas, and Z3 contains a set of engines that are specifically tailored for checking satisfiability of Horn clauses. These range from engines that handle Horn clauses over small finite domains (Z3 uses a bottom-up Datalog engine with hash tables for this purpose), to Horn clauses over large finite domains (Z3 implements symbolic hash tables based on ternary bit-vectors, which match the domains found in network analysis), to Horn clauses over arithmetic, where Z3 implements property directed reachability/IC3-style algorithms as well as algorithms based on computing tree interpolants. In addition to checking satisfiability of formulas with quantifiers, Z3 supports satisfiability modulo objective functions; in other words, one can find satisfying models that maximize, respectively minimize, selected objectives [5].
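The bottom-up Datalog evaluation mentioned above can be sketched as a naive fixpoint computation. The transitive-closure program below is a standard textbook example, not Z3's engine:

```python
def reachable(edges):
    """Naive bottom-up evaluation of the Datalog program
         path(X, Y) :- edge(X, Y).
         path(X, Z) :- path(X, Y), edge(Y, Z).
    Apply the rules until no new facts are derived (a fixpoint)."""
    path = set(edges)
    while True:
        derived = {(x, z) for (x, y) in path for (y2, z) in edges if y == y2}
        if derived <= path:
            return path               # fixpoint reached
        path |= derived

facts = reachable({("a", "b"), ("b", "c"), ("c", "d")})
print(sorted(facts))
# [('a', 'b'), ('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd'), ('c', 'd')]
```

Z3's engines replace this explicit enumeration with hash tables over finite domains or, for arithmetic, with symbolic PDR/IC3-style reasoning.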
6. Summary
The summer school lectures on SMT provided a taste of SMT solving algorithms and of a selected application. Other applications, such as symbolic model checking and dynamic symbolic simulation, were covered in other lectures. In these notes we provided a more detailed account of modern CDCL algorithms and of the associated DPLL(T) solving as a cooperative process that oscillates between building models and building proofs. This process is characteristic of several of the algorithms that are used for solving satisfiability at the theory level. The same traits also recur in the algorithms for solving Horn clauses.
References
[1] Gilles Audemard and Laurent Simon. Predicting learnt clauses quality in modern SAT solvers. In 21st International Joint Conference on Artificial Intelligence (IJCAI 2009), pages 399–404, 2009.
[2] Armin Biere. Lingeling essentials: A tutorial on design and implementation aspects of the SAT solver Lingeling. In POS-14, Fifth Pragmatics of SAT Workshop, page 88, 2014.
[3] Nikolaj Bjørner, Arie Gurfinkel, Kenneth L. McMillan, and Andrey Rybalchenko. Horn clause solvers for program verification. In Fields of Logic and Computation II - Essays Dedicated to Yuri Gurevich on the Occasion of His 75th Birthday, pages 24–51, 2015.
[4] Nikolaj Bjørner and Karthick Jayaraman. Checking cloud contracts in Microsoft Azure. In 11th International Conference on Distributed Computing and Internet Technology (ICDCIT 2015), pages 21–32, 2015.
[5] Nikolaj Bjørner, Anh-Dung Phan, and Lars Fleckenstein. νZ - an optimizing SMT solver. In 21st International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2015), pages 194–199, 2015.
[6] Lucas Bordeaux, Youssef Hamadi, and Pushmeet Kohli, editors. Tractability: Practical Approaches to Hard Problems. Cambridge University Press, 2014.
[7] Leonardo Mendonça de Moura and Nikolaj Bjørner. Z3: An efficient SMT solver. In 14th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2008), pages 337–340, 2008.
[8] Leonardo Mendonça de Moura and Nikolaj Bjørner. Satisfiability modulo theories: Introduction and applications. Communications of the ACM, 54(9):69–77, 2011.
[9] Niklas Eén and Niklas Sörensson. An extensible SAT-solver. In 6th International Conference on Theory and Applications of Satisfiability Testing (SAT 2003), pages 502–518, 2003.
[10] Karthick Jayaraman, Nikolaj Bjørner, Geoff Outhred, and Charlie Kaufman. Automated analysis and debugging of network connectivity policies. Technical Report MSR-TR-2014-102, July 2014.
[11] Sharad Malik and Lintao Zhang. Boolean satisfiability: From theoretical hardness to practical success. Communications of the ACM, 52(8):76–82, 2009.
[12] J. P. Marques-Silva and K. A. Sakallah. GRASP - a new search algorithm for satisfiability. In ICCAD, 1996.
[13] John McCarthy. Towards a mathematical science of computation. In IFIP Congress, pages 21–28, 1962.
[14] R. Nieuwenhuis, A. Oliveras, and C. Tinelli. Solving SAT and SAT modulo theories: From an abstract Davis-Putnam-Logemann-Loveland procedure to DPLL(T). Journal of the ACM, 53(6), 2006.
[15] Natarajan Shankar. Inference systems for logical algorithms. In 25th International Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2005), pages 60–78, 2005.
Dependable Software Systems Engineering
J. Esparza et al. (Eds.)
IOS Press, 2016
© 2016 The authors and IOS Press. All rights reserved.
doi:10.3233/978-1-61499-627-9-33
Formal Models for Analyzing Security Protocols: Some Lecture Notes

Véronique Cortier and Steve Kremer
LORIA - CNRS, INRIA Nancy, Université de Lorraine

Abstract. Security protocols are distributed programs that aim at securing communications by means of cryptography. They are for instance used to secure electronic payments, home banking and, more recently, electronic elections. Given the financial and societal impact in case of failure, and the long history of design flaws in such protocols, formal verification is a necessity. A major difference from other safety-critical systems is that the properties of security protocols must hold in the presence of an arbitrary adversary. The aim of these notes is to provide a tutorial to some modern approaches for formally modeling protocols and their goals, and for automatically verifying them.

Keywords. Security protocol, verification, secrecy, authentication
1. Introduction

Security protocols are used to protect electronic transactions. Probably the most widely used security protocol is SSL/TLS, which underlies the https protocol in web browsers. It may be used for electronic commerce, or simply to encrypt web search queries on their way between the host and the search engine. There are of course many other protocols in use, e.g. to authenticate to providers on mobile phones or to withdraw cash at an ATM. Moreover, the digitalization of our modern society requires the use of security protocols in an increasing number of contexts, such as electronic passports that may include RFID chips, electronic elections that allow for Internet voting, etc.

We may think of security protocols as distributed programs that make use of cryptography, e.g. encryption, to achieve a security property, such as confidentiality of some data, e.g. your credit card number. Given the difficulty of designing correct distributed systems in general, it is not surprising that many flaws were discovered in security protocols, even without breaking the underlying cryptography. During the last 30 years, many research efforts were spent on designing techniques and tools to analyze security protocols. One may trace this line of work back to the seminal work of [24], who pioneered the idea of an attacker who completely controls the communication network, has unbounded computational power, but manipulates protocol messages according to some predefined rules, idealizing the protections offered by cryptography. These techniques not only allowed a better understanding of the principles underlying secure protocol design,

1 This work has received support from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013) / ERC grant agreement n° 258865.
V. Cortier and S. Kremer / Formal Models for Analyzing Security Protocols: Some Lecture Notes
but also resulted in mature tools for automated protocol analysis, and in the discovery of many attacks. For example, while designing a formal model of Google's Single Sign-On protocol, which allows a user to identify himself only once and then access various applications (such as Gmail or Google calendar), [8] discovered that a dishonest service provider could impersonate any of its users at another service provider. This flaw has since been corrected. [11] identified flaws and proposed fixes for the ISO/IEC 9798 standard for entity authentication, using automated protocol verification tools; the standard has been revised to include their proposed amendments. [16] designed a dedicated analysis tool for hardware security tokens that implement the PKCS#11 standard. The tool automatically reverse-engineers a token to extract its configuration, builds an abstract model to be analyzed and, if an attack is found, verifies the attack on the token itself. They were able to find previously unknown attacks on more than 10 commercial tokens.

This paper presents modern techniques to model and analyze security protocols. Given the large body of work in this area, we do not aim to be exhaustive and only present some selected methods and results. The outline of these notes is as follows.

• We first present an informal description of our running example, the Needham Schroeder public key protocol, which we use for illustration purposes in the remainder of the paper.
• Then, we explain how protocol messages can be modeled as first-order terms, and how adversary capabilities can be modeled by an inference system. We also provide a decision algorithm for deduction, i.e. the adversary's capability to construct new messages.
• Next, we introduce a more general model, based on equational theories. We revisit deduction and define a notion of message indistinguishability, called static equivalence.
We again provide a decision procedure for static equivalence for a simple equational theory representing symmetric encryption.
• We continue by introducing a process calculus, the applied pi calculus, which we use to model protocols. One of the main differences with the original pi calculus is that the calculus allows communication of messages represented by terms, rather than only names. We illustrate how protocols can be conveniently modeled in this formalism.
• We conclude by briefly discussing some approaches to automated verification.

2. Running example

We first introduce our running example, the Needham Schroeder public key protocol [39]. We also describe the famous man in the middle attack, discovered by [33] 17 years after the publication of the original paper.

2.1. The Needham Schroeder public key protocol

The Needham Schroeder public key protocol can be described by the following three message transmissions.

1. A → B : {|A, Na|}apk(B)
2. B → A : {|Na, Nb|}apk(A)
3. A → B : {|Nb|}apk(B)
The notation A → B : m denotes that Alice (A) sends the message m to Bob (B). We use the notation {|m|}apk(B) to denote the asymmetric encryption of the message m with B's public key, and m1, m2 denotes the pairing, i.e. the concatenation, of messages m1 and m2.

In the first message Alice sends her identity A and a nonce Na, encrypted with Bob's public key. A nonce (which stands for "number used once") is a fresh number that is randomly generated in each session. When Bob receives this first message, he decrypts it and checks that it is well-formed. Bob then generates his own nonce Nb and encrypts the nonces Na and Nb with Alice's public key. This mechanism of sending someone an encrypted nonce and waiting for the recipient to send back this nonce is often called a challenge-response mechanism: Alice's challenge to Bob is to be able to decrypt the message, and Bob's response consists in sending back the nonce.

When receiving the second message Alice decrypts it and verifies that it contains her previous nonce Na. This proves to Alice that Bob has indeed received the first message; Bob is the only entity able to decrypt the first message, as it was encrypted with his public key pk(B). Alice then retrieves the second nonce Nb and replies with the encryption of Nb under Bob's public key. Bob can decrypt this message and verify that it contains the nonce Nb he generated previously. We can observe that the second and third messages also form a challenge-response from Bob to Alice.

The aim of the protocol is to guarantee mutual authentication: at the end of a successful protocol run, Bob should be ensured he has been communicating with Alice, while Alice has been communicating with Bob. Moreover, the protocol should guarantee confidentiality of the nonces Na and Nb.

While the above presentation of the protocol is convenient to read, it lacks a formal semantics and many details are left implicit, e.g.
which messages are freshly generated, etc. It also shows only one possible execution of the protocol, namely the honest execution where everyone behaves as intended. However, the network used for communication is not reliable: an attacker may intercept messages, re-route and modify them. Moreover, this notation leaves implicit which parts of a message need to be verified as being well formed, or to correspond to some known value.

A slightly more faithful representation of this protocol is given in Figure 1. In this figure it is explicit that the messages sent by A are not necessarily received by B: the empty space between outgoing and incoming arrows represents the untrusted network. We also note the use of variables in some messages. Variables are used whenever the value of part of the message is a priori unknown; these variables may then be used to construct other messages later. For example, the second message from A's point of view, {|Na, z|}apk(A), makes explicit that A needs to check that the first component of the encrypted pair corresponds to the nonce Na, while the second component is unknown and will be bound to the variable z. In the third message A uses the variable z as the value to be encrypted. This representation is very useful when thinking about what could go wrong in the protocol.

2.2. Lowe's man in the middle attack

[33] discovered an attack in the case where A initiates a session with a dishonest party C: C may fool B, by making B believe he is A. Moreover, the nonce Nb which B believes
[Figure: A's view on the left, B's view on the right; the gap between outgoing and incoming arrows is the untrusted network. A sends {|A, Na|}apk(B), received by B as {|x, y|}apk(B); B sends {|y, Nb|}apk(x), received by A as {|Na, z|}apk(A); A sends {|z|}apk(B), received by B as {|Nb|}apk(B).]

Figure 1. The Needham Schroeder public key protocol.
to only share with A becomes known to C. The attack is displayed in Figure 2. We write C(A) whenever C is impersonating A. The first message from A to C is decrypted and re-encrypted with B’s public key. As the second message is encrypted with A’s public key, C cannot modify this message, and forwards it to A. As A has no means to know that Nb was generated by B rather than C, he accepts this message and returns {|Nb |}apk(C) . One may say that A acts as a decryption oracle. The attacker may now learn Nb and successfully complete the protocol with B.
[Figure: A talks to C, while C impersonates A towards B.
A → C : {|A, Na|}apk(C);  C(A) → B : {|A, Na|}apk(B);
B → C(A) : {|Na, Nb|}apk(A);  C → A : {|Na, Nb|}apk(A);
A → C : {|Nb|}apk(C);  C(A) → B : {|Nb|}apk(B).]

Figure 2. Lowe's attack on the Needham Schroeder public key protocol.
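The attack trace of Figure 2 can be replayed symbolically. The sketch below is ours, not from the notes; agent and nonce names are illustrative, and asymmetric encryption is modeled abstractly as a tagged tuple that only the intended agent can open.

```python
# A sketch (ours, not from the notes) of Lowe's man-in-the-middle trace.
# ("aenc", payload, agent) stands for {|payload|}apk(agent): only `agent`
# can open it with its private key.

def aenc(payload, agent):
    return ("aenc", payload, agent)

def adec(ct, agent):
    assert ct[0] == "aenc" and ct[2] == agent, "wrong private key"
    return ct[1]

# 1. A starts a session with the dishonest agent C.
msg1 = aenc(("A", "Na"), "C")
# 2. C decrypts with its own key and re-encrypts A's message for B.
msg1_forged = aenc(adec(msg1, "C"), "B")
# 3. B answers the challenge, believing it comes from A.
x, y = adec(msg1_forged, "B")
msg2 = aenc((y, "Nb"), x)                  # {|Na, Nb|}apk(A)
# 4. C cannot open msg2, but forwards it unchanged to A.
# 5. A sees its nonce Na, accepts, and answers C's apparent challenge:
#    A acts as a decryption oracle.
na, z = adec(msg2, "A")
msg3 = aenc(z, "C")                        # {|Nb|}apk(C)
# 6. C now learns Nb and completes the protocol with B.
nb = adec(msg3, "C")
msg3_forged = aenc(nb, "B")
print(nb, adec(msg3_forged, "B"))          # Nb Nb: secrecy of Nb is violated
```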
This execution violates the secrecy of Nb and authentication from B's point of view: B believes he successfully finished a session of the protocol with A, while A did not authenticate to B. The crucial observation is that A is willing to start a session with a corrupted agent. This is actually in contradiction with the assumptions made by [39]:

    We also assume that each principal has a secure environment in which to compute, such as is provided by a personal computer or would be by a secure shared operating system. Our viewpoint throughout is to provide authentication services to principals that choose to communicate securely. We have not considered the extra problems encountered when trying to force all communication to be performed in a secure fashion or when trying to prevent communication between particular principals in order to enforce restrictions on information flow.

It may therefore be debatable whether Lowe's finding should be considered an attack. Nevertheless, while the above assumption may have been reasonable in 1978, it is certainly not the case anymore: computers may run malware, and people may connect to a malicious server, e.g. because of a phishing attack.

Lowe also showed that the protocol can easily be fixed to avoid this attack by adding B's identity to the second message, i.e. by replacing the message {|Na, Nb|}apk(A) with {|Na, Nb, B|}apk(A). Indeed, this altered message prevents the man in the middle attack, as A would expect the second message to be {|Na, Nb, C|}apk(A) while the intruder can only produce the message {|Na, Nb, B|}apk(A). We will refer to the fixed protocol as the Needham Schroeder Lowe (NSL) protocol.
3. Messages and deduction

Many different symbolic models are used to represent and reason about protocols. Examples of symbolic models are process algebras (e.g. CSP [44], the applied pi calculus [3]), strand spaces [46], constraint systems [37,19], or Horn clauses [15]. These models have many differences, but they all have in common the fact that messages are represented by terms. Intuitively, the exact values of keys, identities, or nonces are abstracted away, but the structure of the message is kept, modeled as a special labelled graph called a term.

3.1. Terms

Terms are a common object in computer science. We introduce here only the basic definitions of terms, substitutions, and unification, and we refer the reader to handbooks, e.g. [9,10], for a deeper introduction to terms.

3.1.1. Terms

Cryptographic primitives such as encryption or signatures are simply represented by function symbols. A finite set of function symbols is called a signature, where each function symbol f has an associated integer, its arity.

Definition 1 Given a set X of variables and a set N of names (used to represent atomic data such as keys, nonces, or identities), the set of terms over the signature F, the variables X and the names N is denoted T(F, X, N) and is inductively defined as names, variables, and function symbols applied to other terms.

Variables are typically used to represent parts of messages that are left unspecified, such as components received from the outside. The set of variables occurring in a term t is denoted var(t), while the set of names of t is denoted n(t). Terms without variables are called ground or closed.

Example 1 In the context of security protocols, a standard signature is Fstd = {senc, aenc, pair, pk} where senc, aenc and pair are three symbols of arity 2, representing respectively symmetric encryption, asymmetric encryption, and concatenation, while pk is a symbol of arity 1, representing the public key associated to some private key.
For example, the term t0 = aenc(pair(a, na ), pk(ka )), where a, na , ka ∈ N , represents the encryption under the public key pk(ka ) of the concatenation of the identity a together with the nonce na . This term t0 can be used to represent the first message sent by a in the Needham-Schroeder protocol.
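As an illustration (ours, not part of the notes), the term algebra can be encoded directly in a programming language, with names and variables as tagged strings and a function application f(t1, ..., tn) as a tuple:

```python
# A minimal sketch (ours, not from the notes) of the term algebra T(F, X, N):
# ("name", n) and ("var", x) are atoms, (f, t1, ..., tn) is f applied to
# subterms t1, ..., tn.

def name(n):            # atomic data: keys, nonces, identities
    return ("name", n)

def var(x):             # placeholder for an unspecified subterm
    return ("var", x)

def app(f, *args):      # function symbol applied to other terms
    return (f,) + args

def is_ground(t):
    """A term is ground (closed) if it contains no variables."""
    if t[0] == "var":
        return False
    if t[0] == "name":
        return True
    return all(is_ground(s) for s in t[1:])

# The term t0 = aenc(pair(a, na), pk(ka)) of Example 1:
a, na, ka = name("a"), name("na"), name("ka")
t0 = app("aenc", app("pair", a, na), app("pk", ka))

print(is_ground(t0))   # True: t0 contains no variables
print(is_ground(app("aenc", app("pair", a, var("x")), app("pk", ka))))  # False
```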
For readability, we may write t1, t2 instead of pair(t1, t2). We may also write {|t1|}s_t2 instead of senc(t1, t2), and {|t1|}a_t2 for aenc(t1, t2).

The set of positions of a term t is written pos(t) ⊆ N*. We use ε to denote the root position. Formally, the set pos(t) is inductively defined as follows (in particular, pos(u) = {ε} when u is a name or a variable):

    pos(f(t1, ..., tn)) = {ε} ∪ ⋃_{i=1}^{n} i · pos(ti)
The set of subterms of t is written st(t). The subterm of t at position p ∈ pos(t) is written t|_p; in particular, t|_ε = t. The term obtained by replacing t|_p with a term u in t is denoted t[u]_p. The set of subterms of a set of terms S is simply st(S) = ⋃_{t∈S} st(t).

Example 2 Let t0 = aenc(pair(a, na), pk(ka)) as defined in Example 1. The set of its positions is pos(t0) = {ε, 1, 1.1, 1.2, 2, 2.1}. The set of its subterms is st(t0) = {t0, pair(a, na), a, na, pk(ka), ka} and we have t0[aenc(na, pk(ka))]_1 = aenc(aenc(na, pk(ka)), pk(ka)).

3.1.2. Substitutions

Terms with variables are used to represent partially specified messages. For example, in the Needham-Schroeder protocol, the agent B expects an encrypted message that contains A's identity and some unknown part x, which should be the nonce sent by A. However, B cannot control that x is indeed the nonce sent by A. The message expected by B will therefore be represented by the term aenc(pair(a, x), pk(ka)) where x is a variable. Such variables can then be instantiated depending on the actually received message, that is, they are replaced by a term. This is formally defined through substitutions.

Definition 2 (Substitution) A substitution is a function σ from a finite subset of the set of variables X, called its domain and denoted Dom(σ), to T(F, X, N). When applied to a term, a substitution replaces any variable x by the corresponding term σ(x). Formally:

    σ(x) = x if x ∉ Dom(σ)
    σ(f(t1, ..., tn)) = f(σ(t1), ..., σ(tn))

We often write tσ instead of σ(t).

The question whether two terms with variables may be made equal is called unification. Two terms u and v are said to be unifiable if there exists a substitution σ, called a unifier, such that uσ = vσ.
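The definitions of pos(t), st(t) and substitution application translate directly into code. The sketch below is ours (not from the notes) and uses a nested-tuple term encoding, with ("name", n) and ("var", x) as atoms; it recomputes the data of Example 2:

```python
# A sketch (ours, not from the notes) of pos(t), st(t) and substitution
# application over terms encoded as nested tuples.

def positions(t):
    """pos(t): the root position () plus i.p for every position p of t_i.
    Positions are tuples of integers; the text writes e.g. 1.2 for (1, 2)."""
    if t[0] in ("name", "var"):
        return {()}
    return {()} | {(i,) + p for i, s in enumerate(t[1:], 1) for p in positions(s)}

def subterms(t):
    """st(t): t together with the subterms of its arguments."""
    if t[0] in ("name", "var"):
        return {t}
    return {t} | {s for arg in t[1:] for s in subterms(arg)}

def apply(sigma, t):
    """sigma(t): replace each variable by its image, homomorphically."""
    if t[0] == "var":
        return sigma.get(t[1], t)
    if t[0] == "name":
        return t
    return (t[0],) + tuple(apply(sigma, s) for s in t[1:])

# Example 2 revisited: t0 = aenc(pair(a, na), pk(ka)).
a, na, ka = ("name", "a"), ("name", "na"), ("name", "ka")
t0 = ("aenc", ("pair", a, na), ("pk", ka))
print(sorted(positions(t0)))   # the six positions ε, 1, 1.1, 1.2, 2, 2.1
print(len(subterms(t0)))       # 6 subterms, as listed in Example 2
```

Instantiating the pattern expected by B, apply({"x": na}, aenc(pair(a, x), pk(ka))) yields t0.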
Proposition 1 ([10]) If two terms u and v are unifiable then there exists a most general unifier mgu(u, v) such that any unifier is actually an instance of mgu(u, v), that is for any σ such that uσ = vσ , there exists θ such that σ = mgu(u, v)θ .
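Syntactic unification, and thus the most general unifier of Proposition 1, can be computed by the classical recursive algorithm with an occurs check. The following is a minimal sketch (ours, not an implementation from the notes), again over nested-tuple terms:

```python
# A sketch (ours, not from the notes) of syntactic unification: ("name", n)
# and ("var", x) are atoms, (f, t1, ..., tn) is a function application.
# mgu returns a substitution as a dict, or None if not unifiable.

def walk(t, sigma):
    # follow variable bindings already recorded in sigma
    while t[0] == "var" and t[1] in sigma:
        t = sigma[t[1]]
    return t

def occurs(x, t, sigma):
    t = walk(t, sigma)
    if t[0] == "var":
        return t[1] == x
    return t[0] != "name" and any(occurs(x, s, sigma) for s in t[1:])

def mgu(u, v, sigma=None):
    sigma = {} if sigma is None else sigma
    u, v = walk(u, sigma), walk(v, sigma)
    if u == v:
        return sigma
    if u[0] == "var":
        return None if occurs(u[1], v, sigma) else {**sigma, u[1]: v}
    if v[0] == "var":
        return mgu(v, u, sigma)
    if u[0] == "name" or v[0] == "name" or u[0] != v[0] or len(u) != len(v):
        return None
    for su, sv in zip(u[1:], v[1:]):
        sigma = mgu(su, sv, sigma)
        if sigma is None:
            return None
    return sigma

# Unifying the pattern expected by B with the concrete term t0 of Example 1:
a, na, ka = ("name", "a"), ("name", "na"), ("name", "ka")
t0 = ("aenc", ("pair", a, na), ("pk", ka))
print(mgu(("aenc", ("pair", a, ("var", "x")), ("pk", ka)), t0))  # {'x': ('name', 'na')}
```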
IDY:

      x    y        ⟨x, y⟩       ⟨x, y⟩
    ---------       -------      -------
     ⟨x, y⟩            x            y

      x    y        senc(x, y)    y
    -----------     ---------------
    senc(x, y)             x

      x    y        aenc(x, pk(y))    y
    -----------     -------------------
    aenc(x, y)               x

Figure 3. Inference system IDY corresponding to a "Dolev-Yao adversary".
3.2. Message deduction

Cryptographic primitives have of course particular properties. For example, anyone can decrypt a message with the corresponding decryption key. These properties are reflected through inference rules, which define which messages can be computed from an a priori given set of messages.

3.2.1. Inference system

From the key k and the message senc(m, k), which represents the symmetric encryption of m under k, one can compute m. This can be represented by the rule

    senc(m, k)    k
    ---------------
           m

This is formalized as follows.

Definition 3 An inference rule is a rule of the form

    u1  ···  un
    -----------
         u

where u1, ..., un, u are terms (with variables). An inference system is a set of inference rules.

The standard inference system corresponding to encryption and concatenation is presented in Figure 3. It is often called the Dolev-Yao system, in reference to the first symbolic formalisation of the deductive power of an attacker by [24]. The first line of Figure 3 corresponds to the fact that anyone can concatenate terms and retrieve terms from a concatenation. The second line models that one can encrypt and decrypt symmetrically whenever he/she has the corresponding key. Similarly, the third line corresponds to asymmetric encryption. The first inference rule of each line of Figure 3 is called a composition rule, while the other rules are called decomposition rules. Of course, more inference rules may be considered, for example to represent signatures, hashes, etc.

3.2.2. Derivability

Inference rules may be combined to compute or derive new messages. For example, from the set of messages S0 = {⟨k1, k2⟩, ⟨k3, a⟩, {|n|}s_⟨k1,k3⟩}, an attacker may learn k1 from
⟨k1, k2⟩ and k3 from ⟨k3, a⟩. This allows him to obtain ⟨k1, k3⟩, which in turn enables him to decrypt {|n|}s_⟨k1,k3⟩ to get n. He may then further append a to it, for example, yielding ⟨n, a⟩. We say that ⟨n, a⟩ is deducible from S0, and the corresponding deduction steps can be conveniently represented by the following proof tree:

    ⟨k1, k2⟩   ⟨k3, a⟩
    --------   -------
       k1         k3
    ------------------
         ⟨k1, k3⟩        {|n|}s_⟨k1,k3⟩
    -----------------------------------        ⟨k3, a⟩
                     n                         -------
                                                  a
    --------------------------------------------------
                         ⟨n, a⟩
Formally, the notion of deduction is defined as follows.

Definition 4 (deduction) Let I be an inference system. A term t is deducible in one step from a set of terms S for the inference system I, denoted S ⊢1_I t, if there exist an inference rule (u1 ··· un / u) of I, terms t1, ..., tn ∈ S, and a substitution θ such that ti = uiθ for 1 ≤ i ≤ n and t = uθ. We then say that (t1 ··· tn / t) is an instance of (u1 ··· un / u).

A term t is deducible from a set of terms S, denoted S ⊢_I t, if there exists a proof tree Π, that is, a tree such that:

• the leaves of Π are labelled by terms in S;
• if a node of Π is labelled by t_{n+1} and has n child nodes labelled by t1, ..., tn respectively, then (t1 ··· tn / t_{n+1}) is an instance of some inference rule of I;
• the root of Π is labelled by t.

The set Terms(Π) is the set of all terms that appear in any node of Π. We may write S ⊢ t instead of S ⊢_I t when I is clear from the context.

Example 3 Let S0 = {⟨k1, k2⟩, ⟨k3, a⟩, {|n|}s_⟨k1,k3⟩} and IDY as defined in Figure 3. The fact that S0 ⊢ ⟨k1, k2⟩ is witnessed by the proof tree consisting of a single node labelled ⟨k1, k2⟩. Then k1 and k3 are derivable in one step from S0. We have S0 ⊢ ⟨n, a⟩, and a proof tree of it is the tree presented in Section 3.2.2.

3.3. An algorithm to decide message deduction

To analyse security protocols, one of the key methods is to design automatic procedures to detect whether a protocol is subject to attacks. In this context, a very basic question is to decide whether a term t is deducible from a (finite) set of terms S. This problem is called the intruder deduction problem.

Definition 5 (Intruder deduction problem) Let I be an inference system. The intruder deduction problem consists in deciding the following problem:
Input: a finite set of terms S and a term t.
Output: whether S ⊢_I t.

The intruder deduction problem is undecidable in general [2], for arbitrary inference systems. It is however easily decidable for local theories, which satisfy that whenever S ⊢ t, there exists a proof of it that uses only subterms of S and t.

Definition 6 An inference system I is local if for any finite set of terms S and for any term t such that S ⊢ t, there exists a proof tree Π of S ⊢ t such that any term labeling Π is in st(S ∪ {t}).

Theorem 1 Let I be a local inference system. The intruder deduction problem is decidable in polynomial time (PTIME).

Proof: Let I be a local inference system. Whether S ⊢ t can be decided by the following algorithm:

• Let S_0 = S.
• Let S_{i+1} = (S_i ∪ {u | S_i ⊢1 u}) ∩ st(S ∪ {t}).
• If S_{i+1} = S_i then stop.
• Check whether t ∈ S_i.

Let N be the cardinality of st(S ∪ {t}). For any i, we have that S_i ⊆ st(S ∪ {t}), therefore the procedure stops after at most N steps. Since each step can be computed in polynomial time, the overall procedure is polynomial. Its correctness is ensured by the locality property of I.

The intruder deduction problem is decidable in PTIME for the Dolev-Yao inference system, due to its locality property.

Proposition 2 The Dolev-Yao inference system IDY is local.

Proof: We say that a proof tree Π of S ⊢ t is minimal if its number of nodes is minimal. We show a property slightly stronger than locality: for any S, t such that S ⊢ t, and for any minimal proof Π of S ⊢ t, it holds that Terms(Π) ⊆ st(S ∪ {t}); moreover, if Π is reduced to a leaf or ends with a decomposition rule, then Terms(Π) ⊆ st(S). The proof proceeds by induction on the size of Π.

Base case. If Π is a leaf then by definition Terms(Π) ⊆ st(S).

Induction. Let Π be a minimal proof of S ⊢ t and consider the last applied inference rule. Assume first it is a composition rule:

        Π1    Π2
        t1    t2
        --------
            t

with t = f(t1, t2) for f ∈ {pair, aenc, senc}. By induction hypothesis, it must be the case that Terms(Π1) ⊆ st(S ∪ {t1}) ⊆ st(S ∪ {t}) and Terms(Π2) ⊆ st(S ∪ {t2}) ⊆ st(S ∪ {t}). Hence Terms(Π) ⊆ st(S ∪ {t}).

Consider now the case where the last applied inference rule is a decomposition, and assume first that it corresponds to asymmetric decryption, that is
         Π1              Π2
    aenc(t, pk(t2))      t2
    -----------------------
               t

The last applied rule in Π1 must be a decomposition. Indeed, if it were a composition, Π would be of the form

      Π1'      Π1''
       t      pk(t2)
      --------------       Π2
      aenc(t, pk(t2))      t2
      -----------------------
                 t

and Π1', the subproof deriving t, would be a smaller proof of S ⊢ t, which contradicts the minimality of Π. Now, since the last applied rule of Π1 is a decomposition, applying the induction hypothesis we deduce that Terms(Π1) ∪ {aenc(t, pk(t2))} ⊆ st(S) and Terms(Π2) ⊆ st(S ∪ {t2}). We deduce that aenc(t, pk(t2)), and therefore t and t2, are in st(S). Hence Terms(Π) ⊆ st(S ∪ {t}). The other cases are left to the reader.

Corollary 1 The intruder deduction problem is decidable in PTIME for the Dolev-Yao inference system IDY. This follows directly from Theorem 1 and Proposition 2.
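The saturation procedure of Theorem 1, instantiated with the Dolev-Yao rules of Figure 3, can be sketched as follows. This is our own sketch under a nested-tuple term encoding, not an implementation from the notes; the restriction of candidates to st(S ∪ {t}) is exactly what locality licenses.

```python
# A sketch (ours, not from the notes) of the PTIME saturation algorithm of
# Theorem 1 for the Dolev-Yao system. Terms are nested tuples; candidate
# terms are restricted to st(S ∪ {t}).

def subterms(t):
    out = {t}
    if t[0] not in ("name", "var"):
        for s in t[1:]:
            out |= subterms(s)
    return out

def one_step(known, candidates):
    """All terms in `candidates` deducible in one step from `known`."""
    new = set()
    for u in candidates:
        # composition rules: build a pair or an encryption from known parts
        if u[0] in ("pair", "senc", "aenc") and all(s in known for s in u[1:]):
            new.add(u)
    for u in known:
        if u[0] == "pair":                       # projections
            new.update(u[1:])
        elif u[0] == "senc" and u[2] in known:   # symmetric decryption
            new.add(u[1])
        elif u[0] == "aenc" and u[2][0] == "pk" and u[2][1] in known:
            new.add(u[1])                        # asymmetric decryption
    return new

def deducible(S, t):
    cand = set().union(*(subterms(u) for u in S | {t}))
    known = set(S)
    while True:
        nxt = (known | one_step(known, cand)) & cand
        if nxt == known:
            return t in known
        known = nxt

# S0 = {⟨k1, k2⟩, ⟨k3, a⟩, {|n|}s_⟨k1,k3⟩} from Section 3.2.2:
k1, k2, k3, a, n = (("name", x) for x in ("k1", "k2", "k3", "a", "n"))
S0 = {("pair", k1, k2), ("pair", k3, a), ("senc", n, ("pair", k1, k3))}
print(deducible(S0, ("pair", n, a)))   # True: the derivation of Section 3.2.2
print(deducible(S0, ("name", "m")))    # False: m never appears in S0
```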
4. Equational theory and static equivalence

Inference systems model what an attacker can compute. This is however not sufficient in cases where the attacker gains information not by learning new values but by observing the difference between two behaviors. Consider for example the case of an election where voters vote 0 or 1. An important security property is that votes remain confidential. However, the question is not whether the values 0 or 1 remain confidential, since they are both public: the vote remains confidential if an attacker cannot distinguish whether some agent A is voting 0 or 1.

We first start this section by enriching term algebras with equational theories, to model more primitives. We then provide a formal notion of indistinguishability, called static equivalence.

4.1. Equational theories

A pure free algebra does not always suffice to accurately represent cryptographic primitives. Consider for example the bit-wise exclusive or operator ⊕. This operation has a cancellation property: m ⊕ m = 0. This holds even when the operation is nested inside other primitives; for example {m ⊕ m}k = {0}k. The exclusive or is also commutative and associative. These properties cannot be accurately reflected by an inference system. Instead, it is convenient to quotient the term algebra with an equational theory.
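To make these ⊕ properties concrete: modulo associativity, commutativity and cancellation, a ⊕-term flattens to the set of atoms occurring an odd number of times. The sketch below (ours, not from the notes) decides equality modulo this theory that way:

```python
# A sketch (ours, not from the notes) deciding equality modulo the XOR
# theory: a term built from "xor" nodes, atoms and "0" normalizes to the
# set of atoms surviving cancellation (x ⊕ x = 0); associativity and
# commutativity are absorbed by using sets.

def xor_atoms(t):
    if isinstance(t, tuple) and t[0] == "xor":
        # symmetric difference: atoms occurring twice cancel out
        return xor_atoms(t[1]) ^ xor_atoms(t[2])
    if t == "0":
        return frozenset()
    return frozenset([t])

def eq_xor(u, v):
    return xor_atoms(u) == xor_atoms(v)

# With u = k1 ⊕ k2, v = k2 ⊕ k3 and w = k1 ⊕ k3, we have w = u ⊕ v modulo
# the theory: k2 cancels.
u = ("xor", "k1", "k2")
v = ("xor", "k2", "k3")
w = ("xor", "k1", "k3")
print(eq_xor(w, ("xor", u, v)))            # True
print(eq_xor(("xor", "k1", "k1"), "0"))    # True: x ⊕ x = 0
```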
4.1.1. Definitions

Let F be a signature. An equational theory E is simply a set of equations u = v, where u and v are terms in T(F, X, N). The equivalence relation =E is defined by the equalities of E closed under reflexivity, transitivity, substitutions and context. Formally, =E is the smallest equivalence relation such that:

• uθ =E vθ for any u = v ∈ E and any substitution θ;
• u1 =E v1, ..., uk =E vk implies f(u1, ..., uk) =E f(v1, ..., vk).

4.1.2. Examples

Exclusive Or. The standard equational theory E⊕ for representing the exclusive or is defined as follows:

    x ⊕ (y ⊕ z) = (x ⊕ y) ⊕ z        x ⊕ y = y ⊕ x
    x ⊕ x = 0                        x ⊕ 0 = x

where ⊕ is a function symbol of arity 2. The two equations of the first line model respectively associativity and commutativity, while those of the second line model the cancellation property and the identity element of the exclusive or.

Example 4 Let u = k1 ⊕ k2, v = k2 ⊕ k3, and w = k1 ⊕ k3. Then w =E⊕ u ⊕ v.

Modular Exponentiation. Another standard primitive that cannot be modeled by an inference system is modular exponentiation. Modular exponentiation is used in many encryption schemes (such as RSA or El Gamal) and is at the heart of the Diffie-Hellman key exchange protocol [23], for example. A minimal equational theory for modular exponentiation is the theory Eexp induced by the equation

    exp(exp(x, y), z) = exp(exp(x, z), y)

where exp is a function symbol of arity 2. Of course, this is not the only equation that holds for modular exponentiation, but it suffices to reflect the property needed to execute the Diffie-Hellman protocol:

    A → B : exp(g, na)
    B → A : exp(g, nb)

At the end of an honest execution, the two agents A and B share the same key exp(exp(g, na), nb) =Eexp exp(exp(g, nb), na).

Encryption. Encryption and concatenation can also be modeled by an equational theory, with the introduction of explicit destructor operators. Let Fdec = {sdec, adec, fst, snd}, corresponding to, respectively, symmetric decryption, asymmetric decryption, and the first and second projections. Let F0 be an arbitrary (finite) set of constant symbols. The properties
of concatenation and of standard symmetric and asymmetric encryption are modeled by the following set of equations Eenc, over the term algebra T(Fstd ∪ Fdec ∪ F0, X):

    sdec(senc(x, y), y) = x        adec(aenc(x, pk(y)), y) = x
    fst(pair(x, y)) = x            snd(pair(x, y)) = y
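Oriented left to right, these four equations can be used as rewrite rules applied bottom-up until no destructor fires. The following is our own sketch (not from the notes), over nested-tuple terms with plain strings as atoms:

```python
# A sketch (ours, not from the notes) using the Eenc equations as rewrite
# rules, oriented left to right and applied bottom-up. Terms are nested
# tuples; atoms are plain strings.

def normalize(t):
    if not isinstance(t, tuple):
        return t
    t = (t[0],) + tuple(normalize(s) for s in t[1:])    # arguments first
    if t[0] == "sdec" and t[1][0] == "senc" and t[1][2] == t[2]:
        return t[1][1]                                  # sdec(senc(x, y), y) -> x
    if t[0] == "adec" and t[1][0] == "aenc" and t[1][2] == ("pk", t[2]):
        return t[1][1]                                  # adec(aenc(x, pk(y)), y) -> x
    if t[0] == "fst" and t[1][0] == "pair":
        return t[1][1]                                  # fst(pair(x, y)) -> x
    if t[0] == "snd" and t[1][0] == "pair":
        return t[1][2]                                  # snd(pair(x, y)) -> y
    return t                                            # no rule applies

# fst(sdec(senc(pair(n, n), k), k)) normalizes to n:
m = ("senc", ("pair", "n", "n"), "k")
print(normalize(("fst", ("sdec", m, "k"))))   # n
```

A term that cannot be decrypted, e.g. sdec(senc(s, k1), k2), is left untouched: no rule matches, which mirrors the fact that the equations only equate successful decryptions with their plaintexts.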
4.1.3. Deduction

It is again possible to model what an attacker can deduce from a given set of messages. Actually, when cryptographic primitives are modeled by an equational theory, the notion of deduction can be defined in a uniform way: the intruder may deduce any term that he can obtain by applying functions. Formally, given an equational theory E, the associated deduction system ⊢E is defined by the two rules:

    t1  ···  tn                  t
    --------------              ----  if t =E t'
    f(t1, ..., tn)               t'
Example 5 Consider the equational theory E⊕ ∪ Eenc of XOR and encryption, over terms in T(Fstd ∪ Fdec ∪ {⊕}, X). Let S = {senc(a, a ⊕ c), a ⊕ b, b ⊕ c}. Then S ⊢_{E⊕ ∪ Eenc} a. Indeed:

    a ⊕ b    b ⊕ c
    --------------
        a ⊕ c           senc(a, a ⊕ c)
    ----------------------------------
                    a

Alternatively, deduction can be characterized as a variant of a unification problem.

Proposition 3 Let E be an equational theory, S a set of terms and t a term. Then S ⊢E t if and only if there exist a context C with n(C) = ∅ and terms t1, ..., tn ∈ S such that t =E C[t1, ..., tn].

A context is a term with holes (more formally, with variables x1, ..., xn), and C[t1, ..., tn] denotes C where the holes (more formally, the variables xi) are replaced by the ti.

Proof: The implication S ⊢E t ⇒ ∃C, t1, ..., tn ∈ S such that n(C) = ∅ and t =E C[t1, ..., tn] is proved by a simple induction on the size of the proof tree of S ⊢E t. The converse implication is proved again by induction, on the size of the context C.

This property was first noticed by [1], and the existence of a context C such that t =E C[t1, ..., tn] was later called the cap unification problem [7].

4.1.4. The case of explicit decryption

Back to the case of encryption, there are two ways of defining deduction: either through the inference system IDY introduced in §3.2.1, or through the deduction system induced by the equational theory Eenc. Both definitions actually coincide.
Proposition 4 Let S be a set of terms and t a term in the term algebra T(Fstd, X). Then

    S ⊢IDY t    if and only if    S ⊢Eenc t.
The proof is left as an exercise.

4.2. Static equivalence

Deduction does not always suffice to reflect the attacker's abilities. For example, the attacker can observe in which order messages are sent. Messages are therefore organized into frames.

Definition 7 (frame) A frame is an expression ϕ = νñ.θ = νñ.{M1/x1, …, Mn/xn} where θ is a substitution and ñ is a set of names that are restricted in ϕ. The terms M1, …, Mn represent the attacker knowledge while the names in ñ are initially unknown to the attacker. If k is a name, νk.ϕ denotes ν(ñ ∪ {k}).θ. We define the domain of ϕ, Dom(ϕ), to be Dom(θ).

Example 6 νk.{1/x1, 0/x2, senc(0, k)/x3} is a frame. It models the fact that the attacker has seen two constants 1 and 0 in addition to the encryption senc(0, k), where the key k is initially unknown to the attacker.

4.2.1. Deduction and recipe

The notion of deduction can easily be extended to frames.

Definition 8 (deduction) A term t is deducible from a frame ϕ = νñ.θ if it can be deduced using ϕ and any name that does not occur in ñ. More formally, given an equational theory E and a frame ϕ = νñ.θ, we write ϕ ⊢E t if Dom(ϕ)θ ∪ (N \ ñ) ⊢E t.

Consider for example the frame ϕ1 = νn,k.θ1 where θ1 = {senc(pair(n, n), k)/x, k/y}, and consider the equational theory Eenc defined in §4.1.2. Then ϕ1 ⊢Eenc n. More precisely, n can be obtained from ϕ1 by first decrypting the message stored in x with the key in y and then projecting on the first component. Let M = fst(dec(x, y)). We have that Mθ1 =Eenc n. Such a term M is called a recipe of n w.r.t. ϕ1.

Definition 9 (recipe) Let ϕ = νñ.θ be a frame and t a term such that ϕ ⊢E t. A term R is said to be free w.r.t. ϕ if n(R) ∩ ñ = ∅. A term R is a recipe of t in ϕ if R is free w.r.t. ϕ and Rθ =E t.

A term is deducible if and only if there exists a recipe for it.

Proposition 5 Let ϕ = νñ.θ be a frame and t a term. ϕ ⊢E t if and only if there exists a recipe R of t in ϕ.
The proof is very similar to the proof of Proposition 3.
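The recipe M = fst(dec(x, y)) from the example above can be checked mechanically. Below is a small sketch (ours, not from the notes), with terms as nested tuples and a normal-form function for the convergent rewrite system underlying Eenc.

```python
def normalize(t):
    """Normal form w.r.t. the rewrite system underlying Eenc:
    fst(pair(x,y)) -> x, snd(pair(x,y)) -> y, dec(senc(x,y),y) -> x."""
    if isinstance(t, str):
        return t
    head, *args = t
    args = [normalize(a) for a in args]
    if head == "fst" and isinstance(args[0], tuple) and args[0][0] == "pair":
        return args[0][1]
    if head == "snd" and isinstance(args[0], tuple) and args[0][0] == "pair":
        return args[0][2]
    if head == "dec" and isinstance(args[0], tuple) and args[0][0] == "senc" \
            and args[0][2] == args[1]:
        return args[0][1]
    return (head, *args)

def apply_recipe(recipe, theta):
    """Replace frame variables by their terms, then normalize: R ↦ Rθ↓."""
    if isinstance(recipe, str):
        return normalize(theta.get(recipe, recipe))
    head, *args = recipe
    return normalize((head, *[apply_recipe(a, theta) for a in args]))

# ϕ1 = νn,k.θ1 with θ1 = {senc(pair(n,n),k)/x, k/y}; recipe M = fst(dec(x,y)).
theta1 = {"x": ("senc", ("pair", "n", "n"), "k"), "y": "k"}
assert apply_recipe(("fst", ("dec", "x", "y")), theta1) == "n"
```

Since the rewrite system is subterm convergent, a bottom-up pass with a single root rewrite per node suffices to compute normal forms here.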
4.2.2. Definition of static equivalence

Consider the two frames ϕ1 = {0/x, 1/y} and ϕ2 = {1/x, 0/y} where 0 and 1 are constants. Clearly, the same terms can be deduced from ϕ1 and ϕ2. However, the two frames model two different situations: in ϕ1, the attacker observed 0 then 1, while in ϕ2, the attacker observed 1 then 0. To reflect the ability of the attacker to compare messages, [3] introduced the notion of static equivalence. Before defining static equivalence we require the following auxiliary definition.

Definition 10 Given frames ϕ1, ϕ2 we write ϕ1 =α ϕ2 if ϕ1 is equal to ϕ2 up to α-conversion of restricted names. We say that the equation M =E N holds in a frame ϕ, written (M =E N)ϕ, if and only if there exist ñ and θ such that ϕ =α νñ.θ, M, N are free w.r.t. νñ.θ, and Mθ =E Nθ.

Definition 11 (static equivalence) Two frames ϕ1 and ϕ2 are statically equivalent w.r.t. an equational theory E, denoted ϕ1 ∼E ϕ2, if Dom(ϕ1) = Dom(ϕ2) and for any two terms M, N we have that (M =E N)ϕ1 if and only if (M =E N)ϕ2. We may write ϕ1 ∼ ϕ2 instead of ϕ1 ∼E ϕ2 when E is clear from the context.

Example 7 Let Eenc be the equational theory of encryption as defined in §4.1.2. Let ϕ1 = {0/x, 1/y} and ϕ2 = {1/x, 0/y}. Then ϕ1 ≁ ϕ2. Indeed (x =E 0) holds in ϕ1 but not in ϕ2.

Example 8 Let ϕ1 = νk.{aenc(0, pk(k))/x, pk(k)/y} and ϕ2 = νk.{aenc(1, pk(k))/x, pk(k)/y}, corresponding respectively to the asymmetric encryption of 0 and 1. Then ϕ1 ≁ ϕ2. Indeed (aenc(0, y) =E x) holds in ϕ1 but not in ϕ2: an attacker may encrypt 0 himself and check for equality. This is not the case anymore if encryption is randomized. Let ϕ1′ = νk,n.{aenc(pair(0, n), pk(k))/x, pk(k)/y} and ϕ2′ = νk,n.{aenc(pair(1, n), pk(k))/x, pk(k)/y}. Then ϕ1′ ∼ ϕ2′.

Static equivalence is closed under restriction and composition.

Proposition 6 Let ϕ = νñ.σ, ϕ1 = νñ1.σ1, and ϕ2 = νñ2.σ2 be three frames such that ϕ1 ∼ ϕ2, Dom(σ) ∩ Dom(σi) = ∅ and ñ ∩ ñi = ∅ (1 ≤ i ≤ 2). Then

1. νs.ϕ1 ∼ νs.ϕ2, and
2. ψ1 = ν(ñ ∪ ñ1).(σ ∪ σ1) ∼ ν(ñ ∪ ñ2).(σ ∪ σ2) = ψ2.

Proof: Property 1 follows from the fact that n(νs.ϕi) ⊆ n(ϕi). Property 2 is also simple to prove. Assume (M =E N)ψ1. This can be rewritten as (Mσ =E Nσ)ϕ1, assuming n(M, N) ∩ ñ = ∅, which can be enforced through α-renaming. Since ϕ1 ∼ ϕ2, this implies (Mσ =E Nσ)ϕ2, and therefore (M =E N)ψ2. The case where (M =E N)ψ2 holds is symmetric.
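The distinguishing test of Example 7 can be phrased directly in code. A minimal sketch (ours, not from the notes), restricted to frames of free constants where =E is plain syntactic equality:

```python
def subst(t, theta):
    """Apply a frame substitution to a recipe term (free constants stay)."""
    if isinstance(t, str):
        return theta.get(t, t)
    return tuple(subst(a, theta) for a in t)

def distinguishes(M, N, theta1, theta2):
    """An equality test (M, N) distinguishes two frames if it holds in one
    frame but not in the other (here =E is syntactic equality, since no
    equation of the theory applies to these constant terms)."""
    return (subst(M, theta1) == subst(N, theta1)) != \
           (subst(M, theta2) == subst(N, theta2))

phi1 = {"x": "0", "y": "1"}
phi2 = {"x": "1", "y": "0"}
assert distinguishes("x", "0", phi1, phi2)       # Example 7: ϕ1 ≁ ϕ2
assert not distinguishes("x", "y", phi1, phi2)   # this test does not separate them
```

Deciding static equivalence in general of course requires ranging over all tests (M, N), which is what the saturation procedure of §4.2.3 makes finite.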
4.2.3. Decision procedure for encryption

In this section, we consider the equational theory of symmetric encryption Esenc, restricting the theory Eenc (defined in §4.1.2) to pairing and symmetric encryption. Our goal is to present a decision procedure for static equivalence for the theory Esenc. The procedure is a special case of the procedure presented by [12] for arbitrary subterm convergent equational theories.

For the sake of clarity of exposition, we consider only frames where all names are restricted (public names should be explicitly added in the frame). That is, we only consider frames of the form νñ.θ where n(θ) ⊆ ñ. Therefore, by abuse of notation, we write θ1 ∼Esenc θ2 whenever νn(θ1).θ1 ∼Esenc νn(θ2).θ2. We first need to introduce some vocabulary.

Definition 12 (extended frame) An extended frame is an expression {M1 ▷ u1, …, Mn ▷ un} where the ui and Mi are terms. The extended frame associated to a frame νñ.{u1/x1, …, un/xn} (where n(ui) ⊆ ñ) is simply defined as {x1 ▷ u1, …, xn ▷ un}.

Initialization. Given a substitution θ, the decision procedure starts by eliminating duplicated terms, replacing them by equations. Suppose the extended frame corresponding to θ is

    {x11 ▷ t1, …, x1k1 ▷ t1, …, xn1 ▷ tn, …, xnkn ▷ tn}

where the ti are pairwise distinct. Then we define

    Init(θ) = ({x11 ▷ t1, x21 ▷ t2, …, xn1 ▷ tn}, {xi1 ⋈ xij | 1 ≤ i ≤ n, 1 ≤ j ≤ ki})

Saturation. Let ϕ0 be an extended frame and E0 a set of equations M ⋈ N where M and N are terms. We define the saturation procedure for ϕ0 and E0 as follows.

Sat(ϕ0, E0) =
1. Let ϕ := ϕ0 and E := E0.
2. Repeat until reaching a fixed point. For any M1 ▷ t1, M2 ▷ t2 ∈ ϕ, f ∈ {senc, dec, pair}, g ∈ {fst, snd}:
   • If f(t1, t2) =Esenc t for some term t subterm of ϕ then
     ∗ if ∃M. M ▷ t ∈ ϕ then E := E ∪ {f(M1, M2) ⋈ M};
     ∗ else ϕ := ϕ ∪ {f(M1, M2) ▷ t}.
   • If g(t1) =Esenc t for some term t subterm of ϕ then
     ∗ if ∃M. M ▷ t ∈ ϕ then E := E ∪ {g(M1) ⋈ M};
     ∗ else ϕ := ϕ ∪ {g(M1) ▷ t}.
3. Return (ϕ, E).

This procedure terminates and reaches a fixed point (left as an exercise).

Theorem 2 Let θ1 and θ2 be two substitutions. Then θ1 ∼Esenc θ2 if and only if
• for any equation (M ⋈ N) in Sat(Init(θ1)) it holds that Mθ2 =Esenc Nθ2;
• for any equation (M ⋈ N) in Sat(Init(θ2)) it holds that Mθ1 =Esenc Nθ1.

This theorem is a particular case of the decision procedure developed by [12]. Its proof will not be given here.

Example 9 Consider θ = {senc(pair(n, n), k)/x, k/y}. Then Init(θ) = ({x ▷ senc(pair(n, n), k), y ▷ k}, ∅) and Sat(Init(θ)) = (ϕ, E) with

    ϕ = {x ▷ senc(pair(n, n), k), y ▷ k, dec(x, y) ▷ pair(n, n), fst(dec(x, y)) ▷ n}

and

    E = {snd(dec(x, y)) ⋈ fst(dec(x, y))}

4.2.4. More decision procedures

A much more general decision procedure has been developed by [12] for any convergent theory. It is guaranteed to terminate for the class of subterm convergent theories² and it also works in more general cases that encompass for example blind signatures [13]. The corresponding implementation is the tool YAPA. Another available tool for static equivalence is KISS by [18]. This tool also handles a large class of equational theories that includes for example trapdoor commitments. Decidability of static equivalence has also been obtained for the class of monoidal theories [20], which captures e.g. Exclusive Or and pure Associativity and Commutativity. It also encompasses homomorphic equations of the form h(x + y) = h(x) + h(y) where + is an associative and commutative symbol. Decidability results can be combined [20]: if static equivalence is decidable for two disjoint theories E1 and E2 then it is also decidable for E1 ∪ E2 (provided deduction is also decidable for E1 and E2). Some more decidability results have been obtained for specific theories such as the theory of trapdoor commitments and that of reencryption [14], as well as theories for Diffie-Hellman exponentiation [31] and bilinear pairings [32].
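The saturation loop of §4.2.3 can be prototyped directly. The sketch below is ours, not from the notes (terms are encoded as nested tuples, with dec written as in the notes). It reproduces the ϕ and the equation snd(dec(x, y)) ⋈ fst(dec(x, y)) of Example 9; being a naive closure, it may also record further redundant equations (e.g. senc(dec(x, y), y) ⋈ x), which are sound consequences as well.

```python
def normalize(t):
    """Normal form for Esenc: dec(senc(x,y),y) -> x,
    fst(pair(x,y)) -> x, snd(pair(x,y)) -> y."""
    if isinstance(t, str):
        return t
    h, *a = t
    a = [normalize(s) for s in a]
    if h == "fst" and isinstance(a[0], tuple) and a[0][0] == "pair":
        return a[0][1]
    if h == "snd" and isinstance(a[0], tuple) and a[0][0] == "pair":
        return a[0][2]
    if h == "dec" and isinstance(a[0], tuple) and a[0][0] == "senc" \
            and a[0][2] == a[1]:
        return a[0][1]
    return (h, *a)

def subterms(t):
    yield t
    if isinstance(t, tuple):
        for s in t[1:]:
            yield from subterms(s)

def saturate(frame):
    """Saturation for Esenc. `frame` maps recipes M to terms t (M ▷ t),
    with duplicates already eliminated (i.e. Init was applied).
    Returns the saturated ϕ and the recorded equations M ⋈ N."""
    phi = dict(frame)
    eqs = set()
    changed = True
    while changed:
        changed = False
        subs = {s for t in phi.values() for s in subterms(t)}
        entries = list(phi.items())
        candidates = []
        for M1, t1 in entries:
            for g in ("fst", "snd"):
                candidates.append(((g, M1), normalize((g, t1))))
            for M2, t2 in entries:
                for f in ("senc", "dec", "pair"):
                    candidates.append(((f, M1, M2), normalize((f, t1, t2))))
        for M, t in candidates:
            if t not in subs:          # only terms already occurring in ϕ
                continue
            inv = {v: k for k, v in phi.items()}  # terms of ϕ are distinct
            if t in inv:
                if inv[t] != M:
                    eqs.add((M, inv[t]))          # the "E := E ∪ {… ⋈ M}" rule
            else:
                phi[M] = t                        # the "ϕ := ϕ ∪ {… ▷ t}" rule
                changed = True
    return phi, eqs

# Example 9: θ = {senc(pair(n,n),k)/x, k/y}
phi, eqs = saturate({"x": ("senc", ("pair", "n", "n"), "k"), "y": "k"})
assert phi[("dec", "x", "y")] == ("pair", "n", "n")
assert phi[("fst", ("dec", "x", "y"))] == "n"
assert (("snd", ("dec", "x", "y")), ("fst", ("dec", "x", "y"))) in eqs
```

Per Theorem 2, checking θ1 ∼Esenc θ2 then amounts to verifying each recorded equation of Sat(Init(θ1)) against θ2 and vice versa.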
5. A cryptographic process calculus

The previous section describes how messages exchanged in cryptographic protocols can be represented as terms. In this section, we discuss how the protocols themselves can be modelled. While the kind of “Alice - Bob” notation used in §2 is convenient for explaining protocols, such notations generally only describe an honest protocol execution,

²A convergent theory is an equational theory induced by a convergent rewriting system. The theory is subterm convergent if there is a corresponding (convergent) rewriting system such that for every rewrite rule ℓ → r, the right-hand side r is a subterm of ℓ or a constant.
P, Q, R :=                          plain processes
    0                               null process
    P | Q                           parallel composition
    !P                              replication
    νn.P                            name restriction
    if t1 = t2 then P else Q        conditional
    in(u, x).P                      message input
    out(u, t).P                     message output

Figure 4. Syntax: plain processes
and contain ambiguities. Fundamentally, security protocols are concurrent programs, and formalisms for representing such programs do exist. In particular, process algebras and calculi have been developed for this purpose. Some “general purpose” process algebras, e.g. CSP [42], have indeed been used to reason about security protocols. There also exist dedicated calculi which integrate support for sending messages that use cryptographic primitives. Examples of such dedicated process calculi are CryptoSPA [27], which extends the CCS process algebra, and the spi calculus [4] and the applied pi calculus [3], which both extend the pi calculus. We present here in more detail the applied pi calculus. In contrast to the pure pi calculus it is not restricted to communicating names: processes may output terms that represent messages. One may also note that some people have considered the problem of compiling “Alice - Bob” style notations to more formal models, e.g., [30,36,17], but we will not discuss them here.

5.1. Syntax and informal semantics

We assume a set of names N, a set of variables X, and a signature F which define the set of terms T(F, X, N) equipped with an equational theory E (see §4). The equational theory is left implicit throughout this section. Moreover, we rely on a simple sort system that distinguishes channels and basic messages. We distinguish the set Nch ⊂ N of channel names and partition X = Xb ⊎ Xch into the variables of base sort and the variables of channel sort. We suppose that function symbols cannot be applied to variables or names of channel sort, and cannot return terms of that sort. Hence, channels are always atomic: the only terms of channel sort are variables and names of that sort.

The applied pi calculus has two kinds of processes: plain and extended processes. Plain processes are generated by the grammar given in Figure 4, where t, t1, t2, … range over terms, n over names, x over variables, and u is a meta-variable that stands for either a name or a variable of channel sort. The 0 process is the process that does nothing. Parallel composition P | Q models that processes P and Q are executed in parallel. The replication of P, denoted !P, allows an unbounded number of copies of P to be spawned. New names can be created using the new operator νn, which acts as a binder and generates a restricted name. The conditional if t1 = t2 then P else Q behaves as P whenever t1 =E t2 and as Q otherwise. Finally, in(u, x).P expects an input on channel u that is bound to variable x in P, and out(u, t).P outputs term t on channel u and then behaves as P.

Extended processes are generated by the grammar given in Figure 5. They extend plain processes by active substitutions, and allow restrictions on both names and variables. An active substitution {t/x} allows processes to address a term by a variable.
A, B, C :=                          extended processes
    P                               plain process
    A | B                           parallel composition
    νn.A                            name restriction
    νx.A                            variable restriction
    {t/x}                           active substitution

Figure 5. Syntax: extended processes
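The plain-process grammar of Figure 4 can be transcribed into a small abstract syntax. The following is a hypothetical Python rendering (ours; the notes do not fix any implementation), together with a free-name computation that treats ν as the only name binder.

```python
from dataclasses import dataclass
from typing import Union

Term = Union[str, tuple]  # names/variables are strings, f(t1,…,tn) a tuple

@dataclass(frozen=True)
class Nil:                      # 0
    pass

@dataclass(frozen=True)
class Par:                      # P | Q
    left: object
    right: object

@dataclass(frozen=True)
class Repl:                     # !P
    proc: object

@dataclass(frozen=True)
class New:                      # νn.P
    name: str
    proc: object

@dataclass(frozen=True)
class Cond:                     # if t1 = t2 then P else Q
    t1: Term
    t2: Term
    then: object
    els: object

@dataclass(frozen=True)
class In:                       # in(u, x).P  (x binds a variable, not a name)
    chan: str
    var: str
    proc: object

@dataclass(frozen=True)
class Out:                      # out(u, t).P
    chan: str
    term: Term
    proc: object

def names(t):
    """All atoms of a term (names and variables are not separated here)."""
    return {t} if isinstance(t, str) else {a for s in t[1:] for a in names(s)}

def free_names(p, bound=frozenset()):
    """Free names of a plain process, ν being the only name binder."""
    if isinstance(p, Nil):
        return set()
    if isinstance(p, Par):
        return free_names(p.left, bound) | free_names(p.right, bound)
    if isinstance(p, Repl):
        return free_names(p.proc, bound)
    if isinstance(p, New):
        return free_names(p.proc, bound | {p.name})
    if isinstance(p, Cond):
        return (names(p.t1) | names(p.t2)) - bound \
               | free_names(p.then, bound) | free_names(p.els, bound)
    if isinstance(p, In):
        return ({p.chan} - bound) | free_names(p.proc, bound)
    if isinstance(p, Out):
        return ({p.chan} | names(p.term)) - bound | free_names(p.proc, bound)

# νn.out(c, senc(n, k)): c and k are free, n is bound.
p = New("n", Out("c", ("senc", "n", "k"), Nil()))
assert free_names(p) == {"c", "k"}
```

A full implementation would also track bound variables (bv, fv) separately from names, as the sort system of §5.1 requires.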
The scope of this access may be restricted using the ν operator on variables. This also makes it possible to define local variables: the construct let x = t in P is defined as νx.(P | {t/x}). When the variable x is not restricted, the environment, which represents the attacker, may use x to access the term t. As exemplified in the description of the labelled semantics of the applied pi calculus, these active substitutions are typically used to record the terms output by the processes and to represent the attacker knowledge. Given several active substitutions in parallel {t1/x1} | … | {tn/xn}, we often regroup them into a single substitution {t1/x1, …, tn/xn}. We suppose that these substitutions are cycle free, that there is at most one substitution defined for each variable (and exactly one when this variable is restricted), and that substitutions only contain terms of base sort.

Given an extended process A we denote by φ(A) the process obtained by replacing any plain process with 0; φ(A) is called the frame of the process A. We also note that extended processes must not appear under a replication, an input, an output, or a conditional.

For readability we often omit trailing 0 processes and else 0 branches, e.g. we write

    if t1 = t2 then out(c, t)    instead of    if t1 = t2 then out(c, t).0 else 0

As usual we define free and bound names for processes, denoted n(A) and bn(A). Similarly we define free and bound variables, denoted fv(A) and bv(A). The set of bound variables contains all variables that have been bound by an input and the ones that are restricted by the ν operator. For an active substitution {t/x} the set of free variables contains x in addition to the variables occurring in t.

Finally, we define the notion of context. A context is a process with a “hole”, often denoted _. Given a context C, we write C[A] for the process obtained by replacing the hole with the process A. An evaluation context is a context whose hole is not under a replication, an input, an output, or a conditional.

5.2. Modelling protocols as processes

Before defining the formal semantics of the applied pi calculus we illustrate how the calculus can be used for modelling security protocols. As an example we consider the Needham-Schroeder public key protocol, introduced in §2. The protocol can be informally described as follows.
1. A → B : aenc(⟨a, na⟩, pkb)
2. B → A : aenc(⟨na, nb⟩, pka)
3. A → B : aenc(nb, pkb)

We assume the previously defined equational theory Eenc on the signature Fdec ∪ Fstd and model each of the roles A and B by a process. It is important to distinguish between the role of the initiator A, modelled by a process, and the agent (a in the above informal description) who is executing the role. To make this distinction explicit we parametrize the processes representing the initiator and responder with the keys of the agents who execute the role.

PA(ski, pkr) ≜ νna. out(c, aenc(pair(pk(ski), na), pkr)).
               in(c, x).
               if fst(adec(x, ski)) = na then
               let xnb = snd(adec(x, ski)) in
               out(c, aenc(xnb, pkr))

The process first generates a fresh nonce na and then outputs the first message on channel c. Note that for simplicity we assume that the agent's identity is his public key. Next, the initiator waits for a message which is going to be bound to the variable x. Using a conditional, the initiator checks that the message contains the previously sent nonce na. For readability we then create a local variable xnb and store in this variable what is expected to be the nonce generated by the responder. We can model the responder process PB similarly.

PB(skr) ≜ in(c, y).
          let pki = fst(adec(y, skr)) in
          let yna = snd(adec(y, skr)) in
          νnb. out(c, aenc(pair(yna, nb), pki)).
          in(c, z).
          if adec(z, skr) = nb then Q

One may note that PB only takes a single argument, the responder's private key. The initiator's public key is received during the execution. When the final test succeeds we suppose that the responder continues to execute some task modelled by the process Q.

We can now put the processes together into a process that models the Needham-Schroeder public key protocol as a whole.

Pnspk¹ ≜ νska, skb.( PA(ska, pk(skb)) | PB(skb)
                     | out(c, pk(ska)) | out(c, pk(skb)) )

This first version models that a (or more precisely the agent identified by pk(ska)) is executing an instance of the role PA with b (identified by pk(skb)). We also output the public keys of a and b to make these available to the adversary.
However, one may notice that the above modelling would miss Lowe's man-in-the-middle attack since this setting does not involve any dishonest agent c. To capture this attack one could explicitly include that a is willing to start a session with the intruder. We suppose that the intruder possesses a secret key skc, which formally is just a free name.

Pnspk² ≜ νska, skb.( PA(ska, pk(skb)) | PA(ska, pk(skc)) | PB(skb)
                     | out(c, pk(ska)) | out(c, pk(skb)) )

This second version explicitly includes a session started by a with the intruder and indeed captures Lowe's man-in-the-middle attack. However, this situation is not satisfactory, as one does not know a priori with whom agents should start a session. One trick is to leave this choice to the attacker: we add an input that is used to define the public key given to the initiator role.

Pnspk³ ≜ νska, skb.( in(c, xpk).PA(ska, xpk) | PB(skb)
                     | out(c, pk(ska)) | out(c, pk(skb)) )

Now the attacker can just input the public key that suits him best to create an attack. Note that he may input one of the two regular public keys pk(ska) and pk(skb), or any other term, including in particular his own key pk(skc). Note that the attacker may also trigger the agent a to execute a protocol “with himself”, i.e., with the public key pk(ska). There exist indeed attacks, sometimes called reflection attacks, that rely on this behavior.

This version still has a shortcoming: we only consider one session for each role. Many attacks do however require several parallel sessions of the same role. [35] has even shown that there is a priori no upper bound on the number of parallel sessions that would avoid all attacks. We therefore add replication.

Pnspk⁴ ≜ νska, skb.( !in(c, xpk).PA(ska, xpk) | !PB(skb)
                     | out(c, pk(ska)) | out(c, pk(skb)) )

This new version allows a and b to execute an arbitrary number of sessions. Note that a may execute several sessions with the same as well as with different responders. However, this modelling still misses that both initiator and responder may be executed by the same agent. We therefore include explicitly that a and b may execute both roles. Moreover, the above process only allows two honest agents, while an attack may a priori require the presence of more honest agents. Therefore we add an additional replication that allows the creation of an arbitrary number of honest private keys, each of which can be used in an arbitrary number of sessions.

Pnspk⁵ ≜ !νska, skb.( !in(c, xpk).PA(ska, xpk) | !PB(ska)
                      | !in(c, xpk).PA(skb, xpk) | !PB(skb)
                      | out(c, pk(ska)) | out(c, pk(skb)) )
Observing the symmetric roles of a and b, this process can be written more succinctly and elegantly as

Pnspk⁶ ≜ !νsk.( !in(c, xpk).PA(sk, xpk) | !PB(sk) | out(c, pk(sk)) )

This final modelling allows the adversary to spawn an arbitrary number of instances of PA and PB, with either the same or different private keys.

5.3. Formal semantics

As the goal is to prove security properties of protocols modelled as processes, we need to define the semantics of the calculus in order to have a precise definition of how a process can be executed.

5.3.1. Operational semantics

We first define the notion of structural equivalence. Intuitively, structural equivalence relates identical processes that are simply written in a different way. Formally, structural equivalence ≡ is the smallest equivalence relation closed under α-conversion of bound names and variables and under application of evaluation contexts, such that:
PAR-0      A | 0 ≡ A
PAR-C      A | B ≡ B | A
PAR-A      (A | B) | C ≡ A | (B | C)
REPL       !P ≡ P | !P
NEW-0      νn.0 ≡ 0
NEW-PAR    A | νu.B ≡ νu.(A | B)    when u ∉ fv(A) ∪ n(A)
NEW-C      νu.νv.A ≡ νv.νu.A
ALIAS      νx.{t/x} ≡ 0
SUBST      {t/x} | A ≡ {t/x} | A{t/x}
REWRITE    {t1/x} ≡ {t2/x}          when t1 =E t2
While most of the above rules are standard, the last three may require some explanation. ALIAS allows the creation of a new local variable. SUBST allows the application of an active substitution to a process, and REWRITE relates two active substitutions that are equal modulo the equational theory.

Example 10 Let us illustrate these rules by showing that out(c, t1) ≡ out(c, t2) when t1 =E t2.
out(c, t1) ≡ out(c, t1) | 0                  by PAR-0
           ≡ out(c, t1) | νx.{t1/x}          by ALIAS
           ≡ νx.(out(c, t1) | {t1/x})        by NEW-PAR
           ≡ νx.({t1/x} | out(c, t1))        by PAR-C
           ≡ νx.({t1/x} | out(c, x))         by SUBST
           ≡ νx.({t2/x} | out(c, x))         by REWRITE
           ≡ νx.({t2/x} | out(c, t2))        by SUBST
           ≡ νx.(out(c, t2) | {t2/x})        by PAR-C
           ≡ out(c, t2) | νx.{t2/x}          by NEW-PAR
           ≡ out(c, t2) | 0                  by ALIAS
           ≡ out(c, t2)                      by PAR-0

Note that we also implicitly used the fact that structural equivalence is closed under application of evaluation contexts, as we applied some of the rules directly under a context.

One may also note that for any extended process A, we have that φ(A) ≡ νñ.σ for some sequence of names ñ and substitution σ. Therefore we can lift static equivalence to processes: we write A ∼E B whenever φ(A) ≡ νñA.σA, φ(B) ≡ νñB.σB, and νñA.σA ∼ νñB.σB.

We can now define how processes interact together. Internal reduction is the smallest relation on processes closed under structural equivalence and application of evaluation contexts such that:

COMM    out(c, t).P1 | in(c, x).P2 → P1 | P2{t/x}
THEN    if t = t then P else Q → P
ELSE    if t1 = t2 then P else Q → Q    where t1, t2 are ground and t1 ≠E t2

The first rule (COMM) models communication: whenever a process is ready to output a term t on channel c, and another process running in parallel is ready to input on channel c, i.e., it starts with in(c, x), then a communication can take place and x is replaced by t. Rules THEN and ELSE model a conditional. One may note that the THEN rule requires syntactic equality of terms (if t = t). However, as internal reduction is closed under structural equivalence, this rule is equivalent to the rule

THEN'   if t1 = t2 then P else Q → P    where t1 =E t2

using structural equivalence in a similar way as in Example 10. One may also note that in the ELSE rule, contrary to the THEN rule, we require t1, t2 to be ground. This is due to the fact that equality is closed under substitution while disequality is not: e.g., even though x ≠ y, we have that xσ = yσ for σ = {x/y}.

Example 11 We illustrate internal reduction by modelling the honest execution of the Needham-Schroeder public key protocol. For simplicity, we consider a naive model that only considers two honest participants:
νska, skb.( PA(ska, pk(skb)) | PB(skb) )

→ νska, skb, na, nb.(
     in(c, x). if fst(adec(x, ska)) = na then
               let xnb = snd(adec(x, ska)) in out(c, aenc(xnb, pk(skb)))
   | out(c, aenc(pair(na, nb), pk(ska)))
   | in(c, z). if adec(z, skb) = nb then Q )

→ νska, skb, na, nb.(
     if na = na then out(c, aenc(nb, pk(skb)))
   | in(c, z). if adec(z, skb) = nb then Q )

→ νska, skb, na, nb.(
     out(c, aenc(nb, pk(skb)))
   | in(c, z). if adec(z, skb) = nb then Q )

→ νska, skb, na, nb.( if nb = nb then Q )

→ νska, skb, na, nb. Q
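The COMM steps above can be exercised on a toy representation of a parallel composition. Below is a sketch (ours; processes as tagged tuples, no ν or conditionals) that performs one communication step.

```python
def subst_term(t, x, v):
    """Replace variable x by term v in a term (terms are nested tuples)."""
    if isinstance(t, str):
        return v if t == x else t
    return (t[0], *[subst_term(a, x, v) for a in t[1:]])

def subst_proc(p, x, v):
    """Apply {v/x} to a process; in(c, x) shadows x (no other binders here)."""
    tag = p[0]
    if tag == "nil":
        return p
    if tag == "out":                      # ("out", chan, term, continuation)
        return ("out", p[1], subst_term(p[2], x, v), subst_proc(p[3], x, v))
    if tag == "in":                       # ("in", chan, var, continuation)
        return p if p[2] == x else ("in", p[1], p[2], subst_proc(p[3], x, v))

def comm_step(procs):
    """One application of COMM: out(c,t).P | in(c,x).Q → P | Q{t/x}.
    The parallel composition is represented as a list of processes."""
    for i, p in enumerate(procs):
        if p[0] != "out":
            continue
        for j, q in enumerate(procs):
            if q[0] == "in" and q[1] == p[1]:          # same channel
                rest = [r for k, r in enumerate(procs) if k not in (i, j)]
                return rest + [p[3], subst_proc(q[3], q[2], p[2])]
    return procs

soup = [("out", "c", ("senc", "m", "k"), ("nil",)),
        ("in", "c", "x", ("out", "d", "x", ("nil",)))]
soup = comm_step(soup)
assert ("out", "d", ("senc", "m", "k"), ("nil",)) in soup
```

A faithful interpreter would additionally normalize the received term modulo E before evaluating conditionals, as the THEN'/ELSE rules require.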
6. Further readings and conclusion

In these notes, we have mostly focused on how to model security protocols, that is, how to formally define the security of a protocol, for various notions of security (trace-based as well as equivalence-based). As illustrated in these notes, the design of protocols is error-prone and finding flaws is not an easy task. Therefore, the last two decades have seen the development of decision techniques and corresponding tools to check automatically whether a protocol can be attacked. A longer version of these lecture notes has appeared in [21].

Many techniques focus on the case where the number of sessions is bounded: assuming that the protocol is executed a limited number of times, can we check whether the secrecy of some data s is preserved? Note that even if the number of sessions is bounded, the system to verify is still infinite (due to the infinite number of messages that an attacker may construct and use when interacting with the protocol participants). One of the first decidability results was proposed by [5,6]. [40,41] extend this result to a more general framework that includes in particular composed keys. They also provide a complexity result: secrecy is shown to be (co-)NP-complete. Another algorithm, based on constraint systems, has been proposed by [37] and extended by [19] to the exclusive or. The algorithm is rather elegant and amenable to extensions.

The analysis of protocols when the number of sessions is bounded yields efficient tools to find attacks. However, when no attack is found, it is impossible to conclude whether the analyzed protocol is secure or not. Indeed, there might exist an attack that requires a few additional sessions. Moreover, in practice, tools can only analyse a small number of sessions (typically 2 or 3) in a reasonable amount of time. Therefore, if no attack is found then there is no proof that the protocol is secure.
To overcome this limitation, it is necessary to analyse protocols for an unbounded number of sessions. However,
even a simple property like secrecy is undecidable when we do not bound the number of sessions [25]. Nevertheless, it is still possible to design verification tools in this case; obviously, termination is then not guaranteed. There have been two main approaches.

• One approach is to perform backward search, relying on causality arguments. This backward search may not terminate, but user interaction with the tool or additional lemmas make it possible to prune some branches and enforce termination. Examples of such tools are the NRL Protocol Analyzer [34] and its reimplementation in Maude, Maude-NPA [26], as well as the Athena [45], Scyther [22] and Tamarin [43] tools.

• The other approach is to use abstractions. Protocols may be encoded as (first-order) Horn clauses [47,15] or tree automata [38,29,28], over-approximating the intruder capabilities. These tools may report false attacks, and termination is generally still not guaranteed. The ProVerif tool proposed by Blanchet [15] is the most mature tool using this approach: its optimizations and a dedicated Horn clause resolution algorithm make the tool extremely efficient, and non-termination and false attacks (at least for weak secrecy properties) are rare on practical examples.

Conclusion. To conclude, the formal analysis of security protocols is now a mature field that offers several powerful techniques to perform security proofs or to find flaws. Many challenges remain to be solved, such as obtaining security proofs in more accurate models, verifying implementations, or tackling new families of protocols such as e-voting protocols or APIs for secure elements.
References

[1] M. Abadi and V. Cortier. Deciding knowledge in security protocols under equational theories. In Proc. 31st International Colloquium on Automata, Languages, and Programming (ICALP'04), volume 3142 of Lecture Notes in Computer Science, pages 46–58. Springer, 2004.
[2] M. Abadi and V. Cortier. Deciding knowledge in security protocols under equational theories. Theoretical Computer Science, 387(1-2):2–32, 2006.
[3] M. Abadi and C. Fournet. Mobile values, new names, and secure communication. In Proc. 28th ACM Symposium on Principles of Programming Languages (POPL'01), pages 104–115. ACM, 2001.
[4] M. Abadi and A. D. Gordon. A calculus for cryptographic protocols: The spi calculus. Information and Computation, 148(1):1–70, 1999.
[5] R. Amadio and D. Lugiez. On the reachability problem in cryptographic protocols. In Proc. 12th International Conference on Concurrency Theory (CONCUR'00), volume 1877 of Lecture Notes in Computer Science, pages 380–394, 2000.
[6] R. Amadio, D. Lugiez, and V. Vanackère. On the symbolic reduction of processes with cryptographic functions. Theoretical Computer Science, 290(1):695–740, 2002.
[7] S. Anantharaman, P. Narendran, and M. Rusinowitch. Intruders with caps. In Proc. 18th International Conference on Term Rewriting and Applications (RTA'07), volume 4533 of Lecture Notes in Computer Science, pages 20–35. Springer, 2007.
[8] A. Armando, R. Carbone, L. Compagna, J. Cuellar, and L. T. Abad. Formal analysis of SAML 2.0 web browser single sign-on: Breaking the SAML-based single sign-on for Google Apps. In Proc. 6th ACM Workshop on Formal Methods in Security Engineering (FMSE 2008), pages 1–10, 2008.
[9] F. Baader and T. Nipkow. Term Rewriting and All That. Cambridge University Press, 1998.
[10] F. Baader and W. Snyder. Unification theory. In J. A. Robinson and A. Voronkov, editors, Handbook of Automated Reasoning, pages 445–532. Elsevier and MIT Press, 2001.
[11] D. Basin, C. Cremers, and S. Meier. Provably repairing the ISO/IEC 9798 standard for entity authentication. In Proc. 1st Conference on Principles of Security and Trust (POST'12), volume 7215 of Lecture Notes in Computer Science, pages 129–148. Springer, 2012.
[12] M. Baudet. Deciding security of protocols against off-line guessing attacks. In Proc. 12th ACM Conference on Computer and Communications Security (CCS'05), pages 16–25. ACM, Nov. 2005.
[13] M. Baudet, V. Cortier, and S. Delaune. YAPA: A generic tool for computing intruder knowledge. ACM Transactions on Computational Logic, 14, 2013.
[14] M. Berrima, N. Ben Rajeb, and V. Cortier. Deciding knowledge in security protocols under some e-voting theories. Theoretical Informatics and Applications (RAIRO-ITA), 45:269–299, 2011.
[15] B. Blanchet. An efficient cryptographic protocol verifier based on Prolog rules. In Proc. 14th Computer Security Foundations Workshop (CSFW'01). IEEE Comp. Soc. Press, 2001.
[16] M. Bortolozzo, M. Centenaro, R. Focardi, and G. Steel. Attacking and fixing PKCS#11 security tokens. In Proc. 17th ACM Conference on Computer and Communications Security (CCS'10), pages 260–269. ACM, 2010.
[17] Y. Chevalier and M. Rusinowitch. Compiling and securing cryptographic protocols. Inf. Process. Lett., 110(3):116–122, 2010.
[18] Ş. Ciobâcă, S. Delaune, and S. Kremer. Computing knowledge in security protocols under convergent equational theories. Journal of Automated Reasoning, 48(2):219–262, 2012.
[19] H. Comon-Lundh and V. Shmatikov. Intruder deductions, constraint solving and insecurity decision in presence of Exclusive Or. In Proc. 18th Annual IEEE Symposium on Logic in Computer Science (LICS'03), pages 271–280, Los Alamitos, CA, 2003. IEEE Computer Society.
[20] V. Cortier and S. Delaune. Decidability and combination results for two notions of knowledge in security protocols. Journal of Automated Reasoning, 48, 2012.
[21] V. Cortier and S. Kremer. Formal models and techniques for analyzing security protocols: A tutorial. Foundations and Trends in Programming Languages, 1(3):151–267, 2014.
[22] C. Cremers. The Scyther Tool: Verification, falsification, and analysis of security protocols. In Proc. 20th International Conference on Computer Aided Verification (CAV'08), volume 5123 of Lecture Notes in Computer Science, pages 414–418. Springer, 2008.
[23] W. Diffie and M. Hellman. New directions in cryptography. IEEE Transactions on Information Theory, 22(6):644–654, 1976.
[24] D. Dolev and A. Yao. On the security of public key protocols. In Proc. 22nd Symposium on Foundations of Computer Science, pages 350–357. IEEE Comp. Soc. Press, 1981.
[25] N. Durgin, P. Lincoln, J. Mitchell, and A. Scedrov. Undecidability of bounded security protocols. In Proc. Workshop on Formal Methods and Security Protocols, 1999.
[26] S. Escobar, C. Meadows, and J. Meseguer. Maude-NPA: Cryptographic protocol analysis modulo equational properties. In Foundations of Security Analysis and Design V, volume 5705 of Lecture Notes in Computer Science, pages 1–50. Springer, 2009.
[27] R. Focardi and F. Martinelli. A uniform approach for the definition of security properties. In Proc. World Congress on Formal Methods (FM'99), Lecture Notes in Computer Science, pages 794–813. Springer, 1999.
[28] T. Genet and F. Klay. Rewriting for cryptographic protocol verification. In Proc. 17th International Conference on Automated Deduction (CADE'00), volume 1831 of Lecture Notes in Computer Science, pages 271–290. Springer, 2000.
[29] J. Goubault-Larrecq. A method for automatic cryptographic protocol verification (extended abstract). In Proc. Workshops of the 15th International Parallel and Distributed Processing Symposium, volume 1800 of Lecture Notes in Computer Science, pages 977–984. Springer, 2000.
[30] F. Jacquemard, M. Rusinowitch, and L. Vigneron. Compiling and verifying security protocols. In Proc. 7th International Conference on Logic for Programming and Automated Reasoning (LPAR'00), volume 1955 of Lecture Notes in Computer Science, pages 131–160. Springer, 2000.
[31] S.
Foundations and Trends in Programming Languages, 1(3):151–267, 2014. C. Cremers. The Scyther Tool: Verification, falsification, and analysis of security protocols. In Proc. 20th International Conference on Computer Aided Verification (CAV’08), volume 5123 of Lecture Notes in Computer Science, pages 414–418. Springer, 2008. W. Diffie and M. Helman. New directions in cryptography. IEEE Transactions on Information Society, 22(6):644–654, 1976. D. Dolev and A. Yao. On the security of public key protocols. In Proc. 22nd Symposium on Foundations of Computer Science, pages 350–357. IEEE Comp. Soc. Press, 1981. N. Durgin, P. Lincoln, J. Mitchell, and A. Scedrov. Undecidability of bounded security protocols. In Proc. Workshop on Formal Methods and Security Protocols, 1999. S. Escobar, C. Meadows, and J. Meseguer. Maude-npa: Cryptographic protocol analysis modulo equational properties. In Foundations of Security Analysis and Design V, volume 5705 of Lecture Notes in Computer Science, pages 1–50. Springer, 2009. R. Focardi and F. Martinelli. A uniform approach for the definition of security properties. In Proc. World Congress on Formal Methods (FM’99), Lecture Notes in Computer Science, pages 794–813. Springer, 1999. T. Genet and F. Klay. Rewriting for cryptographic protocol verification. In Proc. 17th International Conference on Automated Deduction (CADE’00), volume 1831 of Lecture Notes in Computer Science, pages 271–290. Springer, 2000. J. Goubault-Larrecq. A method for automatic cryptographic protocol verification (extended abstract). In Proc. Workshops of the 15th International Parallel and Distributed Processing Symposium, volume 1800 of Lecture Notes in Computer Science, pages 977–984. Springer, 2000. F. Jacquemard, M.Rusinowitch, and L. Vigneron. Compiling and verifying security protocols. In Proc. 7th International Conference on Logic for Programming and Automated Reasoning (LPAR’00), volume 1955 of Lecture Notes in Computer Science, pages 131–160. Springer, 2000. S. 
Kremer and L. Mazar´e. Adaptive soundness of static equivalence. In Proc. 12th European Symposium on Research in Computer Security (ESORICS’07), volume 4734 of Lecture Notes in Computer Science, pages 610–625. Springer, 2007. S. Kremer, A. Mercier, and R. Treinen. Reducing equational theories for the decision of static equivalence. Journal of Automated Reasoning, 48(2):197–217, 2012. G. Lowe. Breaking and fixing the Needham-Schroeder public-key protocol using FDR. In Proc. 2nd International Conference on Tools and Algorithms for the Construction and Analysis of Systems
58
[34] [35] [36] [37] [38] [39] [40]
[41] [42] [43]
[44] [45] [46] [47]
V. Cortier and S. Kremer / Formal Models for Analyzing Security Protocols: Some Lecture Notes
(TACAS’96), volume 1055 of Lecture Notes in Computer Science, pages 147–166. Springer-Verlag, 1996. C. Meadows. The NRL protocol analyzer: An overview. Journal of Logic Programming, 26(2):113–131, 1996. J. Millen. A necessarily parallel attack. In FMSP ’99, 1999. J. Millen and G. Denker. Capsl and mucapsl. J. Telecommunications and Information Technology, 4:16–27, 2002. J. Millen and V. Shmatikov. Constraint solving for bounded-process cryptographic protocol analysis. In Proc. 8th ACM Conference on Computer and Communications Security (CCS’01), 2001. D. Monniaux. Abstracting cryptographic protocols with tree automata. Sci. Comput. Program., 47(23):177–202, 2003. R. M. Needham and M. D. Schroeder. Using encryption for authentication in large networks of computers. Communications of the ACM, 21(12):993–999, 1978. M. Rusinowitch and M. Turuani. Protocol insecurity with finite number of sessions is NP-complete. In Proc. 14th Computer Security Foundations Workshop (CSFW’01), pages 174–190. IEEE Comp. Soc. Press, 2001. M. Rusinowitch and M. Turuani. Protocol Insecurity with Finite Number of Sessions and Composed Keys is NP-complete. Theoretical Computer Science, 299:451–475, 2003. P. Ryan, S. Schneider, M. Goldsmith, G. Lowe, and A. Roscoe. Modelling and Analysis of Security Protocols. Addison Wesley, 2000. B. Schmidt, S. Meier, C. Cremers, and D. Basin. Automated analysis of Diffie-Hellman protocols and advanced security properties. In Proc. 25th IEEE Computer Security Foundations Symposium (CSF’12), pages 78–94. IEEE Comp. Soc. Press, 2012. S. Schneider. Verifying authentication protocols with CSP. In Proc. 10th Computer Security Foundations Workshop (CSFW’97). IEEE Comp. Soc. Press, 1997. D. Song. Athena, an automatic checker for security protocol analysis. In Proc. 12th IEEE Computer Security Foundations Workshop (CSFW’99). IEEE Comp. Soc. Press, 1999. J. Thayer, J. Herzog, and J. Guttman. Strand spaces: proving security protocols correct. 
IEEE Journal of Computer Security, 7:191–230, 1999. C. Weidenbach. Towards an automatic analysis of security protocols in first-order logic. In Proc. 16th International Conference on Automated Deduction (CADE’99), volume 1632 of Lecture Notes in Computer Science, pages 314–328. Springer, 1999.
Dependable Software Systems Engineering
J. Esparza et al. (Eds.)
IOS Press, 2016
© 2016 The authors and IOS Press. All rights reserved.
doi:10.3233/978-1-61499-627-9-59
Parameterized Verification of Crowds of Anonymous Processes

Javier Esparza
Faculty of Computer Science, Technical University of Munich
Boltzmannstr. 3, 85748 Garching, Germany
[email protected]

Abstract. We survey some results on the automatic verification of parameterized programs without identities. These are systems composed of arbitrarily many components, all of them running exactly the same finite-state program. We discuss the complexity of deciding that no component reaches an unsafe state. An earlier version of this paper was published in the Proceedings of STACS 2014 [8].

Keywords. Parameterized Verification
1. Introduction

Parameterized programs (where "program" is used here in a wide sense) consist of arbitrarily many instantiations of the same piece of code. We call each of these instantiations a process, and the set of processes a crowd. Examples include many classical distributed algorithms (for mutual exclusion, leader election, distributed termination, and other problems), families of hardware circuits (for instance, a family of carry-look-ahead adders, one for each bit size), cache-coherence protocols, telecommunication protocols, replicated multithreaded programs, algorithms for ad-hoc and vehicular networks, crowdsourcing systems, swarm intelligence systems, and biological systems at the molecular level.

If automatic verification is not your field of expertise, then you may think it awkward to study the complexity of verification problems for parameterized programs. Since Rice's theorem shows that any non-trivial question on the behavior of one single while-program is undecidable, is there any more to say? Actually, yes. Rice's theorem refers to while-programs acting on variables over an infinite domain (typically the integers). Since the primary task of distributed algorithms or cache-coherence protocols is not to compute a function, but rather to solve a coordination problem, they typically use only boolean variables as semaphores, or variables ranging between 0 and the number of processes. So for each number N, the set of reachable configurations of the crowd with N processes is finite, and most verification questions can be decided by means of an exhaustive search of the configuration space. However, this brute-force technique can only show correctness for a finite number of values of N. This is not what we usually mean by "proving a parameterized program correct": we mean proving that the property holds for all values of N. In other words, the task consists of proving that each member of an infinite family of systems,
each of them having a finite state space, satisfies a given property. Are questions of this kind always undecidable?

In the way we have formulated the problem the answer is, unfortunately, yes: all non-trivial problems are still undecidable. Let us sketch a proof for a simple reachability problem. Given a Turing machine M and an input x, we can easily construct a little finite-state program that simulates a tape cell. The program has a boolean variable indicating whether the head is on the cell or not, a variable storing the current tape symbol, and a third variable storing the current control state when the head is on the cell (if the head is not on the cell, the value of this variable is irrelevant). A process running the program communicates with its left and right neighbors by message passing. If M accepts x, then it does so using a finite number N of tape cells. Therefore, the crowd containing N processes eventually reaches a configuration in which the value of the control-state variable of a process is a final state of M. Conversely, if M does not accept x, then no crowd, however large, ever reaches such a configuration. So the reachability problem for parameterized programs is undecidable.

But this proof sketch contains the sentence "the program communicates with its left and right neighbors". How is this achieved? A communication structure where processes are organized in an array (as in our simulation of M), in a ring, a tree, or some other shape is achieved by giving processes an identity, typically a number in the range [1..N]. This identifier appears as a parameter i in the code, and so it is not the case that all processes execute exactly the same code: each executes the code where the parameter is instantiated with the process identity. For instance, the instruction "if you're not the last process in the array, then send the content of variable x to your right neighbor" is encoded as "if i < N, then send the content of variable x to process i + 1".
(Observe that, since N also appears in the code, the processes also know how many they are.)

There are applications where processes have no identities and do not know—or do not care about—how many they are: for instance, in natural computing processes may be molecules swimming in a solution. In other applications identities are not needed. A typical example is cache-coherence protocols, whose purpose is to guarantee the consistency of all cache lines containing copies of a memory cell. The protocol should guarantee that if a processor updates a variable in its cache, the other processors mark their cached values as no longer valid. Since the situation is completely symmetric, and processors are connected by a bus implementing a broadcast communication primitive, identities are not needed. The same holds for many multithreaded programs where one only cares about, say, the number of threads that are still active. Finally, there is an increasing number of applications where identities are considered harmful. For instance, in vehicular networks cars may communicate with each other to interchange information about traffic jams. Since cars must necessarily communicate their positions, identities might allow one to track individual cars. Applications involving secret voting are another example.

These considerations lead us to our problem, which can be informally, but suggestively, formulated as follows:

What is the complexity of checking that a (finite, but arbitrarily large) anonymous crowd will stay safe?

Formally, the input to the problem is a finite automaton A, the template, representing the finite-state code to be executed by each process, and a state qu of A, the unsafe state,
modelling some kind of error or undesirable situation. The transitions of A correspond either to internal moves or to communications with the rest of the system. The question to be answered is whether there exists a number N such that some execution of the system composed of N identical copies of A reaches a configuration in which at least one of the processes is in the unsafe state qu. We say that such configurations cover qu, and so the problem is called the coverability problem.

The complexity of the coverability problem crucially depends on the power of the communication mechanism between processes. So first we must analyze these mechanisms in some detail. This is done in Section 2. Section 3 presents the complexity results. Finally, Section 4 briefly describes some additional work in which the template A is allowed to have more computational power than that of a finite automaton.

2. Communication Mechanisms for Crowds

The two main communication paradigms are message-passing (typical of communication protocols and distributed systems where processes reside in different machines) and communication through global variables (typical of multithreaded programs). Within each paradigm there are several mechanisms. We informally describe the syntax and operational semantics of the template A for the five mechanisms most commonly found in the literature. In particular, we give the syntax of the transition labels of A, and describe how a communication takes place. We assume a finite set V of values which can be communicated.

2.1. Message-passing paradigms

We consider three variants of message passing. In all of them a value v is to be exchanged. We use the following notation: if a process is in state q and there is a transition q −v!→ q′, then the process is willing to send the message v; if there is a transition q −v?→ q′, then it is willing to receive the message v.

One-to-one: Rendez-vous communication.
In a communication step of the system, exactly two processes exchange a message. If two processes are currently in states q1 and q2 of the template, and the template has transitions q1 −v!→ q1′ and q2 −v?→ q2′, then the following atomic step can take place: the two processes move to the states q1′, q2′, while all other processes stay put.

One-to-all: Broadcast communication. We assume that for every state q and every value v the template A has one transition q −v?→ q′ for some state q′ (which may be equal to q). If a process is currently in state q1 of the template, and the template has a transition q1 −v!→ q1′, then the following atomic step can take place: the process in q1 moves to q1′ and, simultaneously, all other processes take outgoing transitions of their current states labeled by v?. Because of the assumption above, the process in state q1 can always broadcast message v, independently of the current state of the other processes.

All-to-one: Global guards. This communication mechanism is close to being the dual of broadcast communication: a process receives a value if all other processes simultaneously send it. If a process is currently in a state q1 of the template having an outgoing transition q1 −v?→ q1′, and all other processes are in states having an outgoing transition labeled by v!, then the following atomic step can take place: the process in q1 moves to q1′, and all other processes take outgoing transitions labeled by v!. Observe, however, an important difference with broadcast communication: we do not assume the existence of a transition q −v!→ q′ for every state q and value v, since this would impose the artificial constraint that every process is always willing to send any message; in particular, no information about the current state of the other processes would flow to the process receiving v.

While a global-guard mechanism is usually too difficult to implement, it is useful for abstractly modelling distributed algorithms. For example, if we have implemented a mutual exclusion algorithm in a distributed system, then we may want to abstractly model a call to it by a single move, in which a process can enter the critical section if all other processes simultaneously send the message that they are currently outside it.

2.2. Shared-memory paradigms

Communication by global store. In this mechanism we assume that all processes in the crowd communicate through a global store. At every point in time the store contains a value, i.e., an element of V. If a process at state q1 has an outgoing transition q1 −v!→ q1′, then the following atomic step can take place: the process moves to q1′ and, simultaneously, the store changes its current value to v. In other words, this step models writing the value v in the store. If the process has an outgoing transition q1 −v?→ q1′, and the current value of the store is v, then the following atomic step can take place: the process moves to q1′, and the value of the store does not change. Observe that in this communication mechanism a value in the store can be overwritten by a new value before any process reads it.

Communication by global store with locking.
In this mechanism, a process must first obtain a lock on the store before being able to write or read. At every moment in time the store holds a value, and is either locked or unlocked. Initially the store is unlocked. If a process at state q1 has a transition q1 −lock→ q1′, and the store is currently unlocked, then the process can move to q1′; simultaneously the store becomes locked, and the lock is owned by the process. If a process owns the lock, then it can perform reads and writes as in the previous mechanism. Finally, if a process that owns the lock is at state q1, and the state has an outgoing transition q1 −unlock→ q1′, then the process can move to q1′, the store becomes unlocked, and the process does not own the lock anymore.

2.3. A classification

We introduce three criteria to classify communication mechanisms which, as we shall see, greatly influence the complexity of the coverability problem for crowds.

The first criterion is the "fan-in" of the mechanism, i.e., whether a process must listen to all processes or only to one process in order to make a move. If there are different kinds of moves, then we take the "worst case", i.e., the moves where the process has to listen to the largest number of processes. Further, we consider the store a distinguished process.

The second criterion is the "fan-out", i.e., whether a process talks to one or to all processes.
The third criterion is the ability to elect a leader. Loosely speaking, this is the question whether there is a template such that a perfectly symmetric crowd (i.e., one in which initially all processes are in the same state) can split into a distinguished process which cannot leave a special subset of states of the template, and an arbitrarily large crowd that cannot leave another subset, disjoint from the first. More formally: is there a template A with two distinguished states q1, q2 and all processes initially in q1, such that some reachable configuration has one process in q2, and no reachable configuration has more than one process in q2?

We can now classify our five communication mechanisms according to these criteria.

(1) Who must I listen to? We can classify our five mechanisms into two categories.

• I must listen to everyone. This is the case of the global-guard mechanism, where a process must receive a signal from all other processes in order to make a move.

• I must listen to someone. This is the case of the other four mechanisms. Indeed, in all of them a process receives information from at most one other process, or from the store, which can be considered a distinguished process. In the rest of the paper we use the term local-guard mechanism to refer to a communication mechanism in which, at a move, a process receives information from at most one other process.

(2) Who must listen to me? Here we have three different cases:

– Everyone must listen to me. This is the case of broadcast communication.

– Someone must listen to me. This is the case of global guards and rendez-vous, but global store with locking can also provide this guarantee. Indeed, we can easily use a global store with locking to simulate rendez-vous communication. The store initially contains a special value, say f, standing for "store is free".
A process wishing to communicate a value v acquires the lock, reads the content of the store, and, if its value is f, changes it to v and releases the lock. If the value is not f, it just releases the lock. A process wishing to receive a value acquires the lock and reads the store: if its value is f, the process just releases the lock; otherwise, it copies the value into its local state and releases the lock. This guarantees that the value will be preserved until someone reads it, and, under a suitable fairness assumption, that it will eventually be read. However, neither rendez-vous communication nor global store with locking can implement broadcast. Intuitively, in these paradigms there is no way to detect that a process does not react to a message.

– No one must listen to me. This is the case of a global store without locking. A value written by a process can be overwritten by another process before anyone reads it. Notice that we can no longer implement rendez-vous using the trick above. Since the store cannot be locked, two processes P1 and P2 wishing to write values v1, v2 may both read the value f and proceed to write. If P1 writes immediately before P2, then the value v1 is not read by anyone.

(3) Can I become a leader? Global guards, broadcast communication and communication through a global store with locking can easily produce a leader. In the global-guard case, the template with transitions q1 −a!→ q2 and q1 −a?→ q3 can be used to elect a leader. Indeed, we can interpret a as the message "I don't want to be a leader". A process that
receives this message from all others becomes the leader: it moves to q3, while all other processes move to q2. The case of broadcast communication is analogous. If we now interpret a as the message "I am the leader", then the process that sends it moves to q2 and becomes the leader, while all other processes, who must listen, move to q3. In the case of the global store with locking, processes can elect a leader by means of a template in which all processes initially compete for the lock; the process that acquires it changes the value of the store to "we have a leader" and moves to q2.

Rendez-vous communication and communication through a global store without a lock cannot produce a leader. The reason is that when a process makes a move, say from q to q′, arbitrarily many of the processes in state q may follow suit, making exactly the same move immediately after. Intuitively, a process cannot "lock the door behind it". We will come back to this point later.
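The step semantics of Section 2.1 can be made concrete in a few lines of Python. This is an illustrative sketch, not code from the paper: a configuration of an anonymous crowd is represented as a counter recording how many processes sit in each template state, and all function and variable names are my own.

```python
from collections import Counter

def rendezvous_step(config, send, recv):
    """One rendez-vous on a value v: send = (q1, q1'), recv = (q2, q2').
    Exactly two processes move; all other processes stay put."""
    (q1, q1p), (q2, q2p) = send, recv
    c = Counter(config)
    # Two distinct processes must occupy the sending and receiving states.
    if c[q1] == 0 or c[q2] == 0 or (q1 == q2 and c[q1] < 2):
        return None
    c[q1] -= 1; c[q2] -= 1
    c[q1p] += 1; c[q2p] += 1
    return +c  # unary + drops zero entries

def broadcast_step(config, sender, receivers):
    """One broadcast: sender = (q1, q1'); receivers maps EVERY state to the
    target of its v?-transition (the totality assumption of Section 2.1)."""
    q1, q1p = sender
    c = Counter(config)
    if c[q1] == 0:
        return None
    c[q1] -= 1                       # the sender leaves q1 ...
    moved = Counter({q1p: 1})        # ... and lands in q1'
    for q, n in c.items():           # ALL other processes react simultaneously
        moved[receivers[q]] += n
    return +moved

# A perfectly symmetric crowd of three processes, all in state 'q1':
crowd = Counter({'q1': 3})
print(rendezvous_step(crowd, ('q1', 'q2'), ('q1', 'q3')))
print(broadcast_step(crowd, ('q1', 'q2'), {'q1': 'q3', 'q2': 'q2', 'q3': 'q3'}))
```

Note that, exactly as discussed under criterion (3), neither step lets a process distinguish itself by its identity: any other process in the same state could have made the same move.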
3. The Power of Crowds

We can sort the five communication mechanisms of the previous section in order of decreasing power according to our criteria, as shown in Table 1.
  Mechanism                      Who must I      Who must         Can I become
                                 listen to?      listen to me?    a leader?
  ---------------------------------------------------------------------------
  Global guards                  everyone        someone          yes
  Broadcast                      someone         everyone         yes
  Global store with locking      someone         someone          yes
  Rendez-vous                    someone         someone          no
  Global store without locking   someone         no one           no

Table 1. Classification of the five communication mechanisms.
According to this classification, the communication mechanisms seem to become weaker as we go down the list, with the exception of the global-guard and broadcast mechanisms, which seem incomparable. However, we can use our previous observation that the global-guard and the broadcast mechanism are almost dual to each other to break the tie, and conclude that the broadcast mechanism can be simulated by the global-guard mechanism. Indeed, consider a template A satisfying the constraint that for every state q and every value v the template has a transition q −v?→ q′, and let A′ be the dual template obtained by exchanging the send and receive labels in all transitions. Then the behaviors of A with the broadcast mechanism and of A′ with the global-guard mechanism coincide. In the other direction the construction does not work: if A is a template with the global-guard mechanism, then its dual may not satisfy the constraint, and so broadcast communication is not defined for it.

Before proceeding to present results on the computational complexity of the coverability problem, it is important to observe that the complexity is related to the crowd's computational power, where we look at a crowd as a nondeterministic machine. Indeed, if coverability is hard for a complexity class C, then any problem in C can be reduced to coverability. Therefore, given an instance of the problem, we can construct a template A such that a large enough crowd will solve it: a process will reach the state qu, which
instead of an unsafe state now becomes the state at which the process can post the answer "yes" to the problem instance. So, informally but suggestively, studying the complexity of the coverability problem amounts to studying the following question:

What is the computational power of a (finite but arbitrarily large) anonymous crowd?

In particular, a result proving high complexity of the coverability problem means bad news for crowd verifiers, but good news for crowd designers, and vice versa. We are now ready to analyze the complexity of the four communication mechanisms above.

3.1. Communication by global guards.

Emerson and Kahlon proved in [6] that the coverability problem for crowds communicating by means of the global-guard mechanism is undecidable. The proof shows that a counter machine can be simulated by such a crowd. Recall that a counter machine is an automaton whose transitions act on a set of counters. Transition labels are of the form q −c:=c+1→ q′ (increase counter c by 1 and move to q′), q −c>0 → c:=c−1→ q′ (if the current value of the counter is greater than 0, decrease it by 1 and move to q′), and q −c=0→ q′ (if the current value is 0, then move to q′). Transitions of the last kind are usually called tests-for-zero.

Intuitively, the template A simulating the counter machine has a state for each state of the counter machine, and a state qc for every counter c. The simulation ensures that the current number of processes in qc remains equal to the current value of c. The power of global-guard communication is particularly needed to simulate a test-for-zero q −c=0→ q′. A process in state q should be able to move to state q′ if no processes are currently in state qc. This is achieved by introducing for each counter c a message with intended meaning "I am not in state qc".

Anonymous crowds with global guards are Turing powerful or, equivalently, identities are not necessary to achieve Turing power.

3.2. Communication by broadcast.
Despite the power of broadcast communication, it was proved in [9] by Finkel, Mayr, and the author that the coverability problem is decidable. So we have:

Anonymous crowds with local guards are not Turing powerful or, equivalently, identities are necessary in order to achieve full Turing power.

The proof is a straightforward application of a more general result of [1] on well-structured systems (see also [2,11]). Let us sketch it. The configuration of a crowd with template A is completely determined by the number of processes at each state of A. So, given a numbering {q1, . . . , qn} of the states of A, a configuration can be formalized as a vector of N^n. Assume without loss of generality that qu = q1. We wish to know whether,
for some number N, a crowd of N individuals can reach a configuration (k1, . . . , kn) such that k1 ≥ 1 or, equivalently, a configuration (k1, . . . , kn) ≥ (1, 0, . . . , 0), where ≥ is defined componentwise. The set of configurations (k1, . . . , kn) ≥ (1, 0, . . . , 0) is upward closed (with respect to ≤), i.e., if a configuration c belongs to the set, then so does any other configuration of the form c + c′, where c′ ∈ N^n and + is defined componentwise.

Given an upward-closed set C of configurations, it is easy to show that its set of immediate predecessors (i.e., the set of configurations from which some configuration of C can be reached in one step) is also upward closed. Indeed, assume we can reach a configuration c ∈ C from some configuration d by means of the broadcast of a value v. Now, consider a configuration d + d′. If we perform the same broadcast, then the processes of d move to the same states as before, yielding again the configuration c, and the processes of d′ move somewhere, yielding a configuration c′. The result is the configuration c + c′, where addition of configurations is defined componentwise. Since c ∈ C and C is upward closed, we have c + c′ ∈ C, and we are done. So, letting C0 be the set of configurations (k1, . . . , kn) ≥ (1, 0, . . . , 0), the sequence C0, C1, C2, . . ., where Ci+1 is the set of immediate predecessors of Ci, is a sequence of upward-closed sets.

We now exploit the well-known fact that the order ≥ is a well-quasi-order: every infinite sequence v1, v2, . . . of elements of N^n contains an infinite ordered subsequence vi1 ≤ vi2 ≤ . . . . A first easy consequence of the theory of well-quasi-orders is that any upward-closed set of configurations has finitely many minimal elements with respect to ≤. So, since an upward-closed set is completely determined by its minimal elements, we can use the minimal elements as a finite representation of the set. This allows us to construct explicitly the sequence C0, C1, C2, . . . .
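The finite representation by minimal elements can be sketched as follows. This is an illustrative fragment under assumed names, not the paper's implementation; in particular, the computation of immediate predecessors for broadcast steps is omitted.

```python
# Upward-closed subsets of N^n, represented by their finite set of minimal
# elements (a "basis"). Illustrative sketch; all names are hypothetical.

def leq(u, v):
    """Componentwise order on N^n."""
    return all(a <= b for a, b in zip(u, v))

def minimize(vectors):
    """Keep only the minimal elements; they determine the upward closure."""
    vs = list(vectors)
    return [u for u in vs if not any(leq(w, u) and w != u for w in vs)]

def contains(basis, v):
    """v lies in the upward closure iff it dominates some basis element."""
    return any(leq(u, v) for u in basis)

def subsumes(basis_i, basis_j):
    """Closure of basis_i contains closure of basis_j: the C_i ⊇ C_j test
    that lets the backward construction stop."""
    return all(contains(basis_i, v) for v in basis_j)
```

For instance, the target set C0 is represented by the single basis vector (1, 0, . . . , 0), and the construction of the sequence C0, C1, C2, . . . is stopped as soon as `subsumes` succeeds for some pair of indices i < j.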
A second easy consequence is that this sequence contains two indices i < j such that Ci ⊇ Cj. So we can stop the construction at Cj, because subsequent steps will not discover any new configuration. The set C0 ∪ C1 ∪ · · · ∪ Cj contains all configurations from which a configuration of C0 can be reached. We can then inspect this set, and check whether it contains one of the possible initial configurations of a crowd.

So crowds communicating by broadcasts are not Turing powerful. But how powerful are they? The answer, due to Schmitz and Schnoebelen [22], is very surprising:

The time complexity of the coverability problem for anonymous crowds communicating by broadcast grows faster than any primitive recursive function.

More precisely, the result is that coverability of broadcast protocols is Fω-hard, where Fω is a class of problems of "Ackermannian complexity" (i.e., whose complexity is bounded by an Ackermann-like function). In particular, Fω is closed under primitive recursive reductions. We refer to [22] for a more precise description. In any case, this is one of the most natural problems with provably non-primitive recursive complexity.

In summary, crowds communicating by broadcast may not be Turing powerful, but keeping them under control may quickly exceed any reasonable amount of computational resources.

3.3. Communication by global store with locking.

Global variables with locking are the natural communication mechanism for multithreaded programs. The coverability problem for this kind of communication reduces to the coverability problem of Petri nets, a fact that was already observed by German and Sistla [14], and the converse also holds. The coverability problem for Petri nets was proved to be EXPSPACE-complete already in the 70s, which yields the following result:

The coverability problem for a crowd communicating by global variables with locking is EXPSPACE-complete.

EXPSPACE-hardness was proved by Lipton [17] (see also [7]), who showed that a counter able to count up to 2^(2^n) can be simulated by a Petri net (or an automaton) of size n^2. Membership in EXPSPACE was proved by Rackoff [20]. He shows that, if the state qu is coverable, then it is coverable by a sequence of moves of double exponential length in the size of the template. This immediately yields a NEXPSPACE algorithm, after which we use NEXPSPACE = EXPSPACE.

Rackoff's nondeterministic algorithm is not useful in practice. A more practical algorithm was suggested (some years before Rackoff's paper) by Karp and Miller [16]. The algorithm uses the notion of a generalized configuration, which for a template with n states is a vector of dimension n whose elements are either natural numbers or the symbol ω, which intuitively stands for "arbitrarily many processes", or "as many processes as necessary". The algorithm starts at a generalized configuration describing the initial situation: for example, we may have exactly one process in state q1, and arbitrarily many in state q2, modelled by (1, ω, 0, . . . , 0). Given a generalized configuration, we construct its successors (that is, the algorithm explores new configurations in the forward direction, contrary to the algorithm for broadcasts, which explores backwards). If the template, say, has transitions q1 −v!→ q3 and q2 −v?→ q4, then a rendez-vous can take place, and we can move from (1, ω, 0, . . . , 0) to (0, ω, 1, 1, 0, . . . , 0). The important point is that this construction can be "accelerated". For example, if the template has transitions q1 −v!→ q1 and q2 −v?→ q4, then we can move from (1, ω, 0, . . . , 0) to (1, ω, 0, 1, 0, . . . , 0) (state q2 loses a process, but we apply ω − 1 = ω) and, since (1, ω, 0, 1, 0, . . . , 0) ≥ (1, ω, 0, . . . , 0), the rendez-vous can take place again, leading to (1, ω, 0, 2, 0, . . . , 0), (1, ω, 0, 3, 0, . . . , 0), etc. The algorithm "jumps to the limit", and moves directly from (1, ω, 0, . . . , 0) to (1, ω, 0, ω, 0, . . . , 0). Termination of the algorithm follows once more from a very simple application of the theory of well-quasi-orders.

Karp and Miller's algorithm has recently been improved in a number of ways: efficient data structures, construction of a minimal set of generalized configurations, etc. (see e.g. [19,23,13,21]). However, these improvements do not change its worst-case complexity, which is surprisingly worse than that of Rackoff's algorithm: Karp and Miller's algorithm can take non-primitive recursive time and space. Recently, this puzzling mismatch has led to two beautiful results. First, Bozzelli and Ganty have shown that the backwards algorithm described above for broadcast systems no longer has non-primitive recursive complexity when applied to the rendez-vous case: instead, it runs in double exponential time, much closer to the lower bound [3]. Second, Geeraerts, Raskin, and Van Begin have proposed another simple algorithm based on forward exploration [12]. It applies a so-called "Expand, Enlarge and Check" algorithmic principle, which constructs a sequence of under- and overapproximations of the set of reachable generalized configurations. Very recently,
Majumdar and Wang have shown that this algorithm also runs in double exponential time [18]. Early work by Delzanno, Raskin and Van Begin [4] and more recent work by Kaiser, Kroening and Wahl [15] (see also [5]) has applied these coverability algorithms, and other techniques for the construction of over- and underapproximations, to verify the safety of a large number of multithreaded programs.

3.4. Communication by rendez-vous.

Rendez-vous communication is a natural communication model for systems whose processes "move" in some medium where they occasionally meet and interact. Natural computing systems, in which computing entities are molecules moving in a "soup", are an example. When studying the complexity of this problem there is a subtle point. As we have seen in Section 2, a crowd communicating by rendez-vous cannot produce a leader. However, one can set up the system so that the initial configuration already contains one. For instance, we can choose an initial configuration with exactly one process in state q1, and arbitrarily many processes in state q2. So we have to examine two cases.

Crowds with an initial leader. In this case we can easily use rendez-vous to simulate a global store with locking. Intuitively, the template is designed so that the leader simulates the store, and the rest of the crowd only communicates with the leader. Conversely, as we saw in Section 2, rendez-vous communication can be simulated by a global store with locking, and so we obtain:

The coverability problem for crowds communicating by rendez-vous and having an initial leader is EXPSPACE-complete.

Leaderless crowds. This is the case in which all processes are initially in the same state. In other words, if we assume that this state is q1, then the initial generalized configuration of the system is (ω, 0, . . . , 0). We can again solve the coverability problem by means of the Karp-Miller algorithm.
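The ω-arithmetic and the acceleration step used by the Karp-Miller construction can be sketched in a few lines of Python. This is an illustration written for these notes, not code from any cited tool; the tuple encoding of rendez-vous transitions and all names are assumptions. Conveniently, Python's float("inf") already satisfies ω − 1 = ω:

```python
OMEGA = float("inf")   # omega: "arbitrarily many processes"; inf - 1 == inf + 1 == inf

def succ(conf, rv):
    """One rendez-vous step on a generalized configuration.
    rv = (sender_src, sender_dst, recv_src, recv_dst): an illustrative
    encoding, each entry a state index of the template."""
    ss, sd, rs, rd = rv
    if conf[ss] < 1 or conf[rs] < 1:    # rendez-vous not enabled (a same-state
        return None                     # rendez-vous is not treated specially here)
    c = list(conf)
    c[ss] -= 1; c[sd] += 1              # the sending process moves
    c[rs] -= 1; c[rd] += 1              # the receiving process moves
    return tuple(c)

def accelerate(ancestor, conf):
    """Karp-Miller acceleration: if conf dominates an ancestor on the path,
    jump to the limit by setting every strictly grown component to omega."""
    if conf != ancestor and all(c >= a for c, a in zip(conf, ancestor)):
        return tuple(OMEGA if c > a else c for c, a in zip(conf, ancestor))
    return conf

# The example from the text: q1 --v!--> q1 and q2 --v?--> q4, states q1..q4 = 0..3.
start = (1, OMEGA, 0, 0)
step = succ(start, (0, 0, 1, 3))    # (1, omega, 0, 1): q2 loses a process, omega - 1 = omega
limit = accelerate(start, step)     # (1, omega, 0, omega): jump to the limit
```

A full implementation would keep the explored generalized configurations in a tree and compare each new node against all its ancestors; the sketch only shows a single acceleration.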
However, it is easy to see that in this special case the algorithm can only generate configurations whose components are either ω or 0, and so a configuration is completely determined by its set of ω-components. Moreover, a successor (k′1, . . . , k′n) of a generalized configuration (k1, . . . , kn) necessarily satisfies ki = ω ⇒ k′i = ω for every 1 ≤ i ≤ n, i.e., the set of ω-components can only grow along a path. Finally, the Karp-Miller graph satisfies a diamond property: if a configuration with a set Ω1 of ω-components has two successor configurations with different sets Ω2, Ω3 of ω-components, then these two configurations have a common successor with set Ω2 ∪ Ω3. These properties together yield a simple polynomial fixed-point algorithm for the coverability problem: starting with Ω = {q1}, let Δ(Ω) be the set of states reachable from markings that put arbitrarily many tokens in every state of Ω, and repeatedly execute Ω := Ω ∪ Δ(Ω) until a fixed point is reached.

The coverability problem for leaderless crowds communicating by rendez-vous is in PTIME.
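The fixed-point iteration Ω := Ω ∪ Δ(Ω) is straightforward to implement. The sketch below is illustrative only (the state indices and the tuple encoding of rendez-vous transitions are assumptions of this note, not taken from [14]):

```python
def coverable_states(n_states, transitions, init_state=0):
    """States coverable by a leaderless crowd (all processes start in init_state).
    transitions: rendez-vous steps (sender_src, sender_dst, recv_src, recv_dst);
    only rendez-vous moves are modeled. Polynomial time: every round either adds
    a state to omega or terminates, so there are at most n_states rounds."""
    omega = {init_state}                     # Omega := {q1}
    changed = True
    while changed:                           # Omega := Omega u Delta(Omega)
        changed = False
        for ss, sd, rs, rd in transitions:
            if ss in omega and rs in omega:  # arbitrarily many processes available
                for q in (sd, rd):
                    if q not in omega:
                        omega.add(q)
                        changed = True
    return omega

# Tiny example: q0 -v!-> q1 paired with q0 -v?-> q2, then q1 -w!-> q3 / q2 -w?-> q3.
reachable = coverable_states(4, [(0, 1, 0, 2), (1, 3, 2, 3)])
# reachable == {0, 1, 2, 3}: every state of the template is coverable
```

Coverability of an unsafe state q then amounts to testing q ∈ Ω at the fixed point.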
This case is studied in detail by German and Sistla in Section 4 of [14], where they show that many other analysis problems, not only coverability, can be solved in polynomial time.

3.5. Communication by global store without locking.

Locking mechanisms are easy to implement in a multithreading environment where all threads are executed on a single processor, or on a number of processors physically close to each other. They become more problematic for crowdsourcing systems, ad-hoc networks, vehicular networks or, more generally, any sort of decentralized system where processes may enter or leave the system at any time. The danger of this setting is obvious: a process may acquire the lock and leave the system without releasing it, blocking the complete crowd. Additionally, the locking mechanism itself is not as easy to implement as in a multithreading environment. The case of communication by global variables without locking has recently been investigated in [10]. The main finding is that the absence of locking drastically simplifies the task of checking coverability (good news for verifiers), or, equivalently, decreases the computational power (bad news for designers):

The coverability problem for a crowd with initial leaders communicating by global variables without locking is NP-complete.

Intuitively, in the rendez-vous case the template can be designed so that a process communicates a value to, say, exactly three other processes, which allows the crowd to perform some arithmetic. In particular, the crowd can store an integer n by putting exactly n processes in a given state of the template. This is not possible with a global store without a lock, because a process has no control over how many processes may read a value. The NP-completeness result is proved with the help of two lemmas. The first lemma shows that the crowd can be simulated by a system composed of a finite number of simulators, one for each value v ∈ V.
The simulator for the value v is an automaton Av that can be easily constructed from the template A and the value v. So we can construct a finite crowd that simulates the behavior of any crowd with template A, of any size. This result already shows that the coverability problem is in PSPACE, but not yet that it belongs to NP. Membership in NP is proved with the help of a second lemma. Loosely speaking, the lemma states that, if the unsafe state is reachable, then it can be reached by means of computations of the simulators that can be guessed in polynomial time. The leaderless case is, as in the case of rendez-vous, polynomial. Essentially, one uses the same algorithm.
4. Some Results on Crowds of Infinite-State Processes.

So far we have assumed that processes are finite-state (i.e., the template is a finite automaton). If we totally relax this condition (for instance, if we allow processes to be Turing machines), then the coverability problem becomes of course undecidable: a crowd of one suffices to achieve Turing power! But we can consider milder extensions of the computational power of a process.
For broadcast communication and global variables with locking, even very modest extensions already make the crowd Turing powerful. In particular, this is already the case if processes can count, i.e., if the template is a finite automaton whose transitions may act on a counter, increasing or decreasing it by one, or testing it for zero. Two processes suffice to simulate a two-counter machine, which is known to be Turing powerful. A crowd can select a leader, who can then select a second leader, and these two leaders can then communicate with each other, ignoring the messages from the rest of the crowd. The same applies to rendez-vous if the crowd initially contains a leader. For global variables without locking, the situation is more interesting. In [10] two extensions are considered. First, the paper studies the case in which processes are pushdown automata (since the stack can be used as a counter, this includes the counter case). The coverability problem remains NP-complete for "leaderless crowds" and becomes PSPACE-complete for crowds with one leader. The second extension considers the case in which processes are Turing machines that can only run for polynomial time. This models the situation in which each process has no restriction on its computational power, but can only contribute a polynomial amount of work to the crowd. Since the crowd is arbitrarily large, the total amount of work is not bounded, and so we could hope to solve problems far beyond NP. However, the coverability problem remains NP-complete. Interpreting the result, we conclude that without a locking mechanism the crowd cannot distribute an arbitrary exponential computation among its members in such a way that each individual only does a polynomial amount of work.

Acknowledgements. Very special thanks to Pierre Ganty, Jan Křetínský, Michael Luttenberger, and Rupak Majumdar for numerous comments on former versions of this note. In particular, Rupak suggested the final structure.
Many thanks to Sasha Rubin for contacting me about the errors in the former version, and for pointing out that the leaderless case for rendez-vous was studied in detail by German and Sistla in their seminal paper [14].
References

[1] P. A. Abdulla, K. Cerans, B. Jonsson, and Y.-K. Tsay. General decidability theorems for infinite-state systems. In LICS, pages 313-321. IEEE Computer Society, 1996.
[2] P. A. Abdulla, K. Cerans, B. Jonsson, and Y.-K. Tsay. Algorithmic analysis of programs with well quasi-ordered domains. Inf. Comput., 160(1-2):109-127, 2000.
[3] L. Bozzelli and P. Ganty. Complexity analysis of the backward coverability algorithm for VASS. In G. Delzanno and I. Potapov, editors, RP, volume 6945 of Lecture Notes in Computer Science, pages 96-109. Springer, 2011.
[4] G. Delzanno, J.-F. Raskin, and L. Van Begin. Towards the automated verification of multithreaded Java programs. In J.-P. Katoen and P. Stevens, editors, TACAS, volume 2280 of Lecture Notes in Computer Science, pages 173-187. Springer, 2002.
[5] A. F. Donaldson, A. Kaiser, D. Kroening, M. Tautschnig, and T. Wahl. Counterexample-guided abstraction refinement for symmetric concurrent programs. Formal Methods in System Design, 41(1):25-44, 2012.
[6] E. A. Emerson and V. Kahlon. Model checking guarded protocols. In LICS, pages 361-370. IEEE Computer Society, 2003.
[7] J. Esparza. Decidability and complexity of Petri net problems - an introduction. In W. Reisig and G. Rozenberg, editors, Petri Nets, volume 1491 of Lecture Notes in Computer Science, pages 374-428. Springer, 1996.
[8] J. Esparza. Keeping a crowd safe: On the complexity of parameterized verification (invited talk). In E. W. Mayr and N. Portier, editors, STACS, volume 25 of LIPIcs, pages 1-10. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2014.
[9] J. Esparza, A. Finkel, and R. Mayr. On the verification of broadcast protocols. In LICS, pages 352-359. IEEE Computer Society, 1999.
[10] J. Esparza, P. Ganty, and R. Majumdar. Parameterized verification of asynchronous shared-memory systems. In N. Sharygina and H. Veith, editors, CAV, volume 8044 of Lecture Notes in Computer Science, pages 124-140. Springer, 2013.
[11] A. Finkel and P. Schnoebelen. Well-structured transition systems everywhere! Theor. Comput. Sci., 256(1-2):63-92, 2001.
[12] G. Geeraerts, J.-F. Raskin, and L. Van Begin. Expand, enlarge, and check: New algorithms for the coverability problem of WSTS. In K. Lodaya and M. Mahajan, editors, FSTTCS, volume 3328 of Lecture Notes in Computer Science, pages 287-298. Springer, 2004.
[13] G. Geeraerts, J.-F. Raskin, and L. Van Begin. On the efficient computation of the minimal coverability set of Petri nets. Int. J. Found. Comput. Sci., 21(2):135-165, 2010.
[14] S. M. German and A. P. Sistla. Reasoning about systems with many processes. Journal of the ACM, 39(3):675-735, 1992.
[15] A. Kaiser, D. Kroening, and T. Wahl. Dynamic cutoff detection in parameterized concurrent programs. In T. Touili, B. Cook, and P. Jackson, editors, CAV, volume 6174 of Lecture Notes in Computer Science, pages 645-659. Springer, 2010.
[16] R. M. Karp and R. E. Miller. Parallel program schemata. J. Comput. Syst. Sci., 3(2):147-195, 1969.
[17] R. Lipton. The reachability problem requires exponential space. Technical Report 62, Yale University, 1976. Available online at www.cs.yale.edu/publications/techreports/tr63.pdf.
[18] R. Majumdar and Z. Wang. Expand, enlarge, and check for branching vector addition systems. In P. R. D'Argenio and H. C. Melgratti, editors, CONCUR, volume 8052 of Lecture Notes in Computer Science, pages 152-166. Springer, 2013.
[19] A. Piipponen and A. Valmari. Constructing minimal coverability sets. In P. A. Abdulla and I. Potapov, editors, RP, volume 8169 of Lecture Notes in Computer Science, pages 183-195. Springer, 2013.
[20] C. Rackoff. The covering and boundedness problems for vector addition systems. Theor. Comput. Sci., 6:223-231, 1978.
[21] P.-A. Reynier and F. Servais. Minimal coverability set for Petri nets: Karp and Miller algorithm with pruning. In L. M. Kristensen and L. Petrucci, editors, Petri Nets, volume 6709 of Lecture Notes in Computer Science, pages 69-88. Springer, 2011.
[22] S. Schmitz and P. Schnoebelen. The power of well-structured systems. In P. R. D'Argenio and H. C. Melgratti, editors, CONCUR, volume 8052 of Lecture Notes in Computer Science, pages 5-24. Springer, 2013.
[23] A. Valmari and H. Hansen. Old and new algorithms for minimal coverability sets. In S. Haddad and L. Pomello, editors, Petri Nets, volume 7347 of Lecture Notes in Computer Science, pages 208-227. Springer, 2012.
Dependable Software Systems Engineering
J. Esparza et al. (Eds.)
IOS Press, 2016
© 2016 The authors and IOS Press. All rights reserved.
doi:10.3233/978-1-61499-627-9-72
Synthesis of Reactive Systems

Bernd Finkbeiner
Universität des Saarlandes

Abstract. These lecture notes trace the developments triggered by Church's classic synthesis problem from the early solutions in the 1960s to the practical tools that have come out in the past few years. The article gives an overview of the automata- and game-theoretic foundations of the synthesis problem. We then explore the spectrum of logics for the synthesis of reactive systems, from reduced logics, like GR(1), to advanced logics such as strategy and coordination logic. Finally, we discuss the ideas behind recent synthesis approaches, like bounded synthesis and symbolic and SAT-based methods.

Keywords. Church's problem, synthesis, reactive systems, automata over infinite words and trees, S1S, temporal logic, GR(1), strategy logic, coordination logic, incomplete information, synthesis of distributed systems, bounded synthesis, symbolic synthesis, SAT-based synthesis
1. Introduction More than fifty years after its introduction by Alonzo Church [1], the synthesis problem is still one of the most intriguing challenges in the theory of reactive systems. Reactive systems are computer systems that maintain a continuous interaction with their environment. Hardware circuits, communication protocols, and embedded controllers are typical examples. Synthesis algorithms construct such systems automatically from logical specifications, such as formulas of a temporal logic. Because synthesis eliminates the need for a manual implementation, it has the potential to revolutionize the development process for reactive systems. And indeed, synthesis has, over the past few years, found applications in several areas of systems engineering, notably in the construction of circuits and device drivers and in the synthesis of controllers for robots and manufacturing plants. The quest, however, is far from over: the performance of the available algorithms still leaves much to be desired. Finding logics and algorithms that make the synthesis of reactive systems efficient and scalable remains a vigorously pursued research goal. Church’s problem statement in 1957 ignited research on several fundamental topics, notably on the connection between logics and automata, on algorithmic solutions of infinite games over finite graphs, and on the theory of automata over infinite objects. The basic underlying insight is that the synthesis problem can be understood as a game between the system, trying to satisfy the specification, and the environment, trying to expose an error. A winning strategy for the system player defines an implementation that is guaranteed to satisfy the specification. The games that result from formulas of the usual specification logics for reactive systems are infinite in the sense that the plays have infinite duration, and yet finite in the sense that the games are played on a finite game arena. 
These games can be solved algorithmically, i.e., one can determine which player wins
the game, and produce a winning strategy; the winner is guaranteed to have a strategy that only requires finite memory; and the winning strategies form a regular set, which can be recognized and manipulated with automata over infinite trees. Progress in automata and game theory allowed Church's problem to be solved within a decade, in the sense that the decidability of the problem was established [2]. The path towards practical algorithms, however, turned out to be much longer. The main challenge to be overcome was the algorithmic complexity of the problem. In his 1974 thesis, Larry Stockmeyer discussed one of the cornerstones of the solution of Church's problem, the translation of formulas of the monadic second-order logic of one successor (S1S) into Büchi automata. The translation from S1S to automata is nonelementary in the length of the formula. Stockmeyer predicted, somewhat darkly, that "any attempt to find an efficient algorithm for the problem is foredoomed" [3]. The expensive translation from formulas to automata was not to remain the only obstacle towards practical solutions of Church's problem. A very discouraging result was Rosner and Pnueli's discovery of the undecidability of the synthesis problem for distributed systems [4]. Most modern reactive systems are distributed in the sense that they consist of multiple processes with individual inputs, which they may or may not share with other processes. A key difficulty in the design of such systems is to decide how the processes should interact so that each process obtains the information needed to carry out its functionality. Distributed synthesis, i.e., the extension of Church's problem to the case that the implementation consists of several processes, is thus a particularly useful type of synthesis.
However, when Pnueli and Rosner began, in the late 1980s, to investigate Church's problem for distributed systems, they quickly discovered that the problem is undecidable in even the most restricted settings, such as architectures with as few as two independent processes. Today, reactive synthesis has matured, despite these challenges, into an area with not only well-understood foundations, but also a rich supply of practical algorithms. There is a growing landscape of tool implementations (cf. [5,6,7]). In 2014, the first synthesis competition took place at the annual CAV conference, where the synthesis tools were compared against each other using an initial collection of roughly 500 standard benchmark problems [8]. While synthesis may not yet have reached the same level of industrial acceptance as its twin brother, computer-aided verification, there is a growing number of impressive success stories from areas such as hardware circuits, device drivers, and robotics. The historical developments on the synthesis problem can broadly be grouped into three big waves. The first wave, during the decade after Church's inception of the problem, brought an initial set of basic algorithms that established decidability in principle. In 1969, Büchi and Landweber gave the first game-theoretic solution [2]. Michael Rabin invented automata over infinite trees and provided an automata-theoretic approach to the solution of Church's problem [9], which paved the way for more advanced automata-based algorithms to be discovered later. The common basis for these early synthesis algorithms was a system specification given as a formula of monadic second-order logic (MSO). As a result, the early algorithms suffer, as observed by Stockmeyer, from nonelementary complexity. The focus on MSO was disrupted in the 1980s, when the introduction of temporal logic for the specification of reactive systems by Amir Pnueli triggered a second wave of interest in Church's problem.
The translation of linear-time temporal logic (LTL) to
deterministic automata is doubly exponential, and, hence, while far from cheap, certainly much closer to practicality than MSO. At the same time, LTL-based model checking, which only requires PSPACE, took off as an industrial technique, in particular in the hardware industry. The need to detect unrealizable, and, hence, erroneous, specifications before verification — “a specification is useless if it cannot be realized by any concrete implementation” [10] — provided additional, practical motivation for the study of Church’s problem. Synthesis algorithms were developed for the common linear and branching-time temporal logics. The algorithmic advances led to broader interest in realizability and similar questions. Properties about the existence of strategies, including realizability and related properties, like the existence of Nash equilibria or recoverability from faults, began to be recognized as genuine system properties worthy of formal specification and analysis. Game-based logics like ATL [11], Strategy Logic [12], and Coordination Logic [13] extend temporal logic with quantification over strategies, and can express properties like realizability within the logic, as opposed to an external semantical condition, like the realizability of an MSO or LTL formula. The result is much greater flexibility: customized queries, such as realizability under specific assumptions or in specific system architectures, can be checked without the need to come up, every time, with customized synthesis algorithms. Once the desired variation is encoded in an appropriate logic, the actual synthesis work can be left to the model checking algorithm or decision procedure of the logic. A third wave of inquiry into Church’s problem, this time with the explicit goal of developing practical algorithms, began about a decade ago. The stage for the third wave had been set by the enormous advances in the performance of automatic verification during the 1980s and 1990s. 
To reduce the complexity of the synthesis problem, much attention focussed on "synthesis-friendly" reduced logics, such as generalized reactivity GR(1), and on practically relevant variations of the basic synthesis problems, such as bounded synthesis [14]. Bounded synthesis restricts the search for an implementation to implementations of small (bounded) size. Motivated by the success of symbolic and SAT-based verification (cf. [15,16,17]), similar techniques were also developed for synthesis. Eventually, the advances in the performance of the synthesis algorithms and the growing availability of tools made it possible to tackle the first real-world design problems, such as the synthesis of an arbiter for the AMBA AHB bus, an open industrial standard for the on-chip communication and management of functional blocks in system-on-a-chip (SoC) designs [18], and the synthesis of device drivers like the Intel PRO/1000 ethernet controller [19].

2. Church's Problem

Church's problem [1,20] is concerned with the existence of reactive systems, which transform, in an online fashion, an infinite stream of inputs into an infinite stream of outputs. Reactive systems differ fundamentally from data-transforming programs, which read some input and produce, upon termination, some output. The synthesis of data-transforming programs is a fascinating subject with a rich history of its own (cf. [21,22,23]), which is, however, beyond the scope of this article. In Church's problem, the inputs and outputs of the reactive system to be synthesized are valuations of boolean variables. Consider a set I of input variables and a set O of
output variables; the reactive system must, given a finite sequence of valuations of the input variables w ∈ (2^I)^*, produce a valuation f(w) ∈ 2^O of the output variables. A specification of a reactive system is a set Spec ⊆ (2^(I∪O))^ω of infinite sequences of valuations of both the input and the output variables. The system is correct if, for every infinite sequence of inputs, the input-output sequence obtained by computing the outputs according to f is an element of Spec. In the following, we make the definition of Church's problem precise, by fixing monadic second-order logic as the specification language and finite-state machines as implementations.

Monadic Second-Order Logic. The input to Church's problem is a regular set of ω-words, given as a formula of the monadic second-order logic of one successor (S1S). Suppose, for example, we are interested in constructing an arbiter circuit. Arbiters are used when more than one client circuit needs access to some shared resource, such as a communication bus. To access the resource, the client sends a request signal R and waits until it receives a grant signal G from the arbiter. The task of the arbiter is to answer each request with a grant without giving grants to the two clients at the same time. In S1S, an arbiter with two clients can be specified as a conjunction of three properties:

∀x . ¬G1(x) ∨ ¬G2(x)                 (mutual exclusion)
∀x . R1(x) → ∃y . y ≥ x ∧ G1(y)      (response 1)
∀x . R2(x) → ∃y . y ≥ x ∧ G2(y)      (response 2)
The mutual exclusion property states that at every point in time x, at most one grant signal can be set; the response properties state that if a request is made at time x, then there must exist a point in time y ≥ x where the corresponding grant signal is set. S1S formulas are based on two types of variables: first-order variables V1, which range over natural numbers, and second-order variables V2, which range over sets of natural numbers. We will distinguish the two types of variables here by using small letters V1 = {z, y, x, . . .} for the first-order variables and capital letters V2 = {Z, Y, X, . . .} for second-order variables. Terms are constructed from first-order variables, the constant 0, and the successor function: t + 1; formulas consist of the membership test X(t), where t is a term and X a second-order variable, equality t1 = t2 on terms, first-order quantification ∃x and second-order quantification ∃X, and boolean combinations. Greater-than-or-equal x ≥ y is definable in S1S as the requirement that every upward-closed set Z that contains y must also contain x: ∀Z . (Z(y) ∧ (∀y′ . Z(y′) → Z(y′ + 1))) → Z(x). A model of an S1S formula can be interpreted as an infinite word from (2^(V1∪V2))^ω, where each letter is a set of variables. A second-order variable evaluates to the set of positions where it occurs; first-order variables must occur at exactly one position and evaluate to this position. To specify the behavior of a reactive system in S1S, we use free second-order variables that are interpreted as the values of input and output signals at various points in time. In the arbiter example, R1, R2 are second-order variables representing the input, and G1, G2 second-order variables representing the output.

Implementations. A solution to Church's problem is a circuit that satisfies the S1S formula for every possible input.
The restriction to circuits means that we are interested in finite-state solutions, which we will formalize in the following as finite-state machines. A (finite-state) Moore machine over input alphabet Σ and output alphabet Λ has the form M = (S, s0 , T, G), where S is a finite set of states, s0 ∈ S is an initial state,
[Figure 1. Moore machine implementing the arbiter specification. The diagram (not reproduced here) shows five states s0 (output ∅), s1 and s3 (output {G1}), s2 and s4 (output {G2}), with transitions labeled by input letters, i.e., subsets of {R1, R2}.]
T : S × Σ → S is a transition function, mapping a state and an input letter to the next state, and G : S → Λ is an output function mapping each state to an output letter. For a given sequence of inputs α = α(0) α(1) α(2) . . ., the Moore machine generates the sequence of states β defined by β(0) = s0 and β(i + 1) = T(β(i), α(i)), and the sequence of outputs γ = G(β(0)) G(β(1)) G(β(2)) . . .. For Church's problem, the input alphabet Σ = 2^I consists of the valuations of the input variables I, and the output alphabet Λ = 2^O consists of the valuations of the output variables O. The Moore machine is a realization of the S1S formula ϕ if for all possible inputs α ∈ (2^I)^ω, the combined sequence (α(0) ∪ γ(0)) (α(1) ∪ γ(1)) (α(2) ∪ γ(2)) . . . satisfies ϕ. Figure 1 shows a Moore machine that implements the arbiter specification. This implementation carefully answers every request with a grant at a later point in time. Note that this implementation is actually unnecessarily complicated. If a request comes in at a point in time where a grant is already being given out, there is, according to our specification, no need for a further grant at a later point in time. Also, there is no requirement in our specification that grants must actually be preceded by requests. Another, and much simpler, solution for our specification would be to completely ignore the input and alternate between giving a grant to the first and to the second client, independently of whether there was a request or not.
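The execution of a Moore machine follows directly from the definition. The sketch below (class and state names are illustrative, written for these notes) implements the much simpler alternating solution just described:

```python
class MooreMachine:
    """M = (S, s0, T, G): the output depends only on the current state."""
    def __init__(self, s0, T, G):
        self.s0, self.T, self.G = s0, T, G  # T: (state, letter) -> state, G: state -> output

    def run(self, inputs):
        """Return the outputs gamma(0) gamma(1) ... for a finite input prefix alpha."""
        state, outputs = self.s0, []
        for letter in inputs:
            outputs.append(self.G(state))   # gamma(i) = G(beta(i))
            state = self.T(state, letter)   # beta(i+1) = T(beta(i), alpha(i))
        return outputs

# The "much simpler" arbiter: ignore the input, alternate the two grants.
alternating = MooreMachine(
    s0=0,
    T=lambda s, a: 1 - s,
    G=lambda s: {"G1"} if s == 0 else {"G2"},
)
outs = alternating.run([{"R1"}, set(), {"R1", "R2"}, {"R2"}])
# outs == [{"G1"}, {"G2"}, {"G1"}, {"G2"}]: requests are ignored, yet always granted
```

Mutual exclusion holds trivially here, since no state outputs both grants, and every request is eventually followed by the matching grant because both grants recur forever.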
3. Early Solutions

In 1960, Büchi [24] and Elgot [25] established the connection between logic and automata, by showing that formulas of monadic second-order logic over finite words and finite automata can be translated into each other. Büchi's Theorem [26] extends this connection to S1S and automata over infinite words. Büchi's Theorem establishes the decidability of S1S and also provides the first step for the solution of Church's problem: the translation of the logical specification to an automaton. This first step is common to both the game-based solution due to Büchi and Landweber, and the automata-based solution due to Rabin.
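To make the acceptance mechanism concrete: for a deterministic automaton with a Büchi condition (the conditions are defined formally in Section 3.1), acceptance of an ultimately periodic word u·w^ω can be decided by running along u and then pumping w until a state repeats. The sketch below is a hypothetical illustration written for these notes, restricted to deterministic automata for simplicity:

```python
def buchi_accepts_lasso(T, q0, F, u, w):
    """Decide whether the deterministic Buchi automaton (transition table T,
    initial state q0, accepting set F) accepts the ultimately periodic word
    u . w^omega. We pump w until the state at a w-iteration boundary repeats;
    the word is accepted iff the repeating cycle of iterations visits F."""
    q = q0
    for a in u:                    # run along the finite prefix u
        q = T[(q, a)]
    seen = {}                      # boundary state -> index of its w-iteration
    visits = []                    # did iteration i pass through an F-state?
    while q not in seen:
        seen[q] = len(visits)
        hit = False
        for a in w:
            q = T[(q, a)]
            hit = hit or (q in F)
        visits.append(hit)
    return any(visits[seen[q]:])   # F visited inside the ultimately repeated cycle

# Example: "infinitely many a's" over {a, b}; state 1 = "last letter was a".
T = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 1, (1, 'b'): 0}
accepted = buchi_accepts_lasso(T, 0, {1}, [], ['a', 'b'])   # (ab)^omega: True
rejected = buchi_accepts_lasso(T, 0, {1}, ['a'], ['b'])     # a b^omega: False
```

Since deterministic automata have exactly one run per word, and the boundary states must eventually cycle, the loop terminates after at most |Q| iterations of w.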
[Figure 2. Büchi automata for S1S formulas. The automaton on the left accepts the models of the S1S formula Y(x), the automaton on the right accepts the models of x = y + 1. Diagram not reproduced; the edges are labeled with letter sets such as {A | x ∈ A} and {A | {x, y} ∩ A = ∅}.]
3.1. The Logic-Automata Connection

A (nondeterministic) automaton over infinite words over alphabet Σ has the form A = (Q, Q0, T, Acc), where Q is a finite set of states, Q0 ⊆ Q is a set of initial states, T ⊆ Q × Σ × Q is a set of transitions, and Acc is an acceptance condition. We assume that A is complete, i.e., for every q ∈ Q and σ ∈ Σ there is at least one q′ ∈ Q such that (q, σ, q′) ∈ T. If, furthermore, Q0 is a singleton and for every q ∈ Q and σ ∈ Σ there is at most one q′ ∈ Q such that (q, σ, q′) ∈ T, then A is deterministic. In this case, we also write T as a function T : Q × Σ → Q.

The language of automata over infinite words is defined with the following mechanism. A run of an automaton A on an infinite input word α = α(0) α(1) α(2) . . . ∈ Σ^ω is an infinite sequence of states r = r(0) r(1) r(2) . . . ∈ Q^ω such that the sequence starts with an initial state, r(0) ∈ Q0, and subsequently follows the transitions, i.e., for all i ∈ N, (r(i), α(i), r(i + 1)) ∈ T. An automaton A accepts an infinite word α if there is a run of A on α that satisfies the acceptance condition. The language of A consists of all accepted words.

A basic acceptance condition is safety, which is given as a subset S ⊆ Q of safe states; a run is accepting if only safe states are visited. For the automata-theoretic representation of MSO it suffices to use a comparatively simple extension of the safety condition, the Büchi condition [26], which is given as a subset F ⊆ Q of accepting states. The Büchi condition requires that some state in F is visited infinitely often. There are several other useful acceptance conditions. The co-Büchi condition is given as a set of rejecting states F and is satisfied if the states in F are only visited finitely often. The Muller acceptance condition consists of a set F ⊆ 2^Q of sets of states. A run is accepting if the set of states that occur infinitely often on the run is an element of F. The Rabin condition consists of a set Ω = {(E1, F1), . . . , (Ek, Fk)} of pairs of sets of states. A run is accepted if there exists a pair (E, F) ∈ Ω such that none of the states in E occurs infinitely often and some state in F occurs infinitely often. A commonly used special case of the Rabin condition is the Rabin chain or parity condition [27]. Here, the accepting pairs (E1, F1), . . . , (Ek, Fk) form a chain E1 ⊂ F1 ⊂ E2 ⊂ . . . ⊂ Ek ⊂ Fk with respect to set inclusion. The states can thus be colored with natural numbers, where the states of E1 are colored with 1, the states of F1 \ E1 with 2, and so forth. A run is accepting if the least color that occurs infinitely often is even.

The construction of a Büchi automaton from an S1S formula follows the structure of the formula. The base cases are atomic formulas like Y(x) and x = y + 1, which are translated into the automata shown in Fig. 2. Automata for more complex formulas are built by applying automata transformations corresponding to the logical operators, i.e., disjunction is implemented by language
union, conjunction by intersection, quantification by projection, and negation by complementation. Language complementation is exponential in the number of states; the complexity of this translation is therefore, in general, nonelementary.

3.2. The Büchi–Landweber Theorem

In 1969, one decade after its inception, Church's problem was solved by Büchi and Landweber [2]. Büchi and Landweber stated Church's problem as a game between two players, one player representing the system and one player representing the environment. In each round of the game, the environment player first chooses a valuation of the inputs, then the system chooses the valuation of the outputs. The system player wins if the sequence of valuations produced in this way satisfies the given S1S formula. Realizability thus amounts to the existence of a winning strategy for the system player against all possible behaviors of the environment player.

Determinization. Büchi and Landweber's construction is based on the connection between logic and automata provided by Büchi's Theorem. Instead of checking the original S1S formula on every possible play, the game is played directly on a finite graph that is constructed from the states of an automaton obtained from the formula. The automata in Büchi's Theorem are, in general, nondeterministic, i.e., the set of initial states may consist of more than one state and there may be two transitions (q, σ, q1), (q, σ, q2) ∈ T from the same source state q that lead, for some input symbol σ, to two different target states q1 ≠ q2. This is a problem for the construction of the game, because every sequence of choices of the two players must result in a unique play. An important preprocessing step, before the game can be constructed, is therefore to determinize the automaton.
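The run-based acceptance condition defined in Section 3.1 can be checked effectively for ultimately periodic words, even before determinization. The following sketch (illustrative Python, names not from the text) decides membership of stem·loop^ω in the language of a nondeterministic Büchi automaton by searching for a reachable accepting cycle in the product of the automaton with the lasso-shaped word.

```python
def buchi_accepts(Q0, T, F, stem, loop):
    """Membership test: does the nondeterministic Büchi automaton with
    initial states Q0, transitions T (triples (q, letter, q')) and
    accepting states F accept the word stem . loop^omega?
    Assumes loop is nonempty."""
    word = list(stem) + list(loop)
    n = len(word)

    def succs(node):
        q, pos = node
        nxt = pos + 1 if pos + 1 < n else len(stem)   # wrap back into the loop
        return [(q2, nxt) for (q1, a, q2) in T if q1 == q and a == word[pos]]

    # product states reachable from the initial states
    frontier = [(q, 0) for q in Q0]
    reach = set(frontier)
    while frontier:
        for m in succs(frontier.pop()):
            if m not in reach:
                reach.add(m)
                frontier.append(m)

    # accepted iff some reachable accepting product state lies on a cycle
    for node in reach:
        if node[0] not in F:
            continue
        seen, stack = set(), list(succs(node))
        while stack:
            m = stack.pop()
            if m == node:
                return True
            if m not in seen:
                seen.add(m)
                stack.extend(succs(m))
    return False

# Deterministic automaton for "infinitely many a's": the accepting state
# "sa" is visited exactly when the last letter read was an a.
T = {("sa", "a", "sa"), ("sa", "b", "sb"), ("sb", "a", "sa"), ("sb", "b", "sb")}
print(buchi_accepts({"sb"}, T, {"sa"}, [], ["a", "b"]))   # True
print(buchi_accepts({"sb"}, T, {"sa"}, [], ["b"]))        # False
```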
Unlike automata over finite words, where each nondeterministic automaton can be translated into an equivalent deterministic automaton, deterministic Büchi automata are strictly weaker than nondeterministic Büchi automata. The determinization of a Büchi automaton therefore results in automata with more expressive acceptance conditions. Already in 1966, McNaughton [28] showed that every nondeterministic Büchi automaton can be translated into an equivalent deterministic automaton with Muller acceptance condition. In 1988, Safra gave a translation that produces a deterministic Rabin automaton with only 2^O(n log n) states and O(n) pairs in the acceptance condition, where n is the number of states of the nondeterministic automaton [29]. Michel showed that Safra's construction is in fact optimal [30]. More recently, Piterman gave an adaptation of Safra's construction that translates nondeterministic Büchi automata into deterministic parity automata [31].

The synthesis game. The treatment of Church's problem as an infinite game was first proposed by McNaughton [32]. A game arena is a graph A = (V, E), where the nodes V are partitioned into two disjoint sets V = V0 ∪ V1, the positions of Players 0 and 1, respectively. The edges E are a subset of V × V such that every position p ∈ V has at least one outgoing edge (p, p′) ∈ E. A game G = (A, Win) consists of a game arena and a winning condition Win ⊆ V^ω, given for example as a Büchi, Muller, or parity condition. In the synthesis game of an S1S formula ϕ, Player 0 represents the system, Player 1 the environment. A round of the game consists of Player 0 choosing a valuation of the
Figure 3. Deterministic Büchi automaton and synthesis game for the response property. Game positions depicted as circles belong to Player 0, game positions depicted as rectangles to Player 1. Double lines indicate that the game position or automaton state is accepting.
output variables, then Player 1 choosing a valuation of the input variables. The positions of the game keep track of the state of the deterministic automaton Aϕ = (Q, {q0}, T, Acc) for ϕ: for Player 0, we set V0 = Q. For Player 1, we set V1 = Q × 2^O. The game position of Player 1 stores the output selected by Player 0; a position p of Player 0 has edges to all positions (p, o) of Player 1 for o ∈ 2^O. Player 1 then chooses some input and the game moves to the successor state of the automaton. A position (p, o) of Player 1 thus has edges to all positions T(p, i ∪ o) of Player 0, where i ∈ 2^I is some input. The winning condition is derived from the acceptance condition of the automaton. For example, if the acceptance condition of the deterministic automaton is a parity condition c : Q → N, then the winning condition of the game is the parity condition c′ : Q ∪ (Q × 2^O) → N with c′(q) = c(q) for q ∈ Q and c′(q, o) = c(q) for q ∈ Q, o ∈ 2^O.

A play π ∈ V^ω is an infinite sequence π = π(0) π(1) π(2) . . . of positions such that ∀i ∈ N . (π(i), π(i + 1)) ∈ E. The players take turns in choosing successor positions, i.e., π(1) is chosen by Player 0 from the available successors of π(0) in E, then π(2) is chosen by Player 1, etc. A play π is won by Player 0 iff π satisfies the winning condition. A strategy for Player σ is a function fσ : V∗ · Vσ → V that maps a sequence of game positions, representing a history of a play, to a successor position such that (p, p′) ∈ E whenever fσ(u · p) = p′. A play π conforms to a strategy fσ of Player σ if ∀i ∈ N . if π(i) ∈ Vσ then π(i + 1) = fσ(π(0) . . . π(i)). Strategies that do not depend on the history of the game except for the last position can be given as a function gσ : Vσ → V (defining the strategy fσ(w · v) = gσ(v) for w ∈ V∗) and are called memoryless. A strategy fσ is p-winning for Player σ and position p if all plays that conform to fσ and that start in p are won by Player σ.
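The construction of the synthesis game arena from a deterministic automaton can be sketched as follows. The Python below uses illustrative names; the transition function is a reconstruction of the response automaton described for Fig. 3, with state p meaning "no pending request" and q meaning "pending request".

```python
from itertools import combinations

def powerset(xs):
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

def synthesis_arena(Q, T, inputs, outputs):
    """Arena of the synthesis game for a deterministic word automaton with
    transition function T : (state, letter) -> state, where a letter is a
    frozenset of input/output variables.  V0 = Q (system picks an output),
    V1 = Q x 2^O (environment answers with an input)."""
    V0 = set(Q)
    V1 = {(q, o) for q in Q for o in powerset(outputs)}
    E = set()
    for q in Q:
        for o in powerset(outputs):
            E.add((q, (q, o)))                     # Player 0 commits to output o
            for i in powerset(inputs):
                E.add(((q, o), T[(q, i | o)]))     # Player 1 answers with input i
    return V0, V1, E

# Response automaton (reconstructed from the description of Fig. 3).
fs = frozenset
T = {("p", fs()): "p", ("p", fs(["R"])): "q",
     ("p", fs(["G"])): "p", ("p", fs(["R", "G"])): "p",
     ("q", fs()): "q", ("q", fs(["R"])): "q",
     ("q", fs(["G"])): "p", ("q", fs(["R", "G"])): "p"}

V0, V1, E = synthesis_arena({"p", "q"}, T, {"R"}, {"G"})
print(("p", ("p", fs(["G"]))) in E)     # Player 0 chooses output {G} in p
print((("p", fs()), "q") in E)          # an unanswered request leads to q
```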
In the synthesis game of an S1S formula ϕ, Player 0 has a q0-winning strategy if and only if ϕ is realizable. As an example, consider the deterministic automaton and the synthesis game shown in Fig. 3 for the response property:

∀x . R(x) → ∃y . y ≥ x ∧ G(y)   (response)

Note that this example is chosen so that the acceptance condition of the deterministic automaton, and, hence, the winning condition of the game, is a Büchi condition. In general, the result of the determinization would be an automaton with a more expressive acceptance condition such as parity.
Player 0 has a p-winning strategy. For example, if Player 0 always gives out the grant, the game moves back and forth between p and (p, {G}), without ever reaching q.

Game Solving. The Büchi–Landweber Theorem [2] says that for a game with Muller winning condition, one can decide whether Player 0 has a winning strategy and, if the answer is yes, construct a finite-state winning strategy. Starting with this fundamental result, the detailed analysis of the game solving problem, in particular with regard to the size of the required strategies and the complexity of determining the winning player, evolved into a research area that still flourishes today. Gurevich and Harrington showed that the memory needed to win a Muller game can be limited to store a latest appearance record, which indicates the order in which the positions of the game arena were visited most recently [33]. The memory required for the latest appearance records is bounded by the factorial of the number of positions. A corresponding exponential lower bound for the memory required to win Muller games was shown by Lescow [34]. The situation is simpler for games with Rabin winning condition, where memoryless winning strategies suffice for Player 0 [35], although Player 1 may need exponential memory [34]. For parity games (and games with subsumed winning conditions like safety and Büchi), the winning player always has a memoryless winning strategy [36]. The complexity of determining the winner of a Muller game depends on the precise representation of the winning condition. For an explicit representation of the winning condition F as a list of sets of states, the winner can be determined in polynomial time [37]. For succinct representations, such as the Emerson–Lei representation [38], where F is given as a Boolean formula whose variables are the positions of the game, the problem is PSPACE-complete [39].
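As a minimal illustration of game solving, the sketch below solves safety games by computing Player 1's attractor of the unsafe positions and extracts a memoryless winning strategy for Player 0. Note that this naive fixpoint is quadratic, not the linear-time algorithm cited in the text, and all names are illustrative.

```python
def attractor(V0, V1, E, target, player):
    """Positions from which `player` can force the play into `target`:
    own positions need some successor in the attractor, opponent
    positions need all successors in the attractor."""
    succ = {}
    for (u, v) in E:
        succ.setdefault(u, set()).add(v)
    mine = V0 if player == 0 else V1
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for u in (V0 | V1) - attr:
            s = succ.get(u, set())
            if (u in mine and s & attr) or (u not in mine and s and s <= attr):
                attr.add(u)
                changed = True
    return attr

def solve_safety(V0, V1, E, unsafe):
    """Player 0 wins from every position outside Player 1's attractor of
    the unsafe set; a memoryless strategy simply stays inside the
    winning region."""
    bad = attractor(V0, V1, E, unsafe, 1)
    win = (V0 | V1) - bad
    strategy = {u: next(v for (x, v) in E if x == u and v in win)
                for u in V0 & win}
    return win, strategy

# Toy safety game: Player 0 owns "a" and must avoid the sink "x".
V0 = {"a"}
V1 = {"b", "x"}
E = {("a", "b"), ("a", "x"), ("b", "a"), ("x", "x")}
win, strat = solve_safety(V0, V1, E, {"x"})
print(sorted(win), strat)   # ['a', 'b'] {'a': 'b'}
```

Since u is outside Player 1's attractor, some successor of u stays outside as well, so the `next(...)` expression in the strategy extraction is always defined.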
Safety games can be solved in linear time in the size of the arena [40,41]; Büchi games in quadratic time in the number of positions [42]. Deciding the winner of a Rabin game is NP-complete [43]. An intriguing open question is the complexity of solving parity games. The problem is known to be in NP ∩ co-NP [44] (and in UP ∩ co-UP [45]). All currently known deterministic algorithms have complexity bounds that are either exponential in the number of colors [46,47,48], or in the square root of the number of game positions [49,50]. For some time, strategy improvement algorithms [49,51] were believed to be promising candidates for a polynomial-time solution; recently, however, a family of games with exponential running time was demonstrated for this class of algorithms as well [52].

A winning strategy for Player 0 in the synthesis game can be translated into an implementation. Suppose, for example, that the synthesis game is a parity game, derived from a deterministic parity automaton A = (Q, {q0}, T, c), and a memoryless winning strategy f0 : V0 → V. The strategy is implemented by the Moore machine M = (S, s0, T′, G), where S = Q, s0 = q0, T′(q, i) = T(q, f0(q) ∪ i), and G(q) = f0(q), which always chooses the next output according to f0.

3.3. Automata-Based Synthesis

In 1969, Rabin introduced automata over infinite trees [53]. Tree automata provide an alternative, and very elegant, solution to Church's problem [9]. Rabin's insight was to view implementations as infinite trees that branch according to the possible inputs and that are labeled with the outputs chosen by the system. Figure 4 shows such a tree representation
Figure 4. A strategy tree for a strategy f : (2^{i})∗ → 2^O.
for a strategy f : (2^{i})∗ → 2^O with a single input variable i. Rabin's solution to Church's problem is to represent the set of all implementations that satisfy the specification as a tree automaton. The specification is thus realizable if and only if the language of the tree automaton is non-empty.

Automata on Infinite Trees. For a given set ϒ of directions, the infinite tree is the set ϒ∗ of finite sequences over ϒ. A Σ-labeled ϒ-tree is a function ϒ∗ → Σ. A tree automaton over Σ-labeled ϒ-trees has the form A = (Q, q0, T, Acc), where Q is a finite set of states, q0 ∈ Q is an initial state, T ⊆ Q × Σ × (ϒ → Q) is a set of transitions, and Acc ⊆ Q^ω is an acceptance condition. A run of a tree automaton A on a Σ-labeled ϒ-tree v : ϒ∗ → Σ is a Q-labeled ϒ-tree r : ϒ∗ → Q, where the root is labeled with the initial state, r(ε) = q0, and every node n ∈ ϒ∗ satisfies some transition (r(n), v(n), f) ∈ T in the sense that f(υ) = r(n · υ) for all directions υ ∈ ϒ. A run r is accepting if all paths satisfy the acceptance condition Acc. The language of A consists of all accepted Σ-labeled ϒ-trees.

To build a tree automaton that accepts all implementations of a given S1S formula ϕ, we first translate ϕ into a deterministic word automaton Aϕ. The tree automaton then simply simulates the word automaton along every path of the input tree. Suppose Aϕ is a deterministic parity word automaton (Q, {q0}, T, c). To simulate Aϕ, the tree automaton A′ϕ = (Q, q0, T′, c) transitions, for every direction, to the successor state of the word automaton for the combination of the label and the direction, i.e., T′ = {(q, σ, f) | q ∈ Q, σ ∈ Σ, f(υ) = T(q, σ ∪ υ) for all υ ∈ ϒ}. An example of this construction is shown in Fig. 5. The tree automaton for the response property shown on the left of Fig. 5 was obtained from the deterministic word automaton shown on the left of Fig. 3.
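The lift from a deterministic word automaton to a tree automaton, i.e., the construction of T′ above, can be sketched as follows. The word automaton is again a reconstruction of the one described for Fig. 3, and encoding each function f as a sorted tuple of (direction, state) pairs is an implementation choice, not part of the text.

```python
def lift_to_tree_automaton(Q, T, labels, directions):
    """Lift a deterministic word automaton with transition function
    T : (state, letter) -> state to a tree automaton that simulates it
    along every branch: in direction upsilon under node label sigma,
    the successor of state q is T(q, sigma | upsilon)."""
    transitions = set()
    for q in Q:
        for sigma in labels:
            f = tuple(sorted(((ups, T[(q, sigma | ups)]) for ups in directions),
                             key=lambda pair: sorted(pair[0])))
            transitions.add((q, sigma, f))   # f encodes the map directions -> Q
    return transitions

# Word automaton of Fig. 3 (reconstructed): labels are output valuations
# over {G}, directions are input valuations over {R}.
fs = frozenset
T = {("p", fs()): "p", ("p", fs(["R"])): "q",
     ("p", fs(["G"])): "p", ("p", fs(["R", "G"])): "p",
     ("q", fs()): "q", ("q", fs(["R"])): "q",
     ("q", fs(["G"])): "p", ("q", fs(["R", "G"])): "p"}

trans = lift_to_tree_automaton({"p", "q"}, T, [fs(), fs(["G"])], [fs(), fs(["R"])])
# e.g. (p, {}, f : {} -> p, {R} -> q) is a transition of the tree automaton
print(("p", fs(), ((fs(), "p"), (fs(["R"]), "q"))) in trans)
```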
In this example, the tree automaton has a Büchi acceptance condition, because the automaton in Fig. 3 is a Büchi automaton. In general, the tree automaton might have a more expressive acceptance condition, such as a parity condition.

Tree Automata Emptiness. Rabin's original emptiness test for tree automata had nonelementary running time. In 1972, this was improved to algorithms with exponential running time in both the number of states and in the number of pairs of the Rabin condition [54,9]. Finally, in 1989, algorithms with polynomial running time in the number of states and exponential running time in the number of pairs were found [55,56].

Figure 5. Büchi tree automaton and emptiness game for the response property. Game positions for player Automaton (Player 0) are depicted as circles, game positions for player Pathfinder (Player 1) as rectangles.

In general, one can easily reduce the emptiness problem of a tree automaton with a certain acceptance condition to the problem of solving the game with the same type of condition as its winning condition. Following [33], the players in this game are often called Automaton and Pathfinder: Automaton (Player 0) tries to prove that an accepted tree exists, Pathfinder (Player 1) tries to disprove this by finding a path where the acceptance condition is violated. For a tree automaton A = (Q, q0, T, Acc), the emptiness game ((V, E), Win) has positions V = V0 ∪ V1, where Automaton's positions V0 = Q are the states of the automaton and Pathfinder's positions V1 = T are the transitions. The edges E = E0 ∪ E1 correspond to Automaton choosing a transition originating from the present state, E0 = {(q, (q, σ, f)) | q ∈ Q, (q, σ, f) ∈ T}, and Pathfinder choosing a direction, E1 = {((q, σ, f), f(υ)) | (q, σ, f) ∈ T, υ ∈ ϒ}. The winning condition checks if the sequence of states satisfies the acceptance condition of the automaton; e.g., if Acc is given as a parity condition c : Q → N, then Win is the parity condition c′ : V → N with c′(q) = c(q) for q ∈ Q and c′((q, σ, f)) = c(q) for (q, σ, f) ∈ T. The tree automaton is non-empty iff player Automaton has a q0-winning strategy.

Figure 5 shows a Büchi tree automaton for the response property and its emptiness game. A p-winning strategy for player Automaton is to move from p to (p, {G}, f : ∅ → p, {R} → p), corresponding to the implementation that always provides the grant. The automata-based solution to Church's problem thus leads to an infinite game, just like the game-based approach.
The automata-based approach is, however, an important generalization, because the representation as automata makes it possible to manipulate sets of implementations with automata transformations (see, for example, Section 4.3).
4. Synthesis from Temporal Logics The invention of temporal logic and the discovery of temporal logic based model checking in the 1980s established computer-aided verification as a field with practical relevance especially for the hardware industry. The temporal logics turned out not only to be often more intuitive to use than S1S, but also to lead to significantly lower complexity for both verification and synthesis.
4.1. Linear-time Temporal Logic

In his seminal paper "The temporal logic of programs" [57], Pnueli introduced linear-time temporal logic (LTL) as a new specification language for the desired behavior of reactive systems. Similar to S1S, LTL is based on sequences. LTL differs from S1S in that there is no explicit mechanism for quantification. References to points in time, and quantification over time, are made implicitly through modal operators. For a given set of atomic propositions AP, LTL is generated by the following grammar:
ϕ ::= p | ¬ϕ | ϕ ∧ ϕ | ◯ϕ | ϕ U ϕ
where p ∈ AP is an atomic proposition, ¬ and ∧ are Boolean connectives, and ◯ and U are temporal operators: ◯ is the next operator, U the until operator. Typically, the logic is extended with other Boolean operators and with derived temporal operators such as eventually ◇ϕ ≡ true U ϕ and globally □ϕ ≡ ¬◇¬ϕ. Like S1S, models of LTL formulas are infinite words. For an infinite word α ∈ (2^AP)^ω, α |= p iff p ∈ α(0); α |= ◯ϕ iff α(1) α(2) α(3) . . . |= ϕ; and α |= ϕ1 U ϕ2 iff there is an i ≥ 0 such that α(i) α(i + 1) α(i + 2) . . . |= ϕ2 and for all 0 ≤ j < i, α(j) α(j + 1) α(j + 2) . . . |= ϕ1.

The arbiter specification, which was given as a conjunction of S1S formulas in Section 2, can be equivalently stated in LTL as follows:

□(¬G1 ∨ ¬G2)   (mutual exclusion)
□(R1 → ◇G1)   (response 1)
□(R2 → ◇G2)   (response 2)
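The LTL semantics above can be evaluated directly on ultimately periodic words, since such a word has only finitely many distinct suffixes. The following sketch (illustrative Python; the until case computes a least fixpoint over the lasso) checks a response conjunct of the arbiter specification, written here with a single request R and grant G.

```python
def sat_positions(phi, stem, loop):
    """Normalized positions of the lasso stem.loop^omega satisfying an LTL
    formula built from ("ap", p), ("not", f), ("and", f, g), ("next", f)
    and ("until", f, g).  Assumes loop is nonempty."""
    n, N = len(stem), len(stem) + len(loop)
    word = list(stem) + list(loop)

    def succ(i):                       # unique successor position on the lasso
        return i + 1 if i + 1 < N else n

    op = phi[0]
    if op == "ap":
        return {i for i in range(N) if phi[1] in word[i]}
    if op == "not":
        return set(range(N)) - sat_positions(phi[1], stem, loop)
    if op == "and":
        return sat_positions(phi[1], stem, loop) & sat_positions(phi[2], stem, loop)
    if op == "next":
        s = sat_positions(phi[1], stem, loop)
        return {i for i in range(N) if succ(i) in s}
    if op == "until":                  # least fixpoint: phi2, or phi1 and next
        s1 = sat_positions(phi[1], stem, loop)
        s2 = sat_positions(phi[2], stem, loop)
        sat = set(s2)
        for _ in range(N):
            sat |= {i for i in s1 if succ(i) in sat}
        return sat
    raise ValueError(op)

# Derived operators; TRUE holds everywhere since "_false_" labels no position.
TRUE = ("not", ("ap", "_false_"))
def F(f): return ("until", TRUE, f)                  # eventually
def G(f): return ("not", F(("not", f)))              # globally

# G(R -> F G), written with not/and only: G(not(R and not F G))
response = G(("not", ("and", ("ap", "R"), ("not", F(("ap", "G"))))))
print(0 in sat_positions(response, [set()], [{"R"}, {"G"}]))   # True
print(0 in sat_positions(response, [set()], [{"R"}]))          # False
```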
For synthesis from LTL formulas, we assume that the atomic propositions AP = I ∪ O are partitioned into inputs I and outputs O. Given an LTL formula ϕ over AP of length n, one can build a Büchi word automaton with 2^O(n) states that recognizes the models of ϕ [58]. The basic idea of the translation is to store, in each state, a set of subformulas that are required to hold on the suffix of the input word. This idea has been optimized in various ways [59,60]. In practice, it is particularly important to simplify the automaton on-the-fly during the construction, for example by removing redundant transitions and by merging equivalent states [61].

From the nondeterministic Büchi automaton, one can obtain, via Safra's construction, a deterministic automaton as required by the game-based or automata-based synthesis approaches discussed in Sections 3.2 and 3.3. In terms of complexity, the LTL-to-automata translation results in a Büchi automaton with an exponential number of states in the length of the formula; from there, one obtains, via Safra's construction, a deterministic Rabin automaton with a doubly exponential number of states and a single-exponential number of pairs. The emptiness game of the corresponding tree automaton can be solved in polynomial running time in the number of states and in exponential running time in the number of pairs, resulting overall in a doubly exponential running time. The problem is, in fact, 2EXPTIME-complete [56].
4.2. Branching-time Temporal Logic

The temporal operators of LTL describe possible observations along a single time line. Branching-time temporal logics, by contrast, see time as a tree structure: at any point in time, there may be multiple futures. Path quantifiers make it possible to specify the existence and absence of futures with certain properties. The standard branching-time temporal logics are CTL∗ [62] and its sublogic CTL [63]. The syntax of CTL∗ distinguishes state formulas Φ and path formulas ϕ, as generated by the following grammar:

Φ ::= a | ¬Φ | Φ ∧ Φ | A ϕ | E ϕ
ϕ ::= Φ | ¬ϕ | ϕ ∧ ϕ | ◯ϕ | ϕ U ϕ

As for LTL, CTL∗ is usually extended with additional Boolean connectives and temporal operators. CTL is the sublogic of CTL∗ where every temporal operator is immediately preceded by a path quantifier. CTL∗ formulas can be interpreted over arbitrary Kripke structures. For the purposes of synthesis, the models of interest are again 2^O-labeled 2^I-trees. Let v be such a tree. A path γ in v is an infinite sequence γ(0) γ(1) γ(2) . . . ∈ ((2^I)∗)^ω of nodes such that for each node γ(i) in the sequence, the successor node γ(i + 1) is a child of γ(i), i.e., γ(i + 1) = γ(i) · υ for some υ ∈ 2^I. The set of paths originating in a node n is denoted by Paths(n). A node n satisfies an existentially quantified state formula E ϕ iff there is a path in Paths(n) that satisfies the path formula ϕ; analogously, n satisfies a universally quantified state formula A ϕ iff all paths in Paths(n) satisfy ϕ. A path γ satisfies an atomic proposition p ∈ AP iff p ∈ v(γ(0)). The satisfaction of the temporal modalities is defined as for LTL, i.e., γ satisfies ◯ϕ iff γ(1) γ(2) γ(3) . . . satisfies ϕ, and γ satisfies ϕ1 U ϕ2 iff there is an i ≥ 0 such that γ(i) γ(i + 1) γ(i + 2) . . . satisfies ϕ2 and for all 0 ≤ j < i, γ(j) γ(j + 1) γ(j + 2) . . . satisfies ϕ1. The Boolean connectives are interpreted in the usual way.
Similar to the translation of LTL formulas into equivalent Büchi word automata, CTL formulas can be translated into Büchi tree automata. The size of the resulting automaton is exponential in the length of the formula [64]. Since Büchi games can be solved in polynomial time, the realizability of a CTL formula can thus be checked in exponential time. A matching lower bound follows from the satisfiability problem of CTL [65]. For CTL∗ formulas, we obtain a Rabin automaton with a doubly exponential number of states and a single-exponential number of pairs. The synthesis problem can therefore be solved in doubly exponential time [66], and is, hence, 2EXPTIME-complete (the lower bound follows from the synthesis problem for LTL).

4.3. Synthesis under Incomplete Information

Church's problem is based on a specification that refers to the inputs and outputs of the implementation to be synthesized. Since the inputs are, by definition, observable by the implementation, this results in a game with perfect information. Often, however, one is interested in synthesizing a process within a larger system, where the process only observes a subset of the global inputs. This results in a game with incomplete information.
Figure 6. Widening of the strategy tree for the strategy f : (2^{i})∗ → 2^O over input variable i, shown in Fig. 4, to input variables i and i′.
The classic solution to games with incomplete information is the translation to perfect-information games with a knowledge-based subset construction due to Reif [67]. Reif's construction simulates a given game G with incomplete information with a game G′ with perfect information, where the positions of G′ are sets of positions of G. The set of positions of G′ reached after a certain sequence of moves in G consists of those positions of G that would have been reached after some sequence of moves that is indistinguishable, by Player 0, from the sequence that actually occurred.

A similar idea can also be applied in a transformation on tree automata that recognize sets of strategy trees. Figure 6 shows the widening of the strategy tree from Fig. 4 for input variable i to the larger set of input variables consisting of both i and i′. Since the original strategy does not depend on i′, the widened tree has identical labels on paths where the values of i are identical. Kupferman and Vardi defined a tree automata transformation called narrowing that transforms a given tree automaton on strategy trees to a tree automaton on strategies with a reduced set of inputs, such that a strategy tree is accepted by the new automaton if and only if its widening is accepted by the original automaton [68]. To synthesize an implementation for a temporal specification, one first builds, as in standard synthesis, a tree automaton for the specification and then applies narrowing to reduce the inputs to the subset that is actually observable by the implementation. Realizability under incomplete information corresponds to non-emptiness of this automaton. Independently of the presence of incomplete information, the synthesis problems for LTL, CTL, and CTL∗ are complete for 2EXPTIME, EXPTIME, and 2EXPTIME, respectively.

4.4. Synthesis of Distributed Systems

The distributed synthesis problem is the generalization of the synthesis problem where we construct, instead of a single implementation, a set of implementations, one for each system process, that together must satisfy the specification. The system architecture is typically given as a directed graph, where the nodes represent processes, including the environment as a special process. The edges of the graph are labeled by variables indicating that data may be transmitted between two processes. The same variable may occur
Figure 7. Distributed architectures: pipeline, architecture A0, one-way ring, and two-way rings with four and five system processes.
on multiple outgoing edges of a single node, allowing for the broadcast of data. Figure 7 shows several examples of system architectures.

The distributed synthesis problem was introduced by Pnueli and Rosner, who showed that the problem is decidable for pipeline architectures but undecidable in general [4]. In particular, the problem is already undecidable for the simple architecture A0, which consists of the environment and two independent system processes. The decidability result was later generalized to one-way ring architectures [69] and, finally, to all architectures without information forks [70]. Information forks are a comprehensive criterion that characterizes all architectures with an undecidable synthesis problem. Intuitively, an information fork is a situation where two processes receive information from the environment (directly or indirectly), in such a way that they cannot completely deduce the information received by the other process. Consider, for example, the two-way ring with five system processes shown at the bottom right of Figure 7. The synthesis problem is undecidable because of the information fork in processes p4 and p5: the environment p1 can transmit information through a, b, c to p4 that remains secret to p5, and, vice versa, transmit information through a, b, f to p5 that remains secret to p4. The distributed synthesis problem becomes decidable if we eliminate one of the two processes, resulting in the two-way ring with four processes shown at the top right.

For architectures without information forks, the synthesis problem is solved in two phases. First, the architecture is simplified, by grouping the system processes according to the information they possess about the environment's behavior. Groups of processes with the same level of information can simulate each other, and are therefore collapsed into a single process.
Feedback edges from processes with a lower level of information to those with a higher level are eliminated, because they do not transmit new information. In the second phase, the synthesis problem is solved with an automata-based construction that, by repeatedly applying the narrowing construction, successively eliminates processes along the information order, starting with the best-informed process. The complexity of this construction is nonelementary in the number of processes.

4.5. Temporal Logics with Strategy Quantifiers

In 1997, Alur, Henzinger, and Kupferman extended the linear- and branching-time spectrum of the temporal logics with the alternating-time temporal logics as a third type of temporal logic [11]. The alternating-time logic ATL∗ replaces the path quantifier of CTL∗ with the more general strategy quantifier. While the CTL∗ formula E ϕ means that there exists a path where ϕ holds, the ATL∗ formula ⟨⟨0⟩⟩ϕ means that Player 0 has a strategy to ensure ϕ. With alternating-time temporal logic, the realizability of a temporal property became itself a property expressible inside the logic, rather than an externally defined semantic condition. The added expressiveness is useful to describe the game-like behavior of multi-process systems. For example, in a cache-coherence protocol, one might require that every process has the capability to obtain memory access, irrespective of the behavior of the other processes.

The semantics of the alternating-time temporal logics refer to concurrent game structures, a game-based extension of Kripke structures. For a natural number k, a k-player concurrent game structure over a set of atomic propositions AP has the form G = (S, L, d, δ), where S is a set of states and L : S → 2^AP is a labeling function; d assigns to each player a ∈ {1, 2, . . . , k} and each state s ∈ S a number da(s) of moves available to player a, and the resulting set of vectors D(s) = {1, . . . , d1(s)} × {1, . . . , d2(s)} × . . . × {1, . . . , dk(s)} are called the move vectors of state s; δ is the transition function, which assigns to each state s and move vector ⟨j1, j2, . . . , jk⟩ ∈ D(s) a successor state δ(s, j1, j2, . . . , jk) ∈ S. If a play reaches state s, each player a chooses, concurrently, a number ja between 1 and da(s), and the play continues with state δ(s, j1, j2, . . . , jk).

Since realizability of a temporal property ϕ can be stated as the ATL∗ property ⟨⟨0⟩⟩ϕ of a generic game structure in which Player 0 gets to set the outputs and Player 1 the inputs, it is the model checking problem, not the realizability problem, that is most relevant for synthesis. ATL∗ model checking requires, like LTL synthesis, doubly exponential time.
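The concurrent-game-structure semantics can be sketched in a few lines. The toy two-player structure below and all names are illustrative, not taken from the text.

```python
# A toy k-player concurrent game structure: in each state all players pick
# their moves simultaneously, and delta resolves the joint move vector.

def play_prefix(delta, d, s, joint_moves):
    """Advance the structure from state s through a sequence of move
    vectors, checking each move against the availability function d."""
    trace = [s]
    for moves in joint_moves:
        # player a may pick a move between 1 and d[a](s)
        assert all(1 <= m <= d[a](s) for a, m in enumerate(moves, start=1))
        s = delta[(s, moves)]
        trace.append(s)
    return trace

# Two players with two moves each; the play leaves s0 only if both
# players pick move 2, and s1 is a sink.
d = {1: lambda s: 2, 2: lambda s: 2}
delta = {("s0", (1, 1)): "s0", ("s0", (1, 2)): "s0",
         ("s0", (2, 1)): "s0", ("s0", (2, 2)): "s1",
         ("s1", (1, 1)): "s1", ("s1", (1, 2)): "s1",
         ("s1", (2, 1)): "s1", ("s1", (2, 2)): "s1"}

print(play_prefix(delta, d, "s0", [(1, 2), (2, 2), (1, 1)]))
# ['s0', 's0', 's1', 's1']
```

In this structure, neither player alone can force the play into s1, which is exactly the kind of capability that strategy quantifiers let one express.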
Model checking formulas of the restricted sublogic ATL requires the solution of games with Boolean combinations of Büchi conditions, which can be done in PSPACE.

Strategy Logic (SL) [12,71] generalizes alternating-time temporal logic by treating strategies as explicit first-order objects. In SL, the ATL∗ property ⟨⟨1⟩⟩ϕ is expressed as ∃x. ∀y. ϕ(x, y), i.e., there exists a strategy x for Player 1 such that for all strategies y of Player 2, ϕ is guaranteed to hold. The explicit quantification over strategies makes it possible to compare multiple strategies for the same player; Nash equilibria and similar properties can easily be expressed as SL formulas. The complexity of the model checking problem for SL formulas (which suffices for synthesis) is nonelementary; the satisfiability problem is undecidable.

A first attempt at a logical characterization of the distributed synthesis problem was already carried out within the setting of the alternating-time temporal logics. Game structures under incomplete information extend game structures with an observability vector P = {Πa | 1 ≤ a ≤ k}, which identifies, for each player a, a set of atomic propositions that are observable by a. A strategy for player a may only depend on the history of its observable propositions, not on the history of the other, unobservable propositions. ATL∗ under incomplete information can thus express the existence of a distributed implementation. Not surprisingly, however, the model checking problem for ATL∗ (and even ATL) under incomplete information turned out to be undecidable [72].

A decidable logic for the distributed synthesis problem is Coordination Logic (CL) [13]. Like Strategy Logic, CL has explicit strategy quantifiers. The informedness of the strategies is not, as in ATL∗ under incomplete information, defined by the model, but is defined directly in the formula. CL uses two types of variables: strategy variables S,
B. Finkbeiner / Synthesis of Reactive Systems
which represent, as in SL, strategies (or output), and coordination variables C, which represent information (or input). The syntax of CL is given by the grammar
ϕ ::= x | ¬x | ϕ ∨ ϕ | ϕ ∧ ϕ | ◯ϕ | ϕ U ϕ | C∃s. ϕ | C∀s. ϕ,
where x ∈ C ∪ S, C ⊆ C, and s ∈ S. The operators of CL consist of the usual LTL operators as well as the new subtree quantifiers C∃s. ϕ and C∀s. ϕ. Coordination variables provide strategy variables with the information required for their decisions. Following the structure of a formula, a bound coordination variable c is visible to a bound strategy variable s if s is in the scope of the subtree quantifier that introduced c, or if c is a free coordination variable. The strategies the subtree quantifier ranges over are mappings from histories of valuations of the coordination variables visible to s to a (Boolean) valuation of s. CL can express all decidable distributed synthesis problems discussed in Section 4.4. In practice, many more synthesis problems can be expressed in CL (and are, hence, decidable), even though their system architecture does not fall into a decidable class. Suppose, for example, that the outputs c and d of processes p1 and p2, respectively, in the undecidable architecture A0 from Fig. 7 are independent of each other and only depend on their respective inputs a and b. Then, the synthesis problem can be expressed as the CL formula {a}∃c. ϕ(a, c) ∧ {b}∃d. ψ(b, d), where the independence of the outputs is reflected by the fact that the two conjuncts do not have any shared variables and can, hence, be evaluated independently of each other.
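The informedness requirement behind this example, namely that each strategy reads only the history of its visible coordination variables, can be sketched as follows (the concrete strategies are made-up illustrations, not part of CL itself):

```python
# Sketch of the CL example above: the strategy for c sees only the history
# of a, the strategy for d only the history of b.
def strategy_c(history_a):
    # a Boolean valuation of c, computed from a's history alone
    return history_a[-1] if history_a else False

def strategy_d(history_b):
    return any(history_b)

# Because phi(a, c) and psi(b, d) share no variables, each strategy can be
# evaluated against its conjunct on its own input history, independently.
print(strategy_c([True, False, True]))  # depends on a only
print(strategy_d([False, False]))       # depends on b only
```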
5. Towards Practical Synthesis Algorithms

Even with the reduction in complexity from nonelementary for S1S to doubly exponential for LTL, the complexity challenge was not solved: for practical purposes, doubly exponential is still an intractable complexity. The quest for more practical synthesis methods, which began about a decade ago, has so far followed three main lines of research. The first line is concerned with the input to the synthesis problem, the specification logic. The goal is to find fragments of specification languages like LTL that allow for faster synthesis algorithms, while still being sufficiently expressive for the specification of relevant synthesis problems. The second line is concerned with the output of the synthesis problem, the implementation. Bounded synthesis restricts the size of the implementation and thus forces the synthesis algorithm to search for small and, hence, easy-to-find solutions. The third line of work towards practical algorithms is concerned with the internal representation of the synthesis problem. Symbolic and SAT-based approaches can, in certain cases, reduce the required memory exponentially.

5.1. Efficient Fragments

A fruitful starting point for “synthesis-friendly” specification logics is the observation that certain games, such as Büchi games and parity games with a constant number of
colors, can be solved in polynomial time. Synthesis algorithms based on such games are often referred to as “polynomial-time” even though, strictly speaking, the polynomial-time complexity is in the size of the game arena, which is typically at least exponential in the size of its logical description. A widely used synthesis-friendly fragment of LTL is GR(1), short for Generalized Reactivity (1), which was introduced in 2006 by Piterman, Pnueli, and Sa’ar [73]. GR(1) generalizes other fragments of LTL considered earlier [74,75]. For a given set of input variables I and output variables O, a GR(1) formula is an implication

A1 ∧ A2 ∧ . . . ∧ Am → G1 ∧ G2 ∧ . . . ∧ Gn

of a conjunction of assumptions Ai, i = 1 . . . m, and a conjunction of guarantees Gi, i = 1 . . . n. The guarantees are restricted to the following types of formulas: (1) initialization properties, which are state formulas, i.e., formulas without temporal operators; these formulas are used to specify the initial state of the system; (2) safety properties of the form □(ϕ → ◯ψ), where ϕ and ψ are state formulas; these properties are used to describe the evolution of the system; and (3) liveness properties of the form □◇ϕ, where ϕ is a state formula; these properties describe objectives that need to be accomplished infinitely often. The assumptions have the same form, but are additionally restricted to refer to the environment in the sense that initialization properties are state formulas over I only, and safety properties are of the form □(ϕ → ◯ψ) where ϕ is a state formula over I ∪ O and ψ is a state formula over I. GR(1) specifications define a game, where each position identifies a valuation of the input and output variables, the edges are defined by the safety assumptions and guarantees, and the winning condition for Player 0 is given by an LTL formula based on the liveness properties, i.e., a temporal formula of the form

(□◇ϕ1 ∧ □◇ϕ2 ∧ . . . ∧ □◇ϕk) → (□◇ψ1 ∧ □◇ψ2 ∧ . . . ∧ □◇ψl),
where ϕi, i = 1 . . . k, and ψi, i = 1 . . . l, are state formulas. This game can be translated into a parity game with three colors [76], which can be solved in quadratic time [77]. Ehlers showed that the same principle also applies to an extension of GR(1) with stability properties of the form ◇□ϕ, called Generalized Rabin (1) [78]. Generalized Rabin (1) specifications translate to parity games with five colors, which can also be solved in polynomial time.

A disadvantage of GR(1) and similar fragments of LTL is, however, that they often require an expensive pre-processing step known as pre-synthesis [79]. Pre-synthesis is needed to encode system properties that cannot be specified directly in GR(1). Consider, for example, the response property □(R → ◇G), which cannot be expressed directly in GR(1). To encode the response property in GR(1), pre-synthesis adds an auxiliary output variable H which is true whenever there was a request that has not yet been followed by a grant:

(¬H) ∧ □(((R ∨ H) ∧ ¬G) → ◯H) ∧ □(((¬R ∧ ¬H) ∨ G) → ◯¬H) ∧ (□◇¬H)

The initialization property ¬H and the safety properties □(((R ∨ H) ∧ ¬G) → ◯H) and □(((¬R ∧ ¬H) ∨ G) → ◯¬H) ensure that H is indeed true if and only if there is an
Figure 8. Universal co-Büchi automaton corresponding to the arbiter specification. The states depicted as double circles (b, ⊥, and c) are the rejecting states in F. The abbreviations r1ḡ1, g1g2, r2ḡ2, ḡ1, ḡ2 are used to indicate, in Boolean notation, the input symbols of the transitions; e.g., r1ḡ1 = {{r1, r2, g2}, {r1, g2}, {r1, r2}, {r1}}. ∗ denotes all subsets of {r1, r2, g1, g2}.
unanswered request; the liveness property □◇¬H ensures that no request remains unanswered forever. Despite the syntactic restrictions and the need for pre-synthesis, GR(1) has found many applications in practice.

5.2. Bounded Synthesis

The bounded synthesis problem [14] is a variation of the synthesis problem, where only implementations up to a given bound on the number of states are considered. The motivation for bounded synthesis is to focus on small implementations. In practice, realizable specifications often have reasonably small implementations, even though the theoretical bound on the smallest implementation is large, such as doubly exponential for LTL specifications. Algorithmically, bounded synthesis is closely related to Kupferman and Vardi’s Safraless approach [80], which avoids Safra’s complicated determinization construction with an alternative automata transformation that only preserves realizability, not language equivalence, in the sense that all implementations remain represented. The Safraless construction goes via a universal co-Büchi automaton to a nondeterministic Büchi tree automaton that is non-empty if and only if the original specification is realizable. The Safraless approach is easier to implement than the standard construction, in particular using symbolic representations (see Section 5.3). Bounded synthesis improves the Safraless approach by constructing, instead of a single Büchi tree automaton, a sequence of increasingly larger safety tree automata, corresponding to an increasing bound on the size of the implementation, until a precomputed maximal bound is reached, at which point an implementation, if one exists, must have been found.

The form A = (Q, q0, T, Acc) of a universal automaton over infinite words is the same as that of a nondeterministic automaton (see Section 3.1): Q is a finite set of states, q0 ∈ Q is an initial state, T ⊆ Q × Σ × Q is a set of transitions, and Acc is an acceptance condition.
The transitions are, however, not interpreted existentially, which would mean that, in every run, some applicable transition is chosen; instead, the transitions are interpreted universally: all applicable transitions must be taken. A run on an infinite input word σ0 σ1 σ2 . . . is a Q-labeled ϒ-tree r : ϒ∗ → Q for some set of directions ϒ, such that the root is labeled with the initial state, r(ε) = q0, and for every node n ∈ ϒ∗ and every transition (r(n), σ|n|, q′) ∈ T, q′ occurs on a child of n, i.e., there exists a direction υ ∈ ϒ such
that r(n · υ) = q′. The run is accepting if every branch satisfies the acceptance condition Acc. Algorithms for the translation of LTL formulas to nondeterministic Büchi automata can also be used to translate the formulas to universal co-Büchi automata. For this purpose, one actually translates the negation of the formula of interest; the nondeterministic Büchi automaton obtained from the negation thus recognizes the complement of the intended language. The same automaton, interpreted as a universal co-Büchi automaton, is the exact dual of the nondeterministic automaton and, hence, recognizes again the complement, i.e., the set of models of the original formula. Figure 8 shows an example of a universal co-Büchi automaton. The automaton accepts precisely the models of the arbiter specification.

In bounded synthesis, the universal co-Büchi automaton is approximated with a sequence of deterministic safety automata. The safety automata maintain a set of “currently active” states of the universal automaton and, for each state of the universal automaton, a natural number, which indicates the maximal number of visits to rejecting states on some path in the run tree that leads to this state. The ith safety automaton in the sequence limits this number to i (and rejects if the number is exceeded).
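The counter construction behind the ith safety automaton can be sketched directly. The tiny universal co-Büchi automaton below, for the property "eventually g", is an illustrative assumption, not the arbiter automaton of Fig. 8:

```python
# States of the i-th safety automaton: partial maps from UCW states to the
# maximal number of rejecting-state visits on some path of the run tree.
def step(counters, letter, transitions, rejecting, bound):
    succ = {}
    for q, c in counters.items():
        for (q1, a, q2) in transitions:
            if q1 == q and a == letter:
                c2 = c + (1 if q2 in rejecting else 0)
                if c2 > bound:
                    return None  # counter exceeded: the safety automaton rejects
                succ[q2] = max(succ.get(q2, 0), c2)
    return succ  # may be empty: all universal branches died, word accepted

def safe(word, transitions, rejecting, q0, bound):
    counters = {q0: 1 if q0 in rejecting else 0}
    for letter in word:
        counters = step(counters, letter, transitions, rejecting, bound)
        if counters is None:
            return False
    return True

# UCW for "eventually g": the single rejecting state q0 loops while g is
# absent; reading g ends the (universal) branch.
T = [("q0", "not_g", "q0")]
print(safe(["not_g", "not_g", "g"], T, {"q0"}, "q0", bound=3))  # True
print(safe(["not_g"] * 5, T, {"q0"}, "q0", bound=3))            # False
```

With bound i, exactly the prefixes in which g occurs soon enough stay safe, mirroring the increasing sequence of safety automata described above.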
5.3. Symbolic Synthesis

Symbolic synthesis algorithms are based on compact representations of sets of game positions, using data structures such as binary decision diagrams (BDDs) [81], and-inverter graphs (AIGs) [82], or antichains [83]. The symbolic data structures are used to represent the arena (V, E) and the sets of positions that occur in a fixed-point iteration that computes the winning positions. For a safety game ((V, E), S), the fixed-point computation iterates the uncontrollable predecessors operator, defined as

UPRE(X) = {v ∈ V0 | ∀v′ ∈ V. (v, v′) ∈ E → v′ ∈ X} ∪ {v ∈ V1 | ∃v′ ∈ V. (v, v′) ∈ E ∧ v′ ∈ X}.

The UPRE operator collects all positions of Player 1 where some outgoing edge leads into the given set X of positions, and all positions of Player 0 where all outgoing edges lead into X. The least fixed point of the function τ : X ↦ S̄ ∪ UPRE(X), where S̄ = V \ S denotes the complement of the safe states S, consists of exactly the game positions from which Player 1 has a winning strategy [84]. For slightly more complex synthesis games, such as Büchi games and the synthesis games that result from GR(1) specifications (cf. Section 5.1), the winning positions are similarly computed in a nested fixed-point iteration (cf. [73]). For full LTL specifications, the synthesis problem is often reduced, via the bounded synthesis approach described in Section 5.2, to the solution of safety games [83,85].

There is no general guarantee that the symbolic algorithms outperform algorithms based on an explicit representation of the game arena, and there are, in fact, well-known structures on which, for example, BDDs perform poorly [86]. In most practical situations, however, symbolic methods perform significantly better than explicit methods, especially in synthesis problems with many input variables, which result in game arenas with a large number of edges.
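The fixed-point computation can be written down explicitly (here over an explicit arena rather than a symbolic one; the small example game is an illustrative assumption):

```python
def upre(X, V0, V1, E):
    """Uncontrollable predecessors: Player-0 positions whose successors all
    lie in X, plus Player-1 positions with some successor in X.
    Assumes every position has at least one outgoing edge."""
    succ = {}
    for (u, w) in E:
        succ.setdefault(u, set()).add(w)
    return ({v for v in V0 if succ[v] <= X} |
            {v for v in V1 if succ[v] & X})

def player1_wins(V, V0, V1, E, safe):
    """Least fixed point of tau(X) = (V \\ safe) | UPRE(X)."""
    X = set()
    while True:
        Y = (V - safe) | upre(X, V0, V1, E)
        if Y == X:
            return X
        X = Y

# Safety game: Player 0 owns {0, 1}, Player 1 owns {2, 3}; position 3 is unsafe.
V, V0, V1 = {0, 1, 2, 3}, {0, 1}, {2, 3}
E = {(0, 1), (0, 2), (1, 1), (2, 3), (2, 1), (3, 3)}
print(player1_wins(V, V0, V1, E, safe={0, 1, 2}))  # {2, 3}
```

From position 0, Player 0 avoids the losing region by moving to position 1 and looping there; from positions 2 and 3, Player 1 can force a visit to the unsafe position 3. A symbolic implementation replaces the explicit sets by BDDs, but iterates the same operator.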
5.4. SAT-based Synthesis

The advances in Boolean satisfiability (SAT) and satisfiability modulo theories (SMT) solving during the 1990s and the success of bounded model checking [16,17] and other satisfiability-based verification techniques inspired the development of synthesis techniques based on SAT and SMT solvers. The bounded synthesis approach (cf. Section 5.2) can, for example, be encoded as a constraint system, where the transition function and the output function of the Moore machine to be synthesized are represented as uninterpreted functions [87]. This approach generalizes naturally to the distributed synthesis problem, where each process has its own transition function, and to other variations of the synthesis problem, such as the synthesis of symmetric processes or the synthesis of systems built from component libraries [88]. SMT-based bounded synthesis has been combined with symbolic verification in the lazy synthesis approach [89], where the SMT solver builds candidate implementations based on an incomplete constraint system; the symbolic verifier then compares the candidate against the specification and either proves the correctness of the candidate, which terminates the synthesis process, or finds counterexamples, which are in turn used to refine the constraint system used by the SMT solver.

For safety games, satisfiability checking for quantified Boolean formulas (QBF) combined with computational learning has been used to compute the winning region of Player 0 [90]. QBF solving has also been used to refute realizability, by unrolling the game arena for a bounded number of steps, quantifying universally over the choices of Player 0 and existentially over the choices of Player 1 [91]. This approach again generalizes to the distributed synthesis problem, by unrolling the game arena simultaneously along several paths that are indistinguishable from the perspective of some process [92].

6. Modern Applications

For a long time, reactive synthesis was considered a theoretician’s exercise. It was only about five to ten years ago that serious case studies started to be carried out. With the advances towards practical synthesis algorithms and the growing availability of synthesis tools, it suddenly became clear that reactive synthesis was no longer limited to academic toy examples and should, instead, venture out to industrial applications. At the time of this writing, it is still too early to give a complete picture of the real-world applications of reactive synthesis. The following examples should, however, give a reasonable idea of the breadth and potential of the modern applications.

Hardware. The automatic construction of circuits is still one of the main targets of reactive synthesis. A good illustration of the current state of the art is the synthesis of the arbiter for the AMBA AHB bus. The Advanced Microcontroller Bus Architecture (AMBA) is an open-standard, on-chip interconnect specification for the connection and management of functional blocks in system-on-a-chip (SoC) designs [93]. The Advanced High-performance Bus (AHB) is a bus protocol defined in the AMBA standard that connects up to 16 masters with up to 16 slaves. The role of the arbiter is to control the access to the bus in response to requests by the masters. In 2007, Bloem et al. gave a specification of the AMBA AHB bus as a GR(1) formula with four assumptions and eleven guarantees [18] and reported the synthesis of an arbiter for up to three masters; in follow-up work, the synthesis was scaled to the full set of 16 masters specified in the AMBA standard [94,95].
Device drivers. A device driver is a computer program that provides a software interface to a hardware device. In addition to the manufacturer of the hardware device, operating system vendors also care about the quality of device drivers, and impose quality checks, because driver quality impacts the reputation of the operating system. Manually developing device drivers is an error-prone and often tedious task. Synthesis can be used to construct device drivers automatically from formal specifications of the operating system (OS) interface and of the hardware device. The synthesis game begins with the environment making OS-to-driver requests. In response to these requests, the system selects commands to the device that cause the device to produce the correct response for the given OS request and to continue to operate correctly in the future. Examples of low-level drivers that have been synthesized successfully are an Intel PRO/1000 Ethernet controller, a UART NS16450 Linux driver, and an SD host controller [19].

Robotics. Reactive synthesis is used to generate controllers for robotic mission and motion planning for complex robot behaviors. The advantage of reactive synthesis over traditional hierarchical planning is that the robot is guaranteed to achieve the desired task if it is feasible. In the LTLMoP tool [96], the user defines a task for a robot by drawing a workspace map and then defining goals, such as search and rescue, coverage, or collision avoidance, that refer to the regions of the map. This task specification is translated into a GR(1) formula. The atomic propositions of this formula include propositions that refer to the robot’s sensor readings, which are controlled by the environment. If the formula is realizable, the resulting strategy is combined with continuous control handlers to create a hybrid controller, which is then used to control a real or simulated robot.
7. Conclusions

This article went on a journey through more than 50 years of research on the synthesis problem; we traced the evolution of Church’s problem from the theoretical challenge posed in 1957 to practical synthesis algorithms and modern applications. Each of the three waves of inquiry into Church’s problem, the early solutions during the 1960s, the synthesis algorithms for the temporal logics starting in the 1980s, and the quest for practical algorithms over the past decade, has brought enormous progress. In terms of complexity, we have gone from “foredoomed” problems to algorithms with substantial but reasonable complexity; the running times are somewhere between polynomial and exponential, provided that certain realistic assumptions are satisfied, such as specifications given in a synthesis-friendly fragment like GR(1), or implementations that are sufficiently small to be discovered quickly by SMT-based bounded synthesis. For distributed systems, we have gone from isolated decidability results in restricted system architectures to uniform algorithms and a comprehensive logical representation that covers all decidable cases. In terms of applications, academic toy examples have started to give way to real design problems from industry. In short, we have gone from an open theoretical problem to a field with modern applications, practical algorithms, and the potential to revolutionize the development process for reactive systems.
References

[1] A. Church. Applications of recursive arithmetic to the problem of circuit synthesis. In Summaries of the Summer Institute of Symbolic Logic, volume 1, pages 3–50. Cornell Univ., Ithaca, NY, 1957.
[2] J. Richard Büchi and Lawrence H. Landweber. Solving sequential conditions by finite-state strategies. Transactions of the American Mathematical Society, 138, 1969.
[3] L. Stockmeyer. The Complexity of Decision Problems in Automata Theory and Logic. PhD thesis, Massachusetts Institute of Technology, 1974.
[4] Amir Pnueli and Roni Rosner. Distributed reactive systems are hard to synthesize. In Proc. FOCS’90, pages 746–757, 1990.
[5] Aaron Bohy, Véronique Bruyère, Emmanuel Filiot, Naiyong Jin, and Jean-François Raskin. Acacia+, a tool for LTL synthesis. In P. Madhusudan and Sanjit A. Seshia, editors, CAV, volume 7358 of LNCS, pages 652–657. Springer, 2012.
[6] Roderick Paul Bloem, Hans-Jürgen Gamauf, Georg Hofferek, Bettina Könighofer, and Robert Könighofer. Synthesizing robust systems with RATSY. In SYNT 2012, volume 84 of Electronic Proceedings in Theoretical Computer Science, pages 47–53. Open Publishing Association, 2012.
[7] Rüdiger Ehlers. Unbeast: Symbolic bounded synthesis. In Parosh Aziz Abdulla and K. Rustan M. Leino, editors, TACAS, volume 6605 of Lecture Notes in Computer Science, pages 272–275. Springer, 2011.
[8] Roderick Bloem, Rüdiger Ehlers, and Swen Jacobs. The synthesis competition. http://www.syntcomp.org/, July 2014.
[9] Michael Oser Rabin. Automata on Infinite Objects and Church’s Problem. American Mathematical Society, Boston, MA, USA, 1972.
[10] Martín Abadi, Leslie Lamport, and Pierre Wolper. Realizable and unrealizable specifications of reactive systems. In Proceedings of the 16th International Colloquium on Automata, Languages and Programming, ICALP ’89, pages 1–17, London, UK, 1989. Springer-Verlag.
[11] Rajeev Alur, Thomas A. Henzinger, and Orna Kupferman. Alternating-time temporal logic. In Willem P.
de Roever, Hans Langmaack, and Amir Pnueli, editors, COMPOS, volume 1536 of Lecture Notes in Computer Science, pages 23–60. Springer, 1997.
[12] Krishnendu Chatterjee, Thomas A. Henzinger, and Nir Piterman. Strategy logic. Inf. Comput., 208(6):677–693, 2010.
[13] Bernd Finkbeiner and Sven Schewe. Coordination logic. In Anuj Dawar and Helmut Veith, editors, CSL, volume 6247 of Lecture Notes in Computer Science, pages 305–319. Springer, 2010.
[14] Sven Schewe and Bernd Finkbeiner. Bounded synthesis. In Proc. ATVA, pages 474–488. Springer-Verlag, 2007.
[15] J. R. Burch, E. M. Clarke, K. L. McMillan, D. L. Dill, and L. J. Hwang. Symbolic model checking: 10^20 states and beyond. In Logic in Computer Science, 1990. LICS ’90, Proceedings., Fifth Annual IEEE Symposium on, pages 428–439, Jun 1990.
[16] F. Copty, L. Fix, E. Giunchiglia, G. Kamhi, A. Tacchella, and M. Vardi. Benefits of bounded model checking at an industrial setting. In Proceedings of the 13th International Conference on Computer Aided Verification (CAV 2001), 18–22 July, Paris, France, LNCS, pages 436–453. Springer-Verlag, 2001.
[17] Armin Biere, Alessandro Cimatti, Edmund M. Clarke, Ofer Strichman, and Yunshan Zhu. Bounded model checking. Advances in Computers, 58:118–149, 2003.
[18] Roderick Bloem, Stefan Galler, Barbara Jobstmann, Nir Piterman, Amir Pnueli, and Martin Weiglhofer. Automatic hardware synthesis from specifications: A case study. In Proceedings of the Conference on Design, Automation and Test in Europe (DATE), pages 1188–1193, 2007.
[19] Mona Vij, John Keys, Arun Raghunath, Scott Hahn, Vincent Zimmer, Leonid Ryzhyk, Adam Christopher Walker, and Alexander Legg. Device driver synthesis. Intel Technology Journal, 17(2):136–157, Dec 2013.
[20] Alonzo Church. Logic, arithmetic, and automata. In Proc. Internat. Congr. Mathematicians (Stockholm, 1962), pages 23–35. Inst. Mittag-Leffler, Djursholm, 1963.
[21] Cordell Green. Application of theorem proving to problem solving.
In Proceedings of the 1st International Joint Conference on Artificial Intelligence, IJCAI’69, pages 219–239, San Francisco, CA, USA, 1969. Morgan Kaufmann Publishers Inc.
[22] Zohar Manna and Richard Waldinger. A deductive approach to program synthesis. ACM Trans. Program. Lang. Syst., 2(1):90–121, January 1980.
[23] Armando Solar-Lezama. Program sketching. STTT, 15(5-6):475–495, 2013.
[24] J. Richard Büchi. Weak second-order arithmetic and finite automata. Mathematical Logic Quarterly, 6(1-6):66–92, 1960.
[25] Calvin C. Elgot. Decision problems of finite automata design and related arithmetics. Transactions of the American Mathematical Society, 98(1):21–51, 1961.
[26] Julius R. Büchi. On a decision method in restricted second order arithmetic. In Ernest Nagel, Patrick Suppes, and Alfred Tarski, editors, Proceedings of LMPS, pages 1–11. Stanford University Press, June 1962.
[27] A. W. Mostowski. Regular expressions for infinite trees and a standard form of automata. In Andrzej Skowron, editor, Computation Theory, volume 208 of Lecture Notes in Computer Science, pages 157–168. Springer Berlin Heidelberg, 1985.
[28] Robert McNaughton. Testing and generating infinite sequences by a finite automaton. Information and Control, 9(5):521–530, 1966.
[29] S. Safra. On the complexity of ω-automata. In Proceedings of the 29th Annual Symposium on Foundations of Computer Science, SFCS ’88, pages 319–327, Washington, DC, USA, 1988. IEEE Computer Society.
[30] M. Michel. Complementation is more difficult with automata on infinite words. Technical report, CNET, 1988.
[31] Nir Piterman. From nondeterministic Büchi and Streett automata to deterministic parity automata. In LICS, pages 255–264. IEEE Computer Society, 2006.
[32] R. McNaughton. Project MAC Rep. Technical report, MIT, 1965.
[33] Yuri Gurevich and Leo Harrington. Trees, automata, and games. In Proceedings of the Fourteenth Annual ACM Symposium on Theory of Computing, STOC ’82, pages 60–65, New York, NY, USA, 1982. ACM.
[34] Helmut Lescow. On polynomial-size programs winning finite-state games. In Pierre Wolper, editor, Computer Aided Verification, volume 939 of Lecture Notes in Computer Science, pages 239–252. Springer Berlin Heidelberg, 1995.
[35] E. Allen Emerson. Automata, tableaux, and temporal logics.
In Rohit Parikh, editor, Logics of Programs, volume 193 of Lecture Notes in Computer Science, pages 79–88. Springer Berlin Heidelberg, 1985.
[36] E. A. Emerson and C. S. Jutla. Tree automata, mu-calculus and determinacy. In Foundations of Computer Science, 1991. Proceedings., 32nd Annual Symposium on, pages 368–377, Oct 1991.
[37] Florian Horn. Explicit Muller games are PTIME. In Ramesh Hariharan, Madhavan Mukund, and V. Vinay, editors, IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science, FSTTCS 2008, December 9-11, 2008, Bangalore, India, volume 2 of LIPIcs, pages 235–243. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2008.
[38] E. Allen Emerson and Chin-Laung Lei. Modalities for model checking: Branching time logic strikes back. Sci. Comput. Program., 8(3):275–306, June 1987.
[39] Paul Hunter and Anuj Dawar. Complexity bounds for regular games. In Joanna Jędrzejowicz and Andrzej Szepietowski, editors, Mathematical Foundations of Computer Science 2005, volume 3618 of Lecture Notes in Computer Science, pages 495–506. Springer Berlin Heidelberg, 2005.
[40] Catriel Beeri. On the membership problem for functional and multivalued dependencies in relational databases. ACM Trans. Database Syst., 5(3):241–259, September 1980.
[41] Neil Immerman. Number of quantifiers is better than number of tape cells. Journal of Computer and System Sciences, 22(3):384–406, 1981.
[42] Krishnendu Chatterjee and Monika Henzinger. An O(n^2) time algorithm for alternating Büchi games. In Proceedings of the Twenty-third Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’12, pages 1386–1399. SIAM, 2012.
[43] E. Allen Emerson and Charanjit S. Jutla. The complexity of tree automata and logics of programs. SIAM J. Comput., 29(1):132–158, September 1999.
[44] E. A. Emerson, C. S. Jutla, and A. P. Sistla. On model-checking for fragments of μ-calculus.
In Costas Courcoubetis, editor, Computer Aided Verification, volume 697 of Lecture Notes in Computer Science, pages 385–396. Springer Berlin Heidelberg, 1993.
[45] Marcin Jurdziński. Deciding the winner in parity games is in UP ∩ co-UP. Information Processing Letters, 68(3):119–124, 1998.
[46] E. Allen Emerson and Chin-Laung Lei. Efficient model checking in fragments of the propositional mu-calculus (extended abstract). In LICS, pages 267–278. IEEE Computer Society, 1986.
[47] Robert McNaughton. Infinite games played on finite graphs. Annals of Pure and Applied Logic,
65(2):149–184, 1993.
[48] Sven Schewe. Solving parity games in big steps. In V. Arvind and Sanjiva Prasad, editors, FSTTCS 2007: Foundations of Software Technology and Theoretical Computer Science, volume 4855 of Lecture Notes in Computer Science, pages 449–460. Springer Berlin Heidelberg, 2007.
[49] Walter Ludwig. A subexponential randomized algorithm for the simple stochastic game problem. Inf. Comput., 117(1):151–155, February 1995.
[50] M. Jurdziński, M. Paterson, and U. Zwick. A deterministic subexponential algorithm for solving parity games. SIAM Journal on Computing, 38(4):1519–1532, 2008.
[51] Jens Vöge and Marcin Jurdziński. A discrete strategy improvement algorithm for solving parity games. In E. Allen Emerson and Aravinda Prasad Sistla, editors, Computer Aided Verification, volume 1855 of Lecture Notes in Computer Science, pages 202–215. Springer Berlin Heidelberg, 2000.
[52] Oliver Friedmann. An exponential lower bound for the latest deterministic strategy iteration algorithms. Logical Methods in Computer Science, 7(3), 2011.
[53] M. O. Rabin. Decidability of second-order theories and automata on infinite trees. Trans. Amer. Math. Soc., 141:1–35, 1969.
[54] R. Hossley and Charles Rackoff. The emptiness problem for automata on infinite trees. In SWAT (FOCS), pages 121–124. IEEE Computer Society, 1972.
[55] E. Allen Emerson and Charanjit S. Jutla. The complexity of tree automata and logics of programs. In FOCS, pages 328–337. IEEE Computer Society, 1988.
[56] A. Pnueli and R. Rosner. On the synthesis of a reactive module. In Proceedings of the 16th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL ’89, pages 179–190, New York, NY, USA, 1989. ACM.
[57] Amir Pnueli. The temporal logic of programs. In Foundations of Computer Science, 1977., 18th Annual Symposium on, pages 46–57, Oct 1977.
[58] Moshe Y. Vardi and Pierre Wolper. Reasoning about infinite computations. Inf. Comput., 115(1):1–37, November 1994.
[59] Rob Gerth, Doron Peled, Moshe Y. Vardi, and Pierre Wolper. Simple on-the-fly automatic verification of linear temporal logic. In Proceedings of the Fifteenth IFIP WG6.1 International Symposium on Protocol Specification, Testing and Verification XV, pages 3–18, London, UK, UK, 1996. Chapman & Hall, Ltd. [60] Fabio Somenzi and Roderick Bloem. Efficient B¨uchi automata from LTL formulae. In CAV 2000, LNCS 1855:247263. Springer-Verlag, 2000. [61] Paul Gastin and Denis Oddoux. Fast LTL to B¨uchi automata translation. In G´erard Berry, Hubert Comon, and Alain Finkel, editors, Proceedings of the 13th International Conference on Computer Aided Verification (CAV’01), volume 2102 of Lecture Notes in Computer Science, pages 53–65, Paris, France, July 2001. Springer. [62] E.Allen Emerson and Joseph Y. Halpern. Decision procedures and expressiveness in the temporal logic of branching time. Journal of Computer and System Sciences, 30(1):1 – 24, 1985. [63] Edmund M. Clarke and E. Allen Emerson. Design and synthesis of synchronization skeletons using branching time temporal logic. In Dexter Kozen, editor, Logics of Programs, volume 131 of Lecture Notes in Computer Science, pages 52–71. Springer Berlin Heidelberg, 1982. [64] Moshe Y Vardi and Pierre Wolper. Automata-theoretic techniques for modal logics of programs. J. Comput. Syst. Sci., 32(2):183–221, April 1986. [65] Michael J. Fischer and Richard E. Ladner. Propositional dynamic logic of regular programs. Journal of Computer and System Sciences, 18(2):194 – 211, 1979. [66] E. Allen Emerson. Temporal and modal logic. In Jan van Leeuwen, editor, Handbook of Theoretical Computer Science (Vol. B), pages 995–1072. MIT Press, Cambridge, MA, USA, 1990. [67] John H. Reif. The complexity of two-player games of incomplete information. J. Comput. Syst. Sci., 29(2):274–301, 1984. [68] Orna Kupferman and Moshe Y. Vardi. Synthesis with incomplete information. In Proc. ICTL’97, 1997. [69] Orna Kupferman and Moshe Y. Vardi. 
Synthesizing distributed systems. In Proc. LICS’01, July 2001. [70] Bernd Finkbeiner and Sven Schewe. Uniform distributed synthesis. In IEEE Symposium on Logic in Computer Science, pages 321–330, June 2005. [71] Fabio Mogavero, Aniello Murano, and Moshe Y. Vardi. Reasoning about strategies. In Kamal Lodaya and Meena Mahajan, editors, FSTTCS, volume 8 of LIPIcs, pages 133–144. Schloss Dagstuhl - LeibnizZentrum fuer Informatik, 2010. [72] Rajeev Alur, Thomas A. Henzinger, and Orna Kupferman. Alternating-time temporal logic. J. ACM,
B. Finkbeiner / Synthesis of Reactive Systems
97
49(5):672–713, 2002. [73] Nir Piterman, Amir Pnueli, and Yaniv Sa’ar. Synthesis of reactive(1) designs. In E.Allen Emerson and KedarS. Namjoshi, editors, Verification, Model Checking, and Abstract Interpretation, volume 3855 of Lecture Notes in Computer Science, pages 364–380. Springer Berlin Heidelberg, 2006. [74] A Pnueli, E Asarin, O Maler, and J Sifakis. Controller synthesis for timed automata. Proc. System Structure and Control. Elsevier, 1998. [75] Rajeev Alur and Salvatore La Torre. Deterministic generators and games for LTL fragments. ACM Trans. Comput. Logic, 5(1):1–25, January 2004. [76] Roderick Bloem, Krishnendu Chatterjee, Karin Greimel, ThomasA. Henzinger, and Barbara Jobstmann. Robustness in the presence of liveness. In Tayssir Touili, Byron Cook, and Paul Jackson, editors, Computer Aided Verification, volume 6174 of Lecture Notes in Computer Science, pages 410–424. Springer Berlin Heidelberg, 2010. [77] Luca de Alfaro and Marco Faella. An accelerated algorithm for 3-color parity games with an application to timed games. In Werner Damm and Holger Hermanns, editors, Computer Aided Verification, volume 4590 of Lecture Notes in Computer Science, pages 108–120. Springer Berlin Heidelberg, 2007. [78] R¨udiger Ehlers. Generalized Rabin(1) synthesis with applications to robust system synthesis. In Mihaela Bobaru, Klaus Havelund, Gerard Holzmann, and Rajeev Joshi, editors, Proceedings of the 3rd NASA Formal Methods Symposium, volume 6617 of Lecture Notes in Computer Science, pages 101–115. Springer-Verlag, 2011. [79] S. Sohail and F. Somenzi. Safety first: A two-stage algorithm for ltl games. In Formal Methods in Computer-Aided Design, 2009. FMCAD 2009, pages 77–84, Nov 2009. [80] O. Kupferman and M.Y. Vardi. Safraless decision procedures. In Foundations of Computer Science, 2005. FOCS 2005. 46th Annual IEEE Symposium on, pages 531–540, Oct 2005. [81] Randal E. Bryant. Graph-based algorithms for boolean function manipulation. IEEE Trans. 
Comput., 35(8):677–691, August 1986. [82] Andreas Kuehlmann, Malay K Ganai, and Viresh Paruthi. Circuit-based boolean reasoning. In Design Automation Conference, 2001. Proceedings, pages 232–237. IEEE, 2001. [83] Emmanuel Filiot, Naiyong Jin, and Jean-Franois Raskin. An antichain algorithm for ltl realizability. In Ahmed Bouajjani and Oded Maler, editors, Computer Aided Verification, volume 5643 of Lecture Notes in Computer Science, pages 263–277. Springer Berlin Heidelberg, 2009. [84] Wolfgang Thomas. On the synthesis of strategies in infinite games. In Ernst W. Mayr and Claude Puech, editors, STACS 95, volume 900 of Lecture Notes in Computer Science, pages 1–13. Springer Berlin Heidelberg, 1995. [85] R¨udiger Ehlers. Symbolic bounded synthesis. In T. Touili, B. Cook, and P. Jackson, editors, 22nd International Conference on Computer Aided Verification, volume 6174 of LNCS, pages 365–379. Springer Verlag, 2010. [86] Armin Biere, Alessandro Cimatti, Edmund Clarke, and Yunshan Zhu. Symbolic model checking without bdds. In W.Rance Cleaveland, editor, Tools and Algorithms for the Construction and Analysis of Systems, volume 1579 of Lecture Notes in Computer Science, pages 193–207. Springer Berlin Heidelberg, 1999. [87] Bernd Finkbeiner and Sven Schewe. SMT-based synthesis of distributed systems. In Proc. AFM, 2007. [88] Bernd Finkbeiner and Sven Schewe. Bounded synthesis. International Journal on Software Tools for Technology Transfer, 15(5-6):519–539, 2013. [89] Bernd Finkbeiner and Swen Jacobs. Lazy synthesis. In 13th International Conference on Verification, Model Checking, and Abstract Interpretation (VMCAI 2012), pages 219–234. Springer Verlag, 2012. [90] Bernd Becker, R¨udiger Ehlers, Matthew Lewis, and Paolo Marin. Allqbf solving by computational learning. In S. Chakraborty and M. Mukund, editors, 10th International Symposium on Automated Technology for Verification and Analysis (ATVA), volume 7561 of LNCS, pages 370–384. Springer Verlag, 2012. 
[91] Marc Herbstritt, Bernd Becker, and Christoph Scholl. Advanced sat-techniques for bounded model checking of blackbox designs. Fifth International Workshop on Microprocessor Test and Verification (MTV’04), 0:37–44, 2006. [92] Bernd Finkbeiner and Leander Tentrup. Detecting unrealizable specifications of distributed systems. ´ In Erika Abrah´ am and Klaus Havelund, editors, TACAS, volume 8413 of Lecture Notes in Computer Science, pages 78–92. Springer, 2014. [93] ARM Ltd. AmbaTM specification (rev. 2). Available at http://www.arm.com, 1999. [94] Roderick Bloem, Stefan Galler, Barbara Jobstmann, Nir Piterman, Amir Pnueli, and Martin Weiglhofer.
98
B. Finkbeiner / Synthesis of Reactive Systems
Specify, compile, run: Hardware from psl. Electron. Notes Theor. Comput. Sci., 190(4):3–16, November 2007. [95] Yashdeep Godhal, Krishnendu Chatterjee, and ThomasA. Henzinger. Synthesis of amba ahb from formal specification: a case study. International Journal on Software Tools for Technology Transfer, 15(56):585–601, 2013. [96] H. Kress-Gazit, G.E. Fainekos, and G.J. Pappas. Temporal-logic-based reactive mission and motion planning. Robotics, IEEE Transactions on, 25(6):1370–1381, Dec 2009.
Dependable Software Systems Engineering
J. Esparza et al. (Eds.)
IOS Press, 2016
© 2016 The authors and IOS Press. All rights reserved.
doi:10.3233/978-1-61499-627-9-99
Between Testing and Verification: Dynamic Software Model Checking

Patrice GODEFROID
Microsoft Research

Abstract. Dynamic software model checking consists of adapting model checking into a form of systematic testing that is applicable to industrial-size software. Over the last two decades, dozens of tools following this paradigm have been developed for checking concurrent and data-driven software. Compared to traditional software testing, dynamic software model checking provides better coverage, but is more computationally expensive. Compared to more general forms of program verification like interactive theorem proving, this approach provides more limited verification guarantees, but is cheaper due to its higher level of automation. Dynamic software model checking thus offers an attractive practical trade-off between testing and formal verification. This paper presents a brief introduction to dynamic software model checking.

Keywords. Software Model Checking, Systematic Testing, Verification, Bug Finding
1. Introduction

Model Checking was introduced more than 30 years ago [15,61] as an automated verification technique for checking the correctness of concurrent reactive systems. Its basic idea is conceptually simple: when designing, implementing and testing a concurrent reactive system, check its correctness by modeling each component of the system using some form of (extended) finite-state machine, and then by systematically exploring the product of such finite-state machines, often called the state space of the system. The state space is a directed graph whose nodes represent states of the whole system, and whose edges represent state changes. Branching in the graph represents either branching in individual state machine components or nondeterminism due to concurrency, i.e., different orderings of actions performed by different components. The state space of a system thus represents the joint behavior of all its components interacting with each other in all possible ways. By systematically exploring its state space, model checking can reveal unexpected possible interactions between components of the system's model, and hence reveal potential flaws in the actual system.

Model checking thus means to check whether a system satisfies a property by exploring its state space. Historically, the term "model checking" was introduced to mean "check whether a system is a model of a temporal logic formula", in the classic logical sense. In this paper, we will use the term "model checking" in a broad sense, to denote any systematic state-space exploration technique that can be used for verification purposes when it is exhaustive.
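To make the product construction concrete, here is a minimal sketch. The two toy machines, their action names, and the rule that actions shared by both alphabets synchronize are all invented for illustration, not taken from the text.

```python
# Two toy finite-state machines, each given as (initial state, transition
# table mapping (state, action) -> next state). An action present in both
# alphabets synchronizes the machines (an assumption made for this sketch).
M1 = ("idle", {("idle", "req"): "wait", ("wait", "ack"): "idle"})
M2 = ("free", {("free", "req"): "busy", ("busy", "ack"): "free"})

def enabled(table, state):
    return {a for (q, a) in table if q == state}

def successors(s1, s2):
    """Joint moves of the product state (s1, s2)."""
    d1, d2 = M1[1], M2[1]
    alpha1 = {a for (_, a) in d1}
    alpha2 = {a for (_, a) in d2}
    for a in enabled(d1, s1) | enabled(d2, s2):
        if a in alpha1 and a in alpha2:
            # shared action: fires only when both machines enable it
            if a in enabled(d1, s1) and a in enabled(d2, s2):
                yield a, (d1[(s1, a)], d2[(s2, a)])
        elif a in enabled(d1, s1):
            yield a, (d1[(s1, a)], s2)      # local to M1
        else:
            yield a, (s1, d2[(s2, a)])      # local to M2

def explore():
    """Systematic exploration of the state space: a directed graph whose
    nodes are global states and whose edges are state changes."""
    start = (M1[0], M2[0])
    seen, stack, edges = {start}, [start], []
    while stack:
        s = stack.pop()
        for a, t in successors(*s):
            edges.append((s, a, t))
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return seen, edges

states, edges = explore()   # 2 reachable global states for these machines
```

Since both actions here are shared, the product simply alternates between ("idle", "free") and ("wait", "busy"); with local actions or true concurrency the same exploration loop would enumerate all interleavings.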
P. Godefroid / Between Testing and Verification: Dynamic Software Model Checking
Model checking and testing have a lot in common. In practice, the main value of both is to find bugs in programs. And, if no bugs are to be found, both techniques increase the confidence that the program is correct.

In theory, model checking is a form of formal verification based on exhaustive state-space exploration. As famously stated by Dijkstra decades ago, "testing can only find bugs, not prove their absence". In contrast, verification (including exhaustive testing) can prove the absence of bugs. This is the key feature that distinguishes verification, including model checking, from testing.

In practice, however, the verification guarantees provided by model checking are often limited: model checking checks only a program, or a manually-written model of a program, for some specific properties, under some specific environment assumptions, and the checking itself is usually approximate for nontrivial programs and properties when an exact answer is too expensive to compute. Therefore, model checking should be viewed in practice more as a form of "super testing" rather than as a form of formal verification in the strict mathematical sense. Compared to testing, model checking provides better coverage, but is more computationally expensive. Compared to more general forms of program verification like interactive theorem proving, model checking provides more limited verification guarantees, but is cheaper due to its higher level of automation. Model checking thus offers an attractive practical trade-off between testing and formal verification.

The key practical strength of model checking is that it is able to find bugs that would be extremely hard to find (and reproduce) with traditional testing. This strength has been demonstrated over and over again during the last three decades when applying model checking tools to check the correctness of hardware and software designs, and more recently software implementations.
It also explains the gradual adoption of model checking in various industrial environments (hardware industry, safety-critical systems, software industry).

Over the last 20 years, significant progress has been made on how to apply model checking to software, i.e., precise descriptions of software implementations written in programming languages (like C, C++ or Java) and of realistic sizes (often hundreds of thousands of lines of code or more). Unlike traditional model checking, a software model checker does not require a user to manually write an abstract model of the software program to be checked in some modeling language, but instead works directly on a program implementation written in a full-fledged programming language.

As illustrated in Figure 1, there are essentially two main approaches to software model checking, i.e., two ways to broaden the scope of model checking from modeling languages to programming languages. One approach uses adaptation: it consists of adapting model checking into a form of systematic testing that is applicable to industrial-size software (e.g., [26,71,53,32]). Another approach uses abstraction: it consists of automatically extracting an abstract model out of a software application by statically analyzing its code, and then of analyzing this model using traditional model-checking algorithms (e.g., [3,18,55,44]).

The aim of this paper is to present a brief introduction to the first approach to software model checking. We discuss the main ideas and techniques used to systematically test and explore the state spaces of concurrent and data-driven software. This paper only provides a brief introduction to this research area, not an exhaustive survey.
[Figure 1. Two main approaches to software model checking: "abstraction" maps programming languages to modeling languages and then applies classical model checking (state-space exploration), while "adaptation" applies state-space exploration directly to programming languages in the form of systematic testing.]
2. Dynamic Software Model Checking: Dealing with Concurrency

In this section, we present techniques inspired by model checking for systematically testing concurrent software. We discuss nondeterminism due to concurrency before nondeterminism due to data inputs (in the next section) for historical reasons. Indeed, model checking was first conceived for reasoning about concurrent reactive systems [15,61], and software model checking via systematic testing was also first proposed for concurrent programs [26].

2.1. Software Model Checking Using a Dynamic Semantics

Just as a traditional model checker explores the state space of a system modeled as the product of concurrent finite-state components, one can systematically explore the "product" of concurrently executing operating-system processes by using a run-time scheduler to drive the entire software application through the states and transitions of its state space [26].

The product, or state space, of concurrently executing processes can be defined dynamically as follows. Consider a concurrent system composed of a finite set of processes and a finite set of communication objects. Each process executes a sequence of operations described in a sequential program written in any full-fledged programming language (such as C, C++, etc.). Such sequential programs are deterministic: every execution of the program on the same input data performs the same sequence of operations. We assume that processes communicate with each other by performing atomic operations on communication objects. Examples of communication objects are shared variables, semaphores, and FIFO buffers. Operations on communication objects are called visible operations, while other operations are by default called invisible. The execution of an operation is said to be blocking if it cannot be completed; for instance, waiting for the reception of a message blocks until a message is received. We assume that only executions of visible operations may be blocking.
At any time, the concurrent system is said to be in a state. The system is said to be in a global state when the next operation to be executed by every process in the system is a visible operation. Every process in the system is always expected to eventually attempt
executing a visible operation.¹ This assumption implies that initially, after the creation of all the processes of the system, the system can reach a first and unique global state s0, called the initial global state of the system.

A process transition, or transition for short, is defined as one visible operation followed by a finite sequence of invisible operations performed by a single process and ending just before a visible operation. Let T denote the set of all transitions of the system. A transition is said to be disabled in a global state s when the execution of its visible operation is blocking in s. Otherwise, the transition is said to be enabled in s. A transition t enabled in a global state s can be executed from s. Since the number of invisible operations in a transition is finite, the execution of an enabled transition always terminates. When the execution of t from s is completed, the system reaches a global state s′, called the successor of s by t and denoted by s →t s′.²

We can now define the state space of a concurrent system satisfying our assumptions as the transition system AG = (S, Δ, s0) representing its set of reachable global states and the (process) transitions that are possible between these:
• S is the set of global states of the system,
• Δ ⊆ S × S is the transition relation defined as follows: (s, s′) ∈ Δ iff ∃t ∈ T : s →t s′,
• s0 is the initial global state of the system.

We emphasize that an element of Δ, or state-space transition, corresponds to the execution of a single process transition t ∈ T of the system. Remember that we use here the term "transition" to refer to a process transition, not to a state-space transition. Note how (process) transitions are defined as maximal sequences of intra-process "local" operations from one visible operation to the next. Interleavings of those local operations are not considered as part of the state space.

It can be proved [26] that, for any concurrent system satisfying the above assumptions, exploring only all its global states is sufficient to detect all its deadlocks and assertion violations, i.e., exploring all its non-global states is not necessary. This result justifies the choice of the specific dynamic semantics described in this section. Deadlocks are states where the execution of the next operation of every process in the system is blocking. Deadlocks are a notorious problem in concurrent systems, and can be difficult to detect through conventional testing. Assertions can be specified by the user in the code of any process with the special visible operation "assert". It takes as its argument a boolean expression that can test and compare the values of variables and data structures local to the process. Many undesirable system properties, such as unexpected message receptions, buffer overflows and application-specific error conditions, can easily be expressed as assertion violations.

Note that we consider here closed concurrent systems, where the environment of one process is formed by the other processes in the system. This implies that, in the case of a single "open" reactive system, the environment in which this system operates has to be represented somehow, possibly using other processes. In practice, a complete representation of such an environment may not be available, or may be very complex. It is then convenient to use a simplified representation of the environment, or test driver or mock-up, to simulate its behavior. For this purpose, it is useful to introduce a special operation to express a valuable feature of modeling languages, not found in programming languages: nondeterminism. This operation, let us call it nondet³, takes as argument a positive integer n, and returns an integer in [0, n]. The operation is visible and nondeterministic: the execution of a transition starting with nondet(n) may yield up to n + 1 different successor states, corresponding to different values returned by nondet. This operation can be used to represent input data nondeterminism or the effect of input data on the control flow of a test driver. How to deal with input data nondeterminism will be discussed further in Section 3.

¹ If a process does not attempt to execute a visible operation within a given amount of time, an error is reported at run-time.
² Operations on objects (and hence transitions) are deterministic: the execution of a transition t in a state s leads to a unique successor state.

2.2. Systematic Testing with a Run-Time Scheduler

The state space of a concurrent system as defined in the previous section can be systematically explored with a run-time scheduler. This scheduler controls and observes the execution of all the visible operations of the concurrent processes of the system (see Figure 2).

[Figure 2. Overall architecture of a dynamic software model checker for concurrent systems: a run-time scheduler controlling the system processes.]

Every process of the concurrent system to be analyzed is mapped to an operating-system process. Their execution is controlled by the scheduler, which is another process external to the system. The scheduler observes the visible operations executed by processes inside the system, and can suspend their execution. By resuming the execution of (the next visible operation of) one selected system process in a global state, the scheduler can explore one transition in the state space AG of the concurrent system.
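As a sketch of this kind of search, consider a hypothetical two-process program over one shared variable (invented for this example, not taken from the text): P1 writes x and then copies it into y, P2 overwrites x, and we assert y == 1 at the end. A stateless run-time scheduler can be simulated by replaying every interleaving of visible operations from the initial state:

```python
# Each process is a list of named visible operations; an operation
# takes the shared store and updates it. Program and assertion are
# made up for illustration.
P1 = [("x=1", lambda st: st.__setitem__("x", 1)),
      ("y=x", lambda st: st.__setitem__("y", st["x"]))]
P2 = [("x=2", lambda st: st.__setitem__("x", 2))]

def run(schedule):
    """Replay one interleaving from the initial state (a stateless
    search re-executes from scratch instead of storing states)."""
    st = {"x": 0, "y": 0}
    pcs = [0, 0]
    procs = [P1, P2]
    for pid in schedule:
        _, op = procs[pid][pcs[pid]]
        op(st)
        pcs[pid] += 1
    return st

def schedules(lens, prefix=()):
    """Enumerate every interleaving of the processes' operations."""
    if all(l == 0 for l in lens):
        yield prefix
        return
    for pid, l in enumerate(lens):
        if l > 0:
            rest = list(lens)
            rest[pid] -= 1
            yield from schedules(tuple(rest), prefix + (pid,))

# The search reports every interleaving violating "assert y == 1".
violations = [s for s in schedules((len(P1), len(P2)))
              if run(s)["y"] != 1]
```

Of the three interleavings, exactly one (P1's write, then P2's write, then P1's copy) violates the assertion, which is precisely the kind of schedule-dependent bug that is hard to hit with conventional testing.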
Combined with a systematic state-space search algorithm, the run-time scheduler can drive an entire application through all (or many) possible concurrent executions by systematically scheduling all possible interleavings of their communication operations. In order to explore an alternative execution, i.e., to "backtrack" in its search, the run-time scheduler can, for instance, restart the execution of the entire software application in its initial state, and then drive its execution along a different path in its state space. Whenever an error (such as a deadlock or an assertion violation) is detected during the search, a whole-system execution defined by the sequence of transitions that lead to the error state from the initial state can be exhibited to the user. Dynamic software model checkers typically also include an interactive graphical simulator/debugger for replaying executions and following their steps at the instruction or procedure/function level. Values of variables of each process can be examined interactively. The user can also explore interactively any path in the state space of the system with the same set of debugging tools (e.g., see [27]).

³ This operation is called VS_toss in [26].

It is thus assumed that there are exactly two sources of nondeterminism in the concurrent systems considered here: concurrency and calls to the special visible operation nondet, which is used to model nondeterminism as described in the previous section and whose return values are controlled by the run-time scheduler. When this assumption is satisfied, the run-time scheduler has complete control over nondeterminism. It can thus reproduce any execution leading to an error found during a state-space search. It can also guarantee, from a given initial state, complete coverage of the state space up to some depth. Remember that the ability to provide state-space coverage guarantees, even limited ones, is precisely what distinguishes verification, including model checking, from traditional testing, as explained earlier in the introduction. This is why the term "software model checking" was applied to this approach of systematic testing with a run-time scheduler, since eventually it does provide full state-space coverage. Of course, in practice, state spaces can be huge, even infinite. But even then, the state space can always be explored exhaustively up to some depth, which can be increased progressively during state-space exploration using an "iterative deepening" search strategy.

Efficient search algorithms, based on partial-order reduction, have been proposed for exhaustively exploring the state spaces of message-passing concurrent systems up to a "reasonable" depth, say, all executions with up to 50 message exchanges. In practice, such depths are often sufficient to thoroughly exercise implementations of communication protocols and other distributed algorithms. Indeed, exchanging a message is an expensive operation, and most protocols are therefore designed so that few messages are sufficient to exercise most of their functionality. By being able to systematically explore all possible interactions of the implementations of all communicating protocol entities up to tens of message exchanges, this approach to software model checking has repeatedly been proven effective in revealing subtle concurrency-related bugs [27].

2.3. Stateless Vs. Stateful Search

This approach to software model checking for concurrent programs thus adapts model checking into a form of systematic testing that simulates the effect of model checking while being applicable to concurrent processes executing arbitrary code written in full-fledged programming languages (like C, C++, Java, etc.). The only main requirement is that the run-time scheduler must be able to trap operating-system calls related to communication (such as sending or receiving messages) and be able to suspend and resume their executions, hence effectively controlling the scheduling of all processes whenever they attempt to communicate with each other. This approach to software model checking was pioneered in the VeriSoft tool [26]. Because each state of an implementation of a large concurrent software system can require megabytes of storage, VeriSoft does not store states in memory and simply traverses state-space paths in a stateless manner, exactly as in traditional testing. It is shown in [26] that, in order to make a systematic stateless search tractable, partial-order reduction is necessary to avoid re-exploring over and over again parts of the state space reachable by different interleavings of a same concurrent partial-order execution.
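Why partial-order reduction matters for a stateless search can be illustrated with a back-of-the-envelope sketch: interleavings that only reorder independent operations are linearizations of the same partial-order execution, and exploring one representative per equivalence class suffices. The operations below and the independence rule (operations on different objects commute) are invented for the example.

```python
# Visible operations tagged with the communication object they touch;
# two operations are independent iff they touch different objects
# (a deliberate simplification for this sketch).
P1 = [("p1a", "chan1"), ("p1b", "chan1")]
P2 = [("p2a", "chan2"), ("p2b", "chan2")]

def interleavings(a, b):
    """All interleavings of two sequences (per-process order preserved)."""
    if not a:
        yield tuple(b)
        return
    if not b:
        yield tuple(a)
        return
    for tail in interleavings(a[1:], b):
        yield (a[0],) + tuple(tail)
    for tail in interleavings(a, b[1:]):
        yield (b[0],) + tuple(tail)

def canonical(trace):
    """Normal form of an execution: repeatedly swap adjacent independent
    operations into a fixed order, so equivalent interleavings collapse."""
    t = list(trace)
    changed = True
    while changed:
        changed = False
        for i in range(len(t) - 1):
            (n1, o1), (n2, o2) = t[i], t[i + 1]
            if o1 != o2 and n2 < n1:   # independent and out of order
                t[i], t[i + 1] = t[i + 1], t[i]
                changed = True
    return tuple(t)

all_runs = list(interleavings(tuple(P1), tuple(P2)))
classes = {canonical(r) for r in all_runs}
```

Here the two processes touch disjoint objects, so all six interleavings collapse into a single equivalence class: a reduced search would explore one execution instead of six, and the gap grows exponentially with longer processes.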
However, for small to medium-size applications, computing state representations and storing visited states in memory can be tractable, possibly using approximations, especially if the entire state of the operating system can be determined, as is the case when the operating system is a virtual machine. This extension was first proposed in the Java PathFinder tool [71]. This approach limits the size and types of (here Java) programs that can be analyzed, but allows the use of standard model-checking techniques for dealing with state explosion, such as bitstate hashing, stateful partial-order reduction, symmetry reduction, and the use of abstraction techniques.

Another trade-off is to store only partial state representations, such as a hash of a part of each visited state, possibly specified by the user, as explored in the CMC tool [53]. Full state-space coverage with respect to a dynamic semantics defined at the level of operating-system processes can then no longer be guaranteed, even up to some depth, but previously visited partial states can now be detected, and multiple explorations of their successor states can be avoided, which helps focus the remainder of the search on other parts of the state space more likely to contain bugs.

2.4. Systematic Testing for Multi-Threaded Programs

Software model checking via systematic testing is effective for message-passing programs because systematically exploring their state spaces up to tens of message exchanges typically exercises a lot of their functionality. In contrast, this approach is more problematic for shared-memory programs, such as multi-threaded programs where concurrent threads communicate by reading and writing shared variables. Instead of a few well-identifiable message queues, shared-memory communication may involve thousands of communicating objects (e.g., memory addresses shared by different threads) that are hard to identify.
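One way to cope is to identify communicating objects dynamically, by recording which addresses each thread actually reads or writes during an execution. The access trace below is hypothetical; a real tool would collect it by instrumenting loads and stores.

```python
from collections import defaultdict

# Hypothetical trace of observed memory accesses:
# (thread id, address, 'r' for read or 'w' for write).
trace = [
    (1, 0x10, "w"), (1, 0x14, "r"), (2, 0x10, "r"),
    (2, 0x18, "w"), (1, 0x18, "w"), (2, 0x14, "r"),
]

def shared_locations(trace):
    """Dynamically identify communicating objects: addresses touched by
    more than one thread, with at least one of the accesses a write
    (read-read sharing cannot cause a conflict)."""
    readers, writers = defaultdict(set), defaultdict(set)
    for tid, addr, kind in trace:
        (writers if kind == "w" else readers)[addr].add(tid)
    shared = set()
    for addr in set(readers) | set(writers):
        tids = readers[addr] | writers[addr]
        if len(tids) > 1 and writers[addr]:
            shared.add(addr)
    return shared
```

On this trace, 0x10 and 0x18 are flagged (cross-thread accesses involving a write), while 0x14 is not (both accesses are reads). Such run-time information is exactly what lets a dynamic search decide where thread interleavings actually matter.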
Moreover, while systematically exploring all possible executions up to, say, 50 message exchanges can typically cover a large part of the functionality of a protocol implementation, systematically exploring all possible executions up to 50 read/write operations in a multi-threaded program typically covers only a tiny fraction of the program functionality. How to effectively perform software model checking via systematic testing for shared-memory systems is a harder problem and has been the topic of recent research.

Dynamic partial-order reduction (DPOR) [23] dynamically tracks interactions between concurrently-executing threads in order to identify when communication takes place through which shared variables (memory locations). Then, DPOR computes backtracking points where alternative paths in the state space need to be explored, because they might lead to other executions that are not "equivalent" to the current one (i.e., that are not linearizations of the same partial-order execution). In contrast, traditional partial-order reduction [70,57,25] for shared-memory programs would require a static alias analysis to determine which threads may access which shared variables, and such an analysis is hard to compute accurately and cheaply for programs with pointers. DPOR has been extended and implemented in several recent tools [72,54,42,66].

Even with DPOR, state explosion is often still problematic. Another recent approach is to use iterative context bounding, a search-ordering heuristic that explores executions with at most k context switches, where k is a parameter that is iteratively increased [60]. The intuition behind this heuristic is that many concurrency-related bugs in multi-threaded programs seem due to just a few unexpected context switches. This search strategy was first implemented in the Chess tool [54].
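The effect of context bounding can be sketched by counting schedules. The thread lengths below and the counting convention (a switch counts as a preemption only if the previously running thread could still make progress; switches away from a finished thread are free) are assumptions made for this sketch.

```python
def schedules_within_bound(steps, k):
    """Count interleavings of threads (given their remaining step counts)
    that use at most k preemptions: the schedule subsets explored by
    iterative context bounding for k = 0, 1, 2, ..."""
    def go(remaining, last, used):
        if all(r == 0 for r in remaining):
            return 1
        total = 0
        for pid, r in enumerate(remaining):
            if r == 0:
                continue
            # A switch is a preemption only if the last thread to run
            # still has steps left; switching away from a finished
            # thread is free.
            preempt = (last is not None and pid != last
                       and remaining[last] > 0)
            if used + preempt > k:
                continue
            rest = list(remaining)
            rest[pid] -= 1
            total += go(tuple(rest), pid, used + preempt)
        return total
    return go(tuple(steps), None, 0)

# Two hypothetical threads with three steps each: the full space has
# C(6,3) = 20 interleavings; small bounds already cover much of it.
counts = [schedules_within_bound((3, 3), k) for k in range(5)]
```

For this toy pair of threads the cumulative counts are 2, 6, 14, 18, 20 for k = 0..4: a bound of 2 preemptions already covers 14 of the 20 schedules, which is the intuition behind searching small k first.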
Even when prioritizing the search with aggressive context bounding, state explosion can still be brutal in large shared-memory multi-threaded programs. Other search heuristics for concurrency have been proposed, which could collectively be called concurrency fuzzing techniques [20,65,8]. The idea is to use a random run-time scheduler that occasionally preempts concurrent executions selectively, in order to increase the likelihood of triggering a concurrency-related bug in the program being tested. For instance, the execution of a memory allocation, such as ptr=malloc(...), in one thread could be delayed as much as possible to see if other threads may attempt to dereference the address ptr before it is allocated. Unlike DPOR or context bounding, these heuristic techniques do not provide any state-space coverage guarantees, but can still be effective in practice at finding concurrency-related bugs.

Other recent work investigates the use of concurrency-related search heuristics with probabilistic guarantees (e.g., see [8]). This line of work attempts to develop randomized algorithms for concurrent-system verification that can provide probabilistic coverage guarantees, under specific assumptions about the concurrent program being tested and for specific classes of bugs.

This paper provides only a partial overview of research in this area. Especially during the last decade, dozens of other tools have been developed for software model checking via systematic testing of concurrent systems, for various programming languages and application domains.
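The concurrency-fuzzing idea can be sketched with a seeded random scheduler. The buggy two-thread program below (a classic non-atomic increment) is invented for illustration, and a real tool would inject delays into native threads rather than simulate them.

```python
import random

def run_once(rng):
    """One execution under a random scheduler. Each simulated thread
    performs a non-atomic increment (load then store) on a shared
    counter; the lost-update bug appears only in some interleavings."""
    shared = {"c": 0}
    local = [None, None]

    def load(tid):
        local[tid] = shared["c"]

    def store(tid):
        shared["c"] = local[tid] + 1

    threads = [[load, store], [load, store]]
    pcs = [0, 0]
    while any(pcs[t] < len(threads[t]) for t in (0, 1)):
        ready = [t for t in (0, 1) if pcs[t] < len(threads[t])]
        t = rng.choice(ready)          # random preemption point
        threads[t][pcs[t]](t)
        pcs[t] += 1
    return shared["c"]                 # should be 2 if increments were atomic

rng = random.Random(42)                # fixed seed for reproducibility
results = [run_once(rng) for _ in range(100)]
buggy = sum(1 for c in results if c != 2)
```

Each run hits the lost update (final counter 1 instead of 2) roughly half the time, so a handful of random schedules finds the bug with near certainty, without any coverage guarantee.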
3. Dynamic Software Model Checking: Dealing with Data Inputs

In this section, we present techniques inspired by model checking for systematically testing sequential software. We assume that nondeterminism in such programs is exclusively due to data inputs. Enumerating all possible data input values with a nondet operation as described in Section 2.1 is tractable only when the sets of possible input values are small, like selecting one choice in a menu with (few) options. For dealing with large sets of possible input data values, the main technical tool used is symbolic execution, which computes equivalence classes of concrete input values that lead to the execution of the same program path. We start with a brief overview of “classical” symbolic execution in the next section, and then describe recent extensions for systematic software testing.

3.1. Classical Symbolic Execution

Symbolic execution is a program analysis technique that was introduced in the 70s (e.g., see [47,6,16,62,45]). Symbolic execution means executing a program with symbolic rather than concrete values. Assignment statements are represented as functions of their (symbolic) arguments, while conditional statements are expressed as constraints on symbolic values. Symbolic execution can be used for many purposes, such as bug detection, program verification, debugging, maintenance, and fault localization [17]. One of the earliest proposals for using static analysis as a kind of systematic symbolic program testing method was made by King almost 35 years ago [47]. The idea is to symbolically explore the tree of all computations the program exhibits when all possible value assignments to input parameters are considered. For each control path ρ, that
is, a sequence of control locations of the program, a path constraint φρ is constructed that characterizes the input assignments for which the program executes along ρ. All the paths can be enumerated by a search algorithm that explores all possible branches at conditional statements. The paths ρ for which φρ is satisfiable are feasible and are the only ones that can be executed by the actual program. The solutions to φρ characterize the inputs that drive the program through ρ. This characterization is exact provided symbolic execution has perfect precision. Assuming that the theorem prover used to check the satisfiability of all formulas φρ is sound and complete, this use of static analysis amounts to a kind of symbolic testing. How to perform symbolic execution and generate path constraints is illustrated with an example later in Section 3.4. A prototype of this system allowed the programmer to be presented with feasible paths and to experiment, possibly interactively [40], with assertions in order to force new and perhaps unexpected paths. King noticed that assumptions, now called preconditions, also formulated in the logic could be joined to the analysis forming, at least in principle, an automated theorem prover for Floyd/Hoare’s verification method [24,43], including inductive invariants for programs that contain loops. Since then, this line of work has been developed further in various ways, leading to various approaches of program verification, such as verification-condition generation (e.g., [19,4]), symbolic model checking [7] and bounded model checking [14]. Symbolic execution is also a key ingredient for precise test input generation and systematic testing of data-driven programs. While program verification aims at proving the absence of program errors, test generation aims at generating concrete test inputs that can drive the program to execute specific program statements or paths. 
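For a flavor of King-style symbolic execution, a path constraint can be built by propagating expression terms instead of values. The sketch below is our illustration (a minimal string-based expression type, not King's system); it collects the constraints along one path of a two-branch program:

```python
class Sym:
    """A tiny symbolic expression: a term string built up by operations."""
    def __init__(self, e): self.e = e
    def __rmul__(self, k): return Sym(f"({k} * {self.e})")
    def __add__(self, o):  return Sym(f"({self.e} + {o})")
    def eq(self, o): return f"{self.e} == {o.e if isinstance(o, Sym) else o}"
    def gt(self, o): return f"{self.e} > {o.e if isinstance(o, Sym) else o}"

# Symbolic inputs x, y; "execute" the path taking both then-branches of:
#   z = 2*y; if (z == x) { if (x > y + 10) { ... } }
x, y = Sym("x"), Sym("y")
path_constraint = []
z = 2 * y                          # assignment: S(z) = 2*y
path_constraint.append(z.eq(x))    # branch taken: record z == x
path_constraint.append(x.gt(y + 10))
print(" and ".join(path_constraint))
# (2 * y) == x and x > (y + 10)
```

Handing the conjunction to a constraint solver yields concrete inputs (e.g., y = 20, x = 40) that drive the actual program down exactly this path, provided symbolic execution has perfect precision.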
Work on automatic code-driven test generation using symbolic execution can roughly be partitioned into two groups: static versus dynamic test generation.

3.2. Static Test Generation

Static test generation (e.g., [47]) consists of analyzing a program P statically, by using symbolic execution techniques to attempt to compute inputs to drive P along specific execution paths or branches, without ever executing the program. Unfortunately, this approach is ineffective whenever the program contains statements involving constraints outside the scope of reasoning of the theorem prover, i.e., statements “that cannot be reasoned about symbolically”. This limitation is illustrated by the following example [28]:

int obscure(int x, int y) {
  if (x == hash(y)) abort(); // error
  return 0;                  // ok
}
Assume the constraint solver cannot “symbolically reason” about the function hash (perhaps because it is too complex or simply because its code is not available). This means that the constraint solver cannot generate two values for inputs x and y that are guaranteed to satisfy (or violate) the constraint x == hash(y). In this case, static test generation cannot generate test inputs to drive the execution of the program obscure through either branch of the conditional statement: static test generation is helpless for a program like this. Note that, for test generation, it is not sufficient to know that the constraint x
== hash(y) is satisfiable for some values of x and y, it is also necessary to generate specific values for x and y that satisfy or violate this constraint. The practical implication of this fundamental limitation is significant: static test generation is doomed to perform poorly whenever precise symbolic execution is not possible. Unfortunately, this is frequent in practice due to complex program statements (pointer manipulations, floating-point operations, etc.) and calls to operating-system and library functions that are hard or impossible to reason about symbolically with good enough precision.

3.3. Dynamic Test Generation

A second approach to test generation is dynamic test generation (e.g., [48,56,41,32,11]): it consists of executing the program P, typically starting with some random inputs, while performing symbolic execution dynamically, collecting symbolic constraints on inputs gathered from predicates in branch statements along the execution, and then using a constraint solver to infer variants of the previous inputs in order to steer the next execution of the program towards an alternative program branch. The conventional stance on the role of symbolic execution is thus turned upside-down: symbolic execution is now an adjunct to concrete execution. A key observation [32] is that, with dynamic test generation, imprecision in symbolic execution can be alleviated using concrete values and randomization: whenever symbolic execution does not know how to generate a constraint for a program statement depending on some inputs, one can always simplify this constraint using the concrete values of those inputs. Consider again the program obscure given above. Even though it is impossible to generate two values for inputs x and y such that the constraint x == hash(y) is satisfied (or violated), it is easy to generate, for a fixed value of y, a value of x that is equal to hash(y), since the latter can be observed and known at run-time.
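This concretization step can be sketched as follows; hash_ is our stand-in for an opaque function whose body the solver cannot reason about, and all names here are illustrative:

```python
import random

def hash_(y):            # placeholder for an opaque, unanalyzable function
    return (y * 2654435761) % 2**32

def obscure(x, y):
    if x == hash_(y):
        return "error"   # the abort() branch
    return "ok"

# Run 1: pick y at random and observe the concrete value of hash(y).
y = random.randrange(2**32)
c = hash_(y)             # observed at run-time during the first execution

# Run 2: keep y fixed; choosing x = c forces the then-branch,
# and any x != c forces the else-branch.
assert obscure(c, y) == "error"
assert obscure(c + 1, y) == "ok"
```

Nothing symbolic is ever computed for hash_; the observed concrete value c is enough to steer the next execution through either branch.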
By picking randomly and then fixing the value of y, we can first run the program, observe the concrete value c of hash(y) for that fixed value of y in that run; then, in the next run, we can set the value of the other input x either to c or to another value, while leaving the value of y unchanged, in order to force the execution of the then or else branches, respectively, of the conditional statement in the function obscure. In other words, static test generation is unable to generate test inputs to control the execution of the program obscure, while dynamic test generation can easily drive the executions of that same program through all its feasible program paths, finding the abort() with no false alarms. In realistic programs, imprecision in symbolic execution typically creeps in in many places, and dynamic test generation allows test generation to recover from that imprecision. Dynamic test generation can be viewed as extending static test generation with additional run-time information, and is therefore more general, precise, and powerful. How much more precise is dynamic test generation compared to static test generation? In [29], it is shown exactly when the “concretization trick” used in the above obscure example helps, and when it does not help. It is also shown that the main property of dynamic test generation that makes it more powerful than static test generation is only its ability to observe concrete values and to record those in path constraints. In contrast, the process of simplifying complex symbolic expressions using concrete run-
time values can be accurately simulated statically using uninterpreted functions. However, those concrete values are necessary to effectively compute new input vectors, a fundamental requirement in test generation [29]. In principle, static test generation can be extended to concretize symbolic values whenever static symbolic execution becomes imprecise [46]. In practice, this is problematic and expensive because this approach not only requires detecting all sources of imprecision, but also requires one call to the constraint solver for each concretization, to ensure that every synthesized concrete value satisfies prior symbolic constraints along the current program path. In contrast, dynamic test generation avoids these two limitations by leveraging a specific concrete execution as an automatic fallback for symbolic execution [32]. In summary, dynamic test generation is the most precise form of code-driven test generation that is known today. It is more precise than static test generation and other forms of test generation such as random, taint-based and coverage-heuristic-based test generation. It is also the most sophisticated, requiring the use of automated theorem proving for solving path constraints. This machinery is more complex and heavy-weight, but may exercise more paths, find more bugs and generate fewer redundant tests covering the same path. Whether this better precision is worth the trouble depends on the application domain.

3.4. Systematic Dynamic Test Generation

Dynamic test generation was discussed in the 90s (e.g., [48,56,41]) in a property-guided setting, where the goal is to execute a given specific target program branch or statement. More recently, new variants of dynamic test generation [32,11] blend it with model checking techniques, with the goal of systematically executing all feasible program paths of a program while detecting various types of errors using run-time checking tools (like Purify, Valgrind or AppVerifier, for instance).
In other words, each new input vector attempts to force the execution of the program through some new path, but the whole search is not guided by one specific target program branch or statement. By repeating this process, such a systematic search attempts to force the program to sweep through all its feasible execution paths, in a style similar to systematic testing and dynamic software model checking [26] as presented in Section 2. Along each execution, a run-time checker is used to detect various types of errors (buffer overflows, uninitialized variables, memory leaks, etc.). Systematic dynamic test generation as described above was introduced first in [32], as a part of an algorithm for “Directed Automated Random Testing”, or DART for short. Independently, [11] proposed “Execution Generated Tests” as a test generation technique very similar to DART. Also independently, [73] described a prototype tool which shares some of the same features. Subsequently, this approach was adopted and implemented in many other tools (see Section 3.5), and is also sometimes casually referred to as “concolic testing” [67], or simply “dynamic symbolic execution” [69]. Systematic dynamic test generation consists of running the program P under test both concretely, executing the actual program, and symbolically, calculating constraints on values stored in program variables v and expressed in terms of input parameters. Side-by-side concrete and symbolic executions are performed using a concrete store M and a symbolic store S, which are mappings from memory addresses (where program
variables are stored) to concrete and symbolic values, respectively [32]. A symbolic value is any expression e in some theory T where all free variables are exclusively input parameters. For any program variable v, M(v) denotes the concrete value of v in M, while S(v) denotes the symbolic value of v in S. For notational convenience, we assume that S(v) is always defined and is simply M(v) by default if no symbolic expression in terms of inputs is associated with v in S. When S(v) is different from M(v), we say that program variable v has a symbolic value, meaning that the value of v is a function of some input(s), represented by the symbolic expression S(v) associated with v in the symbolic store. A program manipulates the memory (concrete and symbolic stores) through statements, or commands, that are abstractions of the machine instructions actually executed. A command can be an assignment of the form v := e where v is a program variable and e is an expression; a conditional statement of the form if b then C else C′ where b denotes a boolean expression, and C and C′ denote the unique next command to be evaluated when b holds or does not hold, respectively; or stop, corresponding to a program error or normal termination. Given an input vector assigning a concrete value to every input parameter Ii, the program executes a unique finite sequence of commands. For a finite sequence ρ of commands (i.e., a control path ρ), a path constraint φρ is a quantifier-free first-order logic formula over theory T that is meant to characterize the input assignments for which the program executes along ρ. The path constraint is sound and complete when this characterization is exact. A path constraint is generated during dynamic symbolic execution by collecting input constraints at conditional statements.
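The resulting search over alternate path constraints can be sketched end to end. The toy rendition below (our illustration of the DART-style loop, not the tool itself) uses a brute-force enumerator over a small input domain as a stand-in for the constraint solver, and lets the instrumented program record its own branch outcomes:

```python
from itertools import product

def program(x, y):
    """Program under test; returns the list of branch outcomes taken."""
    path = []
    path.append(("c1", x > 3))           # if (x > 3)
    if x > 3:
        path.append(("c2", y == x + 1))  # if (y == x + 1)
        if y == x + 1:
            pass                         # target statement
    return path

def solve(prefix):
    """Toy 'constraint solver': brute-force search for inputs whose
    execution path starts with the given branch-outcome prefix."""
    for x, y in product(range(8), repeat=2):
        if program(x, y)[:len(prefix)] == list(prefix):
            return (x, y)
    return None

# Systematic search: run the program, then flip each recorded branch
# constraint to steer the next execution down an alternate path.
worklist, seen_paths = [(0, 0)], set()
while worklist:
    inputs = worklist.pop()
    path = program(*inputs)
    if tuple(path) in seen_paths:
        continue
    seen_paths.add(tuple(path))
    for i in reversed(range(len(path))):
        cond, taken = path[i]
        alt = solve(path[:i] + [(cond, not taken)])
        if alt is not None:
            worklist.append(alt)

print(sorted(seen_paths))   # all three feasible paths are discovered
```

Starting from the arbitrary input (0, 0), the loop discovers inputs covering every feasible path, including the nested then-branch that random testing would hit with low probability.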
Initially, the path constraint φρ is initialized to true, and the initial symbolic store S0 maps every program variable v whose initial value is a program input: for all those, we have S0(v) = xi where xi is the symbolic variable corresponding to the input parameter Ii. During dynamic symbolic execution, whenever an assignment statement v := e is executed, the symbolic store is updated so that S(v) = σ(e), where σ(e) denotes either an expression in T representing e as a function of its symbolic arguments, or simply the current concrete value M(v) of v if e has no symbolic arguments or if e cannot be represented by an expression in T. Whenever a conditional statement if b then C else C′ is executed and the then (respectively, else) branch is taken, the current path constraint φρ is updated to become φρ ∧ c (respectively, φρ ∧ ¬c) where c = σ(b). Note that, by construction, all symbolic variables ever appearing in φρ are variables xi corresponding to whole-program inputs Ii. Given a path constraint φρ = c1 ∧ · · · ∧ cn, new alternate path constraints φ′ρ can be defined by negating one of the constraints ci and conjoining it with all the preceding constraints: φ′ρ = c1 ∧ · · · ∧ ci−1 ∧ ¬ci.

31 {
32     s->msgCounter = 0;
33     s->connect();
34     if (x) {
35         s->send(x);
36     } else {
37         s->sendDefault();
38     }
39     return s->errorCode;
40 }

41 int readLast(char* data)
42 {
43     MyFile* f = new MyFile();
44     f->open();
45     f->seek(f->size);
46     return f->current;
47 }

Figure 13. Class definitions and example code using MySocket and MyFile. Function sendInt sends a value determined by its input through a socket; function readLast returns the last character of a file.
in line 25 also points to o1, the “mov eax,[eax+4]” instruction in line 26 reads a value from a field of o1. Overall, we statically observe the following explicit events: (i) a write to a field at line 2, (ii) a read from a field at line 26, (iii) virtual calls at lines 7, 16 and 23. We also observe 6 implicit events in this example: (i) an access to the object’s virtual table (a read of the field at offset 0) at lines 4, 13 and 20, and (ii) the object is used as the
270
E. Yahav / Analysis and Synthesis with “Big Code”
 1  mov eax, [esp+4]
 2  mov [eax+16], 0
 3  mov eax, [esp+4]
 4  mov edx, [eax]
 5  mov ecx, [esp+4]
 6  mov eax, [edx]
 7  call eax            ; virtual call
 8  cmp [esp+8], 0
 9  jz branch
10  mov eax, [esp+8]
11  push eax
12  mov eax, [esp+4]
13  mov edx, [eax]
14  mov ecx, [esp+4]
15  mov eax, [edx+12]
16  call eax            ; virtual call
17  jmp merge
18  branch:
19  mov eax, [esp+4]
20  mov edx, [eax]
21  mov ecx, [esp+4]
22  mov eax, [edx+16]
23  call eax            ; virtual call
24  merge:
25  mov eax, [esp+4]
26  mov eax, [eax+4]
Figure 14. Assembly code generated for function sendInt. Lines 7,16,23 match the function invocations at lines 33,35,37 of Fig. 13.
this pointer (pointed to by ecx) in lines 5, 14 and 21. Overall, 11 events are performed on the tracked object. We employ a points-to analysis that allows us to determine the accesses to objects in the binary (we discuss the analysis further in [43]). Our analysis statically tracks the events as they appear in the function and extracts sequences of events (the object tracelets) as a representation of the object’s behavior. The tracked event sequences are illustrated in Fig. 15a. The nodes in the figure represent the events and the superscript numbers correlate to the relevant line of assembly code. The events are marked as follows:
(a) W(x) – write to a field of the object at offset x
(b) R(x) – read from a field of the object at offset x
(c) C(x) – call to a virtual function of the object at offset x
(d) this – object was used as the this pointer
Additional kinds of events are described in [43]. The two extracted sequences will be used as our object tracelets.

Computing Reference Types. To match the object tracelets we collected with an explicit type, we need reference data on which to train our SLMs. We build the reference data by collecting object tracelets correlating to objects for which the explicit type can be determined. We call these tracelets “type tracelets.” An explicit type can be determined when, for example, we observe the allocation or initialization of the object.
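The tracelets discussed above can be written down concretely. This sketch (our encoding, not the paper's exact representation) lists the eleven static events on o1 from Fig. 14, keyed by assembly line, and derives the two per-path object tracelets:

```python
# Static events on object o1 in sendInt (Fig. 14), keyed by assembly line:
events = {
    2: "W(16)",                           # s->msgCounter = 0
    4: "R(0)", 5: "this", 7: "C(0)",      # s->connect()
    13: "R(0)", 14: "this", 16: "C(12)",  # s->send(x)
    20: "R(0)", 21: "this", 23: "C(16)",  # s->sendDefault()
    26: "R(4)",                           # return s->errorCode
}
assert len(events) == 11   # 5 explicit + 6 implicit events

# One object tracelet per control-flow path through the function:
then_tracelet = [events[l] for l in (2, 4, 5, 7, 13, 14, 16, 26)]
else_tracelet = [events[l] for l in (2, 4, 5, 7, 20, 21, 23, 26)]
print(then_tracelet)
```

Each tracelet is simply a sequence of event symbols; it is these sequences that are later scored against the statistical language models trained per type.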
(a) object o1    (b) MyFile    (c) MySocket
Figure 15. Example tracelets for object o1 extracted from function sendInt, type MyFile from function readLast, and type MySocket.
 1  push 16
 2  call new
 3  mov ecx, eax
 4  call MyFile::MyFile
 5  mov ecx, eax
 6  mov [f], ecx
 7  ...
 8  mov eax, [f]
 9  mov edx, [eax]
10  mov ecx, [f]
11  mov eax, [edx]
12  call eax            ; virtual call
13  mov eax, [f]
14  mov ecx, [eax+16]
15  push ecx
16  mov edx, [f]
17  mov eax, [edx]
18  mov ecx, [f]
19  mov edx, [eax+8]
20  call edx            ; virtual call
21  mov eax, [f]
22  mov eax, [eax+8]
Figure 16. Assembly code generated for function readLast. Lines 12,20 match function invocations at lines 44,45 of Fig. 13.
The function readLast in Fig. 13 is part of the same binary as sendInt. This function yields the x86 assembly code of Fig. 16. For simplicity, we omitted some instructions (prolog, epilog, etc.) from the assembly code. Similarly to the extraction of object tracelets for the object o1 of sendInt, from this function we can extract the tracelet in Fig. 15b for the object represented by the value of [f]. Because [f] holds the return value of the constructor of type MyFile, we know that [f] is of type MyFile and we associate the tracelet in Fig. 15b with type MyFile (See [43] for details).
In a similar manner, we found additional locations where we could determine the types of objects and associate their corresponding tracelets with the relevant types. Fig. 15c shows an example tracelet collected for the type MySocket.

Correlating Implicit and Explicit Types. We match different objects (implicit types) and (explicit) types based on the probability that their sets of tracelets originated from the same model. We create a model based on the set of tracelets corresponding to the explicit type and match the tracelets of the object to that model, as described in [43]. Since we are looking for o1’s actual allocated type, we can immediately eliminate BasicSocket as a candidate because it is an interface. Interfaces have purely virtual functions (without any concrete implementation), which have a unique representation in the binary and cannot be allocated. Because some actions are not feasible for certain types, such as reading a field at an offset that doesn’t exist, some types are unlikely to be the type of an object. Such types cannot be candidates for o1’s type. Consider the set of tracelets we extracted for the object o1. These tracelets access a field at offset 12, the third field of the object, and call a function at offset 16, the fifth virtual function of the object. The type MinimalSocket does not have a third field and the type OneWaySocket does not have a fifth virtual function. We use this knowledge to determine that the types MinimalSocket and OneWaySocket cannot be candidates for o1’s type. From now on, we focus only on the likely candidates for o1’s type: MySocket and MyFile. The SLMs we use to match objects and types are VMMs. When sequences and dependencies in the data are not known to have a fixed length, as is the case in our scenario, VMMs are a natural model to use. Our VMMs are based on an n-gram model with smoothing and backoff mechanisms.
We note that n-gram models are, in essence, Markov models of fixed order n − 1, where the probability of an event is determined based solely on the n − 1 events that preceded it. The backoff mechanism transforms the fixed-order n-gram model into a variable-order Markov model by allowing it to revert to a lower-order model when the current model doesn’t hold enough data. Specifically, we use the prediction by partial match (PPM) algorithm [23]. We compute the probability that each of o1’s tracelets from Fig. 15a originated from the resulting models (details in [43]) and multiply the results to get a score for the entire set. Using the trained models, the probability that o1’s tracelets originated from MySocket’s model is found to be drastically higher than the probability that they originated from MyFile’s model. We thus see that MySocket’s model is more likely to be the origin model, meaning that MySocket is more likely to be o1’s type. This result matches the actual type declared in the code in Fig. 13. Knowing that the likely type of o1 is MySocket, we can now deduce that the likely targets of the virtual calls at lines 7, 16 and 23 of Fig. 14 are the relevant implementations of MySocket’s functions.
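A drastically simplified version of this scoring can be sketched as follows; we use a bigram model with add-one smoothing instead of PPM's escape-based backoff, and the tracelets below are shortened stand-ins rather than the paper's data:

```python
import math
from collections import Counter

def train(sequences):
    """Train a bigram model with add-one (Laplace) smoothing."""
    grams, ctx = Counter(), Counter()
    for seq in sequences:
        s = ["^"] + seq                    # "^" marks the sequence start
        for i in range(1, len(s)):
            grams[(s[i - 1], s[i])] += 1
            ctx[s[i - 1]] += 1
    vocab = {e for seq in sequences for e in seq} | {"^"}
    return grams, ctx, len(vocab)

def logprob(model, seq):
    """Smoothed log-probability that `seq` originated from `model`."""
    grams, ctx, v = model
    s = ["^"] + seq
    return sum(math.log((grams[(s[i - 1], s[i])] + 1) / (ctx[s[i - 1]] + v))
               for i in range(1, len(s)))

# Type tracelets (training data) and the object tracelet to classify:
my_socket = [["W(16)", "C(0)", "C(12)", "R(4)"],
             ["W(16)", "C(0)", "C(16)", "R(4)"]]
my_file   = [["C(4)", "R(16)", "C(8)", "R(8)"]]
obj = ["W(16)", "C(0)", "C(12)", "R(4)"]

socket_model, file_model = train(my_socket), train(my_file)
print(logprob(socket_model, obj) > logprob(file_model, obj))   # True
```

Multiplying (here: summing log-) probabilities over all of an object's tracelets gives the set-level score used to pick the most likely origin type.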
5. Statistical Similarity in Binaries

During December 2014, several vulnerabilities were discovered in ntpd [2], the prominent implementation of the network time protocol (NTP). As this implementation is the de facto standard, many products from major vendors were affected, including RedHat’s Linux distribution, Apple’s OSX and Cisco’s 5900x switches. Because some of
these vulnerabilities were introduced many years ago, in the intervening time different versions have been ported and integrated into many software packages, ready to be deployed in binary form to home computers, enterprise servers and even appliance firmware. A security-savvy individual, or more commonly a security-aware company, would want to use a sample of the vulnerable product (in binary form) to search for the vulnerability across all the software installed in the organization, where source code is mostly not available. Unfortunately, automatically identifying these software packages is extremely challenging. For example, given a sample from the Debian Linux distribution, finding other (recent or older) vulnerable versions of it is already hard. This is because even though older distributions were probably compiled with the same compiler, gcc, they used older gcc versions, producing syntactically different binary code. Trying to find OSX applications is even harder, as they are commonly compiled with a different compiler (Clang), and the firmware of an appliance using Intel chips might be compiled with icc (the Intel C compiler), such that the resulting binary procedures differ vastly in syntax. We address this challenge by providing an effective means of searching for semantically similar procedures at the assembly code level.

Problem definition. Given a query procedure q and a large collection T of (target) procedures, in binary form, our goal is to quantitatively define the similarity of each procedure t ∈ T to the query q. The main challenge is to define a semantic notion of similarity that is precise enough to avoid false positives, but flexible enough to allow finding the code in any combination of the following scenarios: (i) the code was compiled using different compiler versions, (ii) the code was compiled using a different compiler vendor, and (iii) a different version of the code was compiled (e.g., a patch).
We require a method that can operate without any information about the source code or tool-chain used to create the binaries, given only procedures in binary form.

Existing techniques. Previous work on clone detection in binaries [76] can overcome different instruction selection, but is unable to handle syntactic differences beyond single instructions. Binary code search [29] is mostly syntactic and therefore fails to handle differences due to different compilers. Equivalence checking and semantic differencing techniques [65,36,60,62] operate at the source-code level and require a matching (a labeling of variables) to operate. Furthermore, they do not provide a quantitative measure of similarity when procedures are not equivalent. Dynamic equivalence checking techniques [65] are hard to apply, as they require obtaining sufficient coverage for each procedure. Data-driven equivalence checking [80] is aimed at translation validation and is limited to short segments of assembly code.

Similarity by composition. We draw inspiration from Boiman and Irani’s [20] work on image similarity, where the key idea is that one image is similar to another if it can be composed using regions of the other image, and that this similarity can be quantified using statistical reasoning. Fig. 17(a) and (b) illustrate the idea of similarity by composition for images. Looking at the three images in Fig. 17(a), which of the two query images iq1 and iq2 is more similar to the target image it? Intuitively, we would probably say that iq2 is the one that is more similar to it. Looking at Fig. 17(b) provides some explanation for this intuition. In this figure, we identify similar regions between iq2 and it (marked by the numbered outlined
(a) target image it and query images iq1, iq2; (b) the similar regions (numbered 1–6) shared between the images

(c) target code t:

shr eax, 8            ; strand 1
lea r14d, [r12+13h]   ; strand 2
mov r13, rax          ; strand 3
lea rcx, [r13+3]      ; strand 3
mov [r13+1], al
mov [r13+2], r12b
mov rdi, rcx

(d) query code q1:

mov rsi, 14h
mov rdi, rcx
shr eax, 8            ; strand 1
mov ecx, r13
add esi, 1h
xor ebx, ebx
test eax, eax
jl short loc_22F4

(e) query code q2:

mov r9, 13h
mov r12, rax          ; strand 3
add rbp, 3
mov rsi, rbp
lea rdi, [r12+3]      ; strand 3
mov [r12+2], bl
lea r13d, [rbx+r9]    ; strand 2
shr eax, 8            ; strand 1

Figure 17. Image vs. Code Similarity By Composition. Query image iq2 is similar to the target image it. Query code q2 is similar to the target code t. Images courtesy of Irani et al. [20]
segments). Although the images are not identical, their similarity stems from the fact that the query image iq2 can be composed using significant (and maybe transformed) regions from the target image it. In contrast, image iq1 shares only two small regions with it, and indeed we intuitively consider it to be different. These vague notions are made precise within a statistical framework that lifts similarity between significant regions into similarity between images. In this paper, we show that the general framework of “similarity by composition” can also be applied to code. Specifically, we consider assembly code extracted from stripped binaries (with no debug information). Fig. 17(c), (d), and (e) show partial assembly code of three procedures containing the “Heartbleed” vulnerability. The three procedures were compiled using different compilers. For simplicity (and space) we only present a small part of the code. Finding similarity between the procedures using syntactic techniques is challenging, as different compilers can produce significantly different assembly code. Instead, we decompose each procedure into strands, semantically compare the similarity of strands, and lift the results to procedures. In Fig. 17(c) & (e), the query code q2 and the target code t share three matching strands, numbered in the figure as 1, 2, and 3. Each strand is a sequence of instructions, and strands are considered matches when they perform an equivalent computation. In the figure, we mark matching strands using the same number. Two syntactically different strands can be equivalent. For example, the strands numbered 2 in q2 and t are syntactically different but equivalent (up to renaming, and ignoring the change of r9). Further, strands need not be syntactically contiguous, because they are based on data-flow dependencies rather than on syntactic properties. Consider, for example, strand 3 in the query procedure q2 (mov r12, rax; lea rdi, [r12+3]). This strand matches the strand mov r13, rax; lea rcx, [r13+3] in the target procedure t. In contrast, the code in Fig. 17(d) matches only the single strand 1 in the target.

Our approach. We present a novel notion of similarity for procedures based on the following key components:

Decomposing the procedure into strands: We decompose procedures into smaller segments (which we refer to as strands) that are feasible to compare, and try to use semantically similar strands from one procedure to compose the other.

Comparing strands: We use a program verifier [15] to check whether two strands are semantically equivalent, by assuming input equivalence and checking whether intermediate and output values are the same. When they are not equivalent, we define a quantitative notion of strand similarity based on the proportion of matching values out of the total number of values in the strand.

Statistical reasoning over strands: We present a statistical framework for reasoning about the similarity of whole procedures based on strand similarity. We compute a global similarity evidence score between procedures from local evidence scores (LES) between strands. A key aspect of our framework is that we amplify the similarity scores of unique strands, expressed by a high LES, and dampen the significance of “common” strands (with respect to the target database examined), as they are less indicative of similarity.
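A drastically simplified stand-in for this pipeline can be sketched as follows: random input-output testing replaces the program verifier, strands are written as Python functions, and all names and constants are illustrative (the 0x13/0x14 constants echo Fig. 19, but this is our sketch, not the paper's implementation):

```python
import math
import random

def strand_similarity(f, g, arity, trials=100, seed=1):
    """Fraction of random inputs on which two strands agree, assuming
    input equivalence (a cheap stand-in for a verifier query)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        args = [rng.randrange(1 << 16) for _ in range(arity)]
        hits += f(*args) == g(*args)
    return hits / trials

# A strand and two candidates, written as input-output functions:
strand_q = lambda r12: (0x13 + r12) & 0xFFFFFFFF
strand_t = lambda rbx: (rbx + 0x13) & 0xFFFFFFFF   # equivalent, renamed
strand_u = lambda r12: (0x14 + r12) & 0xFFFFFFFF   # a patched variant

def les(sim, background_freq):
    """Local evidence score as a log-likelihood ratio: a match is worth
    more when the strand is rare in the target database."""
    return math.log(max(sim, 1e-9) / background_freq)

print(strand_similarity(strand_q, strand_t, 1))   # 1.0: equivalent
print(strand_similarity(strand_q, strand_u, 1))   # 0.0: never agree
print(les(1.0, 0.01) > les(1.0, 0.5))             # rare strand scores higher
```

Summing such local scores over matched strands, with common strands contributing little, yields a global evidence score for procedure-level similarity.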
Main contributions:
• A framework for reasoning about similarity of procedures in stripped binaries. The main idea is to decompose procedures into strands, perform semantic comparison of strands, and lift strand similarity to similarity between procedures using statistical techniques.
• A technique for checking input-output equivalence of strands of code, where all variables are unlabeled.
• A statistical model that quantifies procedures’ probability of similarity based on the similarity of their strands.
• A prototype implementation, TRACY, which is publicly available. We compare TRACY to previous binary code-search techniques using challenging search scenarios that combine patched and non-patched versions of real-world vulnerable procedures in binary form, compiled with different compiler versions and vendors. These experiments show that TRACY achieves significantly better results.

5.1. Overview

In this section, we illustrate our approach informally using an example. Given a query procedure from a stripped binary, our goal is to find similar procedures in other stripped binaries. To simplify presentation, we illustrate our approach on two code snippets instead of full procedures. Consider the assembly code of Fig. 18(a). This
E. Yahav / Analysis and Synthesis with "Big Code"

(a) gcc v.4.9 -O3:
    lea   r14d, [r12+13h]
    mov   r13, rax
    mov   eax, r12d
    lea   rcx, [r13+3]
    shr   eax, 8
    lea   rsi, [rbx+3]
    mov   [r13+1], al
    mov   [r13+2], r12b
    mov   rdi, rcx
    call  memcpy
    mov   ecx, r14d
    mov   esi, 18h
    call  write_bytes
    test  eax, eax
    js    short loc_2A38

(b) icc v.15.0.1 -O3:
    mov   r9, 13h
    mov   r12, rax
    mov   eax, ebx
    add   rbp, 3
    mov   rsi, rbp
    lea   rdi, [r12+3]
    mov   [r12+2], bl
    lea   r13d, [rbx+r9]
    shr   eax, 8
    mov   [r12+1], al
    call  _intel_memcpy
    add   r9, 5h
    mov   esi, r9d
    mov   ecx, r13d
    call  write_bytes
    mov   ebx, eax
    test  ebx, ebx
    jl    short loc_342E

Figure 18. Heartbleed vulnerability code snippets.

assume r12q == rbxt

                              v1t = 13h
                              r9t = v1t
    v1q = r12q                v2t = rbxt
    v2q = 13h + v1q           v3t = v2t + v1t
    v3q = int_to_ptr(v2q)     v4t = int_to_ptr(v3t)
    r14q = v3q                r13t = v4t
    v4q = 18h                 v5t = v1t + 5
    rsiq = v4q                rsit = v5t
    v5q = v4q + v3q           v6t = v5t + v4t
    raxq = v5q                raxt = v6t

assert v1q == v2t, v2q == v3t, v3q == v4t, r14q == r13t,
       v4q == v5t, rsiq == rsit, v5q == v6t, raxq == raxt

Figure 19. Semantically similar strands.
snippet is taken from a version of OpenSSL, which is vulnerable to the Heartbleed bug [3]. Our goal is to find similar vulnerable code snippets in our code-base. We would like to define a notion of similarity that can find matches even when the code has been modified, or compiled with a different compiler vendor and version. For example, the code of Fig. 18(a) was compiled using gcc v4.9, and the code of Fig. 18(b), originating from the same source, was compiled using icc v15.0.1. We would like our approach to find the similarity of these two snippets despite their noticeable syntactic difference. We focus on similarity instead of equivalence, as we would like our approach to apply to code that may have been patched. Towards that end, we compute a similarity score that captures similarity between (small) partial computations performed by each procedure. The main idea of our approach is to decompose the code into smaller fragments, for which similarity is easy to compute, and use statistical reasoning over fragment similarity to establish the global similarity between code snippets. To do so, we have to answer three design questions:
    v1q = raxq               v1t = raxt
    v2q = and(v1q, -1)       v2t = and(v1t, -1)
    raxq = v2q               raxt = v2t
    raxq = 0                 // no change to raxt
    v3q = raxq               v3t = raxt
    v4q = lshr(v3q, 8)       v4t = lshr(v3t, 8)
    raxq = v4q               raxt = v4t

Figure 20. Syntactically similar but semantically different strands.
• What is the best way to decompose the code snippets?
• How do we compare the decomposed fragments to each other?
• How do we lift fragment similarity to snippet similarity?

Decomposition into strands We decompose a procedure into fragments which are feasible to compare. In this work, we use strands (partial dependence chains) as the basic unit. Fig. 19 shows two strands obtained from the code snippets of Fig. 18. For now, ignore the assume and assert operations added around the strands. The strands in the figure have been transformed to our intermediate representation, abstracting away from specific assembly instructions. The strands in the figure have been aligned such that similar instructions of the two strands appear side by side. This alignment is only for the purpose of presentation; our comparison is based on semantically comparing the strands. We added q and t postfixes to the strands' variables to separate the name spaces of variables in the two strands and to specify one as the query and the other as the target. This allows us to create a joint program that combines the variables from the two strands and makes assumptions and assertions on their equality.

Comparing a pair of strands To compare a pair of strands, we create a joint program that combines them, but has a separate name space for the variables of each strand. We then explore the space of equality assumptions on different inputs, and equality assertions on outputs. For example, one choice of assumptions and assertions is shown in Fig. 19. Technically, given a pair of strands, we perform the following steps: (i) add equality assumptions over the inputs of the two strands, (ii) add assertions that check equality of all output variables (where output variables also include temporaries), and (iii) check the assertions using a program verifier and count how many variables are equivalent. Picking which variables to pair when assuming and asserting equality is solved by searching the space of possible pairs.
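The assume/assert scheme above can be mimicked in a toy setting. The sketch below stands in for the program verifier (the chapter uses a verifier in the style of Boogie [15]): it assumes input equality, runs two hypothetical strands, and keeps only the variable pairs whose asserted equalities hold on every input. Exhaustive 8-bit enumeration replaces real verification here, and both strand encodings are invented for illustration (loosely echoing the first lines of Fig. 19).

```python
# Toy stand-in for the verifier step: a "joint program" of two strands
# over separate name spaces. We assume the inputs are equal, execute
# both strands, and check which output-variable pairings always agree.

MASK = 0xFF  # pretend registers are 8-bit so enumeration stays cheap

def query_strand(r12):
    # v1q = r12q; v2q = 13h + v1q
    v1 = r12
    v2 = (0x13 + v1) & MASK
    return {"v1": v1, "v2": v2}

def target_strand(rbx):
    # v1t = 13h; v2t = rbxt; v3t = v2t + v1t
    v2 = rbx
    v3 = (v2 + 0x13) & MASK
    return {"v2": v2, "v3": v3}

def matching_pairs(q, t, pairing):
    """Keep the (query var, target var) pairs that agree on ALL inputs."""
    ok = set(pairing)
    for x in range(256):          # assume input equality: r12q == rbxt == x
        qv, tv = q(x), t(x)
        ok = {(a, b) for (a, b) in ok if qv[a] == tv[b]}
    return ok

pairs = matching_pairs(query_strand, target_strand,
                       [("v1", "v2"), ("v2", "v3"), ("v1", "v3")])
# ("v1","v2") and ("v2","v3") survive; ("v1","v3") fails (e.g. at x = 0).
```

Counting the surviving pairs against the query's variable count gives exactly the VCP-style ratio discussed next; a real implementation would discharge the assertions with an SMT-backed verifier instead of enumerating inputs.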
The choice of the strand as a small unit of comparison (with a relatively small number of variables), along with verifier-based optimizations, greatly reduces the search space, making the use of a verifier feasible.

Match Probability: We define an asymmetric similarity measure between a query strand sq and a target strand st as the percentage of variables from sq that have an equivalent counterpart in st. We denote this measure by VCP(sq, st). We later use this measure as a basis for computing the probability Pr(sq|st) that a strand sq is input-output equivalent to a strand st (see [27]). For example, taking sq to be the left strand of Fig. 19 and st to be the strand on the right-hand side, VCP(sq, st) = 1, because all of the 8 variables from the left-hand side have an equivalent variable on the right-hand side. However, in the other direction, VCP(st, sq) = 8/9. We note that no previous approach is able to produce such a matching, as the equivalent values are computed using different instructions. In contrast to Fig. 19, the strands of Fig. 20 are syntactically similar but semantically different. Syntactic approaches would typically classify such pairs as matching, leading to a high rate of false positives. Our semantic comparison identifies that these two strands are different, despite their significant syntactic overlap, and yields VCP = 2/5.

Local Evidence of Similarity: Modeling strand similarity using probability allows us to express other notions of similarity in a natural manner. For example, given a target procedure t and a query strand sq, we can capture how well sq can be matched in t by computing the maximal probability Pr(sq|st) over all possible strands st in t. We further define the probability Pr(sq|H0) of finding a matching strand for sq at random (H0 representing all possible strands). The significance of finding a match for sq in t can then be defined as:

    LES(sq, t) = log ( max_{st ∈ t} Pr(sq|st) / Pr(sq|H0) )
The LES provides a measure of how significant the matching of sq with t is, by comparing it to the matching of sq with the random source H0. It is important to measure the significance of a match, because many strand matches may be due to common strands introduced by the compiler (e.g., prolog/epilog code), and are therefore not significant in determining a match between procedures.

Lifting strand similarity to procedure similarity We say that two procedures are similar if one can be composed using (significantly similar) parts of the other. Given a query procedure q and a target procedure t, the LES(sq, t) measure indicates how significant the match of sq in t is. We can therefore define a global evidence of similarity (GES) between the procedures q and t by summing LES(sq, t) over all strands sq in q (as further explained in [27]). The local and global similarity evidence allow us to lift semantic similarity computed between individual strands into a statistical notion of similarity between procedures.

Key Aspects of our approach
• Similarity by composition: we decompose procedures into strands, semantically compare strands, and lift strand similarity to procedure similarity using statistical techniques.
• Using strand similarity allows us to establish similarity between procedures even when the procedures are not equivalent, but still contain statistically significant similar strands.
• By performing semantic comparison between strands, we are able to find similarity across different compiler versions and vendors (without knowledge of the specifics of the compilers). In [27] we compare our approach to previous work and show the importance of semantic comparison.
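Given per-strand probabilities, the lifting step is a small computation. The sketch below follows the LES and GES definitions above; the probability values are made up for illustration and do not come from real data.

```python
import math

def les(pr_q_given_each_t, pr_q_given_h0):
    """LES(sq, t) = log( max over st in t of Pr(sq|st) / Pr(sq|H0) )."""
    return math.log(max(pr_q_given_each_t) / pr_q_given_h0)

def ges(query_strands):
    """GES(q, t): sum of LES(sq, t) over all strands sq of the query q."""
    return sum(les(per_target, h0) for per_target, h0 in query_strands)

# Two query strands: a rare strand that matches well in the target is
# amplified, while a common compiler idiom (high random-match
# probability under H0) contributes almost nothing.
strands = [
    ([0.1, 0.9], 0.01),  # rare strand, strong best match -> large LES
    ([0.8, 0.7], 0.75),  # prolog-like strand -> LES close to zero
]
score = ges(strands)  # dominated by the rare strand's evidence
```

This captures the amplify/dampen behavior described earlier: the common strand's LES is near zero because a match for it is almost as likely under H0 as in the target.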
6. Conclusions

In this chapter, we briefly (and mostly informally) surveyed recent advances in analysis and synthesis with "big code". We examined several semantic representations and machine learning techniques, and demonstrated their utility in several applications. In Section 2, we showed how to use symbolic automata, together with clustering, for semantic code search and code completion. We also briefly mention work from [66],
which used object tracelets with statistical language models for effective code completion. In Section 3, we showed how to use control tracelets for code search in (stripped) binaries. In Section 4, we showed how to use object tracelets with variable order Markov models (VMMs) to estimate types in stripped binaries. Finally, in Section 5, we again used object tracelets, but this time with a statistical similarity measure, for semantic code search in stripped binaries. The use of "big code" holds the potential to revolutionize many programming tasks. We are only taking the first steps in this exciting research direction, but the preliminary results seem very promising.
Acknowledgements This chapter summarizes work with many co-authors, notably Martin Vechev (ETH Zurich) and Sharon Shoham (MTA Israel), as well as Alon Mishne and Hila Peleg. Parts of this work have previously appeared in [55,63,82,61,28,43,27]. The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7) under grant agreement no. 615688 ERC-COG-PRIME.
References
[1] http://like2drops.com.
[2] Clubberingtime - CVEs, and affected products.
[3] Heartbleed vulnerability CVE information. https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0160.
[4] Microsoft Corporation. Visual Studio. https://www.visualstudio.com.
[5] Acharya, M., Xie, T., Pei, J., and Xu, J. Mining API patterns as partial orders from source code: from usage scenarios to specifications. In ESEC-FSE '07, pp. 25–34.
[6] Alnusair, A., Zhao, T., and Bodden, E. Effective API navigation and reuse. In IRI (Aug. 2010), pp. 7–12.
[7] Alur, R., Cerny, P., Madhusudan, P., and Nam, W. Synthesis of interface specifications for Java classes. In POPL (2005).
[8] Ammons, G., Bodik, R., and Larus, J. R. Mining specifications. In POPL '02, pp. 4–16.
[9] Bacon, D. F., and Sweeney, P. F. Fast static analysis of C++ virtual function calls. In Proceedings of the 11th ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications (New York, NY, USA, 1996), OOPSLA '96, ACM, pp. 324–341.
[10] Balakrishnan, G., and Reps, T. DIVINE: Discovering variables in executables. In VMCAI '07 (2007), pp. 1–28.
[11] Balakrishnan, G., and Reps, T. DIVINE: Discovering variables in executables. In Verification, Model Checking, and Abstract Interpretation, B. Cook and A. Podelski, Eds., vol. 4349 of Lecture Notes in Computer Science. Springer, 2007, pp. 1–28.
[12] Balakrishnan, G., and Reps, T. WYSINWYX: What you see is not what you execute. ACM Trans. Program. Lang. Syst. 32, 6 (Aug. 2010), 23:1–23:84.
[13] Ball, T., and Larus, J. R. Efficient path profiling. In Proceedings of the 29th Int. Symp. on Microarchitecture (1996), MICRO 29.
[14] Bansal, S., and Aiken, A. Automatic generation of peephole superoptimizers. In ASPLOS XII (2006).
[15] Barnett, M., Chang, B. E., DeLine, R., Jacobs, B., and Leino, K. R. M. Boogie: A modular reusable verifier for object-oriented programs. In Formal Methods for Components and Objects, 4th International Symposium, FMCO 2005, Amsterdam, The Netherlands, November 1–4, 2005, Revised Lectures (2005), pp. 364–387.
[16] Baxter, I. D., Yahin, A., Moura, L., Sant'Anna, M., and Bier, L. Clone detection using abstract syntax trees. In ICSM '98.
[17] Beckman, N., Kim, D., and Aldrich, J. An empirical study of object protocols in the wild. In ECOOP '11.
[18] Begleiter, R., El-Yaniv, R., and Yona, G. On prediction using variable order Markov models. Journal of Artificial Intelligence Research (2004), 385–421.
[19] Bellon, S., Koschke, R., Antoniol, G., Krinke, J., and Merlo, E. Comparison and evaluation of clone detection tools. IEEE TSE 33, 9 (2007), 577–591.
[20] Boiman, O., and Irani, M. Similarity by composition. In Advances in Neural Information Processing Systems 19 (NIPS 2006), Vancouver, British Columbia, Canada, December 4–7, 2006 (2006), pp. 177–184.
[21] Bruschi, D., Martignoni, L., and Monga, M. Detecting self-mutating malware using control-flow graph matching. In DIMVA '06.
[22] Chen, S. F., and Goodman, J. An empirical study of smoothing techniques for language modeling. In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics (1996), pp. 310–318.
[23] Cleary, J. G., and Witten, I. H. Data compression using adaptive coding and partial string matching. IEEE Transactions on Communications 32, 4 (1984), 396–402.
[24] Comparetti, P., Salvaneschi, G., Kirda, E., Kolbitsch, C., Kruegel, C., and Zanero, S. Identifying dormant functionality in malware programs. In IEEE Symp. on Security and Privacy (2010).
[25] Cook, J. E., and Wolf, A. L. Discovering models of software processes from event-based data. ACM Trans. Softw. Eng. Methodol. 7, 3 (1998), 215–249.
[26] Dallmeier, V., Lindig, C., Wasylkowski, A., and Zeller, A. Mining object behavior with ADABU. In WODA '06.
[27] David, Y., Partush, N., and Yahav, E. Statistical similarity in binaries. In PLDI (2016).
[28] David, Y., and Yahav, E. Tracelet-based code search in executables. In Proceedings of the 35th ACM SIGPLAN Conference on Programming Language Design and Implementation, PLDI '14.
[29] David, Y., and Yahav, E. Tracelet-based code search in executables. In PLDI (2014), p. 37.
[30] Ducasse, S., Rieger, M., and Demeyer, S. A language independent approach for detecting duplicated code. In ICSM '99.
[31] Fink, S., Yahav, E., Dor, N., Ramalingam, G., and Geay, E. Effective typestate verification in the presence of aliasing. In ISSTA '06: Proceedings of the 2006 International Symposium on Software Testing and Analysis (2006), ACM, pp. 133–144. ACM SIGSOFT Best Paper Award.
[32] Gabel, M., Jiang, L., and Su, Z. Scalable detection of semantic clones. In ICSE '08, pp. 321–330.
[33] Gabel, M., and Su, Z. Javert: fully automatic mining of general temporal properties from dynamic traces. In FSE '08.
[34] GitHub code search. https://github.com/search.
[35] Gruska, N., Wasylkowski, A., and Zeller, A. Learning from 6,000 projects: Lightweight cross-project anomaly detection. In ISSTA '10.
[36] Hawblitzel, C., Lahiri, S. K., Pawar, K., Hashmi, H., Gokbulut, S., Fernando, L., Detlefs, D., and Wadsworth, S. Will you still compile me tomorrow? Static cross-version compiler validation. In ESEC/FSE '13, Saint Petersburg, Russian Federation, August 18–26, 2013 (2013), pp. 191–201.
[37] Holmes, R., and Murphy, G. C. Using structural context to recommend source code examples. In ICSE '05.
[38] Holmes, R., Walker, R. J., and Murphy, G. C. Strathcona example recommendation tool. In FSE '05, pp. 237–240.
[39] Horwitz, S., Reps, T., and Binkley, D. Interprocedural slicing using dependence graphs. In PLDI '88 (1988).
[40] Jang, J., Woo, M., and Brumley, D. Towards automatic software lineage inference. In USENIX Security (2013).
[41] Jiang, L., Misherghi, G., Su, Z., and Glondu, S. Deckard: Scalable and accurate tree-based detection of code clones. IEEE Computer Society, pp. 96–105.
[42] Kamiya, T., Kusumoto, S., and Inoue, K. CCFinder: a multilinguistic token-based code clone detection system for large scale source code. IEEE Trans. Softw. Eng. 28, 7 (2002).
[43] Katz, O., El-Yaniv, R., and Yahav, E. Estimating types in binaries using predictive modeling. In POPL '16: Proceedings of the ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (2016).
[44] Katz, S. M. Estimation of probabilities from sparse data for the language model component of a speech recognizer. IEEE Transactions on Acoustics, Speech and Signal Processing 35, 3 (1987), 400–401.
[45] Khoo, W. M., Mycroft, A., and Anderson, R. Rendezvous: a search engine for binary code. In MSR '13.
[46] Kim, J., Lee, S., Won Hwang, S., and Kim, S. Towards an intelligent code search engine. In AAAI '10.
[47] Koders. http://www.koders.com/.
[48] Komondoor, R., and Horwitz, S. Using slicing to identify duplication in source code. In SAS '01, pp. 40–56.
[49] Krinke, J. Identifying similar code with program dependence graphs. In WCRE (2001), pp. 301–309.
[50] Kruegel, C., Kirda, E., Mutz, D., Robertson, W., and Vigna, G. Polymorphic worm detection using structural information of executables. In Proc. of Int. Conf. on Recent Advances in Intrusion Detection, RAID '05.
[51] Livieri, S., Higo, Y., Matushita, M., and Inoue, K. Very-large scale code clone analysis and visualization of open source programs using distributed CCFinder: D-CCFinder. In ICSE '07.
[52] Lo, D., and Khoo, S.-C. SMArTIC: towards building an accurate, robust and scalable specification miner. In FSE '06.
[53] Mandelin, D., Xu, L., Bodik, R., and Kimelman, D. Jungloid mining: helping to navigate the API jungle. In PLDI '05, pp. 48–61.
[54] Mishne, A. Typestate-based semantic code search over partial programs. Master's thesis, Technion – Israel Institute of Technology, Haifa, Israel, 2012.
[55] Mishne, A., Shoham, S., and Yahav, E. Typestate-based semantic code search over partial programs. In OOPSLA '12: Proceedings of the 27th ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages and Applications (2012).
[56] Monperrus, M., Bruch, M., and Mezini, M. Detecting missing method calls in object-oriented software. In ECOOP '10 (2010), vol. 6183 of LNCS, pp. 2–25.
[57] Myles, G., and Collberg, C. K-gram based software birthmarks. In Proceedings of the 2005 ACM Symposium on Applied Computing, SAC '05, pp. 314–318.
[58] Partush, N., and Yahav, E. Abstract semantic differencing for numerical programs. In SAS '13: The 20th International Static Analysis Symposium (2013), pp. 238–258.
[59] Partush, N., and Yahav, E. Abstract semantic differencing for numerical programs. In SAS (2013).
[60] Partush, N., and Yahav, E. Abstract semantic differencing for numerical programs. In SAS (2013), pp. 238–258.
[61] Partush, N., and Yahav, E. Abstract semantic differencing via speculative correlation. In OOPSLA '14: Proceedings of the ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages and Applications (2014).
[62] Partush, N., and Yahav, E. Abstract semantic differencing via speculative correlation. In Proceedings of the 2014 ACM International Conference on Object Oriented Programming Systems Languages & Applications, OOPSLA 2014, part of SPLASH 2014, Portland, OR, USA, October 20–24, 2014 (2014), pp. 811–828.
[63] Peleg, H., Shoham, S., Yahav, E., and Yang, H. Symbolic automata for representing big code. In STTT '15: International Journal on Software Tools for Technology Transfer (2015).
[64] Ramalingam, G., Field, J., and Tip, F. Aggregate structure identification and its application to program analysis. In Proceedings of the 26th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (New York, NY, USA, 1999), POPL '99, ACM, pp. 119–132.
[65] Ramos, D. A., and Engler, D. R. Practical, low-effort equivalence verification of real code. In CAV (2011), pp. 669–685.
[66] Raychev, V., Vechev, M., and Yahav, E. Code completion with statistical language models. In Proceedings of the 35th ACM SIGPLAN Conference on Programming Language Design and Implementation, PLDI '14.
[67] Reiss, S. P. Semantics-based code search. In ICSE '09.
[68] Reps, T., Balakrishnan, G., and Lim, J. Intermediate-representation recovery from low-level code. In Proceedings of the 2006 ACM SIGPLAN Symposium on Partial Evaluation and Semantics-based Program Manipulation (New York, NY, USA, 2006), PEPM '06, ACM, pp. 100–111.
[69] Reps, T., Ball, T., Das, M., and Larus, J. The use of program profiling for software maintenance with applications to the year 2000 problem. In ESEC '97/FSE-5.
[70] Reps, T., Lim, J., Thakur, A., Balakrishnan, G., and Lal, A. There's plenty of room at the bottom: Analyzing and verifying machine code. In Computer Aided Verification, T. Touili, B. Cook, and P. Jackson, Eds., vol. 6174 of Lecture Notes in Computer Science. Springer, 2010, pp. 41–56.
[71] Rosenblum, N., Zhu, X., and Miller, B. P. Who wrote this code? Identifying the authors of program binaries. In ESORICS '11.
[72] Rosenblum, N. E., Miller, B. P., and Zhu, X. Extracting compiler provenance from program binaries. In PASTE '10.
[73] Rosenfeld, R. Two decades of statistical language modeling: Where do we go from here? In Proceedings of the IEEE (2000), vol. 88, pp. 1270–1278.
[74] Sabanal, P. V., and Yason, M. V. Reversing C++. https://www.blackhat.com/presentations/bh-dc-07/Sabanal_Yason/Paper/bh-dc-07-Sabanal_Yason-WP.pdf.
[75] Saebjornsen, A., Willcock, J., Panas, T., Quinlan, D., and Su, Z. Detecting code clones in binary executables. In ISSTA '09.
[76] Sæbjørnsen, A., Willcock, J., Panas, T., Quinlan, D. J., and Su, Z. Detecting code clones in binary executables. In Proceedings of the Eighteenth International Symposium on Software Testing and Analysis, ISSTA 2009, Chicago, IL, USA, July 19–23, 2009 (2009), pp. 117–128.
[77] Sahavechaphan, N., and Claypool, K. XSnippet: mining for sample code. In OOPSLA '06.
[78] Schkufza, E., Sharma, R., and Aiken, A. Stochastic superoptimization. In ASPLOS '13.
[79] Sharma, R., Schkufza, E., Churchill, B., and Aiken, A. Data-driven equivalence checking. In OOPSLA '13.
[80] Sharma, R., Schkufza, E., Churchill, B. R., and Aiken, A. Data-driven equivalence checking. In OOPSLA (2013), pp. 391–406.
[81] Shoham, S., Yahav, E., Fink, S., and Pistoia, M. Static specification mining using automata-based abstractions. In ISSTA '07.
[82] Sinai, M. B., and Yahav, E. Code similarity via natural language descriptions. In OBT '15: POPL Off the Beaten Track (2014).
[83] Singh, R., Gulwani, S., and Solar-Lezama, A. Automated feedback generation for introductory programming assignments. In PLDI '13, pp. 15–26.
[84] Solar-Lezama, A., Rabbah, R., Bodík, R., and Ebcioğlu, K. Programming by sketching for bit-streaming programs. In PLDI '05.
[85] Stack Overflow. http://stackoverflow.com/.
[86] Strom, R. E., and Yemini, S. Typestate: A programming language concept for enhancing software reliability. IEEE Trans. Software Eng. 12, 1 (1986), 157–171.
[87] Thummalapenta, S., and Xie, T. PARSEWeb: a programmer assistant for reusing open source code on the web. In ASE '07, pp. 204–213.
[88] Wahler, V., Seipel, D., Wolff, J., and Fischer, G. Clone detection in source code by frequent itemset techniques. In Source Code Analysis and Manipulation (2004).
[89] Wasylkowski, A., and Zeller, A. Mining temporal specifications from object usage. In Autom. Softw. Eng. (2011), vol. 18.
[90] Wasylkowski, A., Zeller, A., and Lindig, C. Detecting object usage anomalies. In FSE '07, pp. 35–44.
[91] Weimer, W., and Necula, G. Mining temporal specifications for error detection. In TACAS (2005).
[92] Whaley, J., Martin, M. C., and Lam, M. S. Automatic extraction of object-oriented component interfaces. In ISSTA '02.
[93] Yang, J., Evans, D., Bhardwaj, D., Bhat, T., and Das, M. Perracotta: mining temporal API rules from imperfect traces. In ICSE '06, pp. 282–291.
[94] Zhong, H., Xie, T., Zhang, L., Pei, J., and Mei, H. MAPO: Mining and recommending API usage patterns. In ECOOP '09.
The author(s) of this publication is/are solely responsible for its content. This publication does not reflect the opinion of the publisher. The publisher cannot be held liable for any loss or damage that may occur because of this publication.
Dependable Software Systems Engineering J. Esparza et al. (Eds.) IOS Press, 2016 © 2016 The authors and IOS Press. All rights reserved.
Subject Index
active learning 137
assume-guarantee reasoning 117
authentication 33
automata over infinite words and trees 72
big code 244
bounded synthesis 72
bug finding 99
Church's problem 72
concurrent software 159
coordination logic 72
data wrangling 137
decidability 202
deductive synthesis 137
discrete-time Markov chains 1
end-user programming 137
energy games 179
equivalence 202
GR(1) 72
incomplete information 72
inductive synthesis 137
learning 117
learning-based compositional verification 117
log files 137
mean-payoff games 179
model checking 1, 117, 159
parameterized verification 59
probabilistic computation tree logic 1
program analysis 24, 159, 244
program synthesis 137, 244
programming by examples 137
reactive systems 72, 179
S1S 72
SAT 24
SAT-based synthesis 72
secrecy 33
security protocol 33
semi-structured data 137
SMT 24
software model checking 99
software verification 159
spreadsheets 137
state machines 117
strategy logic 72
symbolic synthesis 72
synthesis 72, 179
synthesis of distributed systems 72
systematic testing 99
temporal logic 72
theorem proving 24
tree-to-string transducers 202
tree-to-tree transducers 202
two-player games played on graphs 179
UML behavioral systems 117
verification 33, 99
version space algebras 137
XML transformations 202
Author Index
Baier, C. 1
Bjørner, N. 24
Cortier, V. 33
Engelfriet, J. 202
Esparza, J. 59
Finkbeiner, B. 72
Godefroid, P. 99
Grumberg, O. 117
Gulwani, S. 137
Kemper, G. 202
Kremer, S. 33
Kroening, D. 159
Maneth, S. 202
Meller, Y. 117
Raskin, J.-F. 179
Seidl, H. 202
Yahav, E. 244