Concise Guide to Software Verification: From Model Checking to Annotation Checking (1st ed.)
ISBN: 3031301668, 9783031301667, 9783031301674, 9783031301698

This textbook overviews the whole spectrum of formal methods and techniques that are aimed at verifying the correctness of software.


English · 257 [251] pages · 2023


Table of contents :
Contents
1 Introduction
2 First-Order Logic and Set Theory
2.1 Booleans
2.2 Propositional Logic
2.3 First-Order Logic
2.4 Set Theory
2.5 Functions and Relations
3 System Modelling
3.1 History and Background
3.2 Finite State Machines
3.3 Finite State Machines in NUSMV
3.3.1 Defining NUSMV Modules
3.3.2 Composition of Multiple Modules
3.4 Finite State Machines in PROMELA
3.4.1 Defining PROMELA Processes
3.4.2 Composition of Multiple Processes
3.5 Kripke Structures
3.6 A Modelling Case Study—An Elevator System
3.7 Further Reading
4 Functional System Properties in Temporal Logic
4.1 History and Background
4.2 Safety Versus Liveness Properties
4.3 Kripke Structures
4.4 Linear Temporal Logic—Syntax and Semantics
4.5 Fairness
4.6 Fairness as an LTL Formula
4.7 Computation Tree Logic—Syntax and Semantics
4.8 Fairness as a CTL Formula?
4.9 Comparison of LTL and CTL
4.10 Further Reading
5 Model Checking Algorithms
5.1 CTL Model Checking in NUSMV
5.2 CTL Model Checking in NUSMV Under Fairness
5.3 Tableau-Based LTL Model Checking in NUSMV
5.4 Checking Safety Properties in SPIN
5.5 Automata-Based LTL Model Checking in SPIN
5.6 Further Reading
6 Analysing Software
6.1 The Relation Between Code and Model
6.1.1 Encoding a Program in NUSMV
6.1.2 Challenges for Software Model Checking
6.1.3 Software Analysis Approaches
6.2 Runtime Monitoring of Software
6.2.1 Implementation of LARVA
6.2.2 Monitor Specifications in LARVA
6.2.3 From Temporal Logic Formula to LARVA Automata
6.3 Bounded Model Checking Using CBMC
6.3.1 SAT-Based Bounded Model Checking of Safety Properties
6.3.2 Unrolling the Transition Relation in CBMC
6.3.3 Properties in CBMC
6.3.4 Modular Verification with CBMC
6.3.5 Further Reading
6.4 Bounded Symbolic Execution Using CIVL
6.4.1 Symbolic Execution
6.4.2 Bounded Symbolic Execution
6.4.3 CIVL Details
6.5 Counter-Example-Guided Abstraction-Refinement
6.5.1 Abstraction
6.5.2 Repeated Abstraction and Refinement with CEGAR
6.5.3 Non-Termination of CEGAR
6.5.4 Other Approaches Using Abstraction and Refinement
6.6 Automatic Test Suite Generation Using CBMC
6.6.1 Further Reading
7 Design by Contract Specification Languages
7.1 History and Background
7.2 Function Contracts
7.2.1 Ingredients of a Function Contract
7.2.2 Behaviours
7.2.3 Various Details on Function Contracts
7.3 Data Specifications
7.4 Multiple Function Behaviours
7.5 Inheritance of Method Specifications in JML
7.6 Specifying Exceptional Behaviour in JML
7.7 Conclusions
8 Abstract Specifications
8.1 Using Functions in Contracts
8.2 Model Variables
8.2.1 Model Variables and Interfaces
8.2.2 Model Variables for Mathematical Abstraction
8.3 Ghost Variables
8.4 Model Versus Ghost Variables
9 Runtime Annotation Checking
9.1 History and Background
9.2 Manually Validating Specifications
9.3 Requirements for a Runtime Annotation Checker
9.4 Executing a Runtime Annotation Checker
9.5 Monitoring Behavioural Properties
9.6 Further Reading
10 Static Annotation Checking
10.1 History and Background
10.2 Hoare Logic and Weakest Preconditions
10.3 Reasoning About Function Calls
10.4 Statement Annotations—Helping the Verifier
10.5 Termination
10.6 Further Reading
Appendix References
Appendix Index


Texts in Computer Science

Marieke Huisman · Anton Wijs

Concise Guide to Software Verification From Model Checking to Annotation Checking

Texts in Computer Science Series Editor Orit Hazzan , Faculty of Education in Technology and Science, Technion—Israel Institute of Technology, Haifa, Israel

Titles in this series now included in the Thomson Reuters Book Citation Index! ‘Texts in Computer Science’ (TCS) delivers high-quality instructional content for undergraduates and graduates in all areas of computing and information science, with a strong emphasis on core foundational and theoretical material but inclusive of some prominent applications-related content. TCS books should be reasonably self-contained and aim to provide students with modern and clear accounts of topics ranging across the computing curriculum. As a result, the books are ideal for semester courses or for individual self-study in cases where people need to expand their knowledge. All texts are authored by established experts in their fields, reviewed internally and by the series editors, and provide numerous examples, problems, and other pedagogical tools; many contain fully worked solutions. The TCS series is comprised of high-quality, self-contained books that have broad and comprehensive coverage and are generally in hardback format and sometimes contain color. For undergraduate textbooks that are likely to be more brief and modular in their approach, require only black and white, and are under 275 pages, Springer offers the flexibly designed Undergraduate Topics in Computer Science series, to which we refer potential authors.


Marieke Huisman Faculty EEMCS, Formal Methods and Tools University of Twente Enschede, The Netherlands

Anton Wijs Software Engineering and Technology Technische Universiteit Eindhoven Eindhoven, The Netherlands

ISSN 1868-0941 ISSN 1868-095X (electronic) Texts in Computer Science ISBN 978-3-031-30166-7 ISBN 978-3-031-30167-4 (eBook) https://doi.org/10.1007/978-3-031-30167-4 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

To Bas and Ria To Merit, Owen, Emily and Willy


1 Introduction

Motivation

Over the last few years, our society has changed substantially and has become rapidly digitalised. For almost every aspect of our daily lives, we now depend on software. The most obvious example is of course our mobile phone, which we use for all our daily activities: shopping, banking, finding directions, and entertainment. But there are also many examples elsewhere: cars have become computers on wheels, airplanes are flying on software, and doctors use robots for their operations. For many of these applications, it is crucial that we can rely on them, that they can be trusted, and that they do not give any unexpected results. For as long as we have been developing software, researchers have been thinking about techniques that can help to provide such guarantees about software. These techniques are generally called formal methods. According to Wikipedia:1

formal methods are mathematically rigorous techniques for the specification, development, and verification of software and hardware systems. The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing the appropriate mathematical analysis can contribute to the reliability and robustness of a design. Formal methods employ a variety of theoretical computer science fundamentals, including logic calculi, formal languages, automata theory, control theory, program semantics, type systems, and type theory.

This book gives an overview of a range of techniques, captured by this term formal methods, that are aimed at the analysis of software, and it describes how these techniques can be used to improve the reliability and robustness of software. Formal methods for the analysis of hardware are not in the scope of this book. When we refer to formal methods in this book, we implicitly rule out techniques aimed specifically at hardware analysis.

1. See https://en.wikipedia.org/wiki/Formal_methods, last visited December 7, 2022.


The earliest formal techniques to provide guarantees about software involved a pen-and-paper approach, but nowadays, all formal techniques are supported by powerful tool implementations. The term formal methods refers to a wide range of different techniques that all have their own characteristics. In particular, formal methods can differ in:

• how easy they are to use: does the use of a formal method require extra work from the user, for example, to specify the (formal) requirements that are being checked, or to steer the technique (or rather, the tool implementing the technique) to find the evidence whether the software fulfils the specified requirements?
• the abstraction level at which the guarantees are provided: does the formal method provide guarantees about an abstract system design, or does it provide guarantees about an actual implementation?
• the level of guarantees that are given by the technique: does the formal method just help to quickly identify potential bugs, or does it provide a full correctness guarantee?
• the kind of properties or requirements that can be established: does the formal method only help to find a fixed class of pre-defined properties, or can a user specify any property of interest and try to establish it?

In this book, our aim is to give an overview of this whole spectrum of formal methods and techniques, and how they can be used in practice. We focus in particular on techniques where the user has some control over the properties that are being checked. We show how we have a wide range of techniques available that cover the whole spectrum from abstract system design to implementation, from bug finding to full proofs, and from techniques that are push-button and give a yes/no answer, to techniques that require the user to provide explicit guidance to steer the analysis process.
Because of the choice to focus in particular on techniques where the user can control which properties are being checked, this book does not discuss in detail fully automated techniques that cover a pre-defined set of properties. These are usually handled with static analysis or techniques based on type checking. Where appropriate, we provide references to those for the interested reader. The focus of this book is in particular on the user's perspective, i.e., what the user can do with different formal methods, and how the user can use the wide spectrum of available techniques to validate different kinds of properties. We think it is important that the user understands how different techniques can provide different guarantees, and how there often is a trade-off between the amount of work that is needed from the user and the level of guarantee that is provided by the formal technique. Moreover, we also want the user to gain some insight into how different techniques can be combined to reason about a system, for example, checking global properties about the overall design of the system, and then validating local properties on parts of the implementation, or using precise and labour-intensive techniques on crucial data structures, while analysing the other implementation components with more lightweight techniques to just search for potential bugs.


For several of the formal techniques that we discuss, we provide a discussion of the basic implementations, and we provide pointers to more detailed accounts of how the techniques are implemented. We discuss these basic implementations because understanding them can help the user to use the technique more efficiently, in particular if the technique requires some steering from the user. Finally, as mentioned above, nowadays practically all formal techniques are supported by one or more tools. In this book, for each of the techniques that we introduce, we also discuss tool support. However, our book is not a tool tutorial: our goal is to explain the important concepts, and we use the tools to illustrate how these concepts are turned into an actual implementation. For a number of techniques, we show how each is supported by multiple tools. We believe that this helps the reader to appreciate the different design choices that can be made when implementing a formal technique, and how these impact the capabilities of the tool.

Structure of the Book

This book is divided into ten chapters, including this introduction chapter. We first describe techniques that can be used to reason at a more abstract level about global system properties. Then we make the step to implementations, and we look at what can be done to guarantee that these abstract global properties indeed hold for a system implementation. Finally, we zoom in on techniques to reason about parts of an implementation with a high level of detail. This content is addressed as follows by the chapters:

Chapter 2 introduces background theory and notation on first-order logic and set theory.

Chapter 3 describes how abstract system descriptions can be formalised using system models. The concept is first introduced at a conceptual level, and then the chapter addresses modelling for the model checking tools NuSMV and Spin.

Chapter 4 introduces how to specify properties about system models using temporal logic. These properties describe how a system is allowed to execute.

Chapter 5 discusses the basic model checking algorithms implemented in NuSMV and Spin that are used to verify whether a system model satisfies a temporal logic formula.

Chapter 6 then makes the step to validating a system implementation. It describes the challenges that have to be dealt with, in particular related to the fact that the amount of time and computer memory needed to verify a system tends to increase exponentially as a system grows linearly. The remainder of the chapter focuses on explaining various formal techniques that mitigate this problem.

Chapter 7 zooms in on how we can write detailed specifications of parts of the program by introducing specification languages based on the notion of design by contract. Using such a language, the behaviour of a small software fragment (e.g., a function or a class) is specified in terms of what is required from the caller, and what guarantees the implementation provides. The chapter introduces two contract specification languages: Jml and Acsl.


Fig. 1.1 Dependencies between the different chapters

Chapter 8 discusses various techniques to make contract specifications more abstract, so that they abstract away from implementation details of the program, but can still be connected to those details.

Chapter 9 shows how contract specifications can be inspected and validated during program execution using runtime assertion checking.

Chapter 10 finally shows how contract specifications can be statically verified. It introduces Hoare logic as a theoretical basis for this, and then explains how this can be put into practice.

Figure 1.1 shows the order in which the different chapters depend on each other. It should be noted that parts of Chap. 6 lightly depend on Chaps. 3 and 4.

Audience

We developed the material in this book as part of our Master's courses System Validation (taught by Marieke Huisman at the University of Twente, The Netherlands) and Program Verification Techniques (taught by Anton Wijs at the Eindhoven University of Technology, The Netherlands). The material was developed for Master's students in Computer Science and Embedded Systems. Students who specialise in directions like software technology form an important target group, but our courses are also of interest to students studying Cyber Security or Data Science technology. We believe that the material in this book is also of interest for system or software developers with a background in Computer Science or Embedded Systems, who are interested in understanding which different options are available to provide more formal guarantees about the software that they are developing. To facilitate the learning process of users of this book, we provide an online resource that contains:

• exercises and solutions for the different chapters; and
• links to tutorials and online documentation for the different tools used in this book.

The online material is available from www.software-verification.org.


Acknowledgements

Many thanks go to Stefan Blom, Ronald Burgman, Roeland Kegel, Wojciech Mostowski, Sophie Lathouwers, and Wolfgang Ahrendt who carefully read through the lecture notes on which this book is based, and provided useful feedback, references, and graphical representations of Kripke structures. Also many thanks to Naum Tomov, who helped to check that all examples provided in the book are accepted by the current version of the tools, and to Jacco Brandt who spotted some mistakes in the examples. The work on this book has been financially supported by NWO grant 639.023.710 for the Mercedes project and by NWO TTW grant 17249 for the ChEOPS project.

2 First-Order Logic and Set Theory

This chapter briefly recapitulates the main ingredients of Booleans, propositional logic, first-order logic, set theory, functions, and relations. The main purpose of this chapter is to provide a quick reference for formal notation used in later chapters; for more detailed information, examples, and intuition, we refer the reader to textbooks on this topic, such as [154].

2.1 Booleans

The term Boolean is used to denote a special type, which has only two values: true and false. Booleans are used to denote a truth value, i.e., whether a property (or condition) holds or not. The name Boolean refers to the English mathematician George Boole, see Fig. 2.1.

2.2 Propositional Logic

Formulae in propositional logic consist of atomic propositions, and logical connectives between these atomic propositions. Basically, an atomic proposition is a simple statement which can be either true or false, i.e., it has a Boolean value. The logical connectives between the atomic propositions build up the formulae, stating, e.g., that both atomic propositions have to hold, or that if one proposition is true, this implies that the other proposition also has to be true, etc.

Definition 2.1 (Propositional Logic) Let p be an atomic proposition. A formula φ in propositional logic is constructed by the following grammar:

φ ::= p | ¬φ | φ ∧ φ


Fig. 2.1 George Boole (1815–1864)

The logical connective ¬ is called negation, while ∧ is called conjunction (or and). A formula ¬φ is true if φ is false. A formula φ1 ∧ φ2 is true if both φ1 and φ2 are true. Other logical connectives can be derived from these basic connectives. Well-known examples are disjunction (or or), denoted ∨, implication, denoted ⇒, and if-and-only-if, denoted ⇔.

φ1 ∨ φ2 =def ¬(¬φ1 ∧ ¬φ2)
φ1 ⇒ φ2 =def ¬φ1 ∨ φ2
φ1 ⇔ φ2 =def (φ1 ⇒ φ2) ∧ (φ2 ⇒ φ1)

Validity of a formula is defined in terms of the validity of its constituents. We assume that we have a truth assignment A, which is a function that maps atomic propositions to true or false. A propositional formula p is true if A maps p to true. A formula p ∧ q is true if p is true and q is true, while, for the formula p ∨ q to be true, it is sufficient if only p or q is true. Notice that a formula p ⇒ q is true if either (1) both p and q are true, or (2) p is not true. Thus, in particular, the formula false ⇒ p is always true! A formula p ⇔ q is true if either (1) p and q are both true, or (2) p and q are both not true. Formally, validity of a formula in propositional logic is defined as follows.

Definition 2.2 (Validity of Propositional Logic) Let A be a truth assignment for all atomic propositions in a formula. Validity of a formula φ w.r.t. a truth assignment A, written VA(φ), is defined as follows.

VA(p) if and only if A(p) = true
VA(¬φ) if and only if not VA(φ)
VA(φ1 ∧ φ2) if and only if VA(φ1) and VA(φ2)

Validity of the derived logical connectives can be derived from their definitions.

2.3 First-Order Logic

It is important to be aware of the direction of reasoning in an implication: if the condition p holds, this implies that q holds. However, if we know that q holds, this does not allow us to derive anything about p. As an example, I can state: “if I eat too much candy, then I get sick”. However, if I am sick, this does not necessarily mean that I ate too much candy, as there can be many other reasons for getting sick.
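This direction of reasoning can be checked mechanically by enumerating the truth table. The sketch below is illustrative only (the helper `implies` is ours, not from the book); it confirms that a false premise makes any implication true, and that the truth of q is compatible with either value of p:

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    # p ⇒ q is defined as ¬p ∨ q (Sect. 2.2).
    return (not p) or q

# Enumerate the full truth table of p ⇒ q.
for p, q in product([False, True], repeat=2):
    print(f"p={p!s:5} q={q!s:5}  p⇒q={implies(p, q)}")

# false ⇒ p holds for every p: a false premise validates the implication.
assert implies(False, False) and implies(False, True)

# Knowing q does not determine p: both assignments below satisfy p ⇒ q.
assert implies(True, True) and implies(False, True)
```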

2.3 First-Order Logic

First-order logic, also known as predicate logic, is an extension of propositional logic with predicates and quantification. A predicate is a function that, given one or more parameters, returns a Boolean value, i.e., either true or false. For example, given a type of coloured points, we can define a predicate isBlue, such that isBlue(c) is true if and only if the colour of the coloured point c is blue. Formulas in first-order logic can range over all possible parameters of a predicate, allowing one to express, for example, that a predicate is true (holds) for all possible parameters, or that there exists a parameter for which the predicate is true. Formally, the syntax of first-order logic is based on propositional logic, with atomic propositions replaced by parametrised predicates, and extended with universal quantification ∀:

Definition 2.3 (First-Order Logic) Let P be a (parametrised) predicate, and let x, x1, ..., xn be variables. A formula φ in first-order logic is defined by the following grammar:

φ ::= P(x1, ..., xn) | ¬φ | φ ∧ φ | ∀x.φ

A universally quantified formula ∀x.P(x) holds if, for all possible values of x, the predicate P returns true. Existential quantification ∃ can be derived from universal quantification:

∃x.φ =def ¬(∀x.¬φ)

Thus, ∃x.P(x) holds if there is some value x for which P holds. The other logical connectives from propositional logic can be defined in the usual way. In practice, one often wishes to provide a range for the variables over which the formula quantifies. Therefore, as shortcuts, we introduce the following syntactical abbreviations:

∀x.A(x).B(x) =def ∀x.A(x) ⇒ B(x)
∃x.A(x).B(x) =def ∃x.A(x) ∧ B(x)

Notice the difference between the full formulas: with universal quantification, the range should be interpreted as an implication, i.e., if x is in the range described by A(x), then B(x) has to hold, whereas, with existential quantification, the range should be interpreted as a conjunction, i.e., we need a variable x that is both within the range A(x) and satisfies the predicate B(x).


As a side remark, a generalisation of first-order logic is higher-order logic. In higher-order logic, predicates can be defined, and quantifiers can range over functions. This not only gives extra expressiveness but also makes reasoning more complex. We do not use higher-order logic in this book.

Substitution Given a formula in first-order logic, substitution is used to replace all occurrences of a particular variable (the to-be-substituted variable) that are not bound by a quantifier by some expression. Substitution is a purely syntactic transformation. To define substitution of any occurrence of x by an expression E in a first-order logic formula, we first define substitution of x by E for a variable y.

Definition 2.4 (Substitution of a variable) Given variables x and y and expression E, substitution in variable y, written y[x := E], is defined as follows:

y[x := E] = if x = y then E else y

Now, substitution for a formula in first-order logic is defined as follows.

Definition 2.5 (Substitution) Given a formula in first-order logic φ, a variable x and an expression E, substitution of x by E (written φ[x := E]) is defined recursively as follows:

P(x1, ..., xn)[x := E] = P(x1[x := E], ..., xn[x := E])
(¬φ)[x := E]           = ¬(φ[x := E])
(φ1 ∧ φ2)[x := E]      = (φ1[x := E]) ∧ (φ2[x := E])
(∀y.φ)[x := E]         = if x = y then (∀y.φ) else ∀y.(φ[x := E])

Thus, if a variable is bound by a quantifier, its occurrences will not be replaced by the expression E; only free (unbound) occurrences of x are replaced.
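Definition 2.5 translates directly into a recursive program. The sketch below is ours, not the book's; in particular, the tuple-based formula representation is an assumption made for illustration:

```python
# A tiny first-order-logic AST (hypothetical encoding): formulas are tuples
#   ('pred', name, args), ('not', f), ('and', f, g), ('forall', x, f)
# where variables and expressions are represented by strings.

def subst(phi, x, e):
    """phi[x := e]: replace free occurrences of variable x by expression e."""
    tag = phi[0]
    if tag == 'pred':
        _, name, args = phi
        # substitute in every argument position (Definition 2.4 per variable)
        return ('pred', name, [e if a == x else a for a in args])
    if tag == 'not':
        return ('not', subst(phi[1], x, e))
    if tag == 'and':
        return ('and', subst(phi[1], x, e), subst(phi[2], x, e))
    if tag == 'forall':
        _, y, body = phi
        # a quantifier binding x shields its occurrences:
        # (forall x. phi)[x := e] = forall x. phi
        return phi if y == x else ('forall', y, subst(body, x, e))
    raise ValueError(f'unknown formula tag: {tag}')

# Free occurrences are replaced, bound ones are not:
assert subst(('forall', 'y', ('pred', 'P', ['x'])), 'x', 'E') == \
       ('forall', 'y', ('pred', 'P', ['E']))
assert subst(('forall', 'x', ('pred', 'P', ['x'])), 'x', 'E') == \
       ('forall', 'x', ('pred', 'P', ['x']))
```

The last case of `subst` is exactly the side condition of Definition 2.5: substitution stops at a quantifier that binds the substituted variable.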

2.4 Set Theory

A set is an unordered collection of objects. The elements of a set are often written as a sequence, surrounded by curly brackets, e.g., the set with elements 1 and 2 is written as {1, 2}. Given a set S, we write x ∈ S to denote that x is an element of the set S, i.e., x is a member of S. If all elements of a set S1 are also elements of a set S2, then S1 is said to be a subset of S2, denoted S1 ⊆ S2. Formally:

S1 ⊆ S2 =def ∀x.x ∈ S1.x ∈ S2

The empty set ∅ is the set without any elements. The union of sets S1 and S2 , denoted S1 ∪ S2 , is the set of all elements that are members of S1 , or S2 , or both.


The intersection of sets S1 and S2, denoted S1 ∩ S2, is the set of all elements that are members of both S1 and S2. The difference of sets S1 and S2, denoted S1 \ S2, is the set of all elements that are members of S1 and not of S2. Formally:

∀x.(x ∈ ∅ ⇔ false)
∀x.(x ∈ S1 ∪ S2 ⇔ x ∈ S1 ∨ x ∈ S2)
∀x.(x ∈ S1 ∩ S2 ⇔ x ∈ S1 ∧ x ∈ S2)
∀x.(x ∈ S1 \ S2 ⇔ x ∈ S1 ∧ x ∉ S2)

The Cartesian product of two sets S1 and S2, written S1 × S2, is a set of tuples, containing all possible combinations of members of S1 and S2:

S1 × S2 = {(x, y) | x ∈ S1 ∧ y ∈ S2}

The powerset of a set S is the set containing all the possible subsets of S. For example, the powerset of {1, 2} is {∅, {1}, {2}, {1, 2}}. Sometimes, the powerset of a set S is written as 2^S. For convenience, we sometimes write ∀x ∈ S.P(x) to abbreviate ∀x.x ∈ S.P(x) (for all elements x in the set S, the property P(x) is true), and ∃x ∈ S.Q(x) to abbreviate ∃x.x ∈ S.Q(x) (there exists an x in the set S, such that Q(x) is true).
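Python's built-in sets mirror these operations directly, and the powerset can be computed with itertools. The following sketch (an illustration, not from the book) checks the examples above:

```python
from itertools import chain, combinations

S1, S2 = {1, 2}, {2, 3}

union        = S1 | S2                           # {1, 2, 3}
intersection = S1 & S2                           # {2}
difference   = S1 - S2                           # {1}
cartesian    = {(x, y) for x in S1 for y in S2}  # S1 x S2: four pairs

def powerset(s):
    """All subsets of s (as frozensets); 2^S has 2**|S| elements."""
    items = list(s)
    return {frozenset(c) for c in chain.from_iterable(
            combinations(items, r) for r in range(len(items) + 1))}

# The powerset of {1, 2} from the text: {emptyset, {1}, {2}, {1, 2}}
assert powerset({1, 2}) == {frozenset(), frozenset({1}),
                            frozenset({2}), frozenset({1, 2})}
```

Note that subsets are returned as frozensets, since ordinary mutable sets cannot themselves be members of a set in Python.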

2.5 Functions and Relations

A function is a mapping from a set of elements into a (possibly different) set of elements. If a function f maps elements in the set S1 to elements in the set S2, we say that the type of f is from S1 to S2, denoted f : S1 → S2. S1 is called the domain of function f, and S2 the range of f. Sometimes, we use ↦ to denote this mapping per element. For example, given the set {1, 2, 3}, we can define the constant function mapping all elements to 1 as follows:

toOne : { 1 ↦ 1, 2 ↦ 1, 3 ↦ 1 }

Such a function definition is often written as follows: toOne(x) = 1. Other simple functions would be, for example:

f : Z → Z    f(x) = x + 1
g : Z → Q    g(x) = 1/x

Here, Z is the set of all integers, i.e., {..., −1, 0, 1, ...}, and Q is the set of all rational numbers, i.e., {x/y | x, y ∈ Z ∧ y ≠ 0}. With N, we refer to the set of all natural numbers, i.e., {0, 1, 2, ...}, and B denotes the set of all Booleans, i.e., {false, true}.


A function is partial if it is defined only for a subset of its domain; otherwise, it is total. In the example above, f is total, while g is partial (as it is not defined for 0). A relation R between two sets S1 and S2 is a subset of their Cartesian product S1 × S2, often written as R ⊆ S1 × S2. Given two elements x ∈ S1 and y ∈ S2 that are related by R, i.e., (x, y) ∈ R, this can be written using infix notation: x R y. Well-known examples of relations are equality (=) between any two objects, or ordering relations such as ≤ between two numbers. A relation can also be viewed as a function mapping the Cartesian product S1 × S2 to the set of Booleans {true, false}. If we do this, we would say, for example, that the function ≤ maps the pair (3, 4) to true, and the pair (4, 3) to false. A relation is total if all elements in S1 and S2 are related. A relation R is left-total if for all elements in S1 there is a related element in S2, i.e., ∀x.x ∈ S1.∃y ∈ S2.x R y. A relation R is right-total if for all elements in S2 there is a related element in S1, i.e., ∀y.y ∈ S2.∃x ∈ S1.x R y. The reflexive, transitive closure of a relation R, denoted by R*, is the smallest relation containing R that is both reflexive and transitive. A relation R′ ⊆ S1 × S2 is reflexive if and only if it relates every element to itself, i.e., for all x ∈ S1, we have x R′ x. This implies that S1 ⊆ S2. A relation R′ ⊆ S1 × S2 is transitive if and only if x R′ y and y R′ z implies x R′ z.
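The reflexive, transitive closure can be computed by saturation: start from R together with all pairs (x, x), and keep adding (x, z) whenever (x, y) and (y, z) are present, until nothing changes. A small Python sketch (ours, assuming for simplicity a relation on a single finite set S):

```python
def rt_closure(R, S):
    """Reflexive, transitive closure R* of relation R over the finite set S:
    the smallest relation containing R that is reflexive and transitive."""
    closure = set(R) | {(x, x) for x in S}   # make it reflexive
    changed = True
    while changed:                           # saturate for transitivity
        changed = False
        for (x, y) in list(closure):
            for (y2, z) in list(closure):
                if y == y2 and (x, z) not in closure:
                    closure.add((x, z))
                    changed = True
    return closure

# Example: 1 R 2 and 2 R 3, so 1 R* 3, and every x satisfies x R* x.
C = rt_closure({(1, 2), (2, 3)}, {1, 2, 3})
assert (1, 3) in C and (1, 1) in C
```

This naive fixed-point loop is cubic in the size of S, which is fine for illustration; model checkers use far more efficient representations for such closures.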

3 System Modelling

This chapter discusses how to develop a formal model of a system that can be used for model checking. The basic idea of model checking is the following: (i) you define a model that describes the behaviour of your system, (ii) you specify the properties that your system is supposed to satisfy, and (iii) you use a model checking tool to analyse all possible behaviour of the model to verify whether those properties are actually satisfied by the model. The models that we consider here are abstract descriptions of actual software, meaning that the models typically address the functionality of the software in a simplified way, with certain details of the software being ignored. These models can therefore be considered as high-level designs of actual software systems. In a software development workflow, such a model may be used as a basis, and the software may result from incrementally refining the model into actual code. Alternatively, such a model may be the result of analysing already existing software, for instance, by using a model learning algorithm [158]. In Chap. 6, we provide more ideas about the link between software implementation and system models.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. M. Huisman and A. Wijs, Concise Guide to Software Verification, Texts in Computer Science, https://doi.org/10.1007/978-3-031-30167-4_3

3.1 History and Background

The idea of model checking was formulated in the early 1980s independently by Ed Clarke and E. Allen Emerson, and by Jean-Pierre Queille and Joseph Sifakis. Clarke, Emerson, and Sifakis (see Fig. 3.1) received the Turing Award for this idea in 2007. Model checking emerged initially as a technique to check properties of hardware. However, over time, model checking has been


Fig. 3.1 Ed Clarke, E. Allen Emerson, and Joseph Sifakis

increasingly targeting software as well (and besides, the borders between hardware and software are becoming less and less clear).

To perform model checking, various model checking algorithms have been developed. See, for instance, [44] for an overview. In this book, we discuss the main idea behind a number of these algorithms, but do not address their actual implementation. For instance, we do not discuss data structures to compactly store sets of elements.

Various model checking tools are available, such as Spin [100], Cadp [81], LTSmin [28], mCRL2 [35,89], NuSMV [42], Fdr [84], Pat [127], and Tlc [120]. In this book, we focus on Spin and NuSMV. Both are well-developed and well-maintained model checkers with a long development history.

Spin was developed at Bell Labs in the Unix group of the Computing Sciences Research Center by Gerard Holzmann. Its development started in 1980, and its first public release was in 1991. In 2002, it was awarded the prestigious ACM System Software Award. The first version of NuSMV was released in 1999 [43], and was based on the tool SMV [131], which stemmed from the early 1990s. NuSMV has been developed in a joint project of the Embedded Systems Unit in the Center for Information and Communication Technology at the Bruno Kessler Foundation in Trento, Italy, the Model Checking group at Carnegie Mellon University, USA, and the Mechanized Reasoning Groups at the Universities of Genova and Trento in Italy.

Both Spin and NuSMV can simulate models, i.e., explore individual possible executions of models, and explore all possible executions of models to verify properties, i.e., prove whether or not a property is satisfied by a model. Both have the important advantage that properties can be expressed in multiple ways. Why that is an advantage is addressed in Chap. 4; depending on the property, a suitable method to express it can be selected. On the other hand, the modelling languages of Spin and NuSMV are quite different, making it interesting to consider both in this book.
Moreover, “under the hood”, Spin and NuSMV use very different approaches to model checking, called explicit-state and symbolic model checking, respectively.


In Chap. 5, when we discuss model checking algorithms, we address this in more detail. The current chapter discusses how a model to describe a system can be defined, and the next chapter discusses how the intended properties can be described. We first discuss how a model can be described in an abstract way, introducing the notion of Finite State Machine. After that, we explain how models can concretely be described in both the NuSMV language and Promela, the modelling language of Spin. Next, we address the formal semantics of models, which can be defined by means of Kripke structures. Finally, we present and discuss a modelling case study, to apply the concepts introduced in this chapter.

3.2 Finite State Machines

A common way to provide a high-level description of a system is by means of a Finite State Machine (FSM). Such a machine describes a system using a finite number of states, and transitions between these states. At any time, exactly one of these states is designated as the current state, i.e., the state of the system at that moment in time. An FSM can move from one state σ to another state σ′ if and only if σ is the current state, there exists a transition from σ to σ′, and a suitable event occurs that matches the label of that transition. As soon as an FSM has moved from a state σ to a state σ′, the new current state of the FSM is σ′. An FSM can be defined as follows.

Definition 3.1 (Finite State Machine) A Finite State Machine is a 4-tuple M = (Σ, σ̂, ⇒, A) where

• Σ is a finite set of states;
• σ̂ ∈ Σ is the initial state;
• A is a set of events; and
• ⇒ ⊆ Σ × A × Σ is a transition relation, such that ⇒ is left-total, i.e., ∀σ ∈ Σ.∃σ′ ∈ Σ, a ∈ A.(σ, a, σ′) ∈ ⇒.

FSMs can be executed. An execution starts in the initial state of an FSM, and after every step in the execution, the current state is updated according to the state reached in that step.

Definition 3.2 (FSM execution) Consider an FSM M = (Σ, σ̂, ⇒, A). An execution of M is a sequence of states π = σ0, σ1, ... such that

• σ0 = σ̂;
• ∀i ∈ N.∃a ∈ A.(σi, a, σi+1) ∈ ⇒.


Fig. 3.2 Finite State Machines for a Modulo 3 Counter

Note that we consider executions to be infinite, that is, the sequences of states are of infinite size. Example 3.1

A simple example of an FSM is a system that describes a modulo 3 counter. Figure 3.2 presents two FSMs for this. In Fig. 3.2a, such a counter has been modelled as a system with three states, named zero, one, and two. State zero is denoted as the initial state by means of the detached incoming arrow. Increment transitions, labelled inc, make the system go from zero to one, from one to two, and from two to zero. In Fig. 3.2b, the option has been added to reset the counter at any moment. Doing so will cause the counter to be set to 0 again. This is represented by the transitions labelled reset.

A sequence of events a0, a1, ... gives rise to FSM executions σ0, σ1, ... in which σ0 = σ̂ and, for all i ∈ N, we have (σi, ai, σi+1) ∈ ⇒. If each sequence of events gives rise to at most one FSM execution, we call the FSM deterministic. Otherwise, the FSM is non-deterministic. Whether or not an FSM is deterministic can be straightforwardly derived, by checking whether any of its states has more than one outgoing transition with the same label. Non-determinism is a crucial concept when creating models, and in particular when applying abstraction. For our simple FSMs, the need for non-determinism may not be apparent, but in Sects. 3.3 and 3.4, and in particular in Chap. 4, its role becomes clearer. In short, we often wish to abstract away certain details about a system, to make it easier for that system to be verified. But when we abstract away details, the control flow of the system may suddenly seem to be non-deterministic. For instance, if a system decides to either perform an event a or an event b based on whether a Boolean variable flag evaluates to true or false, respectively, and


we decide to abstract away this flag variable, then at that level of abstraction, the system seems to non-deterministically decide whether to execute a or b. This means that if we wish to be able to apply abstraction on models written in a certain modelling language, then it is crucial that this language supports non-determinism. NuSMV and Promela both support non-determinism, and hence also our FSMs, which are conceptually modelled by those languages. In Chap. 6, we address abstraction in more detail, in the context of software model checking, in which abstraction is applied automatically. Example 3.2

For the FSM of the modulo 3 counter of Example 3.1, it is possible for executions to start with the state sequence zero, one, zero, one, two, zero. This part of an execution can result from the following sequence of events: inc, reset, inc, inc, inc. Note that the final event could also be reset, therefore multiple event sequences can result in the same execution, but the reverse is not true for this FSM: multiple executions cannot result from the same event sequence. In other words, the FSM is deterministic, which follows from the fact that none of the states have multiple outgoing transitions with the same label.
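The counter FSM and the checks from Examples 3.1 and 3.2 can be encoded directly. The following Python sketch is ours (the representation of Definition 3.1 as plain sets and triples is an assumption made for illustration); it reproduces the modulo 3 counter with reset, the left-totality check, and the determinism check:

```python
# The modulo 3 counter with reset (Fig. 3.2b), as sets and (state, event, state) triples.
states      = {'zero', 'one', 'two'}
initial     = 'zero'
events      = {'inc', 'reset'}
transitions = {('zero', 'inc', 'one'), ('one', 'inc', 'two'), ('two', 'inc', 'zero'),
               ('zero', 'reset', 'zero'), ('one', 'reset', 'zero'), ('two', 'reset', 'zero')}

# Left-totality (Definition 3.1): every state has at least one outgoing transition.
assert all(any(src == s for (src, _, _) in transitions) for s in states)

def run(event_seq):
    """The execution prefix induced by a sequence of events (unique here)."""
    current, trace = initial, [initial]
    for a in event_seq:
        current = next(t for (s, e, t) in transitions if s == current and e == a)
        trace.append(current)
    return trace

# Determinism: no state has two outgoing transitions with the same label.
deterministic = all(
    len({t for (s, e, t) in transitions if (s, e) == (src, ev)}) <= 1
    for src in states for ev in events)

# The event sequence from Example 3.2 yields the stated state sequence.
assert run(['inc', 'reset', 'inc', 'inc', 'inc']) == \
       ['zero', 'one', 'zero', 'one', 'two', 'zero']
assert deterministic
```

Note that `run` returns only a finite prefix; actual FSM executions are infinite, as remarked above.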

In this section, we introduced FSMs to model systems, but realistic systems can have many states, causing their FSMs to grow rapidly in size. Large FSMs can be very difficult to understand. Therefore, instead of using FSMs directly, we will use two modelling languages: the NuSMV language for the NuSMV model checker, and Promela for the Spin model checker (Spin is short for Simple Promela INterpreter). Both languages can express FSMs, but in a symbolic way, because they support data variables to encode the system states. In Sect. 3.3, we introduce the NuSMV language, while in Sect. 3.4, we focus on Promela.

3.3 Finite State Machines in NUSMV

In this section, we give a short introduction to system modelling in the NuSMV input language. For full details, we refer to the NuSMV tutorial [39] and manual [38]. In Sect. 3.4, we address writing models in Promela. NuSMV is a so-called symbolic model checker, developed and maintained in a joint project by the Embedded Systems Unit at Fondazione Bruno Kessler in Trento, Italy, the Model Checking group at Carnegie Mellon University, USA, the Mechanized Reasoning Group at the University of Genova, Italy, and the Mechanized Reasoning Group at the University of Trento, Italy. NuSMV extends the tool and language SMV [36,47,52,131], which was the first model


Table 3.1 NuSMV primitive variable types for a variable v

Name                                | Declaration          | Values
------------------------------------|----------------------|-----------------------------------
Boolean                             | v: boolean           | {FALSE, TRUE}
Integer                             | v: integer           | [−2^31 + 1, 2^31 − 1]
Integer range                       | v: n..m              | [n, m]
Enumeration                         | v: { a, b, ... }     | { a, b, ... }
Unsigned vector of bits (Booleans)  | v: unsigned word[n]  | [0, 2^n − 1]
Signed vector of bits (Booleans)    | v: signed word[n]    | [−2^(n−1), 2^(n−1) − 1]
Array of type T                     | v: array n..m of T   | Range of T for all m−n+1 elements

checker to compactly store and manipulate sets of states using Binary Decision Diagrams. For more on this data structure, see, for instance, [34].

3.3.1 Defining NUSMV Modules

In NuSMV, a system is modelled as a collection of modules. Every module can be parametrised, to pass on information between different modules. Conceptually, each module can be seen as describing an FSM.

Model state Within a module, variables (of primitive type, or instances of other modules) can be defined (preceded by the keyword VAR). The available primitive types are listed in Table 3.1. There is support for Booleans and integers,¹ and integer ranges such as 0..4. In addition, enumeration types are supported. For example, {ready,busy} defines the enumeration type with the two elements ready and busy. Vectors of bits can also be used. On variables of these types, bit operations can be performed. A bit vector can be either signed or unsigned. When unsigned, the bits in a vector encode integer numbers in the standard way. In a signed vector of bits, the highest bit, when set, indicates that the encoded number is negative. This bit has the value −2^(n−1). Finally, arrays of any of these types are allowed, with indices ranging over the given range n..m. For instance, A : array 0..4 of boolean declares a Boolean array A of size 5, with the elements being indexed from 0 to 4. One can also create arrays of arrays, by which multidimensional arrays can be constructed. If we view a module as an FSM, then the variables in a module together encode the possible states of the corresponding FSM.

¹ Note that the smallest possible value for integers is −2^31 + 1, as opposed to the standard −2^31. This is not a mistake, but a restriction imposed by the implementation of NuSMV.


Example 3.3

Figure 3.3 presents a NuSMV model for the modulo 3 counter with a reset option from Example 3.1. The VAR clause can be found at lines 2–3. At line 3, a variable state is defined that can take the values zero, one, and two, to keep track of the current value of the counter.

Next state function To model the possible state changes (transitions) of a module, it can have an ASSIGN clause that defines the values of the variables in the initial state and in the next states. This is done by defining the functions init and next. There is an alternative way of modelling, which we address later. In the definition of a next function, it is described how the corresponding module can change state, i.e., the transition relation is defined. The different options for this are listed in a case segment, i.e., a block starting with the keyword case and ending with the keyword esac. The options are interpreted from top to bottom when the NuSMV model checker reasons about the possible executions of a model. This means that the first option applicable in the current state will be picked to update the current state. Example 3.4

In Fig. 3.3, at line 5, the initial value of state is defined to be zero. At lines 6–13, the next function is defined, which describes how the module can change state. When the NuSMV model checker reasons about this module, it will first check if the condition at line 7, before the colon (“:”), evaluates to TRUE. This is the case if state has the value zero, and if the given parameter reset

Fig. 3.3 A modulo 3 counter in NuSMV


currently has the value TRUE. If this is the case, state will be set to the value after the colon, in this case zero. Alternatively, if this is not the case, the condition at line 8 is checked. If state has the value zero, state should be set to one. Note the implicit condition that reset has to be FALSE, since the condition at line 7 apparently did not hold. Similar options are given to change state when state has the values one and two at lines 9–12.
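Figure 3.3 itself is not reproduced here. Based on the descriptions in Examples 3.3 and 3.4, the module plausibly looks as follows (our reconstruction; the layout and exact line numbering of the original figure may differ):

```
MODULE counter(reset)
  VAR
    state : {zero, one, two};
  ASSIGN
    init(state) := zero;
    next(state) := case
      state = zero & reset : zero;
      state = zero : one;
      state = one & reset : zero;
      state = one : two;
      state = two & reset : zero;
      state = two : zero;
    esac;
```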

When interpreting a model, the NuSMV model checker syntactically checks that the results of the assignments for the transition relation are within the range of the variables; otherwise, it gives an error and terminates the analysis. The tool also checks that the ASSIGN relation is always complete, i.e., that in every state, the next value of all variables is always defined. It does this by performing a syntactic check on the assignments, making sure that all cases are covered. This ensures that the left-total condition of FSMs is fulfilled (see Definition 3.1). The definition of the transition relation in Fig. 3.3 is not as compact as it could be. Note that the next value of state will always be zero when reset is TRUE, independent of the current value of state. Example 3.5

The transition rule in Fig. 3.4 has the same behaviour as the definition of next(state) in Fig. 3.3.

In Example 3.5, the option with condition TRUE is applicable in all cases where the options at lines 2–4 do not hold. So far, we have modelled the modulo 3 counter in a way that remains very faithful to the original FSM encoding. We defined a variable state to keep track of the current state, and an enumeration type that has exactly one possible value for every state in the FSM. However, we are not restricted to doing this. In the next example, we show an alternative, more compact way of modelling the modulo 3 counter.

Fig. 3.4 An alternative definition for next(state) of Fig. 3.3
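The figure is not reproduced here; following the text, the more compact definition of next(state) plausibly reads (our reconstruction):

```
next(state) := case
  reset : zero;
  state = zero : one;
  state = one : two;
  TRUE : zero;
esac;
```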


Fig. 3.5 Another modulo 3 counter in NuSMV

Example 3.6

Figure 3.5 presents an alternative way to model the modulo 3 counter in NuSMV. This time, we have defined an integer variable val that can store an integer value in the range [0, 2]. Initially, it is set to 0. In the next function at lines 6–10, it is defined that val gets the value 0 if reset is TRUE (line 7), and when it is not TRUE, the value of val is incremented if it is smaller than 2 (line 8) and set to 0 otherwise (line 9).
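Figure 3.5 is not shown here; per this description, it plausibly contains (our reconstruction):

```
MODULE counter(reset)
  VAR
    val : 0..2;
  ASSIGN
    init(val) := 0;
    next(val) := case
      reset : 0;
      val < 2 : val + 1;
      TRUE : 0;
    esac;
```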

Non-determinism The init and next functions can also be non-deterministic. Example 3.7

Suppose that a counter could non-deterministically choose to reset the state to zero in every possible state of the counter model with the state variable. Then the next assignment for state would be as in Fig. 3.6. In this definition of the next state, if the current state is zero or one and reset is FALSE, then the transition relation non-deterministically moves either to zero (a reset), or to the next state.
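Figure 3.6 is not reproduced here; in NuSMV, a non-deterministic choice is written as a set of values, so the next assignment plausibly reads (our reconstruction):

```
next(state) := case
  state = zero & !reset : {zero, one};
  state = one & !reset : {zero, two};
  TRUE : zero;
esac;
```

From state two, both a reset and an increment lead to zero, so no set of values is needed there.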

If we do not give an initial state function for a variable, it is non-deterministically initialised to some value within its range. Thus, if we leave out the init function in Fig. 3.3, then state is initialised to zero, one, or two. It is allowed to let conditions in a case expression depend on the next value of a variable. The NuSMV model checker checks that this relation is not circular, i.e., that variables have no cyclic dependencies on each other's next values.


Fig. 3.6 The modulo 3 counter in NuSMV with non-deterministic resetting

Fig. 3.7 Counting the number of times the modulo 3 counter has the value 0

Example 3.8

Suppose that we wish to extend the model of Fig. 3.3 with a variable to keep track of the number of times that the counter has been assigned the value 0. We can do this with a variable nr_resets : 0..100. The value 100 is chosen rather arbitrarily, to give an upper bound. The initial value and the transition relation for this variable can be defined as in Fig. 3.7.
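Figure 3.7 is not shown here. Since conditions may depend on next values of other variables, the definition plausibly looks like the following (our reconstruction; saturating the counter at 100 is our assumption about how the upper bound is handled):

```
init(nr_resets) := 0;
next(nr_resets) := case
  next(state) = zero & nr_resets < 100 : nr_resets + 1;
  TRUE : nr_resets;
esac;
```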

Transition Predicates Sometimes, it is not that straightforward or efficient to describe the values of the variables using assignments. Therefore, NuSMV also allows one to describe the transition relation using a transition predicate (keyword TRANS). When this is used, the module should also have a special INIT clause that describes a predicate over the initial state, and the module should no longer have an ASSIGN clause. Example 3.9

Using TRANS and INIT clauses, the counter model of Fig. 3.3 (and Fig. 3.4) can be rewritten to the model in Fig. 3.8. Notice that the transition predicate consists of a number of options of the form “condition & state effect”. These options map one-to-one to the different options in Fig. 3.4.


Fig. 3.8 A modulo 3 counter in NuSMV using INIT and TRANS clauses
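The model of Fig. 3.8 is not reproduced here. Mapping the four options of Fig. 3.4 one-to-one to disjuncts of the form "condition & state effect" plausibly yields (our reconstruction; the explicit !reset guards are needed because disjuncts in a transition predicate, unlike case options, are not tried in order):

```
MODULE counter(reset)
  VAR
    state : {zero, one, two};
  INIT
    state = zero
  TRANS
    (reset & next(state) = zero)
    | (!reset & state = zero & next(state) = one)
    | (!reset & state = one & next(state) = two)
    | (!reset & state = two & next(state) = zero)
```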

In Example 3.9, the benefit of using a transition predicate is not apparent, as the model of Fig. 3.8 is comparable in size to the one in Fig. 3.3. However, transition predicates are particularly useful when updates of different variables highly depend on each other. For instance, the transition predicate

TRANS
  next(x) + 2*next(y) + 3*next(z) = (x + 2*y + 3*z) mod 7

succinctly defines how the variables x, y, and z change values over time, and in relation to each other. This cannot straightforwardly be expressed in an ASSIGN clause. When defining modules by means of a transition predicate, the user has to be careful to ensure that:

• the initial state predicate is satisfied by at least one state; and
• the transition relation is left-total.

If these conditions are not satisfied, logical absurdities may result. For instance, the model checker may not find any errors in such a model, while the model does not respect the desired property. Fortunately, NuSMV provides an option to check whether these conditions are satisfied, but the user has to remember to use this option.

3.3.2 Composition of Multiple Modules

So far, we have only considered the modelling of a single module, but NuSMV models often have multiple modules, with some interaction happening between them. Creating a model as the composition of multiple modules improves the maintainability and understandability of a model, and often reflects the actual interaction of


multiple agents or entities in the real-life system represented by the model. Modules can be instantiated any number of times. For example, a system using multiple counters could have several variables that are (different) instances of one of the counter modules from the previous section. Every system model in NuSMV should have a main module. In fact, to be able to actually analyse the examples given in the previous section, the counter modules should be instantiated in a main module, and some mechanism should be defined to periodically reset the counter. In the current section, we will look at this in more detail. When composing modules, module parameters can be used to pass information from one module to another. Example 3.10

Suppose we have a very simple increase module, that increases its value (modulo some constant value mx + 1, passed as a parameter) when the Boolean parameter flag has the value TRUE. NuSMV requires upper bounds for all variables, to ensure that a model has a finite number of states. By providing this upper bound as a parameter, it becomes possible to create multiple instances of the increase module with varying upper bounds. The definition of the increase module is given in Fig. 3.9.

We can compose the increase module with our counter, and then have it count how many times the state two has been reached. To pass information from one module m to another module n, we may write arbitrary expressions, as long as we refer to variables v of m as m.v. Let us define a main module in which one instance of each module counter and increase is created, and they are set up in such a way that the increase instance counts the number of times the counter instance has reached the state two (or value 2), and the increase instance informs the counter instance to reset its counter every time the former has reached the value

Fig. 3.9 An increase module to interact with the modulo 3 counter


Fig. 3.10 A main module with instances of counter and increase

mx. Figure 3.10 presents the definition of the main module. Note that the expressions passed as parameters achieve the desired interaction between the module instances. The dependencies between c and inc may seem circular, and they are at the level of module instances, but they are not at the level of individual variables, which is what NuSMV actually checks and does not allow. As we said before, one strength of compositional modelling is maintainability. In this case, for instance, if we change the definition of the counter module, ideally the increase module does not need to be changed. We would like to be able to use Figs. 3.3, 3.5 or 3.8 without having to change Fig. 3.9. However, the main module in Fig. 3.10 does not yet achieve this. Because of the expression at line 3, we cannot use Fig. 3.5 for the definition of the counter module. One way to fix this would be to declare an additional Boolean variable in the counter module, and to make sure that this variable always reflects whether or not the counter has reached the value 2. However, this would unnecessarily make each system state larger, as every state would need to incorporate the value of that variable, even though the original expression can be evaluated based on the other state information. An alternative way that avoids this is to make a so-called DEFINE declaration. Figure 3.11 presents a complete definition of the system, with DEFINE declarations for both expressions. Alternatively, Fig. 3.5 can be used to define the counter module, if we add the DEFINE declaration lastState := val = 2 to it. So to recap, variables declared in a DEFINE clause are not actually state variables, but instead refer to expressions that can be evaluated in each state using the actual state variables. Later, when we consider the model checking of properties, these DEFINE declarations will also be of great help (see Chap. 4). 
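Figure 3.11 itself is not reproduced here. A sketch of the complete system along the lines described above follows (our reconstruction; lastState is the DEFINE name given in the text, while the name atMax and the fixed range 0..100 for the increase variable are our assumptions, since NuSMV variable ranges must be constant):

```
MODULE main
  VAR
    c : counter(inc.atMax);
    inc : increase(c.lastState, 3);

MODULE counter(reset)
  VAR
    state : {zero, one, two};
  DEFINE
    lastState := state = two;
  ASSIGN
    init(state) := zero;
    next(state) := case
      reset : zero;
      state = zero : one;
      state = one : two;
      TRUE : zero;
    esac;

MODULE increase(flag, mx)
  VAR
    n : 0..100;
  DEFINE
    atMax := n = mx;
  ASSIGN
    init(n) := 0;
    next(n) := case
      flag & n < mx : n + 1;
      flag : 0;
      TRUE : n;
    esac;
```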
Concurrent execution
Modules execute synchronously, i.e., at every point in time, all modules make a transition step together. For this particular example model, this means that:

• the increase module will never miss the counter state being equal to two, and
• a two state will never be counted twice.

Fig. 3.11 The complete counter system model, with DEFINE declarations

In earlier versions of NuSMV, asynchronous composition was also supported. When systems are composed asynchronously, a single module, in that case called a process, takes one step at a time, and the behaviour of the overall system is described as all possible interleavings of the behaviours of a finite number of parallel processes. Asynchronous composition of parallel processes is useful to reason about communication protocols and asynchronous circuits, but it has been deprecated in NuSMV. The Spin model checker, discussed in Sect. 3.4, on the other hand, specifically focuses on this type of composition. In NuSMV, if one wishes to model an asynchronous system, this has to be resolved at a higher level, i.e., the model needs explicit control variables that encode when a module can change state, and when it should remain unchanged.

In concurrency theory, one also often uses the term true concurrency, meaning that one or more processes can make a step at a time, and that the duration of those steps can vary. This model is not supported by NuSMV at all, and it is out of scope for this book.


3.4 Finite State Machines in PROMELA

Another language to describe FSMs at a higher level is Promela. In contrast to NuSMV, Promela is more heavily based on imperative programming languages such as C. To properly support the modelling of systems with parallel processes, concepts such as non-determinism, parallelism and communication via synchronous and asynchronous channels have been added to the language, on top of the usual imperative programming concepts, such as assignments and sequential composition.

The explicit-state model checker Spin, with its modelling language Promela, is a widely used open-source verification tool. It was developed at Bell Labs in the Unix group of the Computing Sciences Research Center, starting in 1980, by Gerard Holzmann [101], and has been freely available since 1991. It has been successfully used over the years to verify many systems, such as a flood control system [111] and mission-critical software for space missions, and in 2002, Holzmann received the prestigious ACM Software System Award for the tool.

In Promela, models are collections of processes. A process can be viewed as representing an FSM, in the same way as modules represent FSMs in NuSMV. Promela is a very rich language, with many constructs, and the current section certainly does not address all of them. More documentation can be found, for instance, in the books of Holzmann [101] and Ben-Ari [21].

3.4.1 Defining PROMELA Processes

Model state
Similar to NuSMV, the state of a process is modelled by means of variables. Variables can be declared inside a process, but also globally, in which case they are accessible by all processes in the model. A variable must have a type, as in NuSMV. The primitive variable types supported by Promela are listed in Table 3.2. The well-known types bit, bool, byte, short and int are supported, as well as unsigned integers of a specified number of bits n.

Enumeration types are also supported. The keyword mtype by itself can only be used once per model to define an enumeration type, such as mtype = {a, b}, but multiple enumeration types can be defined by using the keyword mtype followed by a colon and a unique name. For instance, one can define mtype:fruit = {apple, banana} and mtype:vegetable = {broccoli, carrot} in the same model.

In addition to the primitive types, a user can also define their own new types by defining structs composed of variables of existing types. Figure 3.12 shows an example of such a type. It defines a type car, consisting of a Boolean variable indicating whether or not the car is driving, and a byte variable representing the number of people sitting in the car. An instance of this type can be declared with


Table 3.2 Promela primitive variable types for a variable v

Name              Declaration      Values
Bit               bit v            {0, 1}
Boolean           bool v           {false, true}
Byte              byte v           [0, 2^8 - 1]
Short             short v          [-2^15, 2^15 - 1]
Integer           int v            [-2^31, 2^31 - 1]
Unsigned integer  unsigned v : n   [0, 2^n - 1]
Enumeration       mtype v          The values of the defined mtype
Array of type T   T v[n]           Range of T for each of the n elements
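The user-defined car type described in the text plausibly looks like the following sketch; the field names is_driving and nr_occupants are taken from the surrounding prose.

```
typedef car {
  bool is_driving;   /* is the car currently driving?      */
  byte nr_occupants  /* number of people sitting in the car */
}
```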

Fig. 3.12 An example of a user-defined type in Promela

car c, and one can refer to the members of this type with, for instance, c.is_driving and c.nr_occupants. Finally, arrays of all primitive and user-defined types can be declared, using the syntax common to many programming languages.

Besides the values of the local variables, the state of a process is also defined by the current point of execution. In NuSMV, the next function defines how the module state is updated every time a transition is executed. In Promela, as we discuss in this section, the definition of process behaviour is more akin to program code, and execution of the instructions can sometimes be halted. Hence, there is a notion of the current point of execution, identifying the location in the process definition that the execution has reached in the current state. As we continue to describe how behaviour can be modelled in Promela, this will become clearer.

Example 3.11

Figure 3.13 presents a Promela model for the modulo 3 counter with a reset option from Example 3.1. It is comparable to Fig. 3.3. At line 1, an enumeration type is declared for the values zero, one, and two. Furthermore, at line 3, a global Boolean variable reset is declared, and at line 6, a variable state, local to the process counter, is declared and initialised to the value zero.
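A model along the lines of Fig. 3.13 can be sketched as follows. The declarations match the description in the text; the exact guards in the figure may differ, and negation is written with the exclamation mark (!).

```
mtype = { zero, one, two };

bool reset = false;

proctype counter() {
  mtype state = zero;
  do
  :: !reset && state == zero -> state = one
  :: !reset && state == one  -> state = two
  :: !reset && state == two  -> state = zero
  :: reset -> state = zero
  od
}
```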


Fig. 3.13 A modulo 3 counter in Promela

Next state function
In the body of a Promela process, we describe how the process can change state. Declared variables can immediately be assigned an initial value. There is no dedicated section for this as in NuSMV, but it is recommended to assign an initial value when a variable is declared, to avoid unnecessary states. Different from NuSMV, if one does not provide an initial value for a variable, it is not assigned one non-deterministically; instead, it is assigned a default initial value, which is 0 for integer-based variables, and false for Boolean variables.

There is also more flexibility to define the structure and control flow of a Promela process compared to a NuSMV module. The structure conceptually closest to the case segment of NuSMV is the repetition construct, or do-construct, in Promela, but it is not mandatory to use it in a process. In a repetition construct, a finite number of options are listed for execution, and each option consists of a sequence of statements. Figure 3.14 presents the general structure of a repetition construct. The semi-colon is used to express sequential composition of statements, and the start of each option is indicated by a double colon (::). Often, the first statement st-i-0 of a sequence i is used to express a guard for the option, in which case the first semi-colon can be replaced by the equivalent implication arrow (->) for readability.

A statement in Promela can be either an assignment, which may refer to process-local and global variables, an expression or condition statement, which is a Boolean expression, i.e., it evaluates to false or true, the special skip statement, which is the empty statement, or a printf statement, to print something on the screen. Expressions may also refer to process-local or global variables. In addition, there are communication statements, but we discuss these when addressing communication between processes in Sect. 3.4.2.

30

3 System Modelling

Fig. 3.14 The general structure of a Promela repetition construct
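The general shape of a repetition construct, with options 0 to n each consisting of a sequence of statements st-i-0; st-i-1; and so on, can be sketched as follows; the statement names are schematic placeholders.

```
do
:: st-0-0 -> st-0-1; ...; st-0-k0
:: st-1-0 -> st-1-1; ...; st-1-k1
   ...
:: st-n-0 -> st-n-1; ...; st-n-kn
od
```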

In general, a statement is not always enabled, i.e., executable. Assignments, the skip statement and the printf statement are always enabled, but expressions are only enabled if they evaluate to true. A do-option in a repetition construct is enabled if and only if the first statement in its sequence is enabled. If, however, an enabled option i is selected for execution, and the execution reaches another statement st-i-j (j > 0) within the sequence of option i which is not enabled, then the execution is blocked at statement st-i-j until the statement becomes enabled.

Example 3.12

In Fig. 3.15, a Promela process called blocked is defined. In the repetition construct at lines 5–7, a single do-option is present. Its first statement is an assignment that assigns 1 to the process-local variable y. As assignments are always enabled, the entire do-option is enabled. The second statement, however, is the Boolean expression x == y. As x and y have different values when this expression is reached, the execution is blocked. This results in the entire process blocking, and never being able to execute the printf statement after the Boolean expression.
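The process of Fig. 3.15 plausibly has the following shape. This is a sketch: the initial values of x and y are assumptions, chosen so that they differ when the comparison is reached.

```
proctype blocked() {
  byte x = 0;
  byte y = 2;
  do
  :: y = 1;                     /* an assignment: always enabled   */
     x == y;                    /* blocks forever: x is 0, y is 1  */
     printf("never reached\n")
  od
}
```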

A repetition construct is enabled if and only if at least one of its options is enabled. An important difference between the repetition construct of Promela and the case segment of NuSMV is that in the former, the order of the options is irrelevant. One needs to keep this in mind when constructing guards for the options. The predefined condition statement else forms an exception to this: when all other options are not enabled, the option starting with else is executed. At most one such option is allowed to be present in a repetition construct.

Example 3.13

In Fig. 3.13, a definition is given for the process counter, indicated with the keyword proctype. Promela processes can have parameters, but these are only used to initially pass values. Therefore, in contrast to the NuSMV module


Fig. 3.15 An example Promela process that blocks during execution

in Fig. 3.3, a Boolean variable reset is not passed as a parameter, but defined globally, such that the process counter can access it, and potentially other processes can change its value. At line 6, the initial value of state is defined to be zero. At lines 8–15, the state changes for the process are defined. Note the difference w.r.t. Fig. 3.3. No assumptions are made regarding the guards, as the options are unordered. Negation is expressed with the exclamation mark (!).

Once a do-option of a repetition construct has been completely executed, the control flow returns to the start of the repetition construct, and all do-options can again be considered for execution. Does this mean that an execution can never exit a repetition construct once it has been entered? Actually, no. The special keyword break can be used to indicate when an execution needs to jump out of a repetition construct, similar to how break indicates in many programming languages that a jump out of a loop is needed. Figure 3.16 shows how one can create a loop in Promela. The variable i is used as loop index. While the upper bound (5) has not been reached, an element of the array b is assigned the value true, and i is incremented. Once the upper bound has been reached, the loop is exited.

In contrast to NuSMV, Spin does not interpret value overflows as erroneous. When an overflow occurs during an analysis, a warning message is printed, but the analysis does not terminate. It is up to the user to determine whether overflows should be avoided. Also, Spin does not check whether the transition relation is complete, but it does check for the presence of so-called deadlocks, i.e., system states in an execution in which no more transitions are executable. Due to the left-totality requirement on the transition relation, NuSMV models are by definition deadlock-free, but Spin does not have this requirement. Allowing deadlocks to be present is very useful, as it reflects that procedures such as communication protocols often have (undesired) deadlocks.

Similar to what we did in Sect. 3.3 for the NuSMV model of the counter, we can make the Promela model more compact, again using the observation that a reset always leads to state zero.


Fig. 3.16 An example of a loop in Promela
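A loop of the kind shown in Fig. 3.16 can be sketched as follows; the array name b and the bound 5 follow the text, while the process name setall is our own.

```
active proctype setall() {
  bool b[5];
  byte i = 0;
  do
  :: i < 5 -> b[i] = true; i++   /* fill the array          */
  :: else  -> break              /* bound reached: exit loop */
  od
}
```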

Example 3.14

Figure 3.17 presents an alternative way to describe the behaviour of the counter process of Fig. 3.13. This example not only uses an else option in the repetition construct but also introduces another construct, the selection construct, indicated by if-fi. This construct works in the same way as the repetition construct, with the single difference being that once an option has been executed, execution continues after the end of the selection construct. In this example, this means that once any of the options at lines 5–7 has been executed, the execution continues at line 1, jumping back to the beginning of the repetition construct of which the selection construct is a part.

As for the first few models in NuSMV, we have remained faithful to the original FSM encoding so far. In the next example, we show another way of modelling the modulo 3 counter in Promela, using an integer variable.

Fig. 3.17 An alternative definition of the behaviour of the counter of Fig. 3.13


Fig. 3.18 Another modulo 3 counter in Promela

Fig. 3.19 Non-deterministically assigning a value

Example 3.15

An alternative way to model the modulo 3 counter in Promela is given in Fig. 3.18. This time, an unsigned 2-bit integer variable val is used, which is sufficient to store the values 0 to 2. In the repetition construct, if reset is true, val is reset to the value 0. In the else option, val is incremented modulo 3.
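Such a model plausibly looks as follows, based on the description of Fig. 3.18; the declaration of the global reset variable is assumed.

```
bool reset = false;

proctype counter() {
  unsigned val : 2 = 0;          /* 2 bits suffice for 0..2 */
  do
  :: reset -> val = 0
  :: else  -> val = (val + 1) % 3
  od
}
```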

Non-determinism
As already mentioned, Promela also supports non-determinism. For instance, if in a repetition or selection construct multiple options are enabled, one of them is selected non-deterministically for execution. This can be exploited to express that a variable should be non-deterministically assigned a value. For instance, Fig. 3.19 shows how to non-deterministically assign the byte variable v the value 1, 2 or 3. Next, we extend the modulo 3 counter with non-deterministic resetting.
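The non-deterministic assignment of Fig. 3.19 might look like this sketch: a selection construct in which every option is an always-enabled assignment, so one of them is picked non-deterministically.

```
byte v;

if
:: v = 1
:: v = 2
:: v = 3
fi
```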


Fig. 3.20 The modulo 3 counter in Promela with non-deterministic resetting

Example 3.16

The behaviour for a modulo 3 counter process in Promela with non-deterministic resetting is presented in Fig. 3.20. While in NuSMV, non-determinism is achieved by placing the various options between curly brackets, in Promela, the options should be listed as individual do-options in the repetition construct, each option having the same guard, or at least the guards should all evaluate to true at some point. This also holds for the selection construct. In both the repetition and the selection construct, if multiple options are enabled, then one is chosen non-deterministically. Note the use of the skip statement at line 5. An alternative would have been to write the assignment state = zero, but since that variable already has the value zero when this is executed, the assignment would actually have no effect. Instead, we can also write skip, i.e., the empty statement, which is executed, but has no effect on the state. This option by itself would be meaningless, as it does not even change the current point of execution; executing this option means that the new point of execution is again at the beginning of the repetition construct. However, combined with line 6, its effect is that when state has the value zero, this variable is not necessarily set to one in the next state.
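One plausible shape for the repetition construct of Fig. 3.20 is sketched below; the options for states one and two that perform a reset are assumptions, but the skip option in state zero follows the description in the text (line 5 there).

```
proctype counter() {
  mtype state = zero;
  do
  :: state == zero -> skip         /* a reset in state zero has no effect */
  :: state == zero -> state = one
  :: state == one  -> state = zero /* non-deterministic reset             */
  :: state == one  -> state = two
  :: state == two  -> state = zero /* increment and reset coincide        */
  od
}
```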

3.4.2 Composition of Multiple Processes

In Sect. 3.4.1, we focused on the modelling of an individual Promela process, but often, multiple processes are needed to model an entire system. There are essentially two ways for processes to communicate. One way, already mentioned before, is by means of global variables: one process updates a global variable, and another process reads the result. Another way is by using channels to send messages between processes.


We discuss channels later in this section, but first, let us look at an example Promela model that uses global variables for communication. In fact, as in Sect. 3.3, let us add an increase process to interact with the modulo 3 counter, in line with Fig. 3.9.

Example 3.17

Figure 3.21 presents a definition of an increase process in Promela, comparable to the one given in Sect. 3.3, Fig. 3.9. We use global variables reset and flag, the definition of which we show later, for communication between the processes. The upper-bound mx is given as a parameter when a process instance is created. The repetition construct at lines 4–6 has only one option, which is enabled when flag evaluates to true. In that case, cnt is incremented modulo mx+1, and after this, reset is updated to reflect the predicate cnt == mx.
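An increase process matching this description might look as follows. This is a sketch: the global variables reset and flag are assumed to be declared elsewhere in the model, and cnt is assumed to be local to the process.

```
proctype increase(byte mx) {
  byte cnt = 0;
  do
  :: flag ->
       cnt = (cnt + 1) % (mx + 1);  /* increment modulo mx+1          */
       reset = (cnt == mx)          /* reflect the predicate cnt == mx */
  od
}
```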

NuSMV models require a main module, in which typically instances of other modules are created when multiple instances are needed. In a Promela model, process instances can be created in a special process declaration called init. Figure 3.22 shows an example of this, for our modulo 3 counter system. With the keyword run,

Fig. 3.21 An increase process definition to interact with the modulo 3 counter

Fig. 3.22 An init process with instances of counter and increase


process instances can be created. In fact, this statement can be added to the definition of any process in Promela, hence processes can spawn other processes, but in general, process creation must be bounded. That is, using process creation in some unrestricted recursive way, or as part of an option in a repetition construct which is infinitely often enabled, is not allowed. For the creation of the increase instance (line 6), we use a user-defined constant to set the parameter mx. Constants can be declared as in C.

Finally, note the use of the keyword atomic. In Promela, every statement is seen as a single, atomic operation, the execution of which cannot be interrupted by other processes. Sometimes, one wishes to combine multiple statements into a single atomic operation, which can be done with the atomic keyword. In this case, we define that the two process instances are created in a single atomic step, not allowing the first instance to start executing before the second one has been created. We show another example of the use of atomic when we discuss a variation on the counter process next, and after that, when we discuss asynchronous execution, the importance of atomicity will become clearer.

Example 3.18

Figure 3.23 presents another definition for the counter process, this time including statements to update the variable flag whenever needed. This variable should reflect whether the counter is in state two. To achieve this, note that at lines 12–15, we use the atomic keyword. The effect of this is that when the

Fig. 3.23 A counter process definition for the modulo 3 counter system


Fig. 3.24 Process instance creation in multiple ways

counter enters state two, or leaves state two, the flag variable is immediately updated in the same atomic step.
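A sketch of such a counter process, with the flag updates described above, is given below; the exact structure of Fig. 3.23 may differ, and the global declarations of reset, flag and the mtype are assumed.

```
proctype counter() {
  mtype state = zero;
  do
  :: reset -> atomic { state = zero; flag = false }
  :: !reset && state == zero -> state = one
  :: !reset && state == one  -> atomic { state = two;  flag = true  }
  :: !reset && state == two  -> atomic { state = zero; flag = false }
  od
}
```

The atomic blocks ensure that flag is updated in the same step in which the counter enters or leaves state two.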

We showed the use of an init process to create process instances, but there is an alternative way to do this in Promela, by means of the active keyword. When a process definition is preceded by active, possibly extended with [N] for some natural number N, then 1 (or N) instances of that process definition are created. Example 3.19

In Fig. 3.24, an example Promela model is presented, in which processes are created in multiple ways. A question one can ask is what the eventual value of the variable count will be. At line 3, two instances are created of the process definition P, each of which will increment count. In the init process, the counter is again incremented once, and another instance of P is created, which will do a final increment. In other words, the eventual value of count will be 4.

In Spin, when a Promela model is interpreted and analysed, each process instance is assigned its own, unique, ID number, starting with ID 0. These numbers are assigned in order of instance creation. This order is determined by scanning the text of a model from top to bottom, looking for instances. In the example, the first two instances created due to the presence of active [2] are given the IDs 0 and 1. The init process, itself a process as well, receives the ID 2. The final instance of P created at line 9 gets the ID 3.

A process can refer to its own ID using the keyword _pid. This can be useful in those cases where processes need to be aware of some form of identity, for instance, when modelling protocols in which IDs play a role, such as a leader election protocol, or when data stored in an array must be distributed over the processes. In the latter case, the processes can use their ID to obtain array indices of interest.
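Based on this description, the model of Fig. 3.24 plausibly has the following shape; the line numbers mentioned in the text refer to the original figure.

```
byte count = 0;

active [2] proctype P() {   /* two instances, IDs 0 and 1 */
  count++
}

init {                      /* itself a process, ID 2     */
  count++;
  run P()                   /* a third instance of P, ID 3 */
}
```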


Concurrent execution
As already mentioned in Sect. 3.3, the execution model of Promela is asynchronous, in contrast to NuSMV, which uses a synchronous execution model. In NuSMV, when a system performs an execution step, this step actually combines a single execution step from every module in the model. In Promela/Spin, when a system performs an execution step, typically only a single process performs an execution step. This process is selected non-deterministically. An exception to this is when two processes communicate synchronously; in that case, those two processes each perform a step. More on this when we discuss communication via channels.

Figure 3.25 illustrates how asynchronous execution of process steps works. We consider two simple process definitions, and one instance of each of those definitions: active proctype P1() {a1; b1; c1} and active proctype P2() {a2; b2; c2}, with a1, a2, b1, b2, c1 and c2 enabled statements. Figure 3.25 presents the state space of possible executions of this system (we discuss state spaces in more detail in Sect. 3.5). Each state is related to a tuple (x, y) of numbers, with x referring to the state of process P1, corresponding to how many statements P1 has executed, and y referring to the state of process P2. In the initial system state, indicated by (0, 0), i.e., neither process has executed a statement, either P1 can do a step, leading to state (1, 0), or P2 can do a step, leading to state (0, 1). Note that from both of these states, if the other process performs a step next, we end up in the state (1, 1). In the same vein, one can continue this reasoning for all steps of the two processes.

Fig. 3.25 Interleaving of the behaviour of the Promela processes active proctype P1() {a1; b1; c1} and active proctype P2() {a2; b2; c2}

(The figure shows a 4 × 4 grid of states (x, y), with 0 ≤ x, y ≤ 3, in which every transition increments either x or y by one.)


Every step in these executions is seen as atomic, i.e., non-interruptable by other processes. Every single statement in Promela is considered to be atomic, but as we mentioned before, the atomic keyword can be used to combine multiple statements into one atomic instruction. When a process performs such a combined atomic instruction, its execution cannot be interrupted by other processes, but if the execution of this instruction is blocked at some point, when a statement is reached that is not enabled, then atomicity is lost, and other processes can continue their execution. Note that the effect of using atomic in process definitions is that their process instances will have fewer states. Interestingly, if two atomic instructions directly follow each other in a process definition, then they can together be interpreted as a single large atomic instruction.

Another way to define atomic instructions is by using the keyword d_step. Two important differences w.r.t. atomic are that d_step does not allow the use of statements that may block, and that d_step resolves any non-determinism inside the atomic instruction in a deterministic way, for instance by always selecting the first enabled option in a selection construct.

With the asynchronous execution model in mind, we can actually make an interesting observation about the modulo 3 counter system modelled earlier in Promela, and how it differs from the NuSMV model described in Sect. 3.3.

Example 3.20

In Sect. 3.3, we described the intended functionality of the counter and increase processes: every time the counter reaches state two, the increase process should increment its own counter modulo mx+1, and once this counter has reached the value mx, it should inform the counter process to perform a reset. In a synchronous execution model, this works well, and the communication set-up in the NuSMV model achieves this functionality. However, the Promela model described earlier, in Figs. 3.21, 3.22 and 3.23, does not achieve this. For instance, note that once the increase process has been informed that it can increment cnt, i.e., once flag has been set to true, it can repeatedly do so, as long as flag remains true and the increase process is selected to perform a step. Likewise, once the counter process has been informed to perform a reset, it can repeatedly do so until reset has been set to false again.

Communication via channels
Promela processes execute asynchronously, but communication via channels can occur both synchronously and asynchronously, depending on the type of channel being used. The general definition of a channel follows this structure:

chan name = [dim] of {type_1, type_2, ..., type_n};

We have the following elements in this definition:


• name: the name of the channel, which should be unique in the model;
• dim: the dimension of the channel. This influences the size of the buffer in which messages sent over the channel are stored. A special case is size 0. This means the channel is a rendez-vous channel, or synchronous channel, without a buffer. Messages sent over such a channel cannot be temporarily stored in a buffer, but have to be immediately received by a process when another process sends them. The effect is that the sending and receiving processes synchronise with each other at the moment of communication.
• type_1, ..., type_n: the types of the elements of a message sent over the channel. A message can contain a predefined, finite number of data elements, each of any of the available types, except for the unsigned type.

Example 3.21

The channel r defined as chan r = [0] of {bit} is a rendez-vous channel over which bits can be sent. Whenever a sending process is ready to send a bit over this channel, and at least one process is ready to receive that bit, the bit is passed from sender to receiver. If multiple processes are ready to receive the message, then one receiver is selected non-deterministically. The channel toR defined as chan toR = [2] of {mtype, bit} is an asynchronous channel over which tuples can be sent, of which the first element is a value allowed by a predefined enumeration type, and the second element is a bit. The buffer associated with toR can store up to two messages. The declaration chan line[2] = [1] of {mtype, short} creates an array of channels (of size 2), each of which is an asynchronous channel with a buffer of size 1, allowing messages that are tuples, with the first element being a value allowed by a predefined enumeration type, and the second element being a short.

A list of statements and functions that can be used in process definitions to use channels is given in Table 3.3. There, cs refers to an arbitrary synchronous channel and ca to an arbitrary asynchronous channel. With c we refer to any channel, which may be synchronous or asynchronous. A process can send a message over a channel c by means of the send statement c! followed by arguments that type-wise match the required elements of a message accepted by c.

Example 3.22

For the r channel definition of Example 3.21, r!1 sends the bit 1 over channel r, and with mtype = {apple, banana}, the statement line[0]!apple, 0 sends the message apple, 0 over channel line[0].


Table 3.3 Promela statements and functions related to channels, with cs a synchronous channel, ca an asynchronous channel, and a0, ..., an the arguments of a message accepted by the channels

Statement/function   Explanation                                          When enabled
cs!a0, ..., an       Send a0, ..., an                                     A matching cs? is enabled
cs?a0, ..., an       Receive a0, ..., an                                  A matching cs! is enabled
ca!a0, ..., an       Send a0, ..., an to the tail of the ca buffer        The ca buffer is not full
ca?a0, ..., an       Receive a0, ..., an from the head of the ca buffer   The first message in the ca buffer matches
ca??a0, ..., an      Receive a0, ..., an from the ca buffer               A message in the ca buffer matches
ca?[a0, ..., an]     Check executability of ca?a0, ..., an                It evaluates to true
ca??[a0, ..., an]    Check executability of ca??a0, ..., an               It evaluates to true
empty(ca)            Evaluates whether the ca buffer is empty             It evaluates to true
nempty(ca)           Evaluates whether the ca buffer is not empty         It evaluates to true
full(ca)             Evaluates whether the ca buffer is full              It evaluates to true
nfull(ca)            Evaluates whether the ca buffer is not full          It evaluates to true
len(c)               The number of messages in the buffer of c            Not applicable

The basic receive statement for a channel c is c? followed by arguments that type-wise match the required elements of a message accepted by c. Every argument consisting of a single variable name indicates that the corresponding element of the received message should be stored in that variable. All other arguments, consisting of expressions that evaluate to a value of the matching type, express conditions that have to be satisfied to receive the message. It is possible to express that the current value of a variable should be used as a condition, by using the eval keyword; for instance, c?eval(v) expresses that a message over channel c can be received if and only if the value of that message corresponds with the current value of variable v.

Example 3.23

Consider the send statements of Example 3.22. The receive statement r?v, with v a bit variable, can synchronise with r!1, and when it does, the value 1 will be stored in v. On the other hand, the statements r?1 and r?(0+1) can synchronise with r!1 because of their (value-wise) matching parameter, but no value will be stored as a result of synchronisation, and the statement r?0 cannot synchronise with r!1 at all, because the values of their arguments do not match. The statement line[0]?v0, v1, with v0 being of type mtype and v1 being of type short, is suitable to receive the message sent by line[0]!apple, 0. This message can be received after it has been placed in the message buffer of


channel line[0]. The statements line[0]?v0, 0 and line[0]?apple, v1 are also suitable, and they store the first and second element of the message, respectively. For a variable v of a user-defined type consisting of an mtype and a short variable, line[0]?v is also suitable for receiving the message.

A send (and receive) statement referring to a synchronous channel is enabled if a suitable receive (and send) statement can be executed by another process at the same time. A send statement referring to an asynchronous channel is enabled if and only if the buffer of that channel is not yet full. When such a send statement is executed, the effect is that the message is inserted at the tail end of the buffer. A receive statement referring to an asynchronous channel is enabled if and only if the buffer of that channel is not empty, and its argument conditions w.r.t. the message at the head of the buffer are satisfied. When such a receive statement is executed, the effect is that the message at the head of the buffer is removed. In other words, channel buffers in Promela are First In First Out (FIFO), at least when using basic receive statements.

It is, however, possible to deviate from this strategy by using c?? receive statements. Such a statement is enabled if and only if the buffer of c is not empty, and at least one message in the buffer meets the argument conditions of the statement. If multiple messages meet these conditions, then the first of these, in buffer order, is selected, and removed from the buffer when the receive statement is executed. It should be noted that this type of message receiving cannot be applied to synchronous channels.

In addition to send and receive statements, there are other statements to query the contents of buffered channels. These statements provide more flexibility to prevent a process from being forced to block its execution, and they are helpful when expressing system properties (see Chap. 4). The function call empty(c) evaluates to true if and only if channel c has an empty buffer, and nempty(c) evaluates to true if c has a non-empty buffer. In a similar way, full(c) and nfull(c) can be used to check whether the buffer of c is full or not full, respectively.
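The asynchronous send and receive semantics can be illustrated with a small sketch; the channel name q and the process names are our own, and the nfull/nempty guards are there to show the query functions, since the send and receive statements already block appropriately by themselves.

```
chan q = [2] of { byte };

active proctype producer() {
  byte i = 0;
  do
  :: nfull(q) -> q!i; i++    /* insert at the tail of the buffer  */
  od
}

active proctype consumer() {
  byte v;
  do
  :: nempty(q) -> q?v        /* remove from the head of the buffer */
  od
}
```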
The number of messages currently present in the buffer of a channel c can be obtained using the expression len(c). Finally, one can poll the buffer for a particular message. Essentially, with polling, it is possible to check whether a particular receive statement would be successful, which can be expressed by writing the arguments of that receive statement between square brackets. For instance, the statement c?[eval(v)] evaluates to true if and only if the receive statement c?eval(v) would be successful at the moment of polling. It is also possible to poll for the executability of c?? statements.

Example 3.24

A model of the modulo 3 counter system in which channels are used is presented in Fig. 3.26. At line 3, a channel is created for the reset message. This channel is asynchronous, and has a buffer of size 3. Messages sent over this channel are of type Boolean. Actually, we only require a single value to be sent over this
channel, but it is mandatory to send data, so we choose the (small) Boolean data type. At line 4, we define a channel flag, and make it synchronous. Also over this channel, Boolean messages should be sent. The counter process definition checks for the presence of a message in the reset buffer at line 10. If a message is available, the reset is performed, i.e., state is set to zero. If the buffer is empty (line 11), the state of the counter can change. If it changes to two (line 14), a synchronous communication via the channel flag is attempted. In the process definition of increase, one can find an attempt to receive a message via this channel at line 24. When the send at line 14 is attempted, and the receive at line 24 is enabled, synchronous communication can occur. When this happens, the increase definition dictates that cnt should be incremented modulo mx+1. After that, in the selection construct at lines 25– 28, it is checked whether cnt == mx. If it is, a reset message is sent (line 26), otherwise the selection construct is left (line 27). Our choice for an asynchronous reset channel and a synchronous flag channel is rather arbitrary. Other decisions can be made here, but those decisions will affect the potential behaviour of the processes, and influence how the processes should be defined. For instance, if one defines the reset channel as being synchronous, the condition at line 11 is no longer allowed, and one needs to find some other way to ensure that counter process instances can change state when they are not told to perform a reset.
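The channel behaviour used in this example can be sketched in Python; the class and method names below are our own illustration, not Spin's API:

```python
# A Python sketch of Promela's buffered-channel semantics as described above.
from collections import deque

class Channel:
    """An asynchronous (buffered) channel with FIFO delivery."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buf = deque()

    # Status queries, mirroring empty(c), nempty(c), full(c), nfull(c), len(c):
    def empty(self):  return len(self.buf) == 0
    def nempty(self): return len(self.buf) > 0
    def full(self):   return len(self.buf) == self.capacity
    def nfull(self):  return len(self.buf) < self.capacity
    def length(self): return len(self.buf)

    def send(self, msg):
        # A send is enabled iff the buffer is not full;
        # the message is inserted at the tail end of the buffer.
        if self.full():
            return False
        self.buf.append(msg)
        return True

    def recv(self, match):
        # A basic receive only considers the message at the head of the buffer.
        if self.buf and match(self.buf[0]):
            return self.buf.popleft()
        return None

    def recv_any(self, match):
        # A c?? receive takes the first matching message in buffer order,
        # wherever it sits in the buffer.
        for i, msg in enumerate(self.buf):
            if match(msg):
                del self.buf[i]
                return msg
        return None

    def poll(self, match):
        # c?[...]: check whether a basic receive would succeed,
        # without removing anything from the buffer.
        return bool(self.buf) and match(self.buf[0])
```

Here recv takes a predicate that plays the role of the receive statement's argument conditions; recv_any corresponds to a c?? receive, and poll to a c?[...] poll.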



> Some features of Promela not addressed in this section

As already mentioned at the beginning of this section, not all features of Promela have been discussed. Some notable features that we have not addressed are:

• Goto statements (goto): these can be used to jump from anywhere in a model to particular lines that have been given a unique label. Goto statements are often used in Promela models, but are considered bad practice in programming in general, as their use is prone to introducing errors. Usually, gotos can be avoided when constructing Promela models.
• Channel assertions: with xr c and xs c in a process definition, one can indicate that at most one instantiation of that definition will receive messages from and send messages to channel c, respectively. These assertions can help in making the exploration of all possible executions of a model more efficient. The information is used by the partial order reduction algorithm of Spin (see Sect. 3.7).
• Declaring process behaviour that must be executed deterministically: with the D_proctype keyword, one can define processes that must be executed deterministically. In simulations, Spin resolves any non-determinism in a deterministic manner, and during verification, if any non-determinism exists in the process definition, an error is reported.

Fig. 3.26 A modulo 3 counter system model with channels

3.5 Kripke Structures

So far, we have mostly considered single executions of models, apart from when we looked into the asynchronous execution model of Spin. In Fig. 3.25, we saw a graph structure in which multiple possible executions were mapped out. For model checking, it is important that the exact meaning of a model is known, otherwise it is not clear how model checking algorithms should interpret the model. For this reason, modelling languages must have a formal semantics. Structures such as the one in Fig. 3.25 are typically used to represent those semantics for a specific model. A formal semantics of a modelling language defines how an arbitrary model written in that language maps to a corresponding graph structure that captures all the potential executions of that model. Conceptually, we refer to such a structure as the state space of a model. One commonly used definition for such graph structures is the Kripke structure, named after Saul Kripke (see Fig. 3.27), who proposed this structure in 1963 [115]. The current section explains the notion of Kripke structure, and discusses how NuSMV and Promela models map to Kripke structures. The next chapter discusses how properties over these models can be described, and how those properties relate to the executions, or paths, of a Kripke structure.

Definition 3.3 (Kripke Structure) A Kripke structure over a set of atomic propositions AP is a 4-tuple K = (S, I, →, λ) where

• S is a finite set of states;
• I ⊆ S is the non-empty set of initial states;
• → ⊆ S × S is a transition relation, such that → is left-total, i.e., ∀s ∈ S. ∃s′ ∈ S. (s, s′) ∈ →; and

Fig. 3.27 Saul Kripke


• λ : S → 2^AP is an interpretation (or labelling) function that maps each state to its set of valid atomic propositions.

Typically, when states s1, s2 ∈ S are related by →, i.e., (s1, s2) ∈ →, we write this as s1 → s2. We call a state s2 a successor of state s1 if and only if s1 → s2. With succ(s1), we refer to the set of all successors of state s1, i.e., succ(s1) = {s2 ∈ S | s1 → s2}. Recall that an FSM can be executed, and that an FSM execution is a sequence of states (Definition 3.2). A Kripke structure defines all possible behaviour of a system consisting of one or more FSMs. In such a structure, too, executions are represented by sequences of states. We formalise this with the notion of path.

Definition 3.4 (Path) Given a Kripke structure K = (S, I, →, λ), a path in K is an infinite sequence of states π = s0 s1 s2 s3 . . ., such that s0 ∈ I and ∀i ≥ 0. si → si+1.

Since → is left-total, in every state there is always a transition enabled (i.e., there is always a step that can be made), and thus the system will not deadlock. Notice that the interpretation function defines when an atomic proposition holds, i.e., atomic proposition p ∈ AP is true in s ∈ S if and only if p ∈ λ(s).



> Finite-state systems and their executions

It is very important to note that the set of states S of a Kripke structure is finite. This implies that we always reason about systems that have a finite number of states. This number may be very large, but it must be finite. However, also note that this formalism still supports executions, corresponding to paths through a Kripke structure, which are infinite (see Definition 3.4). A path must therefore visit at least one of the states in the Kripke structure infinitely often. In Chap. 5, we present model checking algorithms that rely on the fact that S is finite. There are techniques to verify infinite-state systems, but these are out of the scope of this book. For more information, see Sect. 3.7.

With the introduction of atomic propositions, we are preparing for the verification of system properties, which will be discussed in Chap. 4. Atomic propositions play an important role when expressing properties. These properties express some desired or undesired behaviour, in the form of possible executions of the system, and atomic propositions form the primitive building blocks for these properties. Hence, when defining atomic propositions, we anticipate the properties that we wish to express.

Example 3.25

Consider the modulo 3 counter, introduced in Example 3.1 and used throughout this chapter. If we want to distinguish the states in which the counter has an even value from the states in which the counter has an odd value, then this can be
done using an atomic proposition even. The Kripke structure for the counter, as introduced in Example 3.1, can actually be mapped one-on-one on the right FSM of Fig. 3.2. Together with the set of atomic propositions AP = {even}, the Kripke structure can be defined as follows:

K = ( {zero, one, two},                                                    (S)
      {zero},                                                              (I)
      {(zero, zero), (zero, one), (one, zero), (one, two), (two, zero)},   (→)
      {zero → {even}, one → ∅, two → {even}} )                             (λ)

Graphically, it would be displayed as in Fig. 3.28. Note that, compared to Fig. 3.2, the Kripke structure does not have events associated with the transitions, and the states are labelled with sets of atomic propositions.
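This structure is small enough to encode and check directly. The following Python sketch (the dictionary/set representation is our own) verifies that the transition relation is left-total and that a path prefix respects Definition 3.4:

```python
# The Kripke structure of the modulo 3 counter, encoded in Python.
S = {"zero", "one", "two"}
I = {"zero"}
R = {("zero", "zero"), ("zero", "one"), ("one", "zero"),
     ("one", "two"), ("two", "zero")}
lam = {"zero": {"even"}, "one": set(), "two": {"even"}}

def succ(s):
    """succ(s) = {s2 in S | s -> s2}."""
    return {s2 for (s1, s2) in R if s1 == s}

# The transition relation is left-total: every state has a successor,
# so every path (Definition 3.4) can be extended indefinitely.
assert all(succ(s) for s in S)

# Atomic proposition even holds in a state s iff even is in lam(s).
assert "even" in lam["zero"] and "even" not in lam["one"]

# A finite prefix of a path: it starts in an initial state, and every
# consecutive pair of states is related by the transition relation.
prefix = ["zero", "one", "two", "zero", "zero"]
assert prefix[0] in I
assert all((a, b) in R for a, b in zip(prefix, prefix[1:]))
```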

Fig. 3.28 A Kripke structure for a Modulo 3 Counter

In NuSMV and Promela/Spin, atomic propositions should be defined to be able to express most system properties. In NuSMV, the most convenient way to do so is by creating DEFINE declarations, as used in Fig. 3.11. For instance, the even proposition can be introduced by adding even := (state = zero | state = two) to the DEFINE section of the counter module, where “|” is the logical or-operator. In Promela, propositions can be defined as global constants. For instance, we can add the line #define even (state == zero || state == two) to the Promela model of Fig. 3.26, where “||” is the logical or-operator, but one would have to make the variable state global, moving its declaration at line 7
out of the definition of counter, in order to make it accessible for this constant definition. We do not precisely define in this book how NuSMV and Promela models map to Kripke structures. Doing so would result in very long and detailed definitions, which we do not really need, as we only reason at the level of Kripke structures when discussing property verification in Chap. 4. It is sufficient to keep in mind that every possible state of a model, in either NuSMV or Promela, maps to a state in the derived Kripke structure, and every (atomic) transition or execution step in the model maps to a transition in the derived Kripke structure. Consider every possible execution of the model, i.e., all possible executions of the FSMs described in this model, conforming to Definition 3.2 and taking their interaction into account. Together, these executions exactly describe the Kripke structure of the model.

Example 3.26

Consider the counter as modelled in NuSMV in Fig. 3.3. The Kripke structure of this FSM corresponds with the one shown in Fig. 3.28, if we add the definition of the even proposition, as mentioned earlier. For every option in the case segment, there is a corresponding transition in the Kripke structure, between the appropriate states. In the same way, this Kripke structure represents the formal semantics of the Promela model in Fig. 3.13.

Unfortunately, we cannot show the complete Kripke structures for the counter systems as presented in Figs. 3.11 and 3.26. We would have to display a graph consisting of 3,070 states (NuSMV) and 10,257 states (Promela)! This is an example of state space explosion, i.e., the fact that state spaces rapidly grow in size as models are extended, in particular as the number of modules/processes is increased. When modelling a system as a NuSMV or Promela model, and hence essentially describing a Kripke structure, the challenge is to find the right level of abstraction: if there is too much detail, the model checking algorithms will not work anymore, because the state space, i.e., Kripke structure, becomes too large; if there is not enough detail, the model checking algorithms might fail to find problems in the system, if those have been abstracted away. Unfortunately, no formal guidelines can be given to avoid this problem; it simply requires experience.
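The explosion can be made concrete with a small experiment: a breadth-first exploration of the joint state space of n independent modulo 3 counters executed with interleaving (a toy illustration of ours, unrelated to the exact figures above):

```python
# A toy illustration of state space explosion: BFS over the joint state
# space of n independent modulo 3 counters executed with interleaving.
from collections import deque

def reachable(n_counters):
    init = (0,) * n_counters
    seen, todo = {init}, deque([init])
    while todo:
        s = todo.popleft()
        # Interleaving: in each step, exactly one counter increments (mod 3).
        for i in range(n_counters):
            t = s[:i] + ((s[i] + 1) % 3,) + s[i + 1:]
            if t not in seen:
                seen.add(t)
                todo.append(t)
    return len(seen)

# Every added counter multiplies the number of joint states by 3.
assert [reachable(n) for n in (1, 2, 3, 4)] == [3, 9, 27, 81]
```

Each extra process multiplies, rather than adds to, the number of states, which is exactly why extended models grow so quickly.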

3.6 A Modelling Case Study—An Elevator System

Now that we have explained the languages NuSMV and Promela in detail, we can consider a more elaborate system as a case study. The purpose of this case study is to demonstrate the basics of modelling in NuSMV and Promela, and to further illustrate the differences between the two languages.


We look at a simple elevator system in this section. There are a number of floors, each equipped with a request button to request an elevator. Furthermore, there is one elevator present that receives the requests, and processes them by travelling to the floors where the requests were made. Once the elevator has arrived at a floor where a request was made, the request is reset. To keep the models simple, we have ignored various aspects. For instance, once the elevator has reached a floor, its doors are not opened to let passengers in, and requests from passengers inside the elevator, to move to another floor, are also not considered. However, the models presented in this section can be extended to incorporate those aspects.

Promela

First, let us consider creating a model of the elevator system in Promela. To handle the requests, a natural way seems to be to use an asynchronous channel; when a request button is pressed, a suitable request will be added to the buffer of that channel, and the elevator can process requests whenever it is ready to do so. A synchronous channel is in that sense less obvious, as it would mean that request buttons sometimes might have to wait until the elevator is ready to synchronise with them. Instead, it should be possible to make requests at any moment in time. Figure 3.29 presents our Promela model. We have defined an enumeration type to encode the movement status of the elevator (line 3). The elevator can go down, go up, and be idle. The request channel is defined at line 5. We denote the number of floors by N; a buffer of size N therefore suffices for the request channel. At lines 8–16, the definition of reqbutton processes is given. A reqbutton process instance is at a particular floor, which we pass to the instance as a parameter. An alternative would be to have reqbutton instances derive their own floor based on their process ID (_pid).
We do not want request buttons to keep being pushed, i.e., send requests repeatedly via the request channel, while their previous request has not been served yet. To this end, we introduce a Boolean variable requested, to keep track of requests made. When no request has been made, a message can be sent, and requested is set to true (lines 12–13), preventing subsequent requests. Note the use of atomic, since it is natural that the two statements are executed together in one step. This has a large impact on the state space of the model, see our discussion on scalability later. Of course, setting requested to true means that it should be reset to false when the request has been processed. Therefore, we introduce another channel reset (line 6). This channel can be made synchronous, as reqbutton instances waiting for a reset cannot do anything else in the meantime anyway. At line 14, if a reset message has been received for the corresponding floor, requested is set to false again. The definition of the elevator process is given at lines 18–33. An elevator has a current floor (curfloor, line 19), a floor to go to (reqfloor, line 20) and a movement status (line 21). In the repetition construct at lines 23–32, the only option is to wait for requests to arrive. When a request is received, reqfloor stores the floor number where the request was made. Based on how this floor number compares to the current floor number of the elevator, it updates its movement status (lines 25–29).
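One round of this request/serve protocol can be mimicked in Python (an illustrative sketch of ours; the actual model is the Promela code of Fig. 3.29):

```python
# A Python sketch of the elevator request/serve protocol described above.
from collections import deque

N = 3
request = deque()        # the asynchronous request channel, buffer size N
requested = [False] * N  # one "requested" flag per request button

def press(floor):
    # A button only sends a request if its previous one has been served.
    if not requested[floor] and len(request) < N:
        request.append(floor)
        requested[floor] = True

def elevator_step(curfloor):
    # Serve the oldest pending request, if any, and report the direction
    # travelled; afterwards, the corresponding button is reset.
    if not request:
        return curfloor, "idle"
    reqfloor = request.popleft()
    status = ("up" if reqfloor > curfloor else
              "down" if reqfloor < curfloor else "idle")
    requested[reqfloor] = False  # the reset message to the button
    return reqfloor, status
```

Pressing a button twice before the elevator arrives sends only one request, which is exactly the role of the requested flag in the model.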

Fig. 3.29 A simple elevator system in Promela

After that, the current floor number is updated, the status is set to idle again (line 30), and the reqbutton process instance at floor reqfloor is informed to reset its requested variable (line 31). The various process instances are declared in the init process at lines 35–44. This is done in one atomic statement, such that all instances are created together at once. At line 37, the elevator instance is created, and in the repetition construct at lines 39–42, a loop is defined to instantiate the reqbutton process N times, passing to each instance a unique floor ID.

NuSMV

Because NuSMV follows a very different modelling philosophy compared to Promela, one has to rethink the messaging mechanism when trying to model the elevator system. As in the Promela model, though, we create a separate module for the request buttons. We cannot use channels; communication in NuSMV is achieved by passing the right arguments to the various module instances. Again using a Boolean variable requested, as in the Promela model, makes sense, and we can refer to the requested variable of each request button module instance directly when defining the elevator module (which, of course, we also need). At lines 1–10 in Fig. 3.30, the definition of the reqbutton module is given. Instead of having the elevator instance send a message to reqbutton instances when they need to reset their requested variable, we choose to add an argument elevfloor to the reqbutton module, together with the ID of its own floor (floor), such that reqbutton instances can detect themselves that the elevator has reached their floor, and a reset of requested is justified. At lines 6–10, it is defined how requested should be updated. In Promela, the moments that a particular request button is pressed during an execution are varied due to the asynchronous execution model, in which the execution steps of process instances are selected non-deterministically; the pressing of a button can be delayed indefinitely.
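The update of requested at lines 6–10 can be sketched as a function from the current values to the set of possible next values (our own Python rendering of the non-deterministic NuSMV assignment):

```python
# A sketch of the reqbutton update rule: given the current values, it returns
# the *set* of possible next values of requested.
def next_requested(requested, elevfloor, floor):
    if not requested:
        return {False, True}   # the button may or may not be pressed
    if elevfloor == floor:
        return {False}         # the elevator has arrived: reset the request
    return {True}              # otherwise, keep the pending request

# In a synchronous execution model, every module instance applies its update
# rule simultaneously in each step; the non-determinism above is what allows
# a button press to be delayed indefinitely.
```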
Fig. 3.30 A simple elevator system in NuSMV

It is important that these moments are varied instead of fixed, to ensure that model checking will consider all possible moments at which request buttons are pressed. The asynchronous execution model allows us to simply define that a request button can be pressed (see Fig. 3.29, line 12). In NuSMV, we have a synchronous execution model, so to achieve the same effect, i.e., the possibly indefinite delaying of button presses, we add non-determinism to the definition of the reqbutton module. At line 7, if no request has been made yet, a request may or may not be performed. If, instead, a request has been made previously, and the elevator is at the floor of this button (line 8), the requested variable should be reset. In all remaining cases, the variable should keep its value. The elevator module is defined at lines 12–35. Similar to the elevator process in Promela, we have the curfloor, reqfloor, and status variables. Not having a channel to receive requests, we instead pass arguments req0, req1 and req2 to the elevator module to check for requests. Note that this means we are creating a model for the case that N = 3. Sequences of statements are not as naturally present in NuSMV as they are in Promela, so we need to achieve these sequences in some indirect way. First, consider the updating of reqfloor (lines 25–30). If the elevator is at its destination floor, it is ready to process another request. Note that in the case construct, due to the fact that options are ordered, a request at floor 0 has a higher priority than one at floor 1, and a request at floor 1 has a higher priority than one at floor 2. Avoiding this is possible, but not straightforward, and to keep the model simple, we refrain from trying that here. When reqfloor has been updated, this may have an effect on status in the next state. If reqfloor has become different from curfloor, status is updated to move the elevator in the right direction, until it can become idle again (lines 31–35). Finally, consider the updating of curfloor at lines 21–24. If the elevator is moving, we can update curfloor to be equal to reqfloor in the next state, otherwise curfloor keeps its current value. Note that by carefully expressing dependencies between the variables in the next functions, we achieve the same sequence of updates as in Fig. 3.29 at lines 23–32: first a request is received, after which the movement status is updated. Then, the floor where the request was made is reached, and finally, the movement status is set to idle again. The relevant reqbutton instance is also informed to reset its requested variable. Module instances are then created at lines 39–43. Here, the necessary connections are made between them to enable communication.

Scalability

With the scalability of a model, we can refer to two aspects:

1. the effort that is needed to scale up the model itself (for instance, how much work is needed to add floors or elevators to the elevator system?); and
2. how the state space grows as we add process instances.

If we focus on the effort needed to add floors, then the Promela model is very scalable: one only needs to redefine N.
The NuSMV model already requires more work here: an extra reqbutton instance is easily declared, but more needs to be done, such as changing the number of parameters of elevator, and changing the number of options in the case construct at lines 25–30. In both models, we can add additional elevator instances, but then one would probably also want a coordinating process that controls which elevator handles which requests. For both models, adding this requires more modelling work. How the state space grows as a model is extended is always a very important question for model checking. Modelling a system should always be done in such a way that this state space growth is mitigated as much as possible. In general, one piece of advice is to use as few variables in a model as possible, but this mainly addresses the amount of memory it takes to represent a state. A very important tip is to make sure that a variable is reset to a fixed default value in states where its value is irrelevant, i.e., in states from which the variable must first be assigned a new value before it becomes relevant again. As the state space grows in size, the runtime needed to explore the state space also increases. Table 3.4 presents the number of states that are reachable from the initial


Table 3.4 Number of states and runtimes of exploring the state spaces of the elevator models, with a varying number of floors. The experiments were conducted on an iMac with a 3.4 GHz Quad-Core Intel Core i5, 32 GB RAM, running macOS Catalina. We compiled the Promela models using the options -o3 -dmemlim=290000, to set an appropriate memory limit, and ran the executables with -m50000000 -c0 -n -w28, to set the maximum search depth appropriately and claim enough memory before the computation started. The NuSMV models were analysed by running NuSMV with the -r option to show the number of reachable states.

Model                    Floors   Number of states   Runtime (s)
Promela                  3        7,957              0.01
Promela                  4        494,005            0.36
Promela                  5        43,085,862         41.50
Promela without atomic   4        1,645,494          1.36
NuSMV                    3        88                 0.02
NuSMV                    4        304                0.02
NuSMV                    5        928                0.02

states, and the time needed to explore those states, for the Promela and NuSMV models of the elevator system with three, four, and five floors. The effect of not using the atomic keyword in the Promela model (Fig. 3.29, line 12) on the number of states can be seen by comparing rows 2 and 4 in the table. Carefully defining atomic statements is clearly very important. Note also the impact of using a synchronous execution model, as demonstrated by the NuSMV rows. Scaling up the number of floors has a much smaller impact on the NuSMV model compared to the Promela model, due to there being no interleavings of process execution steps. Another factor, of course, is that the elevator works deterministically.

NuSMV or Spin?

Can we conclude whether NuSMV or Spin is more suitable for the verification of software models? Not in general. Whereas the scalability results for the case study in this section clearly show that the NuSMV model scales much better than the Promela model, it should be pointed out that Spin has multiple techniques to reduce the state space size, sometimes drastically. In addition, we chose to model the communication in the Promela model by means of asynchronous channels, but this negatively affects the state space size. It is also possible to use synchronous channels, thereby creating a model that is conceptually closer to the NuSMV model. Besides scalability, the following factors also play a role when deciding which tool to use:

• If the system consists of multiple processes, which execution model would be most natural to reason about its behaviour? For the system properties we are interested in, is it necessary to reason about specific interleavings of process behaviour, or is it enough to analyse the behaviour in a synchronous execution model?


• Some properties can be expressed for NuSMV, but not for Spin, since NuSMV supports Computation Tree Logic [74], while Spin does not. Computation Tree Logic is addressed in detail in Chap. 4.
• Which language feels most natural for modelling the system? Promela has many features, and is conceptually closer to imperative programming languages than the NuSMV language, but the latter allows compactly modelling systems by means of transition predicates.

3.7 Further Reading

State Space Reduction

In the literature, many ways are discussed to reduce, or compactly store, the state space, for instance by using Binary Decision Diagrams to efficiently store states [34], and by using abstraction [50]. Often, state space exploration can be restricted to a certain part of the state space, exploiting the fact that some executions may be considered equivalent to others, w.r.t. the property to be checked. For instance, one can exploit symmetry to reduce the state space [46], and apply partial order reduction [85,86,143,159]. Spin supports partial order reduction, and in fact, in the case study of Sect. 3.6, it has been applied to reduce the state spaces. Directed model checking involves the application of heuristics, to guide state space exploration to those areas where bugs are most likely to occur. These heuristics typically use the model and/or the property to identify those areas [71,72,162,164].

Infinite-state systems

Special classes of models, such as push-down automata [32,77], have been defined that are infinite-state, but for which model checking algorithms nevertheless exist.

Modelling languages

In Sect. 3.1, we mentioned multiple examples of model checkers, and focussed on two of them, with their corresponding modelling languages. Not surprisingly, the other checkers support different languages. Multiple checkers use a language based on some form of process algebra. In particular, Fdr [84] and Pat [127] support the Communicating Sequential Processes (CSP) process algebra, defined by Tony Hoare [97], while mCRL2 [35,89] and LTSmin [28] use a version of the Algebra of Communicating Processes (ACP), extended with abstract data types. ACP was originally defined by Jan Bergstra and Jan Willem Klop [23]. As already mentioned in this chapter, Promela also involves concepts from process algebra, such as non-determinism and synchronous communication.
The Language of Temporal Ordering Specification (Lotos) [30], defined by Tommaso Bolognesi and Ed Brinksma, is based on features of CSP and Robin Milner’s Calculus of Communicating Systems (CCS) [135], extended with abstract data types. This language, and the language Lotos New Technology (LNT) that evolved out of
it, are supported by the Construction and Analysis of Distributed Processes (Cadp) toolbox [81], which offers model checking, among other functionality. The Tlc model checker, developed by Leslie Lamport, uses the Tla+ modelling language [120]. This language is state-based, like the other languages, and therefore conceptually similar.

Quantitative Model Checking

If one wishes to also model quantitative aspects of a computation (e.g., time, energy consumption, memory usage, probabilities), one can describe systems using a model that is extended with such information. For many of these quantitative models, dedicated model checkers exist. For example, Uppaal can be used to verify timed models [20], and Prism and Storm can be used to verify probabilistic models [95,118]. The Modest toolset allows the specification of models with quantitative aspects in general, including probabilities, real-time behaviour, and continuous dynamics [91].

4 Functional System Properties in Temporal Logic

In the previous chapter, we have seen how we can define a formal model of a system that we want to reason about. This chapter discusses how we can express the properties that we expect to hold for such a model, so that the model checker can check this for us. It introduces several variants of temporal logic. Temporal logic allows one to express desired properties about possible executions, e.g., to express the order in which certain events must occur, or to state that a property eventually must be satisfied.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. M. Huisman and A. Wijs, Concise Guide to Software Verification, Texts in Computer Science, https://doi.org/10.1007/978-3-031-30167-4_4

4.1 History and Background

A common and natural way to describe the behaviour of a system is to describe how it evolves over time. For example, if you describe the traffic lights at a crossing, you might explain their behaviour by stating the following properties:

• Traffic lights for crossing lanes are never green at the same time.
• If a traffic light is red, it eventually becomes green.
• A traffic light can only become orange if it was green before.

Temporal logic is a way to formally specify such behaviour of a system over time. In this case, a system is described as a transition system, for instance, a Kripke structure (see Definition 3.3): for each state, the set of possible transitions describes the possible next states. System paths, i.e., executions allowed by the transition system, are sequences of states (see Definition 3.4). Temporal logic formulae describe properties that have to hold for such paths. The basic ingredients are atomic state properties, i.e., properties that have to hold in a single state, and logical and temporal connectors that describe in which states the atomic state properties should be satisfied. For example, in the traffic light example above, atomic state properties would
be properties like light 1 is green, light 2 is orange, etc. Temporal connectors are expressions like: always (a property holds in every reachable state), eventually (a state satisfying a property is reached), next (a property about two consecutive states). The development of temporal logic as we study it here is closely connected to the development of computer science. Pure logicians typically did not consider time, but with the emergence of computer science, and the need to reason about the evolving behaviour of systems, temporal logic emerged as a natural extension of classical logic. In particular, two widely used variants of temporal logic are Linear Temporal Logic (LTL), proposed by Pnueli in 1977 [144], and Computation Tree Logic (CTL), a branching time logic proposed by Clarke and Emerson in 1980 [74]. LTL constrains all possible executions of a system in a uniform way, whereas CTL also allows one to express that a property holds for one possible execution of a system. Phrased in another way: LTL expresses properties that have to hold for all paths, whereas CTL expresses properties over a (possibly infinite) tree of possible computations. In particular, a CTL property can express that some execution is possible; that execution does not necessarily have to happen.

The difference between the two logics has led to fierce debates for many years, with strong proponents on both sides. This discussion has to do both with the expressiveness of the logics, and with the complexity of checking whether a system satisfies a property. LTL and CTL are incomparable in expressiveness, i.e., there are properties that can only be expressed in LTL and not in CTL, and vice versa. In the context of this book, we are pragmatic: instead of choosing one of the two logics, we just use whichever is the most appropriate to express the property at hand.

Checking whether a system respects a temporal logic property is called model checking. The previous chapter described how to define a system. The current chapter focuses on how the desired properties of the system can be expressed. The next chapter addresses how a system can be formally verified w.r.t. those properties.

4.2 Safety Versus Liveness Properties

Two important classes of temporal properties can be distinguished, namely those of safety and liveness properties. A safety property expresses that the system operates in a safe way, and will never reach a bad state. Safety properties are often characterised as “nothing bad will happen”. In contrast, a liveness property expresses that a certain event must happen at some point in time: “eventually something good will happen”. Typical examples of safety properties are:

• in all reachable states, a variable x has a positive value;


• a system will never be in a critical state, unless it has received explicit permission to do so; and
• a write operation on a file can only occur if the file first has been opened.

Typical examples of liveness properties are:

• whatever happens, the system will always reach a state where variable x has a negative value;
• when a system enters a critical state, it will also leave this critical state again after a finite number of steps; and
• if a file is opened, it will eventually be closed.

When developing verification algorithms for temporal properties (the focus of Chap. 5), the difference between safety and liveness properties is important to understand. In particular, the sets of safety properties that can be expressed in LTL and CTL are roughly the same, but the set of liveness properties that can be expressed in LTL is very different from the set of liveness properties that can be expressed in CTL.

Moreover, another important difference is that if a safety property is not valid, there always is a finite counter-example that illustrates why it is not valid. For instance, the prefix of a path to a state in which variable x does not have a positive value is a counter-example for the first safety property example given above. On the other hand, liveness properties can only have infinite counter-examples: if something does not happen within a bounded amount of time, this does not imply that it will never happen. If we consider the first liveness property example above, then a counter-example for this property would have to demonstrate that an execution exists in which a state where variable x has a negative value is never reached. Since executions are infinite (see Definitions 3.2 and 3.4), this means an infinite counter-example is required.

Note that a bounded liveness property is actually a safety property. For instance, consider a variant of the last liveness example given above: if a file is opened, it will be closed within 10 min.
Although this property refers to something good eventually happening (the closing of a file), a counter-example to the property is finite, due to the restriction on when the good thing must happen. For the example, an execution in which a file, once opened, is not closed within 10 min suffices as a counter-example.

Finally, before focussing on temporal logic, we discuss deadlocks. Deadlocks are situations in which an execution is “stuck”, i.e., no further execution steps are possible. Kripke structures rule out such states by definition (recall Definition 3.3, included in the current chapter as Definition 4.1), since their transition relation is left-total, meaning that from every state, there are transitions leading to successor states. This is unfortunate, since deadlocks are a common problem in real-life systems. A simple deadlock can result from two processes both waiting for a message that should be sent by the other process.

Since deadlocks are relevant for real systems, Spin actually works with a relaxed form of Kripke structure, in which the left-total requirement of the transition relation
has been dropped. By default, whenever Spin is used to explore the state space of a model, the system is checked for deadlocks, called invalid end states in Spin terminology. In NuSMV, one can instruct the model checker to check that the transition relation of the Kripke structure of a model is left-total. If it is, then there are no deadlocks.



> Is deadlock absence a safety or a liveness property?

It may seem debatable whether deadlock absence is a safety or a liveness property. On the one hand, deadlocks can prevent something good from ever happening, which suggests that their absence is a liveness property, but on the other hand, a counter-example for this absence is a finite path leading from an initial state to a deadlock state. The issue here is that the corresponding execution is finite, whereas when checking properties, we typically assume executions to be infinite in length. Deadlock absence is therefore typically considered to be a safety property: the bad situation is that no further execution steps are possible, and a counter-example to the property is finite.
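Conceptually, checking left-totality amounts to looking for states without outgoing transitions. A minimal Python sketch of this idea (helper names and the example are ours, not NuSMV or Spin code):

```python
# Sketch: a deadlock state is one with no outgoing transition, i.e. a
# witness that the transition relation is not left-total.

def deadlock_states(states, transitions):
    """Return the states with no outgoing transition.

    `transitions` is a set of (source, target) pairs; the relation is
    left-total (deadlock-free) iff the returned set is empty."""
    has_successor = {s for (s, _) in transitions}
    return {s for s in states if s not in has_successor}

# Two processes each waiting for a message from the other: the combined
# state 'wait_wait' has no successor - an "invalid end state" in Spin terms.
states = {"run_run", "wait_run", "run_wait", "wait_wait"}
transitions = {("run_run", "wait_run"), ("run_run", "run_wait"),
               ("wait_run", "wait_wait"), ("run_wait", "wait_wait")}
print(deadlock_states(states, transitions))  # {'wait_wait'}
```

The finite path run_run, wait_run, wait_wait is exactly the kind of finite counter-example discussed above.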

4.3 Kripke Structures

In the previous chapter, we have already seen Kripke structures as a way to define the formal semantics of system models. The semantics of the temporal logic formulae will be defined over these Kripke structures, therefore we quickly recall the definition.

Definition 4.1 (Kripke Structure) A Kripke structure over a set of atomic propositions AP is a 4-tuple K = (S, I, →, λ) where

• S is a finite set of states;
• I ⊆ S is the set of initial states;
• → ⊆ S × S is a transition relation, such that → is left-total, i.e., ∀s ∈ S. ∃s′ ∈ S. (s, s′) ∈ →; and
• λ : S → 2^AP is an interpretation (or labelling) function that maps each state to its set of valid atomic propositions.

And likewise, we recall the notion of path.

Definition 4.2 (Path) Given a Kripke structure K = (S, I, →, λ), a path in K is an infinite sequence of states π = s0 s1 s2 s3 . . ., such that s0 ∈ I and ∀i ≥ 0. si → si+1.


Fig. 4.1 An example of a Kripke structure

We use paths_s(K) to denote the set of all paths of K starting in s, and paths(K) to denote the set of all paths of K starting in any of its initial states. Furthermore, we use π_i to denote the i-th element of the path π, and π^i to denote the path starting in π_i, i.e., π^i = π_i π_{i+1} π_{i+2} . . .. Notice that paths are always infinite, because → is left-total.

Example 4.1

Let AP be the set {p, q}. The Kripke structure ({s0, s1}, {s0}, {(s0, s1), (s1, s0)}, {s0 ↦ {p, q}, s1 ↦ ∅}) can be graphically depicted as in Fig. 4.1. The only possible path in this structure is π = s0 s1 s0 s1 s0 s1 . . .. Thus, for example, π_0 = s0, π_1 = s1, π_2 = s0, π_3 = s1, etc., and π^1 = s1 s0 s1 . . ..
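To make the definitions concrete, here is a small Python encoding (a sketch of our own, not tooling from the book) of the Kripke structure of Example 4.1, with helpers to check left-totality and to materialise a finite prefix of a path:

```python
# A minimal Python rendering of Definition 4.1, instantiated with the
# structure of Example 4.1: states {s0, s1}, initial state s0,
# transitions s0 -> s1 -> s0, labelling s0 -> {p, q}, s1 -> {}.

K = {
    "states":      {"s0", "s1"},
    "init":        {"s0"},
    "transitions": {("s0", "s1"), ("s1", "s0")},
    "label":       {"s0": {"p", "q"}, "s1": set()},
}

def successors(K, s):
    return {t for (u, t) in K["transitions"] if u == s}

def is_left_total(K):
    # every state must have at least one successor (Definition 4.1)
    return all(successors(K, s) for s in K["states"])

def path_prefix(K, s, n):
    """The first n states of some path starting in s. Paths themselves are
    infinite, so only a finite prefix can ever be materialised."""
    prefix = [s]
    while len(prefix) < n:
        s = min(successors(K, s))   # deterministic choice for the demo
        prefix.append(s)
    return prefix

assert is_left_total(K)
print(path_prefix(K, "s0", 6))  # ['s0', 's1', 's0', 's1', 's0', 's1']
```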

4.4 Linear Temporal Logic—Syntax and Semantics

Now we are ready to define two different temporal logics. First, we present Linear Temporal Logic (LTL). Formulae in LTL are propositional logic formulae, combined with the connectives G (Globally), F (Eventually), X (Next), U (Until) and W (Weak Until). Intuitively, the formulae have the following meaning (where ϕ and ψ are LTL formulae):

• G ϕ means that ϕ holds in every (reachable) state;
• F ϕ means that eventually a state is reached where ϕ holds;
• X ϕ means that ϕ holds in the next state;
• ϕ U ψ means that ψ will eventually hold, and until that point, ϕ must hold;
• ϕ W ψ means that either ϕ holds forever, or ϕ holds up to the point where ψ starts to hold. The Weak Until operator is sometimes also called the Unless operator.

Formally, the syntax and semantics of LTL are defined as follows.


Definition 4.3 (LTL) Let p ∈ AP be an atomic proposition. Then a formula in LTL is defined by the following syntax:

ϕ ::= p | ¬ϕ | ϕ ∧ ϕ | ϕ ∨ ϕ | G ϕ | F ϕ | X ϕ | ϕ U ϕ | ϕ W ϕ

Given a Kripke structure K, validity of an LTL formula ϕ for a path π in K, denoted by K, π |= ϕ, is defined by the following inductive definition.

K, π |= true                 (holds for every path)
K, π |= false                (holds for no path)
K, π |= p          ⇔  p ∈ λ(π_0)
K, π |= ¬ϕ         ⇔  K, π |= ϕ does not hold
K, π |= ϕ1 ∧ ϕ2    ⇔  K, π |= ϕ1 and K, π |= ϕ2
K, π |= ϕ1 ∨ ϕ2    ⇔  K, π |= ϕ1 or K, π |= ϕ2
K, π |= G ϕ        ⇔  ∀i. K, π^i |= ϕ
K, π |= F ϕ        ⇔  ∃i. K, π^i |= ϕ
K, π |= X ϕ        ⇔  K, π^1 |= ϕ
K, π |= ϕ1 U ϕ2    ⇔  ∃j. K, π^j |= ϕ2 and ∀k < j. K, π^k |= ϕ1
K, π |= ϕ1 W ϕ2    ⇔  ∀j. (∀k ≤ j. K, π^k |= ¬ϕ2) implies K, π^j |= ϕ1

Given a Kripke structure K, an LTL formula ϕ is valid, denoted by K |= ϕ, if and only if for all paths π starting in an initial state s ∈ I, the formula is valid on the path, i.e., K, π |= ϕ.

Core LTL

We can identify a core of LTL which is smaller but has the same expressiveness, using only the temporal operators X and U. This is comparable to the fact that all propositional logic formulae can be expressed using negation (¬) and conjunction (∧) only, with all other connectives expressed in terms of these two. All other LTL operators can be expressed in terms of X and U, by using the following equivalences:

G ϕ ⇔ ¬ F(¬ϕ)
F ϕ ⇔ true U ϕ
ϕ1 W ϕ2 ⇔ (ϕ1 U ϕ2) ∨ G ϕ1

It is important to understand the difference between U and W, i.e., the Until and the Weak Until operator. A formula ϕ1 U ϕ2 states that ϕ2 must hold at some point in time, and up to that point, ϕ1 has to hold. In contrast, a formula ϕ1 W ϕ2 states only that ϕ2 may hold at some point. If ϕ2 holds, then after this point ϕ1 does not have to hold any longer; otherwise, ϕ1 has to hold forever. Because ϕ1 W ϕ2 is a weaker requirement on ϕ2, this explains the name Weak Until.
Any expression using the Until operator can be transformed into one using the Weak Until operator instead, as follows:

ϕ1 U ϕ2 = F ϕ2 ∧ (ϕ1 W ϕ2)
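The semantics above quantify over infinite paths, but on an ultimately periodic ("lasso") path, satisfaction can be decided by inspecting only finitely many positions. The following Python sketch (representation and names are our own, not the book's) evaluates LTL formulae on such a path and checks two formulae, plus the core equivalence F ϕ ⇔ true U ϕ, on the path of Example 4.1:

```python
# LTL on a lasso path: `labels` lists the label sets of positions 0..n-1,
# and positions `loop`..n-1 repeat forever. Formulae are nested tuples,
# e.g. ("U", ("ap", "p"), ("ap", "q")).

def holds(phi, labels, loop, i=0):
    n = len(labels)
    succ = lambda j: j + 1 if j + 1 < n else loop
    # indices whose suffixes cover all suffixes of the infinite path from i
    reach = set(range(i, n)) | set(range(loop, n))
    op = phi[0]
    if op == "ap":  return phi[1] in labels[i]
    if op == "not": return not holds(phi[1], labels, loop, i)
    if op == "and": return holds(phi[1], labels, loop, i) and holds(phi[2], labels, loop, i)
    if op == "or":  return holds(phi[1], labels, loop, i) or holds(phi[2], labels, loop, i)
    if op == "X":   return holds(phi[1], labels, loop, succ(i))
    if op == "G":   return all(holds(phi[1], labels, loop, j) for j in reach)
    if op == "F":   return any(holds(phi[1], labels, loop, j) for j in reach)
    if op == "U":   # walk the path; truth is settled once an index repeats
        j, seen = i, set()
        while j not in seen:
            seen.add(j)
            if holds(phi[2], labels, loop, j): return True
            if not holds(phi[1], labels, loop, j): return False
            j = succ(j)
        return False
    if op == "W":   # phi1 W phi2 == (phi1 U phi2) v G phi1
        return holds(("or", ("U", phi[1], phi[2]), ("G", phi[1])), labels, loop, i)
    raise ValueError(op)

# The path of Example 4.1: (s0 s1)^omega with lambda(s0)={p,q}, lambda(s1)={}
labels, loop = [{"p", "q"}, set()], 0
p, q = ("ap", "p"), ("ap", "q")
assert holds(("G", ("F", p)), labels, loop)       # p holds infinitely often
assert not holds(("F", ("G", p)), labels, loop)   # but not from some point on
true = ("or", p, ("not", p))                      # a tautology as a formula
assert holds(("F", q), labels, loop) == holds(("U", true, q), labels, loop)
```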


An alternative that is also sometimes used in the literature is the Release operator R: a formula ϕ1 R ϕ2 has the following meaning: ϕ2 has to hold until and including the point where ϕ1 first becomes true; if ϕ1 never becomes true, ϕ2 must remain true forever. The Release operator can be expressed in terms of the Weak Until operator as follows:

ϕ1 R ϕ2 = ϕ2 W(ϕ1 ∧ ϕ2)

Example 4.2

Below are some example properties and their formalisation in LTL. Note that we can use implication (“⇒”), due to p1 ⇒ p2 being equivalent to ¬p1 ∨ p2.

• p1 ⇒ F p2 intuitively means “if p1 is true now, then at some future moment p2 will be true”. Thus, if p1 is not true now, p2 never has to be true.
• G(p1 ⇒ F p2) intuitively means “whenever p1 is true, p2 will be true at some subsequent moment”.
• G(p1 ⇒ X p2) intuitively means “whenever p1 is true, p2 will be true at the next moment”. Notice that this is a much stricter property than G(p1 ⇒ F p2), because it enforces p2 to be true immediately after p1.
• G(p1 ⇒ X F p2) intuitively means “whenever p1 is true, p2 will be true at some strictly subsequent moment”. Whereas G(p1 ⇒ F p2) holds if p1 and p2 are true at the same moment, this formula enforces that p2 should really come after p1.
• p ∧ G(p ⇒ X p) means “p is true now, and whenever p is true in a state, it also holds in the next state”. This can be simplified to p ∧ G X p, which implies G p, i.e., p is always true.

LTL in NuSMV

When we wish to check an LTL property for a module in NuSMV, the property can be written in the LTL property section of the module (preceded by the keyword LTLSPEC). The syntax for the LTL operators can be found in Table 4.1. NuSMV actually supports more operators that have been added over time in extensions of LTL, such as the R operator, and even some referring to past behaviour instead of future behaviour, but we do not cover those in this book. Note that the Weak Until operator is not supported, but it can be expressed using the U and G operators, as explained before. An additional operator supported by NuSMV, not listed in Table 4.1, is V, which is the dual of U: ϕ V ψ is true if and only if ¬(¬ϕ U ¬ψ) is true. All Boolean expressions can be used as atomic propositions. Often, it is convenient to define these atomic propositions as DEFINE declarations (see Sect. 3.5).


Table 4.1 The syntax for writing LTL formulae in NuSMV and Promela

LTL   NuSMV          Promela
G     G              []
F     F              <>
X     X              not supported
U     U              U
W     not supported  W
¬     !              !
∧     &              &&
∨     |              ||
⇒     ->             ->

Fig. 4.2 Expressing a property in LTL for the system of Fig. 3.11

Example 4.3

For the counter in Fig. 3.11, we can specify, for example, that whenever the counter is odd, it eventually will become even again. Using the DEFINE declaration even := state = zero | state = two, this can be expressed as presented in Fig. 4.2.

LTL in Promela

The syntax for writing LTL in Promela is given in Table 4.1. Note that Promela uses a different notation for G and F ([] and <>), referring to the often-used nomenclature of “box” and “diamond” for the G and F operators, respectively. Note also that X is not supported. Excluding X simplifies the algorithms needed for verifying LTL formulae (see Chap. 5), while not giving up much expressiveness in practice. Typically, properties of interest can be formalised without the use of X. Promela also supports the V operator. As mentioned in Sect. 3.5, atomic propositions can be defined using the #define keyword. Make sure that the variables referenced in these propositions are global.


Fig. 4.3 Expressing a property in LTL for the system of Fig. 3.26

Example 4.4

Figure 4.3 shows how to add to the Promela model of Fig. 3.26 an LTL formula for the property that whenever the counter is odd, it will eventually be even again. Note that the state variable has been made global (line 6), to be able to define an appropriate atomic proposition even at line 8. At line 9, the formula is given, indicated by the keyword ltl, with the actual formula written between curly brackets. The formula is given the name c_evt_even.

4.5 Fairness

When studying properties of the possible behaviours of a system, one sometimes finds that a property is violated, but only on a path that will not happen in reality, because it has a transition that is enabled but never executed. To prevent this situation, we introduce the notion of fairness.

Consider for example the modulo 3 counter, introduced in Chap. 3, represented by the Kripke structure shown in Fig. 3.28, and presented again in Fig. 4.4. Note that in the state zero, the next state visited during an

Fig. 4.4 A Kripke structure for a modulo 3 counter


execution can be either zero or one. Selecting one of the states as a successor state is done non-deterministically. However, a system that resets itself all the time, and thus never leaves the state zero, is typically not realistic, and one may wish to exclude that execution. This can be done by requiring that the system behaves fairly.

In the literature, two notions of fairness can be found. Weak fairness expresses that if a transition is enabled for an unbounded time, then it eventually must be taken. Thus, in the case of the modulo 3 counter, if we only consider executions that are weakly fair, then the execution that stays in the initial state zero forever is not part of the system. Strong fairness expresses that if a transition is enabled infinitely often, then it eventually must be taken. Notice that in this case, the transition does not have to be enabled for a continuous period. It is sufficient that it becomes enabled over and over again.

Example 4.5

Consider a variation on the modulo 3 counter, with non-deterministic resetting, as presented in Fig. 4.5. In this module, consider the situation that the variable state has the value zero. If reset = TRUE, the next value of state has to be zero (line 15), but if reset = FALSE, the checker can non-deterministically choose to select the value one, or keep the current value zero (line 16). The value of reset alternates between TRUE and FALSE (line 20). Thus, the option that allows changing the value of state to one is not continuously enabled, and hence, weak fairness would not enforce that state is eventually set to one. However, it is enabled infinitely often, as reset is set to FALSE infinitely often. This means that strong fairness would enforce that state is eventually set to one.

Fig. 4.5 A modulo 3 counter with non-deterministic resetting
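The difference between the two notions can be made concrete on a lasso-shaped execution such as the one in Example 4.5. The Python sketch below (helper names are ours) records, per step, whether the transition of interest is enabled and whether it is taken, and uses the observation that for such step-wise propositions on a lasso, "infinitely often" means "somewhere on the repeating cycle" and "eventually always" means "everywhere on the cycle":

```python
# Weak vs. strong fairness on a lasso execution: positions loop..end repeat.

def inf_often(bits, loop):           # "infinitely often": true somewhere on the cycle
    return any(bits[loop:])

def ev_always(bits, loop):           # "eventually always": true everywhere on the cycle
    return all(bits[loop:])

def weakly_fair(steps, loop):        # continuously enabled implies taken infinitely often
    enabled, taken = zip(*steps)
    return (not ev_always(enabled, loop)) or inf_often(taken, loop)

def strongly_fair(steps, loop):      # enabled infinitely often implies taken infinitely often
    enabled, taken = zip(*steps)
    return (not inf_often(enabled, loop)) or inf_often(taken, loop)

# Example 4.5's problematic execution: the transition to state one is enabled
# exactly when reset is FALSE (every other step), but it is never taken.
steps, loop = [(True, False), (False, False)], 0
print(weakly_fair(steps, loop))    # True  - not continuously enabled
print(strongly_fair(steps, loop))  # False - enabled infinitely often, never taken
```

The execution is weakly fair (so weak fairness does not exclude it) but strongly unfair, matching the discussion above.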

In the next section, we will see that fairness can be expressed as a temporal logic formula, which allows ruling out unfair behaviour when checking a property. However, it is often convenient if the model checker itself has an option to rule out unfair behaviour, and both NuSMV and Promela offer this option.

NuSMV supports two notions of fairness. You can explicitly add fairness to a model in the form of constraints.

• JUSTICE ϕ or FAIRNESS ϕ expresses that NuSMV should ignore any path along which ϕ is not satisfied infinitely often.
• COMPASSION(ϕ, ψ) expresses that NuSMV only has to consider paths where the following holds: if ϕ holds infinitely often on the path, then ψ has to hold infinitely often on this path as well. Notice that any path where ϕ does not hold infinitely often is allowed by this constraint.

Note that JUSTICE and FAIRNESS correspond to the notion of weak fairness described above, where ϕ is instantiated to express: “the transition is not enabled or the transition is taken”. Since the model checker ignores any path where this property is not satisfied infinitely often, it will ignore those paths where the transition is enabled all the time and never taken. Strong fairness can be obtained using COMPASSION, instantiated as follows: ϕ becomes “the transition is enabled”, and ψ becomes “the transition is taken”. For such a constraint, NuSMV considers only paths on which, if the transition is enabled infinitely often, it is also taken infinitely often.

This explanation might give the impression that enforcing fairness is complicated. However, usually it is possible to give a fairly simple formula to express the desired fairness constraint.

Example 4.6

Consider again the modulo 3 counter of Fig. 4.5. We can enforce that the system cannot stay in state zero forever by adding the following constraint:

FAIRNESS !(ctr.state = zero);

Next, we discuss some examples of a system requiring fairness to rule out unrealistic executions, this time using Promela.

Example 4.7


Fig. 4.6 An example in which fairness is needed if n should eventually be 10

Figure 4.6 shows a Promela model, in which the single instance of process definition P repeatedly updates the value of global variable n as long as flag has the value false (line 8). Note that since n has the initial value 0 (line 1), this means that n will alternatingly be set to 1 and 0, as long as flag is set to false. The single instance of the process definition Q can only do a single thing, namely setting flag to true. As soon as it does that, the instance of P is able to break out of the repetition construct (line 7) and jump to line 10, where n is assigned the value 10.

We are interested in the property that n eventually has the value 10, meaning that we want the sequence of events which we just outlined to eventually happen. At line 3, this property has been formalised as an LTL formula labelled ev10.

Remember that Promela models adhere to an asynchronous execution model (see Fig. 3.25): an execution is a sequence of execution steps, each step performed by a single process instance (unless synchronous communication is performed, but this is not relevant for our current model). This means that in every possible execution of the modelled system, either the instance of P performs a step, or the instance of Q. Whenever both can perform a step, the scheduler picks one process non-deterministically.

Is the formula ev10 satisfied by the model? The answer, not surprisingly, is no. Why? Because there exists a schedule in which the possible execution step of the instance of Q is indefinitely postponed, and only the instance of P performs steps. This may be seen as an unrealistic execution. Note that a weak fairness constraint can rule out this execution, since the step of the instance of Q is


continuously enabled.

Spin supports verification under weak fairness. Different from NuSMV, where specific constraints have to be added to the model, we only need to select the weak fairness option when running Spin, and it will automatically exclude any execution in which weakly unfair behaviour occurs.

Example 4.8

Consider the model with the counter and increase processes defined in Sect. 3.3, shown in Figs. 3.21, 3.22 and 3.23. In Example 3.20, we argued that the Promela model has some issues: the increase process can repeatedly increment cnt while flag is true, and the counter process can repeatedly perform a reset while reset is false. In both situations, it is technically even possible for one of the processes to perform its step infinitely often, i.e., to increment cnt or to reset the counter, respectively. When this occurs, it leads to starvation of the other process. Both scenarios are weakly unfair, and can therefore be excluded during verification.

On the one hand, Spin can exclude all weakly unfair executions without having to specify additional fairness constraints as in NuSMV, but on the other hand, Spin has no direct support for excluding strongly unfair executions, while NuSMV allows expressing strong fairness constraints. Still, it is possible to enforce strong fairness in Spin, as both weak and strong fairness constraints can be expressed in LTL. We address how to do this in the next section.

4.6 Fairness as an LTL Formula

As discussed in the previous section, when studying properties of the possible behaviours of a system, one sometimes finds that a property is violated, but only on a path that does not correspond to any realistic behaviour. To express that a property does not have to hold for such infeasible paths, we introduced the notion of fairness. We saw how we can add fairness requirements to a module in NuSMV by explicitly adding a fairness (or compassion) constraint, and that we can simply tell Spin to ignore weakly unfair executions.

However, in some cases, we want to express fairness in temporal logic. For instance, we might be interested in whether there exist unfair paths in the state space at all, or, in the case of Spin, there simply is no other way to rule out strongly unfair executions. Fortunately, both weak and strong fairness can be expressed in LTL. First we define some appropriate atomic propositions:

• enabled_t means that transition t is enabled; and
• taken_t means that transition t is actually taken/executed.


Weak fairness W.r.t. a transition t, weak fairness is expressed as follows:

F G enabled_t ⇒ G F taken_t

This expresses that if from a certain point in the computation transition t is continuously enabled, then it must be executed infinitely often. An alternative but equivalent way to phrase this is the following formula:

G F(¬enabled_t ∨ taken_t)

This expresses that there always is a point in the future where either the transition is not enabled, or it is taken.

Strong fairness W.r.t. a transition t, strong fairness can be expressed as follows:

G F enabled_t ⇒ G F taken_t

This expresses that if a step is enabled infinitely often, then it is executed infinitely often.

We can use the above patterns to construct formulae that concretely express certain fairness conditions for a model, and check if the model satisfies those conditions. In that way, we verify whether there exist any unfair executions.

Example 4.9

Consider the Promela model given in Fig. 4.6. As explained in Example 4.7, without weak fairness constraints, it is possible for process instance P to continuously update the value of n, by which an execution step of process instance Q is indefinitely postponed. It is possible to verify with Spin that this can happen, by expressing a weak fairness condition in LTL. Weak fairness can be expressed as G F(¬enabled_t ∨ taken_t), with enabled_t referring to the ability to execute transition t, and taken_t referring to transition t having been executed. Process instance Q can only perform one statement, and this statement is always enabled, since it is an assignment. Hence, ¬enabled_t never holds, when t refers to the execution of the assignment of Q. The effect of executing the assignment is that flag is set to true. Therefore, taken_t in this case can be expressed as flag == true. This leads to the LTL formula G F(flag == true) (or []<>(flag == true) in Promela notation). A counter-example for this formula is indeed a path in which flag is never set to true, meaning that only P performs execution steps.

We can also use formulae that express fairness conditions to check other properties under weak or strong fairness. If we have a fairness LTL formula, say with the name fair, and a Kripke structure K, then an LTL formula ϕ is valid for K under the fairness condition expressed by fair, denoted K |=_fair ϕ, if and only


if we have K |= fair ⇒ ϕ. For any execution in which fair is not satisfied, fair ⇒ ϕ is vacuously true, and for all fair executions, ϕ must be satisfied in order for K to satisfy fair ⇒ ϕ.

Example 4.10

For the Promela model given in Fig. 4.6, the LTL formula F(n == 10), i.e., <>(n == 10) in Promela notation, can be verified under the weak fairness condition G F(flag == true), as identified in Example 4.9. The LTL formula (G F(flag == true)) ⇒ F(n == 10) can be used for this. For weakly unfair paths, this implication is vacuously true, since the premise is false. Thereby, weakly unfair paths cannot be counter-examples for this formula, whereas they are for the formula F(n == 10).
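To see this mechanically, we can simulate the model of Fig. 4.6 under explicit schedules. The sketch below is a rough Python re-encoding of the model (names and the `run` helper are ours, not Spin output): F(n == 10) fails on the unfair schedule that never runs Q, but holds once Q gets a step; since the unfair run also never sets flag, it violates the fairness premise and so would not falsify the implication of Example 4.10.

```python
# Simulating the two-process model of Fig. 4.6: P toggles n between 0 and 1
# while flag is false, then sets n to 10; Q's only step sets flag to true.

def run(schedule, bound=20):
    """Execute one step per letter of `schedule` ('P' or 'Q'); return the
    values n takes on this finite prefix, and whether flag was ever set."""
    n, flag, done = 0, False, False
    seen_flag = False
    values = [n]
    for who in schedule[:bound]:
        if who == "Q" and not flag:
            flag = True                  # Q's single, always-enabled step
        elif who == "P":
            if not flag:
                n = 1 - n                # keep toggling n between 0 and 1
            elif not done:
                n, done = 10, True       # break out of the loop, assign 10
        seen_flag = seen_flag or flag
        values.append(n)
    return values, seen_flag

unfair_vals, unfair_flag = run("P" * 20)          # Q is never scheduled
fair_vals, fair_flag     = run("PPQ" + "P" * 17)  # Q eventually gets a step

print(10 in unfair_vals, unfair_flag)  # False False: property and premise both fail
print(10 in fair_vals, fair_flag)      # True True:   property holds on the fair run
```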

4.7 Computation Tree Logic—Syntax and Semantics

LTL formulae address what individual executions need to satisfy. Because of this, they are unable to refer to the potential of something particular occurring. Figure 4.7 shows two Kripke structures. If the atomic proposition black is satisfied by a state if and only if it is coloured black in the figure, then the Kripke structure on the left clearly satisfies F black. Note that it also satisfies X black, F G black, and X G black. The Kripke structure on the right, however, does not satisfy any of those formulae, since in order for an LTL formula to be satisfied, all possible executions must satisfy it.

Fig. 4.7 Two Kripke structures to illustrate linear versus branching time


Clearly, if we take the left option from the initial state, we will follow a path in which we never reach a black state. Nor does this Kripke structure satisfy the negation of any of those formulae, since the right option from the initial state satisfies the original formulae. If we are interested in expressing that it is possible to perform an execution in which eventually a black state is reached, then LTL is not expressive enough. For this, we can use Computation Tree Logic (CTL).

In CTL, one distinguishes between state operators and path operators. State operators are used to express state formulae, i.e., formulae that hold for a particular state. They can express which atomic propositions hold in the state, and address the branching structure of the paths starting in the state. Path operators can be used to express properties of paths, i.e., they are used to describe path formulae. CTL’s path operators resemble the LTL operators. The use of state operators adds extra expressiveness to CTL, compared to LTL. However, since CTL restricts how state and path operators can be combined, not all LTL formulae can be expressed in CTL. Thus, effectively, LTL and CTL are incomparable logics. In Sect. 4.9, we illustrate the incomparability of the two logics.

At the top level, CTL formulae are always state formulae. State formulae can be constructed using the typical logical operators, atomic propositions, and two temporal operators:

1. The universal operator A expresses that a given path formula has to hold for all paths starting in a particular state;
2. The existential operator E expresses that a given path formula has to hold for at least one path starting in a particular state.

The properties that have to hold for the paths are described in path formulae, using essentially the same operators as for LTL. However, their arguments have to be state formulae again.
The validity of a state formula is defined for a state, and the validity of a path formula is defined for a path. Formally, the following grammar defines how to construct CTL formulae.

Definition 4.4 (CTL) Let AP be a set of atomic propositions. A CTL formula is defined as a state formula Φ, according to the following grammar (where p ∈ AP):

Φ ::= p | ¬Φ | Φ ∧ Φ | Φ ∨ Φ | A ϕ | E ϕ
ϕ ::= G Φ | F Φ | Φ U Φ | X Φ

Here, Φ is a state formula, and ϕ is a path formula.


Given a Kripke structure K and a state s, the validity of a CTL formula Φ, denoted K, s |= Φ, is defined inductively as follows.

K, s |= true                 (holds in every state)
K, s |= false                (holds in no state)
K, s |= p          ⇔  p ∈ λ(s)
K, s |= ¬Φ         ⇔  K, s |= Φ does not hold
K, s |= Φ1 ∧ Φ2    ⇔  K, s |= Φ1 and K, s |= Φ2
K, s |= Φ1 ∨ Φ2    ⇔  K, s |= Φ1 or K, s |= Φ2
K, s |= A ϕ        ⇔  ∀π ∈ paths_s(K). K, π |= ϕ
K, s |= E ϕ        ⇔  ∃π ∈ paths_s(K). K, π |= ϕ

K, π |= G Φ        ⇔  ∀i. K, π_i |= Φ
K, π |= F Φ        ⇔  ∃i. K, π_i |= Φ
K, π |= X Φ        ⇔  K, π_1 |= Φ
K, π |= Φ1 U Φ2    ⇔  ∃j. K, π_j |= Φ2 and ∀k < j. K, π_k |= Φ1

Given a Kripke structure K = (S, I, →, λ), a CTL formula Φ is valid, denoted K |= Φ, if for all initial states s ∈ I, the formula is valid in s, i.e., K, s |= Φ.

Formulae that do not adhere to the restriction that state and path formulae should be used in an iterative fashion are not CTL formulae.

Example 4.11

Consider the following example formulae.

• The formula A F p is a CTL formula, since p is a state formula, F p is a path formula, and A F p is a state formula.
• The formula E G(A F p) is a CTL formula, since A F p is a state formula, G(A F p) is a path formula, and E G(A F p) is a state formula.
• A A F p is not a CTL formula, since A should be applied to a path formula, but A F p is a state formula.
• G(A F p) is not a CTL formula, since at the top level, a CTL formula must be a state formula, but G(A F p) is a path formula.

To avoid the possibility of constructing formulae that do not adhere to this iterative use of state and path formulae, there is an alternative syntax for CTL that uses the “combined” operators AG, AF, AX, AU, EG, EF, EX, EU. For this notation, the syntax can be defined as follows:

Φ ::= p | AG Φ | AF Φ | AX Φ | AU(Φ, Φ) | EG Φ | EF Φ | EX Φ | EU(Φ, Φ) | ¬Φ | Φ ∧ Φ | Φ ∨ Φ


It is straightforward to see that this grammar produces an equivalent set of formulae. The syntax as defined in Definition 4.4 is cleaner and corresponds more closely to the separation between state and path formulae in the semantics, but the alternative syntax is easier for tools. It is, for instance, the one supported by NuSMV.

Example 4.12

Below are some example properties expressed as CTL formulae.

• AG p intuitively means "a property p has to hold throughout".
• EG p intuitively means "there is at least one execution where p always holds".
• EF p intuitively means "there is at least one execution where p holds at some point".
• AF p intuitively means "in every possible execution, p eventually holds".
• AG(EG p) intuitively means "we can always pick a possible continuation where p holds forever".

Core CTL

Just as for LTL, CTL formulae can be expressed with a smaller set of operators, i.e., we can identify a core of operators. For this, we can use the following equalities.

AG Φ ⇔ ¬ EF ¬Φ          AG Φ ⇔ ¬ EU(true, ¬Φ)
AF Φ ⇔ ¬ EG ¬Φ          AF Φ ⇔ AU(true, Φ)
AX Φ ⇔ ¬ EX ¬Φ          AU(Φ1, Φ2) ⇔ ¬(EU(¬Φ2, ¬(Φ1 ∨ Φ2))) ∧ AF Φ2
EG Φ ⇔ ¬ AF ¬Φ          EG Φ ⇔ ¬ AU(true, ¬Φ)
EF Φ ⇔ ¬ AG ¬Φ          EF Φ ⇔ EU(true, Φ)
EX Φ ⇔ ¬ AX ¬Φ

For CTL, there are multiple options to form a core set of operators. For instance, the sets {AU, EU, EX}, {EG, EU, EX} and {EX, EU, AF} are all suitable core sets. In Chap. 5, when discussing an algorithm applied by NuSMV to verify CTL formulae, we work with the core {EX, EU, AF}.

CTL in NUSMV

When we wish to check a CTL property for a NuSMV model, the property can be written in the CTL property section of the model, which is indicated by the keyword SPEC. All Boolean expressions and DEFINE declarations can be used as atomic propositions. The temporal logic operators are written as AU, AF, EU, etc., and the Boolean operators are written as in Table 4.1.


Fig. 4.8 Expressing a property in CTL for the system of Fig. 3.11

Example 4.13

For the counter system in Fig. 3.11, we can specify that at any point, it is always possible to choose a path from which an even state can be reached. Since this property refers to the potential of doing something, LTL is not sufficient, and CTL must be used. Figure 4.8 presents how to specify this property in NuSMV.

CTL in PROMELA

Spin only supports LTL for specifying temporal logic formulae, and hence does not allow checking properties that require a branching-time logic.

4.8 Fairness as a CTL Formula?

A natural question is whether fairness can be expressed as a CTL formula, similar to how it can be done with an LTL formula (cf. Sect. 4.6). Unfortunately, the answer is that this is not possible: most fairness constraints cannot be expressed in CTL. This can be understood as follows. Fairness constraints address expectations at the level of paths, and hence they influence the interpretation of the path formulae in CTL. They replace the standard meaning "for all paths" of the A operator and "there exists a path" of the E operator with "for all fair paths" and "there exists a fair path", respectively. Hence, if a fairness condition were expressed by a formula fair, then we would need CTL formulae encoding the intuitive meaning of A(fair ⇒ ϕ) and E(fair ⇒ ϕ). However, these are not correct CTL formulae, since ⇒ is not allowed at the level of a path formula (in fact, no logical operator is allowed at that level). Furthermore, fairness conditions cannot be expressed by state formulae. The alternative approach, changing the semantics of A and E to consider only fair paths, can be selected in NuSMV by using its built-in support to express fairness (see Sect. 4.5). In Chap. 5, we address how a model checker can algorithmically support fairness for CTL formulae.


4.9 Comparison of LTL and CTL

As mentioned before, LTL and CTL are not equivalent in expressiveness. In fact, they are incomparable: there are LTL formulae that cannot be expressed in CTL and vice versa. On the one hand, CTL formulae involving the existential path operator E cannot be expressed as LTL formulae: LTL formulae always have to hold for all possible paths. On the other hand, an LTL formula such as F G p, i.e., there is a point in time after which p holds forever, cannot be expressed in CTL. Consider for example the Kripke structure in Fig. 4.9. This Kripke structure satisfies F G p, but it does not satisfy AF AG p. To understand why it satisfies F G p, we have to consider the possible paths of the model. These are of the form s0 s0 s0 . . . or s0 . . . s0 s1 s2 s2 . . .. Clearly, for both kinds of paths, the formula is valid. However, for CTL, at every point we have to consider all possible continuations of the model. Thus, if we have a prefix s0 s0 s0 . . ., there are always two possible next states, s0 and s1, and thus after this prefix, the formula AG p does not hold. Therefore, the formula AF AG p does not hold for this model. On the other hand, the CTL formula AF EG p is weaker than F G p: the CTL formula holds for the Kripke structure shown in Fig. 4.10, because one can always choose to continue with the path s0 s0 . . ., while the LTL formula is not satisfied by this Kripke structure, due to the execution s0 s1 s0 s1 . . . being possible.

CTL∗

There is, however, a logic that subsumes both LTL and CTL. This logic is called CTL∗, defined by Emerson and Halpern in 1986 [75]. CTL and LTL were both developed independently, before CTL∗. Both CTL and LTL have become very important in the model checking community, while CTL∗ is not (yet) of practical importance. The main use of CTL∗ is that it provides an effective means for representing and comparing different temporal logics.
CTL∗ has the same temporal logic operators as CTL, i.e., A, E, G, F, U, X, but it has no restrictions on how they can be mixed. Thus, one can write a formula such as A F G p, which is equivalent to the LTL formula F G p, and A E G p, meaning that in all states there is always a possible continuation of the execution in which p holds forever.

Fig. 4.9 A Kripke structure satisfying F G p but not AF AG p

Fig. 4.10 A Kripke structure satisfying AF EG p but not F G p


However, there are still properties that cannot be expressed even in CTL∗, such as, for example: "there exists a path where the atomic proposition p holds at every even position", i.e., after every two execution steps, any valuation is allowed at odd positions. If we wish to verify this type of property, we need even more powerful logics, such as the modal μ-calculus [113] (see Sect. 4.10). The modal μ-calculus subsumes CTL∗.

4.10 Further Reading

Many other temporal logics exist. It is beyond the scope of this book to address all these logics in detail, but this section mentions a few to give an impression of what is available.

Past Time Logic

An often-used variation of LTL and CTL uses past time operators. This means that one can specify, for example, that a property held at some point in the past, that a property has to hold before another property becomes valid, or that a property held up to a certain point in time. However, adding past time operators does not add any expressiveness to the logic, i.e., all past time operators can be expressed as future time temporal logic operators (see, e.g., [73] for more information). As we already mentioned, NuSMV supports past time operators for LTL.

Modal Logic

A logic that has much in common with temporal logic is modal logic. Modal logic is an extension of propositional logic with the box (□) and the diamond (♦) operators. Let K be a Kripke structure. A formula □ϕ holds for a state s if all states s′ ∈ succ(s) satisfy ϕ. More formally:

K, s |= □ϕ ⇔ ∀s′ ∈ succ(s). K, s′ |= ϕ

A formula ♦ϕ holds for a state s if there exists a state s′ ∈ succ(s) that satisfies ϕ:

K, s |= ♦ϕ ⇔ ∃s′ ∈ succ(s). K, s′ |= ϕ

Note that we mentioned earlier that the LTL operators G and F are sometimes also called the box and diamond operator, respectively, and that Promela uses that notation. The main difference between modal and temporal logic is that modal logic is restricted to the next state, while temporal logic can specify a property for a whole execution (without knowing the length of the execution in advance). For more information, see, for instance, [73].

Modal μ-calculus

Modal logic is restricted to reasoning about the next state only, but there is an extension of modal logic that can reason about entire executions. This is the modal μ-calculus [113].
It extends modal logic with two fixed point operators: a least fixed point operator μ and a greatest fixed point operator ν.


The fixed point operators basically encode recursion: a formula ν X.(ϕ ∧ □X) corresponds to the LTL formula G ϕ. It holds for a state s if the formula ϕ holds in the current state, and the property holds again in every next state. The difference between least and greatest fixed points is that for least fixed points, the recursion has to terminate at some point, otherwise the formula is not satisfied, whereas for greatest fixed points, the recursion never has to terminate. This means that the formula F ϕ can be encoded with a least fixed point: μX.(ϕ ∨ ♦X), i.e., either the formula holds now, or after some finite number of steps, the formula holds. Since in the modal μ-calculus, different fixed points can be combined into a single formula, the logic is very expressive. In particular, all the logics LTL, CTL, and CTL∗ can be encoded in the modal μ-calculus. For more information on this, see for instance [63].
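On a finite Kripke structure, these fixed points can be computed by plain iteration. The sketch below is our own illustration (the three-state structure is made up): it computes μX.(ϕ ∨ ♦X), marking the states from which a ϕ-state is reachable, and νX.(ϕ ∧ □X), marking the states where ϕ holds along every continuation forever.

```python
# A sketch of fixed-point evaluation over a small hypothetical Kripke
# structure (states and successor map are made up for illustration).

succ = {0: {0, 1}, 1: {2}, 2: {2}}       # hypothetical transition relation
states = set(succ)
sat_phi = {2}                            # states satisfying phi

def diamond(Z):                          # states with SOME successor in Z
    return {s for s in states if succ[s] & Z}

def box(Z):                              # states with ALL successors in Z
    return {s for s in states if succ[s] <= Z}

def lfp(f):                              # least fixed point: iterate up from the empty set
    Z = set()
    while f(Z) != Z:
        Z = f(Z)
    return Z

def gfp(f):                              # greatest fixed point: iterate down from all states
    Z = set(states)
    while f(Z) != Z:
        Z = f(Z)
    return Z

ef_phi = lfp(lambda Z: sat_phi | diamond(Z))   # mu X. phi or <>X
ag_like = gfp(lambda Z: sat_phi & box(Z))      # nu X. phi and []X
```

Here every state can reach the ϕ-state 2, so the least fixed point yields all three states, while only the ϕ-state with a self-loop survives the greatest fixed point.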

5 Model Checking Algorithms

In Chap. 4, we explained how NuSMV can be used to verify LTL and CTL formulae, and how Spin can be used to verify LTL formulae. But how do model checkers such as NuSMV and Spin actually perform those computations? In this chapter, we explain the basics of the algorithms applied by NuSMV and Spin. First, we explain how CTL formulae are commonly checked in NuSMV, after which we present a method used by NuSMV to check LTL formulae that is based on the CTL model checking procedure. After that, we address how Spin verifies assertions, and how an extended version of that procedure is used by Spin to check LTL formulae.

5.1 CTL Model Checking in NUSMV

Let us consider the question, given a Kripke structure K and a CTL formula Φ, whether K satisfies Φ, i.e., whether K |= Φ. First, to simplify the procedure to determine this, recall the following equalities.

AG Φ ⇔ ¬ EF ¬Φ          AG Φ ⇔ ¬ EU(true, ¬Φ)
AF Φ ⇔ ¬ EG ¬Φ          AF Φ ⇔ AU(true, Φ)
AX Φ ⇔ ¬ EX ¬Φ          AU(Φ1, Φ2) ⇔ ¬(EU(¬Φ2, ¬(Φ1 ∨ Φ2))) ∧ AF Φ2
EG Φ ⇔ ¬ AF ¬Φ          EG Φ ⇔ ¬ AU(true, ¬Φ)
EF Φ ⇔ ¬ AG ¬Φ          EF Φ ⇔ EU(true, Φ)
EX Φ ⇔ ¬ AX ¬Φ

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
M. Huisman and A. Wijs, Concise Guide to Software Verification, Texts in Computer Science, https://doi.org/10.1007/978-3-031-30167-4_5


Furthermore, propositional logic provides us with the equality Φ1 ∨ Φ2 ⇔ ¬(¬Φ1 ∧ ¬Φ2) (see Chap. 2). Recall that {EX, EU, AF} is a core set of temporal logic operators for CTL (see Sect. 4.7). This means that we can always rewrite a CTL formula to one that only consists of atomic propositions combined using the operators ¬, ∧, EX, EU, and AF. In the remainder of this section, we assume that a CTL formula only consists of those components. This allows us to explain a CTL model checking algorithm that can ignore the other operators.

Example 5.1

In Example 4.13, we specified the CTL formula AG EF ctr.even. This formula can be translated in three steps to a formula that only uses ¬, ∧, EX, EU, and AF as follows.

AG EF ctr.even = AG EU(true, ctr.even)
               = ¬ EF ¬(EU(true, ctr.even))
               = ¬ EU(true, ¬(EU(true, ctr.even)))
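This rewriting can be sketched as a small recursive function (our own illustration; the nested-tuple encoding of formulae is hypothetical, not NuSMV's internal representation):

```python
# A sketch of rewriting a CTL formula into the core {not, and, EX, EU, AF},
# using the equalities listed above. Formulae are nested tuples.

def to_core(f):
    if isinstance(f, str):                   # atomic proposition (or "true")
        return f
    op = f[0]
    args = [to_core(g) for g in f[1:]]
    if op in ("not", "and", "EX", "EU", "AF"):
        return (op, *args)
    if op == "or":                           # p or q = not(not p and not q)
        return ("not", ("and", ("not", args[0]), ("not", args[1])))
    if op == "EF":                           # EF p = EU(true, p)
        return ("EU", "true", args[0])
    if op == "AG":                           # AG p = not EU(true, not p)
        return ("not", ("EU", "true", ("not", args[0])))
    if op == "EG":                           # EG p = not AF not p
        return ("not", ("AF", ("not", args[0])))
    if op == "AX":                           # AX p = not EX not p
        return ("not", ("EX", ("not", args[0])))
    if op == "AU":                           # AU(p, q) = not EU(not q, not(p or q)) and AF q
        p, q = args
        return ("and",
                ("not", ("EU", ("not", q), to_core(("not", ("or", p, q))))),
                ("AF", q))
    raise ValueError(op)
```

Applied to `("AG", ("EF", "ctr.even"))`, the function reproduces the three-step translation of the example.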

To verify a CTL formula Φ, we first have to identify the subformulae of Φ. We define a function chld(Φ), which identifies the direct children of a formula:

chld(p) = ∅
chld(¬Φ) = {Φ}
chld(Φ1 ∧ Φ2) = {Φ1, Φ2}
chld(EX Φ) = {Φ}
chld(EU(Φ1, Φ2)) = {Φ1, Φ2}
chld(AF Φ) = {Φ}

With chld∗(Φ), we refer to the transitive, reflexive closure of chld(Φ).¹ For a formula Φ, this means that chld∗(Φ) contains Φ itself (it is reflexive), it contains all its direct children (chld(Φ)), and the direct children of those children, and so on (it is transitive). We say that chld∗(Φ) consists of exactly all subformulae of Φ. For a CTL formula Φ, the function chld(Φ) can be used to obtain its parse tree. At the root of this tree, we have the original formula Φ, and its direct children are the subformulae one level down into the formula. Those children may again have children, etc. The set of all nodes in this tree is identified by chld∗(Φ).

¹ Technically, transitive, reflexive closures can be defined for relations, as opposed to functions, as explained in Chap. 2. However, chld(Φ) effectively defines a relation C: for CTL subformulae Φ1 and Φ2 of Φ, we have Φ1 C Φ2 if and only if Φ2 ∈ chld(Φ1). The reflexivity of chld∗(Φ) therefore refers to the fact that for all CTL subformulae Φ1 of Φ, we have Φ1 ∈ chld∗(Φ1), and the transitivity of chld∗(Φ) refers to the fact that for all CTL subformulae Φ1, Φ2 and Φ3 of Φ, Φ2 ∈ chld∗(Φ1) and Φ3 ∈ chld∗(Φ2) implies that Φ3 ∈ chld∗(Φ1).


Example 5.2

Consider the CTL formula Φ = EX a ∧ EU(b, ¬ AF c), with AP = {a, b, c}. The set of subformulae of Φ is chld∗(Φ) = {Φ, EX a, a, EU(b, ¬ AF c), b, ¬ AF c, AF c, c}. The definition of chld(Φ), in line with the parse tree of Φ, is visualised in Fig. 5.1. In this tree, every node corresponds with a subformula, with Φ being represented by the root of the tree.
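The functions chld and chld∗ can be sketched directly for formulae in core form (our own illustration; the tuple encoding is hypothetical):

```python
# A sketch of chld and its transitive, reflexive closure chld*, for CTL
# formulae in core form. Atomic propositions are strings; composite
# formulae are nested tuples such as ("EX", "a").

def chld(f):
    if isinstance(f, str):                # atomic proposition: no children
        return set()
    return set(f[1:])                     # not/EX/AF: one child; and/EU: two

def chld_star(f):
    result = {f}                          # reflexive: f is its own subformula
    for g in chld(f):
        result |= chld_star(g)            # transitive: descend recursively
    return result

# The formula of Example 5.2: EX a  and  EU(b, not AF c).
phi = ("and", ("EX", "a"), ("EU", "b", ("not", ("AF", "c"))))
```

As in Example 5.2, `chld_star(phi)` contains exactly eight subformulae.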

The verification procedure to check whether a Kripke structure K satisfies a CTL formula Φ works as follows. First, we identify, for the atomic propositions in Φ, i.e., those subformulae Φ′ with chld(Φ′) = ∅, which states in K satisfy them, in other words, for which states s we have K, s |= Φ′. Then, using the sets of states satisfying the atomic propositions, we can move up one level in the parse tree, i.e., to those subformulae Φ′ that have only atomic propositions as their children in the parse tree, repeat the procedure, and move up one level in the parse tree again, and so on, until Φ has been reached in the parse tree, at which point we can identify the states that satisfy Φ. Algorithm 1 presents this basic CTL model checking algorithm ("basic", since in practice, various optimisations are typically added, which we ignore here). Given a Kripke structure K and a CTL formula Φ, the algorithm determines whether K |= Φ. The notation A ← B means that the set A is assigned the content of the set B. At line 1, a set of formulae F is constructed, consisting of the subformulae of Φ that are atomic propositions. In the large while-loop starting at line 2, the parse tree of Φ is essentially traversed bottom-up. While there are subformulae to process (line

Fig. 5.1 An example parse tree for the CTL formula EX a ∧ EU(b, ¬ AF c)


Algorithm 1: Basic CTL model checking algorithm

Data: Kripke structure K = (S, I, →, λ), CTL formula Φ
Result: K |= Φ

 1  F ← {Φ′ ∈ chld∗(Φ) | chld(Φ′) = ∅};
 2  while F ≠ ∅ do
 3      foreach Φ′ ∈ F do
            // All states with atomic proposition Φ′.
 4          if Φ′ ∈ AP then Sat(Φ′) ← {s ∈ S | Φ′ ∈ λ(s)};
            // All states not satisfying Φ1.
 5          else if Φ′ = (¬Φ1) then Sat(Φ′) ← S \ Sat(Φ1);
            // All states satisfying Φ1 and Φ2.
 6          else if Φ′ = (Φ1 ∧ Φ2) then Sat(Φ′) ← Sat(Φ1) ∩ Sat(Φ2);
            // All states with a successor satisfying Φ1.
 7          else if Φ′ = (EX Φ1) then Sat(Φ′) ← {s ∈ S | succ(s) ∩ Sat(Φ1) ≠ ∅};
 8          else if Φ′ = (EU(Φ1, Φ2)) then
 9              Z ← Sat(Φ2);
                // Repeat until no more new states are found.
10              while Z ≠ ∅ do
11                  Sat(Φ′) ← Sat(Φ′) ∪ Z;
12                  Z ← {s ∈ Sat(Φ1) | succ(s) ∩ Z ≠ ∅} \ Sat(Φ′);
13          else if Φ′ = (AF Φ1) then
14              Z ← Sat(Φ1);
                // Repeat until no more new states are found.
15              while Z ≠ ∅ do
16                  Sat(Φ′) ← Sat(Φ′) ∪ Z;
17                  Z ← {s ∈ S | succ(s) ⊆ Sat(Φ′)} \ Sat(Φ′);
18      F′ ← {Φ′ ∈ chld∗(Φ) | chld(Φ′) ⊆ F};
19      F ← F′;
20  return I ⊆ Sat(Φ);

2), each subformula Φ′ ∈ F is considered (line 3). At lines 4–17, all possibilities regarding the structure of Φ′ are listed. In each case, we construct a set of states Sat(Φ′), by which we keep track of the states satisfying Φ′. As we move up the parse tree, we use the sets of states of the children of each subformula Φ′ of Φ to identify Sat(Φ′). If Φ′ is an atomic proposition, we construct a set of states Sat(Φ′) consisting of all states that satisfy this proposition (line 4). Alternatively, if Φ′ is of the form ¬Φ1, then all states not in Sat(Φ1) are added to a new set Sat(Φ′) (line 5). If Φ′ is of the form Φ1 ∧ Φ2, we add all states that satisfy both Φ1 and Φ2, i.e., that are both in the set Sat(Φ1) and in the set Sat(Φ2), to Sat(Φ′) (line 6). For Φ′ = EX Φ1, we have to identify all states that have at least one successor state in Sat(Φ1) (line 7).


The remaining two cases are more involved and require repeatedly adding states to Sat(Φ′) until no more states can be added. In the case that Φ′ = EU(Φ1, Φ2), we first need to consider Sat(Φ2), as those states satisfy EU(Φ1, Φ2). We create a temporary set of states Z and initialise it to Sat(Φ2) (line 9). We use Z to keep track of new states to be added to Sat(Φ′). As long as there are new states to add (line 10), we add them (line 11), and next identify states in Sat(Φ1) that can reach states in Z. Those states satisfy EU(Φ1, Φ2) as well. Note that once a state has been added to Sat(Φ′), it cannot be added again to Z (due to the "\ Sat(Φ′)" at line 12). This ensures termination of the procedure, since K has a finite number of states. Finally, for Φ′ = AF Φ1, we perform a very similar procedure, but in this case, we start by adding Sat(Φ1) to Sat(Φ′) (line 14), and then repeatedly add states to Sat(Φ′) that have all their successors in Sat(Φ′), i.e., for which it is unavoidable to reach a state satisfying Φ1 (line 17). Once all formulae in F have been processed, their immediate parents in the parse tree are identified to form a new set of formulae F (line 18). Once this has been done, this new set replaces the old one (line 19), and the condition of the outer while-loop is checked to determine whether another iteration through the loop is needed (line 2). When at some point no new iteration is needed, the algorithm returns whether I is a subset of Sat(Φ), i.e., whether K satisfies Φ.

Example 5.3

This example is inspired by an example in [4]. A Triple Modular Redundant (TMR) system is a system that performs the same computation on three different processors, to increase reliability. If a processor fails, the system can still produce reliable outputs. A voting circuit is used to compare the outputs of the three processors. The circuit selects a majority vote of the outputs of the processors as the output of the system as a whole. It is assumed that only one processor may fail at a time, and only one can be repaired at a time. If the circuit itself fails, the entire system fails, and when the circuit is repaired, the system starts again with the initial situation of three processors and one circuit. A Kripke structure for the TMR system is given in Fig. 5.2. States are labelled si,j, with i the number of active processors and j the number of active circuits. The TMR system is operational if at least two processors and one circuit are active. An interesting property that can be expressed in CTL is "possibly the system never goes down". In CTL, this can be expressed as EG ¬down. This formula can be rewritten to ¬ AF down, which can be processed by Algorithm 1 to verify whether the TMR system satisfies the property. First, we identify Sat(down), which corresponds with {s0,0} (line 4 of Algorithm 1). In turn, Sat(AF down) starts out as Sat(down) = {s0,0} (line 14 of Algorithm 1), after which all states that have all their successors in Sat(AF down) should be added (lines 15–17 of Algorithm 1). Clearly, this holds for no further state, hence Sat(AF down) = {s0,0}. Finally, Sat(¬ AF down) contains all states that are not in Sat(AF down) (line 5 of Algorithm 1), hence Sat(¬ AF down) = S \ {s0,0}.


Fig. 5.2 A Kripke structure for a TMR system

Since this means that s3,1 ∈ Sat(¬ AF down), we can conclude that the TMR system satisfies the property (line 20 of Algorithm 1). Another interesting property is "it is always possible to restart", which can be formalised with the CTL formula AG EF up3. This formula can be rewritten to AG EU(true, up3) = ¬ EF ¬(EU(true, up3)) = ¬ EU(true, ¬(EU(true, up3))). Clearly, Sat(true) = S and Sat(up3) = {s3,1}. As all states can reach s3,1, we have Sat(EU(true, up3)) = S, and Sat(¬ EU(true, up3)) = ∅. This means Sat(EU(true, ¬(EU(true, up3)))) = ∅, and hence, Sat(¬ EU(true, ¬(EU(true, up3)))) = S. Since s3,1 ∈ Sat(¬ EU(true, ¬(EU(true, up3)))), the TMR system satisfies this property as well.
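The Sat-set computations of this example can be reproduced directly. In the sketch below (our own encoding; the transition relation follows the textual description of the TMR system, so it may differ in detail from the figure, and the state names s31, s00, etc. abbreviate s3,1, s0,0), sat_eu and sat_af implement lines 8–12 and 13–17 of Algorithm 1:

```python
# A sketch of the iterative Sat-set constructions of Algorithm 1, run on
# a TMR-like Kripke structure inferred from the textual description.

succ = {
    "s31": {"s21", "s00"},          # a processor or the circuit fails
    "s21": {"s31", "s11", "s00"},   # repair, further failure, or circuit failure
    "s11": {"s21", "s01", "s00"},
    "s01": {"s11", "s00"},
    "s00": {"s31"},                 # circuit repaired: full restart
}
S = set(succ)

def sat_af(sat1):                   # lines 13-17: AF phi1
    sat, Z = set(), set(sat1)
    while Z:
        sat |= Z
        Z = {s for s in S if succ[s] <= sat} - sat
    return sat

def sat_eu(sat1, sat2):             # lines 8-12: EU(phi1, phi2)
    sat, Z = set(), set(sat2)
    while Z:
        sat |= Z
        Z = {s for s in sat1 if succ[s] & Z} - sat
    return sat

sat_down = {"s00"}
sat_eg_not_down = S - sat_af(sat_down)                  # EG !down = !AF down
sat_ag_ef_up3 = S - sat_eu(S, S - sat_eu(S, {"s31"}))   # AG EF up3
```

As in the example, s3,1 ends up in Sat(¬ AF down), and every state satisfies AG EF up3.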

5.2 CTL Model Checking in NUSMV Under Fairness

As mentioned before, fairness cannot be expressed in CTL. Instead, we have to enforce somehow that we only consider fair paths when verifying CTL formulae. We can do so by formalising fairness constraints for CTL. Recall the notions of strong and weak fairness (see Sect. 4.6). If Φ1 and Φ2 are CTL formulae, then weak fairness for these formulae can be expressed as follows:

F G Φ1 ⇒ G F Φ2

This expresses that if eventually Φ1 holds continuously, then Φ2 must hold infinitely often. Note that the fairness relation between Φ1 and Φ2 is not expressed in CTL, but in LTL, making the above formula a mix of the two (more specifically, CTL∗). Reasoning about the fairness principle is done using an LTL formula, while the CTL formulae Φ1 and Φ2 are verified in the individual states. In a similar way, we can express strong fairness:

G F Φ1 ⇒ G F Φ2

A weak fairness constraint can be defined for a NuSMV model by adding JUSTICE ¬Φ1 ∨ Φ2 to the model (see Sect. 4.5). Note that ¬Φ1 ∨ Φ2 can be rewritten to ¬(Φ1 ∧ ¬Φ2). NuSMV will identify the set of states Sat(¬(Φ1 ∧ ¬Φ2)) in


the way described by Algorithm 1. A path π is now called weakly fair w.r.t. the weak fairness property involving Φ1 and Φ2 if and only if it contains an infinite number of states in Sat(¬(Φ1 ∧ ¬Φ2)):

∃i ∈ ℕ. πi ∈ Sat(¬(Φ1 ∧ ¬Φ2)) ∧ ∀i ∈ ℕ. πi ∈ Sat(¬(Φ1 ∧ ¬Φ2)) ⇒ ∃j > i. πj ∈ Sat(¬(Φ1 ∧ ¬Φ2))

When verifying a CTL formula Φ under the weak fairness constraint, NuSMV applies the path quantifiers only to the weakly fair paths of K. In other words, Algorithm 1 is still applied, but only on weakly fair paths. The formalisation of a weak fairness constraint given above is in fact a specific instance of fairness expressed by means of a set of constraints, where each constraint is expressed as a subset of S. This general formalisation is relevant for Sect. 5.3. Given such a set F = {F0, F1, . . .}, with each fairness constraint Fj (0 ≤ j < |F|) being a subset of S, a path π of K is called fair under F if and only if it visits each of the Fj ∈ F infinitely often:

∀Fj ∈ F. ∃i ∈ ℕ. πi ∈ Fj ∧ ∀i ∈ ℕ. πi ∈ Fj ⇒ ∃k > i. πk ∈ Fj

A strong fairness constraint can be defined in NuSMV as COMPASSION (Φ1, Φ2). For this, NuSMV identifies the sets Sat(Φ1) and Sat(Φ2), and a path π is called strongly fair, or just fair, if and only if the presence of an infinite number of states in π from Sat(Φ1) implies that also an infinite number of states from Sat(Φ2) are present in π:

(∃i ∈ ℕ. πi ∈ Sat(Φ1) ∧ ∀i ∈ ℕ. πi ∈ Sat(Φ1) ⇒ ∃j > i. πj ∈ Sat(Φ1)) ⇒ (∃k ∈ ℕ. πk ∈ Sat(Φ2) ∧ ∀k ∈ ℕ. πk ∈ Sat(Φ2) ⇒ ∃l > k. πl ∈ Sat(Φ2))

Again, when a strong fairness constraint is specified, NuSMV will restrict the verification of a CTL formula Φ to fair paths.

Example 5.4

Let us consider the TMR system of Example 5.3 again. In practice, we expect the circuit to always eventually fail and require repairing. However, if we check the formula AF down, then it will be concluded that the property does not hold, since paths exist that do not include s0,0 at all, or only include s0,0 a finite number of times. To rule these paths out (because we consider them unrealistic), we can express the strong fairness formula G F EX down ⇒ G F down. In words, if we globally eventually reach a state with a next state satisfying down, then we must also globally eventually have down, in other words, that state must be visited infinitely often. As Sat(down) = {s0,0}, the set Sat(EX down) corresponds with {s3,1, s2,1, s1,1, s0,1}. The set of fair paths, therefore, consists of those paths that include s0,0 infinitely often if any of the other states is included infinitely often. As the latter is unavoidable, all fair paths must include s0,0 infinitely often. For such paths, AF down holds.
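For ultimately periodic ("lasso") paths, checking such fairness constraints is simple: a lasso visits a set infinitely often exactly when its cycle contains a state from that set. A small sketch (our own representation of paths; the state names follow the TMR example):

```python
# A sketch of fairness checking for ultimately periodic ("lasso") paths,
# represented as (prefix, cycle). A lasso visits a constraint set F_j
# infinitely often exactly when its cycle intersects F_j.

def is_fair(path, constraints):
    _prefix, cycle = path
    return all(set(cycle) & F_j for F_j in constraints)

# The strong fairness of the example forces fair paths to visit s0,0
# (here "s00") infinitely often; encoded as a single constraint set:
F = [{"s00"}]
```

A lasso cycling between s2,1 and s3,1 forever is unfair under F, while one whose cycle includes s0,0 is fair.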


5.3 Tableau-Based LTL Model Checking in NUSMV

Over the years, multiple algorithms have been developed to verify LTL formulae. In this section, we focus on the method used by NuSMV. It uses the CTL model checking algorithm described in Sect. 5.1, and applies fairness constraints as explained in Sect. 5.2. This method was originally proposed in [47]. Intuitively, an LTL formula ϕ can be interpreted as a CTL formula A ϕ. Of course, strictly speaking this is not a correct CTL formula (it is a CTL∗ formula), otherwise LTL would be completely expressible in CTL, but we temporarily reason about ϕ in this way to make explicit that LTL formulae formalise properties that should hold for all paths. If we want to identify for which states s ∈ S we have K, s |= A ϕ, then it suffices to prove that K, s |= ¬ E ¬ϕ. Hence, it is sufficient to be able to check whether formulae of the form E ¬ϕ are satisfied or not. As we already mentioned, ¬ϕ may not be expressible in CTL. To still be able to reason about ¬ϕ, the method we describe here constructs a tableau T for ϕ, which is a Kripke structure containing all possible paths that satisfy ¬ϕ. Once we have the tableau, we compute the product P of T and K. This product consists of all paths that are both in T and K. A state s ∈ S satisfies E ¬ϕ if and only if it is the start of an infinite path in P. CTL model checking can be applied to check this. What remains is to rule out unfair paths. As T may contain unfair paths, P may contain paths that do not satisfy ¬ϕ. We need to ignore those. To be precise, the method performs the following steps for a given formula E ¬ϕ:

1. Construct a tableau T for ϕ.
2. Generate the product P of K and T.
3. Construct fairness constraints for ¬ϕ.
4. Check whether P satisfies EG true under the fairness constraints, i.e., check whether P has fair infinite paths.

A schematic overview of this method is presented in Fig. 5.3.

Fig. 5.3 An overview of the LTL tableau method: from the LTL formula, a tableau (a Kripke structure) and fairness constraints are derived; the tableau is combined with the input Kripke structure into a product (also a Kripke structure), on which 'EG true' is checked with CTL model checking


First, we construct a tableau for ϕ. A tableau is a Kripke structure T = (ST, IT, →T, λT). The set of states of the tableau consists of all possible sets of elementary subformulae of ϕ. We define the elementary subformulae of an LTL formula ϕ using a function el() as follows.

el(p) = {p}
el(¬ϕ) = el(ϕ)
el(ϕ1 ∨ ϕ2) = el(ϕ1) ∪ el(ϕ2)
el(X ϕ) = {X ϕ} ∪ el(ϕ)
el(ϕ1 U ϕ2) = {X(ϕ1 U ϕ2)} ∪ el(ϕ1) ∪ el(ϕ2)

Note that we are using core LTL here (see Sect. 4.4), with only the logical and temporal operators ¬, ∨, X and U. Each set in 2^el(ϕ) represents a state s ∈ ST, therefore in the following, we reason about states as being sets of elementary formulae. The labelling function λT assigns to each state s ∈ ST the set of atomic propositions that are in s. For elementary LTL formulae ϕ, ϕ1, ϕ2, the following sets of states can be identified.

Sat(ϕ) = {s ∈ ST | ϕ ∈ s}
Sat(¬ϕ) = ST \ Sat(ϕ)
Sat(ϕ1 ∨ ϕ2) = Sat(ϕ1) ∪ Sat(ϕ2)
Sat(X ϕ) = {s ∈ ST | succ(s) ⊆ Sat(ϕ)}
Sat(ϕ1 U ϕ2) = Sat(ϕ2) ∪ (Sat(ϕ1) ∩ Sat(X(ϕ1 U ϕ2)))

The set of initial states IT is defined as Sat(¬ϕ), as we are interested in paths that violate ϕ, i.e., that satisfy ¬ϕ. What remains is to define →T. We want to define it in such a way that each elementary subformula in a state actually holds in that state. This means that if a formula X ϕ′ is in a state s, then all successors of s must satisfy ϕ′. Furthermore, because we are reasoning about LTL formulae, i.e., a property is expressed that should hold for all paths, if X ϕ′ is not in s, then s must satisfy ¬ X ϕ′, and no successor of s should satisfy ϕ′. The definition of →T is now as follows.

s →T s′ ⇔ ∀ X ϕ′ ∈ el(ϕ). (s ∈ Sat(X ϕ′) ⇔ s′ ∈ Sat(ϕ′))

Example 5.5

This example is taken from [47]. In Fig. 5.4, the tableau for the LTL formula ϕ = a U b is given. We identify the different states by means of the red labels. The states are represented by all possible subsets of the set of elementary subformulae el(ϕ) = {X(a U b)} ∪ el(a) ∪ el(b) = {X(a U b), a, b}, and the subformulae present in each state determine the transition relation →T . Each bidirectional arrow represents two transitions between the two connected states, in opposing directions.


Fig. 5.4 The tableau for the LTL formula a U b

To identify →T, we can start with the states containing X(a U b). For these states, i.e., {s0, s1, s2, s4}, it must hold that all their successors satisfy a U b. Therefore, we need to identify Sat(a U b). By definition, this is the set Sat(b) ∪ (Sat(a) ∩ Sat(X(a U b))), which is equal to {s0, s1, s3, s5} ∪ ({s0, s2, s5, s6} ∩ {s0, s1, s2, s4}) = {s0, s1, s3, s5} ∪ {s0, s2} = {s0, s1, s2, s3, s5}. This means that each of the states in {s0, s1, s2, s4} must have a transition to each of the states in {s0, s1, s2, s3, s5}. In addition, the states that do not contain X(a U b), i.e., {s3, s5, s6, s7}, must have transitions to all states not satisfying a U b. Since Sat(a U b) = {s0, s1, s2, s3, s5}, we have Sat(¬(a U b)) = {s4, s6, s7}.
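The construction of this example can be replayed programmatically. In the sketch below (our own illustration), the eight tableau states are built as subsets of el(a U b), Sat(a U b) is computed by its definition, and →T is derived; frozensets play the role of the labels s0 . . . s7 in the figure:

```python
# A sketch of the tableau construction for phi = a U b. States are the
# subsets of el(phi) = {X(a U b), a, b}.

from itertools import chain, combinations

el = ["X(aUb)", "a", "b"]
states = [frozenset(c) for c in chain.from_iterable(
    combinations(el, r) for r in range(len(el) + 1))]

sat_a = {s for s in states if "a" in s}
sat_b = {s for s in states if "b" in s}
sat_x = {s for s in states if "X(aUb)" in s}   # elementary: membership in the state
sat_aub = sat_b | (sat_a & sat_x)              # Sat(a U b) by definition

# s ->T s'  iff  s in Sat(X(a U b))  <=>  s' in Sat(a U b)
trans = {(s, t) for s in states for t in states
         if (s in sat_x) == (t in sat_aub)}
```

As in the example, five of the eight states satisfy a U b, and each of the four X(a U b)-states gets five successors while each of the remaining four gets three.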

A constructed tableau may have states without successors. For instance, a state with the subformulae {p, X p, X ¬p} has no successors. As a Kripke structure cannot have such deadlock states, and since LTL reasons about paths (which are by definition infinite, see Definition 3.4), we may safely remove those states from T. Next, we define the product of the two Kripke structures T and K as follows.

Definition 5.1 (Product of a tableau and a Kripke structure) Consider the Kripke structures T = (ST, IT, →T, λT) and K = (S, I, →, λ), with T being a tableau for the LTL formula ϕ. The product of T and K is also a Kripke structure P = (SP, IP, →P, λP), with

• SP = {(s, s′) | s ∈ ST ∧ s′ ∈ S ∧ λ(s′) ∩ APϕ = λT(s)}, with APϕ the atomic propositions in ϕ;
• IP = {(s, s′) ∈ SP | s ∈ IT ∧ s′ ∈ I};
• (s, s′) →P (t, t′) ⇔ s →T t ∧ s′ → t′;
• λP((s, s′)) = λT(s).
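Definition 5.1 can be transcribed almost literally. In the sketch below (our own illustration; the tiny tableau and Kripke structure are made up purely to exercise the construction), each structure is a tuple (states, initial states, transitions, labelling):

```python
# A near-literal transcription of Definition 5.1: the product of a
# tableau T and a Kripke structure K, each given as (states, init,
# trans, label). The example structures are hypothetical.

def product(T, K, ap_phi):
    ST, IT, RT, LT = T
    S, I, R, L = K
    # S_P: pairs whose labels agree on the atomic propositions of phi.
    SP = [(s, t) for s in ST for t in S if L[t] & ap_phi == LT[s]]
    IP = [(s, t) for (s, t) in SP if s in IT and t in I]
    RP = {((s, t), (u, v)) for (s, t) in SP for (u, v) in SP
          if (s, u) in RT and (t, v) in R}
    LP = {st: LT[st[0]] for st in SP}
    return SP, IP, RP, LP

T = ({0, 1}, {1}, {(0, 1), (1, 1)}, {0: {"p"}, 1: set()})
K = ({"a", "b"}, {"b"}, {("a", "b"), ("b", "b")},
     {"a": {"p", "q"}, "b": {"q"}})
SP, IP, RP, LP = product(T, K, {"p"})
```

Only the pairs (0, a) and (1, b) agree on the proposition p, and the product transitions connect exactly those pairs whose components are connected in both structures.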


It is interesting to note that P may have states without successors, even if T does not have these. We can safely remove them, as doing so will not remove any (infinite) path. Each path (s0, s0′)(s1, s1′) . . . in P, with (s0, s0′) an initial state, corresponds with a path s0 s1 . . . in T with λT(si) = λP((si, si′)) for all i ∈ ℕ, and a path s0′ s1′ . . . in K with λ(si′) ∩ APϕ = λP((si, si′)) for all i ∈ ℕ. Therefore, P will contain at least one path starting from an initial state if and only if K has at least one path, starting from an initial state, satisfying ¬ϕ, i.e., K satisfies E ¬ϕ. If we verify whether P satisfies the CTL formula EG true, we check for the presence of such a path. However, before doing so, there is still one issue left to address. T may contain paths that do not satisfy ¬ϕ, in case ¬ϕ has subformulae of the form ϕ1 U ϕ2. In that situation, there may be paths in T consisting entirely of states satisfying ϕ1. In other words, the eventuality of ϕ2 is never fulfilled. An example of this is present in Fig. 5.4. The path consisting only of the upper-right state and its self-loop has the property that in each state we have X(a U b), and yet b never holds. To fix this, we must involve the following set of fairness constraints.

F = {Sat(¬(ϕ1 U ϕ2) ∨ ϕ2) | (ϕ1 U ϕ2) ∈ chld∗(¬ϕ)}

By doing so, we only consider paths that are fair under F, i.e., that infinitely often involve states from each of the sets Sat(¬(ϕ1 U ϕ2) ∨ ϕ2). That this is sufficient to fulfil the eventualities of each of the ϕ1 U ϕ2 ∈ chld∗(¬ϕ) can be observed as follows. If a path π contains an infinite number of states from Sat(¬(ϕ1 U ϕ2) ∨ ϕ2), then from every state s in π from Sat(ϕ1 U ϕ2), a state in the former set must be reachable via π in a finite number of steps. The first such state, let us call it s′, must be from Sat(ϕ2), since if it were from Sat(¬(ϕ1 U ϕ2)), this would violate the fact that s ∈ Sat(ϕ1 U ϕ2).
Hence, the eventuality that ϕ2 holds along a path with states from Sat ϕ1 U ϕ2 is realised. Example 5.6

Consider again the first Kripke structure given in Sect. 4.9 (Fig. 4.9), together with the LTL formula F G p. This formula can be rewritten to F G p = F ¬F(¬p) = F ¬(true U ¬p) = true U(¬(true U ¬p)). We call the latter one ϕ, and have el(ϕ) = {X ϕ} ∪ el(¬(true U ¬p)) = {X ϕ} ∪ el(true U ¬p) = {X ϕ, X(true U ¬p)} ∪ el(¬p) = {X ϕ, X(true U ¬p), p}. In the following, we refer with ψ to true U ¬p. Figure 5.5 presents the tableau for F G p. Note that Sat_p = {s0, s2, s5, s6}, Sat_{Xϕ} = {s0, s1, s2, s4}, and Sat_{Xψ} = {s0, s1, s3, s5}, with the numbers referring to the states as they are numbered in the figure. Furthermore, by definition, Sat_ψ = Sat_¬p ∪ (Sat_true ∩ Sat_{Xψ}) = Sat_¬p ∪ Sat_{Xψ} = {s1, s3, s4, s7} ∪ {s0, s1, s3, s5} = {s0, s1, s3, s4, s5, s7}. Finally, Sat_ϕ = Sat_¬ψ ∪ (Sat_true ∩ Sat_{Xϕ}) = Sat_¬ψ ∪ Sat_{Xϕ} = {s2, s6} ∪ {s0, s1, s2, s4} = {s0, s1, s2, s4, s6}. These sets of states define →_T as depicted in the figure. For instance, state s0 is both in Sat_{Xϕ} and Sat_{Xψ}. Therefore, the successors of state s0 are both in Sat_ϕ


Fig. 5.5 The tableau for the LTL formula F G p

and Sat_ψ, i.e., the successors are all the states in the set Sat_ϕ ∩ Sat_ψ = {s0, s1, s4}. In a similar way, we find the following successors for each state:

• State s1 has successors {s0, s1, s4} (the same reasoning as for state s0 is applicable).
• State s2 has successors Sat_ϕ ∩ Sat_¬ψ = {s2, s6}.
• State s3 has successors Sat_¬ϕ ∩ Sat_ψ = {s3, s5, s7}.
• State s4 has successors {s2, s6} (similar to state s2).
• State s5 has successors {s3, s5, s7} (similar to state s3).
• State s6 has successors Sat_¬ϕ ∩ Sat_¬ψ = ∅.
• State s7 has successors ∅ (similar to state s6).

Finally, we have I_T = Sat_¬ϕ = {s3, s5, s7}. Subsequently, the product P is defined as follows, writing s0′, s1′, s2′ for the states of the Kripke structure.

• S_P = {(s0, s0′), (s2, s0′), (s5, s0′), (s6, s0′), (s1, s1′), (s3, s1′), (s4, s1′), (s7, s1′), (s0, s2′), (s2, s2′), (s5, s2′), (s6, s2′)};
• I_P = {(s5, s0′)};
• →_P is defined as follows (for convenience, we define →_P as a function):
  – →_P((s5, s0′)) = {(s5, s0′), (s3, s1′), (s7, s1′)};
  – →_P((s3, s1′)) = {(s5, s2′)};
  – →_P((s7, s1′)) = ∅;
  – →_P((s5, s2′)) = {(s5, s2′)}.

Fairness constraints can be constructed for ϕ = true U(¬(true U ¬p)) and ψ = true U ¬p. For ϕ, we identify Sat_{¬ϕ ∨ ¬ψ} = Sat_¬ϕ ∪ Sat_¬ψ = {s2, s3, s5, s6, s7} as a fairness constraint set, and for ψ, we identify Sat_{¬ψ ∨ ¬p} = Sat_¬ψ ∪ Sat_¬p =


{s1, s2, s3, s4, s6, s7}. Hence, a path is fair if and only if it includes both at least one state in {s2, s3, s5, s6, s7} an infinite number of times, and at least one state in {s1, s2, s3, s4, s6, s7} an infinite number of times. In P, this actually means that no paths are fair. For instance, from (s5, s0′), we can stay forever in (s5, s0′), but that is not fair. Furthermore, we can traverse (s5, s0′), (s3, s1′), (s5, s2′), (s5, s2′), . . ., but that is not fair either. What remains is to traverse (s5, s0′), (s7, s1′), but that leads to a deadlock. Concluding, there is no path satisfying ¬ϕ, hence the original Kripke structure satisfies ϕ.
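The conclusion of this example can be checked mechanically. The following Python sketch is our own illustration (names are hypothetical): it encodes the product transitions listed above and tests whether some reachable cycle intersects both fairness constraint sets, which in a finite structure is exactly the condition for a fair path to exist (all four product states are reachable from the initial state (s5, s0′)):

```python
# Product of Example 5.6; the second components are the Kripke states
# s0', s1', s2'. All four states are reachable from the initial state.
trans = {
    ("s5", "s0'"): {("s5", "s0'"), ("s3", "s1'"), ("s7", "s1'")},
    ("s3", "s1'"): {("s5", "s2'")},
    ("s7", "s1'"): set(),
    ("s5", "s2'"): {("s5", "s2'")},
}
# Fairness constraint sets, given as sets of tableau states (first components).
F1 = {"s2", "s3", "s5", "s6", "s7"}        # Sat for ¬ϕ ∨ ¬ψ
F2 = {"s1", "s2", "s3", "s4", "s6", "s7"}  # Sat for ¬ψ ∨ ¬p

def reach1(src, dst):
    """Is dst reachable from src in one or more steps?"""
    stack, seen = list(trans[src]), set(trans[src])
    while stack:
        x = stack.pop()
        if x == dst:
            return True
        for y in trans[x] - seen:
            seen.add(y)
            stack.append(y)
    return False

def fair_cycle_exists():
    # A fair path exists iff some cycle visits both fairness sets, i.e. iff
    # a state with first component in F1 and one with first component in F2
    # lie on a common cycle (they are mutually reachable in >= 1 step).
    return any(u[0] in F1 and v[0] in F2 and reach1(u, v) and reach1(v, u)
               for u in trans for v in trans)

print(fair_cycle_exists())  # False: no fair path, so K satisfies F G p
```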

5.4 Checking Safety Properties in SPIN

To verify LTL formulae, Spin applies an automata-based LTL algorithm. This algorithm is explained in Sect. 5.5. However, first, we explain a more straightforward way to verify safety properties. In doing so, we also set up the basics for the automata-based LTL algorithm of the next section. The basic verification algorithm for safety properties provided by Spin can be used when those properties have been expressed by means of assertions as opposed to temporal logic. Note that as safety properties express that something bad never happens, and as a counter-example is finite, leading to a state where the bad thing has happened, the checking of a safety property is really a reachability check: is it possible to reach from an initial state in the Kripke structure a state in which the bad thing has happened?

In Promela, it is possible to use assertions to express safety properties. These assertions are similar to how they are used in programming languages: assert(e) in a certain position in a model expresses that whenever an execution reaches that assertion, e must hold. If it does not, an assertion violation is reported. To express in Promela that some bad situation b should never happen, one can use an assertion to check that b does not hold. The position where the assertion is inserted in a model is important, and sometimes hard to determine, for instance, if the bad situation can only result from the behaviour of multiple process instances. There is, however, a general way to handle this: the creation of a dedicated monitor process. The process definition of a monitor process should be added to the model, and one instance should be declared of that definition. Its entire behaviour is defined by the assertion to be checked. In a simulation or state space exploration of Spin, this will result in the assertion being checked in every reached state, but other than that, no additional behaviour is added to the system. One can use a single monitor to check multiple assertions; by listing these in an atomic block, no additional states and transitions are produced.

Example 5.7

For the counter in Fig. 3.26, we can check the safety property that cnt should never have a value higher than MAXINCREASE by adding a monitor process. Figure 5.6 presents such a monitor process, which can be inserted in the model


Fig. 5.6 Checking a safety property using a monitor process

Algorithm 2: DFS-based safety property checking algorithm
Data: Set of initial states I, successor function succ, logical formula
  expressing monitored assertions ϕ, function colour : S → {Grey, Blue, Black}
  with colour(x) = Grey.
Result: false (and a counter-example) if and only if ∃s ∈ S. s ⊭ ϕ.

 1 procedure check-safety():
 2   foreach s ∈ I do
 3     result ← dfs(s);
 4     if result = false then
 5       return false and a finite path prefix consisting of all Blue states;
 6   return true;
 7 procedure dfs(s):
 8   colour(s) ← Blue;
 9   if s ⊭ ϕ then
10     return false;
11   foreach s′ ∈ succ(s) do
12     if colour(s′) = Grey then
13       result ← dfs(s′);
14       if result = false then
15         return false;
16   colour(s) ← Black;
17   return true;

of Fig. 3.26 at line 32. Note the use of the active keyword, which will result in a single instance being created of this process definition.

Algorithm 2 describes how Spin checks for assertion violations using Depth-First Search (DFS). The tool also has a method using Breadth-First Search (BFS), but we do not cover that in this book. See other books on Spin for more details [21,101].


As input, Algorithm 2 gets a set of initial states I and a definition of the successor function succ. You can think of both as being defined by a given Promela model. We deliberately avoid using a Kripke structure to represent the input, as this would suggest that the entire Kripke structure is known at the start of executing the algorithm. Instead, this Kripke structure is discovered on-the-fly, and the algorithm checks for safety property violations while discovering it. For convenience, we sometimes write S to refer to the set of all states reachable from I, even if that set is not explicitly known yet. For instance, the function colour initially maps all states to the colour Grey, which in practice means that as soon as a state is newly discovered, it has that colour. The set of assertions to be checked is represented by a logical formula ϕ. If there are multiple assertions a0, a1, . . . to be checked, the formula has the structure a0 ∧ a1 ∧ . . .. Note that we consider the case where these assertions should hold for all reachable states as opposed to some particular states, i.e., in Promela, one would use a monitor process to check for violations. Checking whether assertions hold for particular states is done in a similar manner, but it makes the presentation of the algorithm slightly more involved. With s |= ϕ, we denote that the state s satisfies the assertions represented by ϕ.

In the procedure check-safety, for each state in I, a DFS search is initiated (lines 2–3). Before discussing what the result of that may be, we first focus on the procedure dfs, which describes how a DFS search proceeds. In one step of a DFS search for a given state s, first, that state is given the colour Blue (line 8). Then, it is checked whether s satisfies the set of assertions ϕ (line 9), and if not, false is returned (line 10). Otherwise, all the successors of s are considered in sequence, each time checking whether the current successor s′ has the colour Grey (line 12), and if so, the DFS search recursively continues with s′ (line 13). Once the search has backtracked to the DFS step for s, if the returned result is false, this result is propagated back (lines 14–15). If this result never occurs for any of the successors, s is coloured Black (line 16) and true is returned (line 17).

Note that if false is ever returned by one dfs call, this value is propagated back all the way to the dfs call inside check-safety. Once this happens (line 4), that procedure in turn returns false (line 5) and a finite counter-example, i.e., a finite prefix of a path that starts at a state in I and includes the state where the assertion violation was detected. The state colours allow us to construct that finite prefix: exactly all the states that are Blue are involved in it. This can be seen as follows. Once a state s is involved in a DFS search, it is first coloured Blue (line 8), and it keeps that colour until it has been determined that s does not violate an assertion and that none of the successors of s lead to an assertion-violating state. At line 16, the colour of s is changed to Black to reflect that. If, on the other hand, a violation has been detected (lines 9 and 14), the procedure returns false, resulting in s keeping the colour Blue. Furthermore, note that the colours ensure that each state is processed at most once: if a state s′ is coloured Grey when reached in a DFS search (line 12), the procedure dfs will be called for it, at which point it will be coloured Blue (line 8). Once it has been coloured Blue, it can become Black at some point, but never Grey again, hence dfs will never be called again for s′.
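Algorithm 2 maps almost one-to-one onto a recursive implementation. The sketch below is a Python rendering of our own (the function names and the explicit prefix list are our choices, not part of the book's presentation); violated(s) plays the role of the check whether s violates ϕ at line 9:

```python
def check_safety(initial, succ, violated):
    """DFS-based safety checking in the style of Algorithm 2. Returns
    (True, None) if no reachable state violates the monitored assertions,
    and (False, prefix) with a counter-example path prefix otherwise."""
    colour = {}  # states absent from the dict are implicitly Grey

    def dfs(s, prefix):
        colour[s] = "Blue"
        prefix.append(s)
        if violated(s):                  # the assertion check at line 9
            return False
        for s2 in succ(s):
            if colour.get(s2, "Grey") == "Grey":
                if not dfs(s2, prefix):  # propagate the violation
                    return False
        colour[s] = "Black"
        prefix.pop()  # s is no longer on the candidate counter-example
        return True

    for s in initial:
        prefix = []
        if not dfs(s, prefix):
            return False, prefix  # the Blue states, in path order
    return True, None

# The Kripke structure of Example 5.8: p1 holds everywhere except in s4.
graph = {"s0": ["s1", "s3"], "s1": ["s2"], "s2": ["s1"],
         "s3": ["s4"], "s4": []}
print(check_safety(["s0"], graph.get, lambda s: s == "s4"))
# (False, ['s0', 's3', 's4'])
```

The returned prefix s0 s3 s4 is exactly the counter-example found in Example 5.8.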


Example 5.8

Consider the Kripke structure given at the top of Fig. 5.7. The figure shows how the DFS-based safety property checking algorithm may traverse this Kripke structure when it checks the assertion that p1 always holds (whether it will traverse the Kripke structure in this way depends on whether it considers state s1 before s3). Time passes from top to bottom, meaning that initially, the situation at the top is the current situation, and after each step of the algorithm, the current situation evolves to the one below it. In this execution, the algorithm first starts at the initial state s0. It then discovers the state s1, followed by s2. Since none of the successors of s2 are coloured Grey, s2 becomes Black. After that, the algorithm backtracks to s1, and colours that state Black as well. Next, after backtracking to s0, the other successor of s0, state s3, is discovered, followed by s4, at which point an assertion violation is detected, as p1 does not hold in s4. The Blue states together form a finite path prefix s0 s3 s4 that is a counter-example for the safety property that ¬p1 never occurs.

5.5 Automata-Based LTL Model Checking in SPIN

To check whether a Promela model satisfies a given LTL formula, Spin uses the automata-based method of LTL model checking. In the current section, we explain this method, which is based on the one originally proposed by Vardi and Wolper in 1986 [160], and uses the Nested Depth-First Search algorithm [61].

Recall that counter-examples for safety properties are finite prefixes of paths, and that counter-examples for liveness properties are infinite (see Chap. 4). For safety properties, this means that it is sufficient to check whether a state can be reached in which "something bad" happens. In the previous section, we looked at an algorithm to do this. For checking liveness properties, however, and therefore for LTL model checking (since liveness properties can be expressed in LTL), we need to understand what a counter-example actually entails.

Figure 5.8 illustrates a counter-example for the LTL formula F p2. Since we consider finite-state Kripke structures, and since counter-examples for LTL formulae need to be infinite, a counter-example must involve some cycle in the Kripke structure. In the example, the path π = s0 s1 s2 s1 s2 . . . is a path where, indeed, p2 never holds, and it involves the cycle between s1 and s2. In fact, in general, a counter-example for an LTL formula in a finite-state Kripke structure has a lasso structure: part of it is a finite path prefix from an initial state s to some state s′ (in the example, the single transition from s0 to s1), and the remaining part is a cycle that involves s′ (in the example, the cycle between s1 and s2).

For automata-based LTL model checking, the LTL formula must first be encoded in a suitable type of automaton. Such an automaton must be able to accept infinite


Fig. 5.7 Applying the DFS-based safety property checking algorithm on a Kripke structure to check that p1 is never violated. Time passes from top to bottom


Fig. 5.8 The path π = s0 , s1 , s2 , s1 , s2 , . . . is a counter-example for F p2

Fig. 5.9 Julius Richard Büchi

words. We call such automata ω-automata. Although multiple types of ω-automata exist, often, (non-deterministic) Büchi automata are used. This type of automaton is named after its inventor, Julius Richard Büchi (see Fig. 5.9).

Definition 5.2 (Non-deterministic Büchi Automaton) A Non-deterministic Büchi Automaton (NBA) is a 5-tuple B = (Q, A, →, Q0, F), where

• Q is a finite set of states;
• A is an alphabet;
• → ⊆ Q × A × Q is a transition relation;
• Q0 ⊆ Q is a set of initial states;
• F ⊆ Q is a set of accepting states.

With q −a→ q′, we denote that q′ ∈ →(q, a). Similar to a path through a Kripke structure (see Definition 3.4), we can define a path through an NBA as an infinite sequence of states π = q0 q1 q2 . . ., such that q0 ∈ Q0 and ∀i ≥ 0. qi −a→ qi+1, for some a ∈ A. NBAs have an acceptance condition: a path π = q0 q1 q2 . . . is accepting if and only if qi ∈ F for infinitely many i ≥ 0. With paths(B), we refer to the set of all accepting paths of NBA B.

In automata-based LTL model checking, an NBA represents an LTL formula ϕ to be verified and, as for the other algorithms presented in this chapter, the system under verification is represented by a Kripke structure K = (S, I, →, λ). As we already mentioned, automata accept words ω = A0 A1 A2 . . ., with the Ai being elements from some alphabet. As for paths, we use ωi to denote the ith element of the word ω, and ω^i to denote the word starting in ωi, i.e., ω^i = ωi ωi+1 ωi+2 . . ..


In the context of automata-based LTL model checking, the words of the system under verification are determined by the paths in paths(K) and the interpretation function λ. We refer with words(K) to the set of all words of K, and define this set as follows.

words(K) = {ω | ∃π ∈ paths(K). ∀i ≥ 0. λ(πi) = ωi}

Since every Ai in these words is a set of atomic propositions, the alphabet we use is 2^AP. The NBAs we use therefore have as alphabet 2^APϕ, with APϕ the atomic propositions in ϕ. In other words, every transition in the NBA is labelled with a set of atomic propositions. The set of words words(B) of NBA B is defined as follows (note that paths(B) contains all the accepting paths of B).

words(B) = {ω | ∃π ∈ paths(B). ∀i ≥ 0. πi −ωi→ πi+1}

For convenience, the labels of NBA transitions are typically represented by propositional logic formulae. In the following, we need to be able to express that in a Kripke structure K = (S, I, →, λ), a state s ∈ S satisfies a given propositional logic formula φ. With K, s |= φ, we denote this. This is defined as follows, with φ1, φ2 propositional logic formulae, and p an atomic proposition.

K, s |= true
K, s ⊭ false
K, s |= p        ⇔ p ∈ λ(s)
K, s |= ¬φ       ⇔ K, s ⊭ φ
K, s |= φ1 ∧ φ2  ⇔ K, s |= φ1 and K, s |= φ2
K, s |= φ1 ∨ φ2  ⇔ K, s |= φ1 or K, s |= φ2
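This satisfaction relation is straightforward to implement. The following Python sketch is our own illustration (the tuple-based formula representation is a choice of ours, not from the text); a state is identified with its label set λ(s):

```python
def holds(phi, labels):
    """Evaluate propositional formula phi in a state with label set `labels`
    (i.e. λ(s)). Formulae are True, False, an atomic proposition (a string),
    ("not", f), ("and", f1, f2), or ("or", f1, f2)."""
    if phi is True:
        return True
    if phi is False:
        return False
    if isinstance(phi, str):   # K, s |= p  iff  p ∈ λ(s)
        return phi in labels
    op = phi[0]
    if op == "not":
        return not holds(phi[1], labels)
    if op == "and":
        return holds(phi[1], labels) and holds(phi[2], labels)
    if op == "or":
        return holds(phi[1], labels) or holds(phi[2], labels)
    raise ValueError("unknown operator: %r" % (op,))

# A state labelled {p1} satisfies p1 ∧ ¬p2:
print(holds(("and", "p1", ("not", "p2")), {"p1"}))  # True
```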

For every LTL formula ϕ, it is possible to construct an NBA Bϕ that accepts exactly all paths that satisfy ϕ, i.e., for every Kripke structure K and every LTL formula ϕ, there exists an NBA Bϕ such that for all paths π ∈ paths(K ), we have K , π |= ϕ ⇐⇒ ∃ω ∈ words(Bϕ ).∀i ≥ 0.K , πi |= ωi

(5.1)

This formula expresses that an LTL formula ϕ is satisfied by a Kripke structure path π if and only if there exists a word ω in the NBA Bϕ encoding ϕ, for which it holds that every “step” in π is “matched” by ω, in the sense that every state πi in the sequence of π satisfies the corresponding propositional logic formula ωi . How to construct Bϕ for a given LTL formula ϕ is not trivial, and we do not address how to do this in general (see Sect. 5.6 for further reading if you are interested). Instead, we discuss some example NBAs for commonly occurring patterns in LTL formulae. Figure 5.10 presents these. In the figure, the double circles represent accepting states, and the detached incoming arrows identify the initial states.


Fig. 5.10 Examples of NBAs for specific LTL formulae

Example 5.9

Consider the NBA in Fig. 5.10a. Note that the word ¬p ¬p . . . corresponds with the path π = q0 q0 . . .. Since no state in π is accepting, π is clearly not accepted by the NBA. On the other hand, any path in which state q1 is reached is accepting, since once q1 is reached, q1 cannot be left anymore, and hence occurs infinitely often in the path. This means that any word with a prefix of the form ¬p . . . ¬p p is accepted, which corresponds with the LTL formula F p.

Example 5.10

Consider the NBA in Fig. 5.10b. The word p p . . . corresponds with the accepting path q0 q0 . . .. However, if in a path, after q0 has been visited a finite number of times, q1 is visited, then the path can only be accepting if at some later point in the path q0 is visited again. For the corresponding words, this means that after every occurrence of ¬p, eventually p occurs again. Hence, this NBA encodes the LTL formula G F p.

Example 5.11

Consider the NBA in Fig. 5.10c. As long as p1 ∧ ¬p2 holds, and correspondingly, q0 is repeatedly visited, the path is not accepted unless at some point, q1 is visited, i.e., p2 holds. In other words, this NBA encodes the LTL formula p1 U p2. Note that when in q0, ¬p1 ∧ ¬p2 is not an option: there is no transition for this in the NBA, causing the automaton to block, meaning that words in which this condition holds at some point cannot be accepted.
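Whether an NBA accepts a lasso-shaped word (a finite prefix followed by an infinitely repeated, non-empty cycle) can be checked mechanically: build the product of the automaton with the lasso and look for a reachable cycle through an accepting state. The following Python sketch is our own illustration (function names are hypothetical); it is afterwards applied to the NBA of Fig. 5.10a for F p, with each letter represented by the truth value of p:

```python
from collections import deque

def accepts_lasso(delta, q0s, acc, prefix, cycle):
    """Check whether an NBA accepts the infinite word prefix · cycle^ω.

    delta(q, a): set of successor states when reading letter a in state q;
    q0s: initial states; acc: accepting states; cycle must be non-empty.
    """
    word = list(prefix) + list(cycle)
    n, k = len(word), len(prefix)

    def step(i):  # position of the next letter; the cycle part loops
        return i + 1 if i + 1 < n else k

    # Collect all reachable product states (q, i).
    start = {(q, 0) for q in q0s}
    seen, frontier = set(start), deque(start)
    while frontier:
        q, i = frontier.popleft()
        for q2 in delta(q, word[i]):
            t = (q2, step(i))
            if t not in seen:
                seen.add(t)
                frontier.append(t)

    def on_cycle(src):  # can src reach itself in one or more steps?
        stack, vis = [src], set()
        while stack:
            q, i = stack.pop()
            for q2 in delta(q, word[i]):
                t = (q2, step(i))
                if t == src:
                    return True
                if t not in vis:
                    vis.add(t)
                    stack.append(t)
        return False

    # Accepted iff some accepting product state lies on a reachable cycle.
    return any(q in acc and on_cycle((q, i)) for (q, i) in seen)

# NBA of Fig. 5.10a for F p; a letter is the truth value of p.
def delta(q, p):
    if q == "q0":
        return {"q1"} if p else {"q0"}
    return {"q1"}  # q1 only has a true-labelled self-loop

print(accepts_lasso(delta, {"q0"}, {"q1"}, [False, False], [True]))  # True
print(accepts_lasso(delta, {"q0"}, {"q1"}, [], [False]))             # False
```

The first word eventually contains p, so F p holds and the word is accepted; the second word never contains p and is rejected.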


A counter-example for an LTL formula ϕ is a path π through the Kripke structure K that does not satisfy ϕ, i.e., K, π ⊭ ϕ. For the corresponding NBA Bϕ, this means that there does not exist a word ω ∈ words(Bϕ) such that ∀i ≥ 0. K, πi |= ωi (see Eq. 5.1). However, algorithmically, it is not convenient to search for a particular word that is not accepted by Bϕ. Instead, in automata-based LTL model checking, the negation of ϕ is used as a starting point. Note that a path satisfying ¬ϕ is in fact a counter-example for ϕ, i.e., K, π ⊭ ϕ implies that K, π |= ¬ϕ. This means that, according to Eq. 5.1, there exists a suitable word of B¬ϕ that matches every step of π. Before we look at how to algorithmically find such a word, we look at a few more examples, this time considering the negations of the formulae we looked at in the previous examples.

Example 5.12

Consider the NBA in Fig. 5.10d. We are interested in the negation of the LTL formula F p, i.e., ¬ F p. Note that this is equal to the LTL formula G ¬ p: the equivalences for core LTL (see Sect. 4.4) tell us that G ϕ = ¬ F(¬ϕ). If we select ϕ = ¬ p, we get G ¬ p = ¬ F ¬¬ p = ¬ F p. The NBA accepts only the word ¬ p¬ p . . ., which obviously corresponds with G ¬ p. Any occurrence of p in a word causes the NBA to block, i.e., it cannot follow a transition, hence that word is not accepted. Example 5.13

Consider the NBA in Fig. 5.10e. We are interested in the negation of the LTL formula G F p, i.e., ¬ G F p. Note that this is equal to the LTL formula F G ¬ p: if we apply the equivalence G ϕ = ¬ F(¬ϕ) on ϕ = F p, we get G F p = ¬ F(¬ F p). From Example 5.12, we know that ¬ F p = G ¬ p, meaning that ¬ F(¬ F p) = ¬ F G ¬ p. Since G F p = ¬ F G ¬ p, we have ¬ G F p = F G ¬ p. The NBA accepts all words in which eventually ¬ p holds and then continuously holds for the remainder of the word. This clearly corresponds with F G ¬ p. Note that the NBA blocks if, once it has transitioned to state q1 , ¬ p no longer holds at some point. Note also the non-determinism at state q0 , as true can also match ¬ p. This non-determinism is needed, as a path in which eventually globally ¬ p holds may include states in which ¬ p holds before it holds continuously. Example 5.14

Consider the NBA in Fig. 5.10f. We are interested in the negation of the LTL formula p1 U p2, i.e., ¬(p1 U p2). Note that this is equivalent to G ¬p2 ∨ ¬p2 U(¬p1 ∧ ¬p2): p1 U p2 is not true if either p2 never holds (G ¬p2) or p2 continuously does not hold until neither p1 nor p2 hold (¬p2 U(¬p1 ∧ ¬p2)).


Using the equivalences of Sect. 4.4, we can rewrite ¬(p1 U p2) as follows.

¬(p1 U p2) = G ¬p2 ∨ ¬p2 U(¬p1 ∧ ¬p2)
           = ¬p2 U(¬p1 ∧ ¬p2) ∨ G ¬p2
           = ¬p2 W(¬p1 ∧ ¬p2)     (ϕ1 = ¬p2, ϕ2 = ¬p1 ∧ ¬p2)
           = ¬p1 R ¬p2            (ϕ1 = ¬p1, ϕ2 = ¬p2)

The NBA accepts all words in which p1 ∧ ¬p2 continuously holds. If, at some point, p2 holds, the NBA blocks, and the word is not accepted. On the other hand, once ¬p1 ∧ ¬p2 holds, the NBA transitions to q1, after which all possible alphabet elements are accepted. This corresponds exactly with ¬p1 R ¬p2: ¬p2 holds either forever, or up to, and including, the moment ¬p1 holds.

Finding a word of B¬ϕ that matches every step of a counter-example π for an LTL formula ϕ can be done algorithmically by computing the product of K and B¬ϕ. Conceptually, this product is similar to the product of two Kripke structures (see Definition 5.1), but since B¬ϕ is an NBA, the product of K and B¬ϕ is an NBA as well.

Definition 5.3 (Product of Kripke structure and NBA) Consider the Kripke structure K = (S, I, →, λ) and the NBA B¬ϕ = (Q, A, →, Q0, F). The product of K and B¬ϕ is an NBA K ⊗ B¬ϕ = (Q⊗, A⊗, →⊗, Q⊗,0, F⊗), with

• Q⊗ = {⟨s, q⟩ | s ∈ S ∧ q ∈ Q};
• A⊗ = A;
• →⊗ is the smallest relation satisfying the rule

    s → s′    q −φ→ q′    K, s′ |= φ
    --------------------------------
         ⟨s, q⟩ −φ→⊗ ⟨s′, q′⟩

• Q⊗,0 = {⟨s, q⟩ ∈ Q⊗ | s ∈ I ∧ ∃q0 ∈ Q0. q0 −φ→ q ∧ K, s |= φ};
• F⊗ = {⟨s, q⟩ ∈ Q⊗ | q ∈ F}.

The product of a Kripke structure K and an NBA B¬ϕ is an NBA K ⊗ B¬ϕ, in which the states are all possible combinations of the states in K and B¬ϕ. Furthermore, the alphabet of K ⊗ B¬ϕ is equal to the alphabet of B¬ϕ. The transitions are defined as follows: for every state ⟨s, q⟩ ∈ Q⊗, if K has a transition s → s′, and B¬ϕ has a transition q −φ→ q′ with K, s′ |= φ, i.e., the propositional logic formula φ holds in s′, then K ⊗ B¬ϕ has a transition ⟨s, q⟩ −φ→⊗ ⟨s′, q′⟩. The initial states are those states ⟨s, q⟩ where s is an initial state of K, and q can be reached from an initial state q0 of B¬ϕ via a transition q0 −φ→ q with K, s |= φ.


Finally, the accepting states of K ⊗ B¬ϕ are those states ⟨s, q⟩ where q is an accepting state of B¬ϕ. The following now holds.

K ⊭ ϕ ⇐⇒ ∃π ∈ paths(K ⊗ B¬ϕ)    (5.2)

This can be seen as follows. If there exists a path π = ⟨s0, q0⟩⟨s1, q1⟩ . . . in paths(K ⊗ B¬ϕ), then this demonstrates that

1. πK = s0 s1 . . . is a path in paths(K);
2. πB = q0 q1 . . . is a path in paths(B¬ϕ): since π is accepting, by the definition of paths() for NBAs, an infinite number of states ⟨si, qi⟩ are in F⊗, and hence qi ∈ F; therefore, πB is an accepting path of B¬ϕ, i.e., πB ∈ paths(B¬ϕ).

From these two facts, it follows that there is a path in paths(K) that satisfies ¬ϕ, and therefore violates ϕ. In the other direction, if K does not satisfy ϕ, then there is at least one path πK ∈ paths(K) that violates ϕ, in other words, satisfies ¬ϕ. Since B¬ϕ accepts all paths satisfying ¬ϕ, it can match every step of πK, leading to a path π = ⟨πK,0, πB,0⟩⟨πK,1, πB,1⟩ . . . ∈ paths(K ⊗ B¬ϕ). Equation 5.2 can also be stated differently:

K |= ϕ ⇐⇒ paths(K ⊗ B¬ϕ) = ∅    (5.3)
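The construction of Definition 5.3 is mechanical. The following Python sketch is our own illustration (the dictionary-based encoding and all names are hypothetical); it builds the reachable part of K ⊗ B¬ϕ on-the-fly, in the spirit of the successor function used by the algorithm below:

```python
from collections import deque

def build_product(k_init, k_succ, k_labels, b_init, b_trans, b_acc):
    """Build the reachable part of K ⊗ B¬ϕ, in the style of Definition 5.3.

    k_init: initial states of K; k_succ(s): successors of s; k_labels(s): λ(s).
    b_init: initial NBA states; b_trans(q): list of (phi, q2) pairs, where phi
    is a predicate over label sets; b_acc: accepting NBA states. Returns the
    initial product states, a transition map, and the accepting product states.
    """
    # Q⊗,0: pair each s ∈ I with every q reachable from some q0 via a
    # transition whose label holds in s.
    init = {(s, q2) for s in k_init for q0 in b_init
            for phi, q2 in b_trans(q0) if phi(k_labels(s))}
    trans, acc = {}, set()
    seen, frontier = set(init), deque(init)
    while frontier:
        s, q = frontier.popleft()
        if q in b_acc:
            acc.add((s, q))
        succs = {(s2, q2) for s2 in k_succ(s)
                 for phi, q2 in b_trans(q) if phi(k_labels(s2))}
        trans[(s, q)] = succs
        for t in succs - seen:
            seen.add(t)
            frontier.append(t)
    return init, trans, acc

# Kripke structure: s0 → s1, s1 → s1, with λ(s0) = {p} and λ(s1) = ∅,
# combined with the NBA of Fig. 5.10e for F G ¬p:
# q0 --true--> q0, q0 --¬p--> q1, q1 --¬p--> q1, with q1 accepting.
succ = {"s0": {"s1"}, "s1": {"s1"}}
labels = {"s0": {"p"}, "s1": set()}
nba = {"q0": [(lambda L: True, "q0"), (lambda L: "p" not in L, "q1")],
       "q1": [(lambda L: "p" not in L, "q1")]}
init, trans, acc = build_product({"s0"}, succ.__getitem__, labels.__getitem__,
                                 {"q0"}, nba.__getitem__, {"q1"})
print(acc)  # {('s1', 'q1')}: an accepting product state, lying on a cycle
```

Here the accepting product state ('s1', 'q1') has a self-loop, so the product has an accepting path: the Kripke structure violates G F p.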

Equation 5.3 explains why searching for a counter-example in the automata-based way is often referred to as solving the emptiness problem: it boils down to checking whether or not K ⊗ B¬ϕ is empty, i.e., has no accepting paths.

Algorithm 3 is the Nested Depth-First Search (NDFS) algorithm [61], used by Spin, to search for paths in the product of a given Kripke structure K and an NBA B¬ϕ constructed for a given LTL formula ϕ. As for Algorithm 2, we use a successor function succ⊗ to make explicit that K ⊗ B¬ϕ is not known a priori, but rather constructed on-the-fly while the NDFS algorithm is being executed.

The NDFS algorithm performs a DFS, and within it, a second DFS, hence the name Nested Depth-First Search. The first DFS is performed to search for accepting states, i.e., states ⟨s, q⟩ ∈ F⊗ in the product K ⊗ B¬ϕ. From every accepting state ⟨s, q⟩, in turn a second DFS is eventually launched, with the purpose of finding a cycle that contains ⟨s, q⟩. If such a cycle exists, a path can be constructed that leads from an initial state to ⟨s, q⟩, and subsequently runs through the cycle infinitely often, and hence contains an infinite number of accepting states.

In Algorithm 3, the procedure check-liveness() (lines 1–6) launches the first DFS once from every initial state. This DFS is performed by recursively calling the procedure dfs-reach() (lines 7–20). As in Algorithm 2, initially, states are coloured Grey. When they are discovered by the first DFS, they are coloured Blue (line 8), and for all Grey successors (lines 9–10), dfs-reach() is called


Algorithm 3: Nested Depth-First Search algorithm
Data: Set of initial states Q⊗,0, successor function succ⊗, function
  colour : S → {Grey, Blue, Black, Red, Violet} with colour(x) = Grey.
Result: false (and a counter-example) if and only if K ⊭ ϕ.

 1 procedure check-liveness():
 2   foreach ⟨s, q⟩ ∈ Q⊗,0 do
 3     result ← dfs-reach(⟨s, q⟩);
 4     if result = false then
 5       return false and a path consisting of all Blue and Red states;
 6   return true;
 7 procedure dfs-reach(⟨s, q⟩):
 8   colour(⟨s, q⟩) ← Blue;
 9   foreach ⟨s′, q′⟩ ∈ succ⊗(⟨s, q⟩) do
10     if colour(⟨s′, q′⟩) = Grey then
11       result ← dfs-reach(⟨s′, q′⟩);
12       if result = false then
13         return false;
14   if ⟨s, q⟩ ∈ F⊗ then
15     result ← dfs-cycle(⟨s, q⟩);
16     if result = false then
17       return false;
18   else
19     colour(⟨s, q⟩) ← Black;
20   return true;
21 procedure dfs-cycle(⟨s, q⟩):
22   foreach ⟨s′, q′⟩ ∈ succ⊗(⟨s, q⟩) do
23     if colour(⟨s′, q′⟩) = Blue then
24       return false;
25     else if colour(⟨s′, q′⟩) = Black then
26       colour(⟨s′, q′⟩) ← Red;
27       result ← dfs-cycle(⟨s′, q′⟩);
28       if result = false then
29         return false;
30   colour(⟨s, q⟩) ← Violet;
31   return true;


recursively (line 11). Once all successors of ⟨s, q⟩ have been processed, it is checked whether ⟨s, q⟩ is accepting (line 14), and if it is, the second DFS is launched, which is performed by recursively calling the procedure dfs-cycle() (lines 21–31). If ⟨s, q⟩ is not accepting, it is coloured Black (line 19).

In the procedure dfs-cycle(), the successors of ⟨s, q⟩ are checked again. If a successor is Blue (line 23), a cycle has been found, since the Blue states form a finite path prefix leading to the accepting state from which the second DFS was launched. Otherwise, if a successor is Black (line 25), it means the state has not been visited by the second DFS before. It is then coloured Red (line 26), after which dfs-cycle() is called recursively (line 27). Once all the successors of a state have been processed by the second DFS, the state is coloured Violet (line 30).

Example 5.15

Consider (part of) the product K ⊗ B¬ϕ of a given Kripke structure K and a given NBA B¬ϕ, as presented at the top of Fig. 5.11. The figure shows how the NDFS algorithm may traverse this product when it checks for a path of K satisfying F G ¬p, or in other words a counter-example for G F p. In this execution, the NDFS algorithm first starts at the initial state s0q0, which represents the state ⟨s0, q0⟩. It then discovers the state s1q0. At that point, since all successors of s1q0 are Blue, s1q0 is changed to Black. The algorithm then continues with the other successor of s0q0, i.e., s2q1. Note that B¬ϕ has now changed state, since ¬p is satisfied (see the NBA in Fig. 5.10e). Next, the discovered Grey states are all coloured Blue. Once s5q1 has been reached, which has no successors, line 14 of Algorithm 3 is reached, and since s5q1 is accepting, the second DFS is initiated. However, again, since s5q1 has no successors, the state is immediately coloured Violet. The algorithm then backtracks to the state s4q1. Since it is accepting, the second DFS is initiated again. State s5q1 is ignored, since it is neither Blue nor Black. As state s3q1 is Blue, line 24 of Algorithm 3 is reached, at which point the algorithm terminates. The counter-example for G F p is now identified by the Blue states. Indeed, these states and the transitions between them form a path in which p never holds.

Note that in Example 5.15, no states were coloured Red. This is due to the fact that when the algorithm backtracked to the cycle s3q1, s4q1, s3q1, . . . from s5q1, the first encountered state, s4q1, was accepting, and its successor was Blue, by which the algorithm could conclude that a cycle existed. If s4q1 had not been accepting, the algorithm would have backtracked over it, coloured it Black, and moved on to s3q1. From s3q1, it would have initiated the second DFS, which would then colour s4q1 Red.

Since B¬ϕ (see Fig. 5.10e) has no cycles involving both accepting and non-accepting states, every cycle in the product of B¬ϕ and an arbitrary Kripke structure K that involves at least one accepting state actually consists entirely of accepting states, ruling out the possibility of backtracking to a non-accepting state that is part


Fig. 5.11 Applying the NDFS algorithm to check for violations of G F p in part of a product K ⊗ B¬ϕ . Time passes from top to bottom


of a cycle with accepting states. Hence, it is ruled out that states can be coloured Red for this particular LTL formula.

Finally, note that, in order for the algorithm to be correct, it is crucial that the second DFS is started as soon as the first DFS backtracks over an accepting state (line 14 of Algorithm 3). Let us for a minute consider the alternative: the second DFS is initiated once an accepting state is reached for the first time, and the second DFS can also move over Grey states. In the example product part of Fig. 5.11, as soon as state s2q1 is reached, the second DFS would be started, but then, what should be done when, after that, the accepting state s3q1 is reached? If the algorithm were to continue by colouring the reachable states Red, then the cycle s3q1, s4q1, s3q1, . . . would be entirely coloured Red, and the algorithm would not be able to conclude that there exists a cycle with at least one accepting state (the colour Red does not give us that information).

Additionally, the fact that the NDFS algorithm launches the second DFS when the first DFS backtracks over an accepting state ensures that every reachable state is processed at most twice: once in the first DFS, and once in the second DFS. The second DFS can ignore Grey, Red, and Violet states, and is hence restricted to those states already visited by the first DFS, but not yet by the second.
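The nested search described above can be rendered compactly in executable form. The following Python sketch is our own illustration (names are hypothetical); it colours states as in Algorithm 3, launching the second DFS in postorder, and returns True exactly when an accepting cycle is reachable (i.e., the negation of Algorithm 3's boolean result):

```python
def ndfs(init, succ, accepting):
    """Nested DFS in the style of Algorithm 3. Returns True iff an accepting
    cycle is reachable from an initial state.

    init: initial product states; succ(x): successors of x;
    accepting(x): True iff x is an accepting product state."""
    colour = {}  # states absent from the dict are implicitly Grey

    def dfs_reach(x):  # the first (outer) DFS
        colour[x] = "Blue"
        for y in succ(x):
            if colour.get(y, "Grey") == "Grey" and dfs_reach(y):
                return True
        if accepting(x):
            if dfs_cycle(x):  # launched in postorder, as in Algorithm 3
                return True
        else:
            colour[x] = "Black"
        return False

    def dfs_cycle(x):  # the second (inner) DFS
        for y in succ(x):
            c = colour.get(y, "Grey")
            if c == "Blue":  # closes a cycle through the seed state
                return True
            if c == "Black":
                colour[y] = "Red"
                if dfs_cycle(y):
                    return True
        colour[x] = "Violet"
        return False

    return any(dfs_reach(x) for x in init
               if colour.get(x, "Grey") == "Grey")

# A two-state product a ⇄ b, where b is accepting, has an accepting lasso:
graph = {"a": ["b"], "b": ["a"]}
print(ndfs(["a"], graph.get, lambda x: x == "b"))  # True
```

Since the second DFS only descends into Black states and states never revert to Grey, every state is indeed processed at most once per DFS.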

5.6 Further Reading

The algorithms described in this chapter are presented without addressing implementation details and suitable data structures. For instance, Ordered Binary Decision Diagrams (OBDDs) [34] can be used to compactly store transition relations, and to identify fair paths for CTL model checking, as discussed in Sect. 5.2, a cycle-detection procedure is needed. For more details, see, for instance, [4]. The original paper proposing the Tableau-based LTL model checking algorithm, described in Sect. 5.3, provides more details about that algorithm [47]. In Sect. 5.5, we refrain from explaining how to construct a Büchi automaton for a given LTL formula. Such a construction typically involves multiple transformation steps, and explaining these would require a considerable amount of space. Over the years, multiple algorithms have been proposed to transform LTL formulae to Büchi automata. The interested reader is referred to, for instance, [4,70,82,83,153]. Furthermore, we use Büchi automata in our explanation of automata-based LTL model checking, but other ω-automata can be used as well, such as Streett automata [155], Rabin automata [146], and Muller automata [136]. These different types of ω-automata have different accepting conditions. An in-depth comparison of various types of ω-automata can be found in [116]. For a more in-depth treatment of model checking in general, see the excellent books of Baier and Katoen [4], Clarke et al. [49], and Groote and Mousavi [89]. Finally, modern computers are equipped with multi-core Central Processing Units (CPUs) and Graphics Processing Units (GPUs). To exploit the computational power of this hardware, parallel algorithms are needed. For model checking, such algorithms have been created in the last 10–15 years. Both multi-core and GPU algorithms exist for explicit-state LTL model checking, see, for instance, [7,8,78,79,102,119,161,163]. Some of these directly extend the NDFS algorithm presented in this chapter [78,79,119]. Parallel symbolic CTL model checking has been implemented in LTSmin [67].

6 Analysing Software

In Chaps. 3–5, we consider systems modelled at an abstract level in the form of one or more FSMs. However, if a model has been derived from an actual piece of software, or once software has been developed based on a given model, then what does the functional correctness of the model say about the software? In order to more directly determine the functional correctness of software, one can try to directly verify source code. This chapter discusses various ways to do this. First, we discuss how we could manually encode a program as a model. However, as we will see, this approach quickly gets out of hand, because we need to consider too many details, and the state space of the resulting model quickly becomes too large.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 M. Huisman and A. Wijs, Concise Guide to Software Verification, Texts in Computer Science, https://doi.org/10.1007/978-3-031-30167-4_6

6.1 The Relation Between Code and Model

6.1.1 Encoding a Program in NuSMV

If we wish to check a property about a program, we can try to encode it as a NuSMV model, and then use the NuSMV tool to check the desired property. This section sketches a possible, systematic way to do this. Suppose that we have a (very simple) program that we wish to verify with NuSMV. We encode it via the following steps:

1. First, we annotate each instruction with a unique label. These labels can be considered as the states of the model.
2. All program variables become variables in the NuSMV main module.
3. We add a transition relation to the main module that encodes the possible control flows in the program by addressing how the program can move from label to label: for a sequence of instructions (and their labels), the transition relation relates the labels of consecutive instructions. A while loop condition has two possible next states:
   a. One state if the condition holds. This state corresponds with the first instruction inside the loop body.
   b. Another state if the condition does not hold. This state corresponds with the first instruction to be executed once the while loop has been exited.
4. In addition, the transition relation specifies how the program variables can change when making a step from one instruction label to the next.

The result is a NuSMV model over which simple properties can be checked.

Example 6.1

Consider the small Java program in Fig. 6.1. First, we annotate all individual program instructions with unique labels, resulting in the code shown in Fig. 6.2. Using these annotations, we encode the program as the NuSMV model of Fig. 6.3. Notice that the types of the variable declarations for n and i encode the domain of bytes. Notice further that in the rule defining next(i), we explicitly add i < n as a condition when control is equal to l3, i.e., when the execution has reached line 7 in Fig. 6.2. Strictly speaking this should not be necessary, but NuSMV cannot statically derive that this holds, and therefore without this condition, it will complain about a possible overflow of i + 1, i.e., it says that it cannot assign 128 to i. Using the encoding, we can check that the program always terminates: AF (control = l4).

• The first path has the path condition X > Y. It then swaps the values of X and Y, thus it will end in a final state (Y, X), with X > Y ∧ X′ = Y ∧ Y′ = X.
• The second path has the path condition ¬(X > Y). It then does nothing, thus it will end in a final state (X, Y), with Y ≥ X.

Figure 6.20 gives a visual representation of this symbolic state space.

6.4.2 Bounded Symbolic Execution

Example 6.6 is relatively simple, since from the constraints on the final state, we are actually able to conclude that after the conditional swap function maxSwap has terminated, y is always greater than or equal to x. However, in general, it is not always that easy to enumerate all possible symbolic executions and to draw conclusions from them.
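As a cross-check of the two symbolic paths of the conditional swap, the following sketch (our own Python illustration; the name maxSwap is taken from the text, the snake-case rendering is ours) runs a concrete version of the program and verifies the per-path conclusions on sample inputs.

```python
def max_swap(x, y):
    # Concrete version of the conditional swap analysed symbolically above.
    if x > y:
        x, y = y, x
    return x, y

def check_paths(samples):
    for X, Y in samples:
        x, y = max_swap(X, Y)
        if X > Y:                    # path condition of the first path
            assert (x, y) == (Y, X)  # values swapped: x' = Y, y' = X
        else:                        # path condition of the second path
            assert (x, y) == (X, Y)  # state unchanged
        assert y >= x                # conclusion shared by both paths
    return True
```

Each concrete run follows exactly one of the two symbolic paths, and on both the final state satisfies y ≥ x, matching the conclusion drawn from the symbolic state space.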


Fig. 6.21 Civl-C program exponent

Example 6.7

Consider the program in Fig. 6.21, which calls the exponent function. If we try to create the symbolic state space for this program, this process does not terminate (see Fig. 6.22 for the first steps of this process). As we do not know the initial value for e, when exploring the state space, we always have to consider both the option that the current value of i is less than e, and the (terminating) option that i is not less than e. In order to still be able to analyse this program, Civl provides several options to the user. The first option is that the user specifies a concrete value for e. For example: C / Civl 1

$input int e = 24;

As an alternative, the user can also specify the range of possible values for e to consider:

Fig. 6.22 Partial symbolic state space for exponent function


C / Civl 1 2

$input int e; $assume (23 = 0);

Civl reports that this might not always hold, and inspection of the error information tells us that the property is violated if n is negative. Adding an assumption that n is actually non-negative enables Civl to confirm that the property holds: C / Civl 1

$assume (n >= 0);

Finally, we discuss a more involved example of a typical program that one might want to analyse with Civl. Example 6.8

Consider the dining philosophers program in Civl-C, as shown in Fig. 6.23.3 This program is parameterised with a symbolic variable n for the number of forks. Again, symbolic execution by itself would not terminate, because we do not have any bound on the number of forks n: the first statement is branching on the parameter n, and thus symbolic execution creates possible execution paths for all possible values of n. However, by adding an explicit bound on the number of forks, for example by adding the assumption that the input variable n always ranges between 2 and 4, the analysis becomes tractable.

3 From http://vsl.cis.udel.edu:8080/civl/index.html, last visited February 14, 2023.


Fig. 6.23 Civl-C: Dining philosophers

C / Civl 1 2 3

$input int B = 4; // upper bound on number of philosophers
$input int n; // number of philosophers
$assume(2 <= n && n <= B);

biggerx = (x > 10)
smallery = (y < 8)

To define the abstract model, we first identify the abstract states. Essentially, Sabs is characterised by the evaluation of the predicates, i.e., {biggerx ∧ smallery, ¬biggerx ∧ smallery, biggerx ∧ ¬smallery, ¬biggerx ∧ ¬smallery}. We identify how the concrete states map to the abstract states. Graphically, this looks as presented in Fig. 6.26. Next, we add the abstract transition relation, resulting in the abstract model shown in Fig. 6.27. Here, the concrete transition relation is "lifted" to the abstract states.
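The mapping from concrete to abstract states, and the lifting of the transition relation, can be sketched as follows. This is a generic Python illustration: the predicates are those from the text, but the concrete states and transitions below are made up, not those of Fig. 6.25.

```python
# Existential predicate abstraction over a concrete state space of
# (x, y) pairs.
def bigger_x(s):
    return s[0] > 10          # biggerx = (x > 10)

def smaller_y(s):
    return s[1] < 8           # smallery = (y < 8)

def alpha(s):
    # Map a concrete state to the truth values of the predicates.
    return (bigger_x(s), smaller_y(s))

def abstract(states, transitions):
    # Lifting: an abstract transition exists iff some concrete
    # transition maps onto it (existential abstraction).
    abs_states = {alpha(s) for s in states}
    abs_trans = {(alpha(s), alpha(t)) for (s, t) in transitions}
    return abs_states, abs_trans

# A hypothetical concrete Kripke structure over (x, y) pairs.
states = [(12, 3), (12, 9), (5, 3)]
transitions = [((12, 3), (12, 9)), ((12, 9), (5, 3))]
A, T = abstract(states, transitions)
```

Note that several concrete states may collapse onto one abstract state, which is exactly what makes the abstract model smaller, at the price of added (possibly spurious) behaviour.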

Abstract Programs So far, the abstraction was defined for Kripke structures. However, as explained at the beginning of this section, we want to apply this technique

6.5 Counter-Example-Guided Abstraction-Refinement


Fig. 6.25 A Kripke structure to illustrate CEGAR

Fig. 6.26 The abstract states of the Kripke structure of Fig. 6.25 w.r.t. predicates biggerx = (x > 10) and smallery = (y < 8)

Fig. 6.27 The abstract states of Fig. 6.26 with the abstract transition relation


Fig. 6.28 An example C program

to programs. For this, the notion of Boolean program is used [5,88]. A Boolean program is a program where the only variables are of type Boolean. They represent predicates over the concrete variables of the original program. A Boolean program is computed as an abstraction over the concrete program, where every update to a concrete variable is replaced by appropriate updates of the Boolean variables. We illustrate the procedure to derive a Boolean program from a concrete program with an example. Example 6.10

Consider the C program given in Fig. 6.28. We abstract this into a Boolean program for the following two predicates:

p1 = (x > 4)
p2 = (x % 2 == 0)

This results in the Boolean program presented in Fig. 6.29. With *, it is indicated that a value is undetermined. Notice the following points:

• The type of the original program int has been changed into void;
• Initially, both predicates hold (because initially x == 1500).
• The condition of the if-statement at line 3 of Fig. 6.28 has been replaced by p1 ? * : FALSE at line 6 of Fig. 6.29. The reason this has been done is the following. We can only use predicates p1 and p2 in the Boolean program. For the greater-than comparison in the condition of the original if-statement, it is natural to use p1. We have to make a case distinction. If p1 is TRUE, i.e., x > 4, we do not know whether x is also greater than 1000 (we use * to indicate this). If p1 is FALSE, i.e., x <= 4, then x is certainly not greater than 1000, so the condition evaluates to FALSE.

Fig. 6.29 A Boolean program for Fig. 6.28, p1 = (x > 4) and p2 = (x % 2 == 0)

For the predicates p1 = (x > 4) and p2 = (x % 2 == 0), this results in the Boolean program presented in Fig. 6.33, which is the same as the Boolean program in Fig. 6.29, apart from the assertion at line 20. For convenience, the original program instructions and their labels have been

Fig. 6.32 An example C program with an assertion


Fig. 6.33 A Boolean program for Fig. 6.32, p1 = (x > 4) and p2 = (x % 2 == 0)

added as documentation. Remember that * is used to denote a non-deterministic value. Checking for assertion violations in the Boolean program can result in the execution trace l0 l1 l3 l4 l6 l7 l8: at line 6, when p1 is equal to TRUE, we can decide that it should become FALSE, since the new value of p1 is undetermined. Therefore, we skip the body of the if-statement. Subsequently, at line 10, p1 can keep its current value. We skip the body of the while-loop starting at line 13. Since p2 has the value TRUE, we enter the body of the if-statement at line 16. At line 17, again, p1 can keep its current value, leading to an assertion failure at line 20. Mapping this trace back, based on the instruction labels, to the original program gives the trace shown in Fig. 6.34. Note that the if- and while-conditions have been replaced by assume statements, expressing whether those conditions should evaluate to TRUE or FALSE for the counter-example. To see if this indeed corresponds with a correct trace, we transform the program into Single Static Assignment (SSA) form [62]. In this form, every variable is assigned to exactly once. The next assignment to the same variable is replaced by an assignment to a fresh copy of the variable, and all consecutive uses of the variable refer to the fresh copy. For this concrete example, this results in the sequence of statements given in Fig. 6.35.


Fig. 6.34 An error trace for the Boolean program of Fig. 6.33, with the steps mapped back to the concrete instructions of Fig. 6.32

Fig. 6.35 The error trace of Fig. 6.34 in Single Static Assignment form

Note that the variables have been numbered, and that whenever a fresh copy of x is needed, the number is incremented. Since in SSA form, every variable is assigned to only once, the new sequence of statements can be transformed into a mathematical conjunction of equations and inequalities:

x1 = 1500 ∧ ¬(x1 > 1000) ∧ x2 = x1 + 48 ∧ ¬(x2 > 4) ∧ x2 % 2 = 0 ∧ x3 = x2 + 1 ∧ ¬(x3 > 4)

Note that the last statement is preceded by a negation: if it is indeed representing an error trace, the asserted formula should not hold. This conjunction of equations and inequalities corresponds to a counter-example if it is satisfiable, i.e., if all variables can have a value assigned to them, such that the whole formula holds. If the formula is satisfiable, it means that there exists an execution where the assertion does not hold. To check if this formula is satisfiable, we can use an SMT solver, such as Z3 [132]. An SMT solver checks if there is a way to assign values to the variables to make the formula evaluate to true. In this case, it quickly realises that the formula is unsatisfiable, i.e., it is not possible to find values for x1, x2 and x3 such that the equations and inequalities are all satisfied. In fact, already x1 = 1500 ∧ ¬(x1 > 1000) is unsatisfiable, corresponding with the fact that in the program, the if-condition at line 3 of Fig. 6.32 never evaluates to false.
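Because every conjunct in this particular formula is either an equality that uniquely fixes the next SSA copy of x or a constraint on an already fixed copy, its feasibility can even be checked by straight-line evaluation. The following Python sketch stands in for the call to an SMT solver; it is our own illustration of this one trace, not a general decision procedure.

```python
def trace_feasible():
    # Evaluate the SSA constraints of the error trace in order; the
    # equalities determine x1, x2, x3 uniquely, so satisfiability
    # reduces to checking the remaining (in)equalities.
    x1 = 1500                  # x1 = 1500
    if x1 > 1000:              # requires not(x1 > 1000)
        return False           # ...already contradicted here
    x2 = x1 + 48               # x2 = x1 + 48
    if x2 > 4:                 # requires not(x2 > 4)
        return False
    if x2 % 2 != 0:            # requires x2 % 2 == 0
        return False
    x3 = x2 + 1                # x3 = x2 + 1
    return not (x3 > 4)        # negated assertion: not(x3 > 4)
```

Here the very first check already fails (x1 = 1500 contradicts ¬(x1 > 1000)), matching the solver's verdict that the formula is unsatisfiable and the error trace spurious.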

Refinement The last step of the CEGAR-cycle to consider is refinement. Different techniques exist to perform refinement. We do not go into all the details here, but sketch the main ideas. Essentially, there are two possibilities. Using a program logic (further explained in Sect. 10.2), we can compute either the weakest precondition or the strongest postcondition along the error trace. This results in new predicates for all program points along the trace, and these can be added to the set of predicates to construct a new abstract model or Boolean program. Since the predicates are derived from the spurious error trace, this ensures that the spurious trace is excluded from the traces of the new abstract model or Boolean program. Example 6.15

Consider again the simple C program from Example 6.14 and the error trace described in Fig. 6.34. If we use the weakest precondition calculus, we get the annotated program given in Fig. 6.36. Adding these predicates to compute the abstract model removes the spurious error trace. From the annotated program of Fig. 6.36, a CEGAR-based tool would typically do a second iteration using (some of) the following predicates, which correspond with (the negation of) components in the predicates generated during the weakest precondition computation. p3 = p4 = p5 = p6 = p7 = p8 =

x 1000

/*@ requires s->credits >= 180 && s->status == bachelor;
    ensures s->credits == \old(s->credits) && s->status == master; */

7.2 Function Contracts


Fig. 7.3 Student functions: Acsl specification and implementation

Example 7.2

For comparison, Fig. 7.4 gives a similar specification example in Jml. We have an interface Student, where all the method headers are specified with contracts similar to the ones in Fig. 7.3. We discuss a few noteworthy differences.


7 Design by Contract Specification Languages

• For the getter-methods getName, getStatus and getCredits we specify that they are pure, which means that they may not have any (visible) side effects. Pure functions may be used in specification expressions, because specifications should be side-effect free. This allows us to write specifications in terms of the getter-methods (see for example the postcondition of addCredits) rather than in terms of the variables directly. This provides an extra level of abstraction in our specifications, because it hides which variables we are using.
• The postcondition of method setName is expressed in terms of the standard method equals defined in class Object.5 This provides a less strict guarantee than if the == symbol had been used, which denotes object identity in Java.

Specifications and Implementations Notice that the function specifications are written independently of possible implementations. Different function implementations might satisfy the same specification. By making sure our specification is suitably abstract (see also Chap. 8), we are able to use the same specification for multiple implementations. For example, there are many different implementations of sorting algorithms, but they all satisfy the same specification: after sorting, the list of elements is sorted, and the sorted list is a permutation of the input list. Also for the student example, we can think of different implementations, for example: rather than keeping track of the total number of credits obtained by the student, we can keep track of a list of finished courses (and we can compute the number of obtained credits from that). We would like such an implementation to still satisfy the same specification (possibly using abstraction techniques for specifications, as shown in Chap. 8). Function specifications do not always have to specify the exact behaviour of a function; they give minimal requirements that the implementation should respect. Example 7.3

Consider the specification in Fig. 7.3 again. The contract for changeStatus prescribes that the credits may not be changed by this function. In contrast, function addCredits only constrains the new value of credits, and thus the implementation is allowed to update the status of the student. For instance, an implementation as in Fig. 7.5 that silently updates the status from Bachelor to Master whenever appropriate respects the specification.

Notice also that both addCredits and changeStatus would be free to change the name of the student, according to the specification, even though we would typically not expect this to happen. A way to avoid this is to add an explicit postcondition expressing that the variable name is unchanged (name == \old(name)) to all these functions. In Sect. 10.3, we address how frame conditions (written as modifies or assigns clauses) can be used to explicitly disallow these unwanted changes in a more convenient way.

5 See for its specification http://www.eecs.ucf.edu/~leavens/JML-release/javadocs/java/lang/Object.html, last visited February 14, 2023.

Fig. 7.4 Jml specification of interface Student

Fig. 7.5 Alternative implementation of addCredits

Jml Specifications for Constructors For Java programs, we have to give special attention to the specification of constructors. Constructors can be considered as special functions, where in the pre-state of the constructor, the object does not yet exist. Thus a precondition of a constructor can only put constraints on the constructor parameters or on the global program state (static variables); it cannot require anything about the internal state of the object, as the object does not yet exist when the constructor is called. However, the postcondition of the constructor can specify constraints on the state of the object. Typically, it relates the object state to the constructor's parameters.

Fig. 7.6 Constructor specification for BasicStudent

Example 7.4

Suppose we have a class BasicStudent that implements the Student interface in Fig. 7.4. The constructor for this class can be specified as is done in Fig. 7.6. Notice that it would be incorrect to specify, e.g., requires getCredits () >= 0; or requires getStatus() == bachelor—these specifications are meaningless at the moment that the constructor is invoked.

Default Specifications You might wonder why not all specifications in Student have a pre- and a postcondition. Implicitly though, they have. For every specification clause, there is a default. For pre- and postconditions this is the predicate \true in Acsl,6 or the Java expression true in Jml, i.e., no constraints are placed on the caller of the function, or on the function's implementation.

Fig. 7.7 Default specifications

Example 7.5

The complete Acsl specification of function getStatus is shown in Fig. 7.7a.

In Jml there is one additional subtlety to take into account: all reference values are implicitly assumed to be non-null, unless explicitly annotated otherwise (using the keyword nullable).

6 In Acsl, \true and \false are introduced to write the values of the mathematical Boolean type, rather than using 0 for false, and 1 (or any other positive number) for true, as is usually done in C.


Fig. 7.8 Declaration of a LinkedList

Example 7.6

In Jml, the functions getName and setName have implicit pre- and postconditions about the non-nullity of the parameter and the result. Explicitly, their specifications are as given in Fig. 7.7b.

Notice that the non-null-by-default behaviour in Jml can also have some unwanted effects, as illustrated by the following example. Example 7.7

Consider the declaration of a LinkedList in Fig. 7.8. Because of the non-null by default behaviour of Jml, this means that all elements in the list are non-null. Thus the list must be cyclic, or infinite. This is usually not the intended behaviour, and therefore, to solve this, the next reference should be explicitly annotated as nullable.

7.2.2 Behaviours

It is important to know when a function specification is actually satisfied. And in particular, if a function does not terminate, does it then satisfy its specification? The specifications as we have seen them here specify a partial correctness condition. If function m is specified as follows:

Code / Annotations

/*@ requires P;
    ensures Q; */
... m(...) { ... }

this means the following:


Fig. 7.9 Jml specification for a terminating function

if function m is executed in a pre-state where P holds, and if execution of function m from this pre-state terminates, then in the post-state the postcondition Q holds. Thus, if function m never terminates, it vacuously respects any partial correctness specification. If one explicitly wishes to specify that a function has to terminate, this is called total correctness. Total correctness specifications can be created by adding a terminates (Acsl) or diverges (Jml) clause. The terminates clause in Acsl describes the conditions under which a function must terminate, while in contrast, the diverges clause in Jml specifies the conditions under which a function may diverge (i.e., not terminate). Example 7.8

Consider the Jml specification in Fig. 7.9. This specifies that function m has to terminate: it may only be non-terminating (diverging) if false holds, which is never the case.

Adding terminates or diverges clauses can be useful, in particular if one wishes to express that for certain parameters a function might not terminate. Example 7.9

Consider the Jml specification in Fig. 7.10a. This specifies that the function might diverge if x < 0. For comparison, Fig. 7.10b shows the Acsl specification that captures the same termination behaviour, i.e., the function must terminate if x >= 0.

Heavyweight Behaviours in Jml To express directly that a function must terminate, Jml provides an alternative option, namely to use heavyweight specifications. These are preceded by the keyword behavior. However, this does not exclude that a function may terminate because of an exception. If we also want to exclude this


Fig. 7.10 Conditional termination specifications

case, then a normal_behavior specification is to be used: this states that function execution has to terminate in a normal state, and in the post-state the postcondition has to hold.7 Thus, to summarise, consider the specifications in Fig. 7.11. These specifications state the following: if the function mi (for i = 1, 2, 3) is executed in a pre-state where precondition Pi holds, then:

• m1 might not terminate, but if m1 terminates normally, then postcondition Q1 holds;
• m2 has to terminate, either normally or with an exception, and if m2 terminates normally, then postcondition Q2 holds; and
• m3 has to terminate normally, and in the post-state postcondition Q3 has to hold.

A single function can be specified with several function specifications. These should be interpreted as a conjunction of function specifications (see Sect. 7.4 for more details).
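The runtime-checkable part of these three flavours can be mimicked by a small sketch. This is our own Python analogy, not Jml tooling: termination itself cannot be observed by such a check, which is precisely why partial correctness is the default.

```python
def check(kind, f, pre, post, arg):
    # kind is "lightweight", "behavior", or "normal_behavior".
    # Returns True iff the observed run does not violate the contract.
    if not pre(arg):
        return True              # contract says nothing outside the precondition
    try:
        result = f(arg)
    except Exception:
        # Exceptional termination: only normal_behavior forbids this
        # outright (signals clauses, Sect. 7.6, refine it further).
        return kind != "normal_behavior"
    return post(result)          # normal termination: postcondition must hold
```

For instance, for a hypothetical inverse function lambda x: 1 / x, a normal_behavior contract is violated when x == 0 (the call raises), while a behavior contract tolerates the exception, and any contract holds vacuously when the precondition excludes 0.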

7 A function is said to terminate normally if either it reached the end of its body, in a normal state, or it terminated because of a return instruction. Below, in Sect. 7.6 we discuss how in Jml we can specify which conditions hold if a function terminates because of an exception.


Fig. 7.11 Examples of heavyweight behaviour specifications in Jml

7.2.3 Various Details on Function Contracts

For the remainder of this section, we discuss various details related to writing function contracts.

Specification Expressions Above, we have already seen that standard program expressions can be used as predicates in the specifications. These expressions have to be side-effect free; thus, for example, assignments are not allowed. As mentioned above, in Jml these expressions may contain calls to pure functions. In addition, Acsl and Jml define several specification-specific constructs. The use of the \result and \old keywords has already been demonstrated above, and the official language specifications contain a few more of these. Besides the standard logical operators, such as conjunction &&, disjunction || and negation !, extra logical operators are allowed in Acsl and Jml specifications, e.g., implication ==>, and logical equivalence <==>. Also the standard quantifiers ∀ and ∃ are allowed in specifications, using keywords \forall and \exists.


Fig. 7.12 Quantifier specifications

Example 7.10

Using quantifier specifications, we can specify in Acsl that an array argument should be sorted, see Fig. 7.12a. The first argument (integer i, j) is the declaration of the variables over which the quantification ranges. The second argument is an implication, where the antecedent (the expression to the left of the implication) restricts the range of the quantified variables.

If sl.length > 0, i.e., there are students in the list, then the method terminates and the result is the average value of the credits obtained by these students. If sl.length == 0 then the method will terminate exceptionally, with an ArithmeticException.
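The behaviour described for the average method can be mirrored in a small Python sketch, with ZeroDivisionError playing the role of Java's ArithmeticException; the function names are our own, not those of the class in Fig. 7.25.

```python
def average_credits(credits):
    # For a non-empty list the (integer) average is returned; for an
    # empty list the division raises, the analogue of the
    # ArithmeticException in the signals clause.
    return sum(credits) // len(credits)

def respects_contract(credits):
    # Check the two specification cases against an actual run.
    try:
        average_credits(credits)
    except ZeroDivisionError:
        return len(credits) == 0      # exception allowed only when empty
    return len(credits) > 0           # normal result only when non-empty
```

Both specification cases are observable at runtime: the normal case yields a result, the exceptional case raises, and each is permitted exactly under its own precondition.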

In this example, the two preconditions together cover the complete state space for the value of sl.length. For sl.length < 0, the method’s behaviour would not be specified. Finally, it is important to realise that invariants and constraints also should hold if a method terminates exceptionally. This might seem strange at first: something goes wrong during the execution, so why would it be necessary that the object stays in a state that respect the invariants and constraints. But in many cases, the object can recover from the exception, and normal execution can be resumed. This means


Fig. 7.25 Class Average with a more extensive specification

that it is necessary that also when an exception occurs, the object stays in a "well-defined" state, i.e., a state in which the invariants hold, and that evolves according to the constraints. A Note on False In exceptional behaviour specifications, one often sees specifications like: Java / Jml 1

signals (ArithmeticException e) false;

This is a way to state that an exception should not occur: if the exception occurs, the property false should hold. And as this is never the case, the exception may not occur. Similarly, if one specifies a postcondition ensures false; this states that a method should not terminate normally. Thus a method specification:


Java / Jml 1 2

ensures false; signals (Exception e) false;

implicitly says that a method should never terminate (either normally, or exceptionally). Finally, a method can also be specified with a precondition requires false;. This means that the method may not be invoked, as no caller can fulfil the precondition of the method. This can be useful to specify that a method is deprecated.

7.7 Conclusions

In this chapter, we have discussed the most essential features of two important Design by Contract specification languages, namely Acsl and Jml. However, it is impossible to cover all aspects of these languages, as they are large languages, and moreover, not for all syntactic constructs a commonly agreed-upon semantics is defined. Function specifications written in the Design by Contract style are often quite directly connected to the code. In the next chapter we discuss various constructs that help to raise the level of abstraction in the specifications. Chapters 9 and 10 discuss different techniques to validate whether an implementation respects a specification. In particular static checking, as discussed in Chap. 10, often requires additional annotations such as frame conditions, assert statements, and loop invariants, which are all introduced in that chapter.

8 Abstract Specifications

To make sure that Design by Contract specifications are easily readable and understandable, it is important that they are presented at the right level of abstraction. In particular, one often does not want the specification to be a direct copy of the code, but instead it should capture the intention of the code. As an example that was mentioned before already (see Sect. 7.2.1), there are many different ways to implement a sorting algorithm, but the abstract behaviour specification is always the same: • it requires as input a list or array of numbers (or other elements that can be sorted), • it ensures that the resulting list or array is sorted, and • it ensures that each element of the input list occurs an equal number of times in the output list (i.e., the output is a permutation of the input). Thus, no matter which sorting algorithm we use, when reasoning about its result, we can always rely on this abstract specification. Another reason why it is useful to support abstraction in Design by Contract specifications is that this enables specification of program parts where the implementation is not yet available. If the specification is written up-front, it can serve as a guideline for the possible implementations. Writing specifications at the right level of abstraction is mainly a matter of experience and thinking hard, making sure that the specification expresses what the function does, without giving too many details about how it is doing this. Design by Contract specification languages such as Jml and Acsl provide specific functionality to support writing suitably abstract specifications, such as: • support for using functions and predicates in specifications; • support to define a more abstract state space of the program, using model variables; and

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 M. Huisman and A. Wijs, Concise Guide to Software Verification, Texts in Computer Science, https://doi.org/10.1007/978-3-031-30167-4_8


• support to add extra specification-only variables that can make implicit knowledge about the program state explicit, using ghost variables.

This chapter discusses these different features.

8.1 Using Functions in Contracts

One convenient way to hide implementation details from a specification is to use function calls in it. Technically speaking, this is not an abstraction in the sense that it abstracts the state space of a program, but it allows one to hide details from the specification reader. In Example 7.2, we saw that in Jml it is possible to use any pure function in a specification. Acsl does not allow one to use C-functions in specifications, but it does provide support to define special "mathematical" or logical functions which can be used in specifications.

Example 8.1

Consider the specification in Fig. 8.1. This defines a mathematical function _abs which specifies the mathematical definition of the absolute value (without updating the state space). To distinguish this from program code, it is preceded by the keyword logic, and the function name is preceded by an underscore to avoid name clashes. Notice that this mathematical definition is defined in terms of the mathematical integer type, rather than the C primitive type int. Thus, this

Fig. 8.1 Logical functions in Acsl


Fig. 8.2 Example: use of model method in Jml

function does not suffer from overflow issues.1 Next, the C-function abs is specified in terms of this mathematical function.
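A minimal sketch of the kind of definition discussed in Example 8.1 (the exact code in Fig. 8.1 may differ; the name my_abs is used here to avoid clashing with the standard library's abs):

```c
#include <limits.h>

/*@ // Logical function over mathematical integers: no overflow possible.
  @ logic integer _abs(integer x) = x >= 0 ? x : -x;
  @*/

/*@ requires x > INT_MIN;   // negating INT_MIN would overflow int
  @ ensures \result == _abs(x);
  @*/
int my_abs(int x) {
    return x >= 0 ? x : -x;
}
```

The Acsl annotations live in comments, so the file compiles as ordinary C; only a verifier such as Frama-C interprets them.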

Of course, for this small example, the code and the mathematical function look very similar, but for larger examples, the ability to specify in terms of a mathematical function can be very useful to keep the specification clean and readable. In Jml, a similar effect can be achieved by defining model methods: these are specification-only method definitions, which can be used in specifications only. Example 8.2

Figure 8.2 shows the Jml equivalent of the logical function defined in Acsl in Fig. 8.1. Note that Jml leaves it up to the tool developer whether overflows are considered when using integers. Java provides the java.math.BigInteger class for arbitrary-precision arithmetic.

In addition, Acsl provides support to define mathematical specification-only predicates (i.e., functions that return a Boolean value). Example 8.3

Figure 8.3 shows a frequently occurring use of predicates, where validity of a whole data structure representing dates is captured by a single predicate validDate.
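The pattern of Example 8.3 might look as follows (a sketch; the actual struct fields and bounds in Fig. 8.3 may differ). A plain C function mirroring the predicate is included here for illustration only:

```c
typedef struct {
    int day;
    int month;
    int year;
} date;

/*@ // Specification-only predicate: the whole data structure is valid
  @ // iff all fields form a plausible date.
  @ predicate validDate(date d) =
  @   1 <= d.month <= 12 && 1 <= d.day <= 31 && d.year >= 1;
  @*/

// Runtime mirror of the predicate, for illustration.
int is_valid_date(date d) {
    return 1 <= d.month && d.month <= 12
        && 1 <= d.day && d.day <= 31
        && d.year >= 1;
}
```

A function contract can then simply state requires validDate(d); instead of repeating all the field constraints at every function that manipulates dates.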

1 More on the difference between the mathematical type integer and the computer type int in Acsl can be found in Section 2.2.4 of the Acsl language specification [18].


Fig. 8.3 Example of an Acsl predicate to capture validity of a data structure

8.2 Model Variables

In addition to using functions in specifications, we sometimes need more abstraction, for example, by describing the program behaviour in terms of an abstraction of the state space. To support this, Jml provides model variables. These are the Jml version of data abstractions as originally introduced by Hoare [96]. Model variables are used typically in the following cases:

• when the concrete state space is not yet defined, for example, because we are specifying an interface behaviour, which might be implemented in different ways; or
• when we would like to specify the behaviour of the program at a more abstract—often more mathematical—level, rather than in terms of the program variables directly.

This section illustrates both use cases with examples. Model variables are supported and developed in Jml; see, e.g., [55] for recent developments. Support for them in Acsl is still future work.

8.2.1 Model Variables and Interfaces First, we show how model variables can be used when the concrete state space is not yet defined.


Example 8.4

In the previous chapter, Fig. 7.4 defines an interface Student to illustrate several specification constructs of Jml. This interface introduces several constants (i.e., public static final variables), but otherwise, all function behaviours are described in terms of getter methods. An alternative approach would be to add public model variables to the Student interface and to define all function specifications in terms of these public model variables, as is done in Fig. 8.4. In Example 7.12, we showed how the constructor of a class BasicStudent implementing the Student interface could be specified in terms of spec_public variables. However, if the behaviour of the interface is defined in terms of model variables, rather than making the fields of BasicStudent public, we should specify how they connect to the model variables in the Student interface. For this purpose, the represents clause is used, as shown in Fig. 8.5.

With a represents clause, one can define how the value of an abstract model variable is computed in terms of the concrete program variables. Internally, whenever a specification in terms of abstract variables is encountered, the tools handling Jml specifications translate this into a specification that refers to the concrete variables, using the represents clauses, and then adherence of the implementation w.r.t. this concrete specification is validated.

On spec_public Variables
In Chap. 7, spec_public variables were introduced, in order to be able to use private variables in specifications. In fact, declaring a variable x as spec_public is implicitly equivalent to declaring a model variable, say _x, together with a represents clause that equates _x with x.

For example, if after the assignment x = y + 2 the postcondition x > 0 should hold, then this can only be ensured if the value that is assigned to x is greater than 0, i.e., the precondition has to be y + 2 > 0. Formally, this is expressed by the following rule:

{Q[x := E]} x := E {Q}

Finally, there are rules for function declarations that state that to prove correctness of a function specification w.r.t. the function implementation, one has to prove correctness of the function body.1

{P} body {Q}
{P} void m(){body} {Q}

A key feature of Hoare logic is that it allows one to prove something about a function for all possible input states and all possible arguments, without executing the code.

Example 10.1

We specify that the body of a function swap always swaps the values of the variables x and y, whatever their initial values. In Acsl, the specification for this function would be as shown in Fig. 10.2. In classical Hoare logic, this would be specified as follows: {x = A ∧ y = B}swap(){x = B ∧ y = A}

where A and B are often called logical variables. Implicitly, the correctness triple holds for all possible values of A and B, and as demonstrated by the specification above, their typical use is to remember some earlier value of a variable, as is expressed using the \old keyword in Jml and Acsl.

1 This rule comes in many variations, for functions with and without return values, parameters, etc., but the basic idea is always the same.

10.2 Hoare Logic and Weakest Preconditions


Fig. 10.2 Function swap with specifications

Example 10.2

The correctness proof for function swap in Example 10.1 roughly looks as follows.2 The proof should be read from bottom to top: the bottom property is what we would like to prove, and above it we state the Hoare triples that are sufficient to derive this conclusion (by applying the appropriate rules), continuing this process until we can apply the axioms from Hoare logic. The assignment axiom gives:

{x = A ∧ y = B} int t = x {x = A ∧ y = B ∧ t = A}
{x = A ∧ y = B ∧ t = A} x = y {x = B ∧ y = B ∧ t = A}
{x = B ∧ y = B ∧ t = A} y = t {x = B ∧ y = A}

Composing the last two triples gives:

{x = A ∧ y = B ∧ t = A} x = y; y = t {x = B ∧ y = A}

Composing this with the first triple, and applying the rule for function declarations, yields the desired conclusion:

{x = A ∧ y = B} swap() {x = B ∧ y = A}

Total Correctness
The Hoare triples we have seen so far describe partial correctness relations: if a function terminates, its postcondition is established. Sometimes one wishes to specify explicitly that a function must terminate. For this purpose, a total correctness relation is defined, for which we use the notation [P]S[Q]. Such a total correctness triple should be read as follows: if we execute statement S from a state x in which precondition P holds, then execution of statement S terminates in a state y, and in this state y, postcondition Q holds.

Mechanising Hoare Logic
As mentioned above, Hoare logic as it is is not directly suitable for developing tool support, because one needs to "invent" the intermediate predicates that hold between the composition of two statements. However, Dijkstra observed [68] that it is actually not necessary to invent this intermediate predicate. Instead one can compute the weakest predicate that ensures the required postcondition. It is then sufficient to show that the specified precondition implies this weakest precondition. This computation of the weakest precondition is expressed by the rules from the weakest precondition calculus, where wp(S, Q) denotes the weakest predicate such

2 In fact, a completely formal proof requires the use of weakening and strengthening rules. That falls out of the scope of this book.


10 Static Annotation Checking

that {wp(S, Q)} S {Q} is a correct triple:

wp(S1; S2, Q) = wp(S1, wp(S2, Q))
wp(x := E, Q) = Q[x := E]
wp(if c then S1 else S2, Q) = (c ⇒ wp(S1, Q)) ∧ (¬c ⇒ wp(S2, Q))

Thus, instead of "inventing" the intermediate predicate for a statement composition, the weakest precondition calculus "computes" it. To show that a function implementation respects its specification, one has to do the following: given precondition P, postcondition Q and function body B, compute wp(B, Q) and show that P ⇒ wp(B, Q). Notice that both the predicate P and the weakest precondition of the body and the postcondition are predicates in first-order logic. For proving properties in first-order logic, many different automated theorem provers exist (e.g., Z3 [132], CVC3 [13], and Simplify [66] are all used as backend provers for program verifiers). By implementing the rules of the weakest precondition calculus, and using one (or multiple) automated first-order theorem prover(s) for the generated proof obligations, an automated program verification tool can be built that allows one to prove that for any possible input state and any possible input parameters, a function respects its specification.

Tools for Program Verification
To apply this approach to a realistic programming language, such as C or Java, the rules have to be adapted for side-effects, abrupt termination because of returns and breaks, etc., following the underlying programming language semantics. See e.g., [104,107,114,137] for the challenges that need to be addressed. As mentioned in Sect. 10.1, there are various tools that address these challenges to some extent.
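As a small worked example of the calculus (not taken from the book's examples; mathematical integers are assumed, so overflow is ignored), consider a conditional that stores an absolute value, with postcondition y ≥ 0:

```latex
\begin{align*}
&wp(\mathtt{if\ (x \ge 0)\ y = x\ else\ y = -x},\; y \ge 0)\\
&\quad = \bigl(x \ge 0 \Rightarrow wp(\mathtt{y = x},\, y \ge 0)\bigr) \land \bigl(\neg(x \ge 0) \Rightarrow wp(\mathtt{y = -x},\, y \ge 0)\bigr)\\
&\quad = (x \ge 0 \Rightarrow x \ge 0) \land (x < 0 \Rightarrow -x \ge 0)\\
&\quad = \mathit{true}
\end{align*}
```

Since the computed weakest precondition is true, any precondition implies it, so the conditional establishes y ≥ 0 unconditionally. Over C's bounded int this would not hold (x == INT_MIN is a counterexample), which is exactly why the calculus must be adapted to the underlying programming language semantics.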

10.3 Reasoning About Function Calls

An important feature of Hoare logic-like approaches is that the verification is modular. Each function is verified in isolation, and any function call inside a body is approximated by its function specification.

Example 10.3

Suppose we verify function activityBonus from the student program (in Fig. 9.3 of Sect. 9.4). This implementation calls function addCredits. Instead of inlining the implementation of addCredits, the verification uses the specification of addCredits as defined in Fig. 7.3 (of Sect. 7.2.1).


Suppose that we are verifying the case where the condition of the if statement holds (s->status == bachelor), so that addCredits(s, bonus) is called. To guarantee the postcondition, one has to prove the following:

  0 <= bonus                                  [precond. activityBonus]
∧ s->status == bachelor                       [precond. activityBonus]
⇒ addCredits(s, bonus) is called              [case distinction]
⇒ s->credits == \old(s->credits) + bonus      [postcond. addCredits]
⇒ s->credits == \old(s->credits) + bonus      [postcond. activityBonus]

In addition, one has to show that the precondition of addCredits (c >= 0) is implied by the conjunction of the precondition of activityBonus and the if condition, which is clearly the case.

As a proof rule, this can be expressed as follows (where we use Pre_m(x) and Post_m(x) to denote the pre- and the postcondition of function m with the actual parameters x correctly instantiated, respectively):

P ⇒ Pre_m(x)    P ∧ Post_m(x) ⇒ Q
{P} m(x) {Q}

Frame Problem
However, things are not always that simple, as is illustrated by the following example.

Example 10.4

Consider the Jml specification and implementation of class Point in Fig. 10.3. Also, consider Fig. 10.4, which shows the Jml contract for class Line along with its implementation. For convenience, only a minimal amount of specification has been given, sufficient to illustrate the problem. When we verify function stretchLine in Line, we use the specification of function moveHorizontal, as explained above. This seems to be sufficient—but it is not! The specification of moveHorizontal does not state anything about what happens with the value of the y field—and therefore, we cannot assume anything about y after the call to moveHorizontal. As we cannot assume anything, we cannot be sure anymore that the line is horizontal, and thus the postcondition of stretchLine cannot be formally established.

This problem is known as the frame problem [31,138]. Basically, the point is that for modular verification one needs to know what the frame of a function is, i.e., which variables may be changed by the function, and what the anti-frame is, i.e., which variables may not be changed by the function.


Fig. 10.3 Specification of class Point in Jml


Fig. 10.4 Specification of class Line in Jml

To specify this, Acsl and Jml use assignable clauses.3 Such a clause specifies a list of variable locations that may be modified by a function (thus, it may be an over-approximation of the actual set of locations modified by the function). An assignable clause can also denote a set of locations; typical examples are \nothing (the empty set), thus a function without side-effects, and a[i..j], all elements in array a between indices i and j. If no assignable clause is specified, Acsl and Jml by default interpret this as a specification that any location may be modified by the function. Example 10.5

In Example 10.4, we of course should add a clause assigns x to the specification of moveHorizontal. This implicitly specifies that y may not be changed by this function.
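In Acsl syntax, a contract in the spirit of Example 10.5 might look as follows (a sketch using a C struct; the book's Figs. 10.3 and 10.4 use Java/Jml, so names and details differ):

```c
typedef struct { int x; int y; } point;

/*@ requires \valid(p);
  @ assigns p->x;   // frame: only x may be modified, so y is implicitly unchanged
  @ ensures p->x == \old(p->x) + d;
  @*/
void moveHorizontal(point *p, int d) {
    p->x += d;
}
```

Because y does not occur in the assigns clause, a verifier may assume y == \old(y) after any call to moveHorizontal, which is exactly the knowledge that was missing in Example 10.4.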

3 In Acsl, the keyword is assigns. In Jml, there are multiple keywords for this: these clauses can be called assignable, modifiable or modifies clauses.


In Example 10.4, it would be possible to add a postcondition y == \old(y). However, in general, this is not a satisfactory solution: a program might have many variables and only a few are typically changed by a function. Moreover, when a new variable is added, for every function that does not change it, an additional postcondition about this variable not being changed would have to be added. As one can imagine, this is error-prone, and leads to overly verbose specifications. Using assignable clauses, we can formulate a weakest precondition rule to verify function calls.4 Suppose that the call x.m() resolves to a function with precondition Pre_m, postcondition Post_m and assignable clause A_m; then roughly the rule looks as follows:

wp(x.m(), Q) = Pre_m ∧ (∀v ∈ A_m. Post_m ⇒ Q)

The easiest way to remember the rule is that every function call to m gives rise to two proof obligations:

• The postcondition of the function call has to ensure the postcondition Q, where for all variables in the assignable clause of the function, nothing is known about their values, except what is specified in the postcondition of m.
• The weakest precondition of the code preceding the function call w.r.t. the precondition of m has to be implied by the precondition of the calling function. This ensures that when the function m is called, its precondition indeed holds.
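To see the rule in action, suppose m() has the hypothetical contract requires true, ensures x == \old(x) + 1, assigns x (this contract is an illustration, not from the book), and the required postcondition is Q ≡ x > 0 ∧ y > 0:

```latex
\begin{align*}
wp(\mathtt{x.m()},\; x > 0 \land y > 0)
  &= \mathrm{Pre}_m \land \bigl(\forall v \in A_m.\; \mathrm{Post}_m \Rightarrow Q\bigr)\\
  &= \mathit{true} \land \bigl(\forall x'.\; x' = x + 1 \Rightarrow (x' > 0 \land y > 0)\bigr)\\
  &= x + 1 > 0 \;\land\; y > 0
\end{align*}
```

Here x' stands for the unknown post-call value of x. Note how the conjunct y > 0 passes through unchanged: y is not in the assignable clause, so its value is known to be preserved, while everything about x must come from the postcondition of m.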

10.4 Statement Annotations—Helping the Verifier

Asserts
Sometimes, the program verifier has to get some guidance. In exceptional cases, a complex intermediate predicate has to be given explicitly, using an assert P; annotation. This is necessary, e.g., when complex calculations are made, and the automated theorem provers need some guidance on how to reason about them. Every assert P; annotation means the following:

• We have to show that P holds every time control reaches this particular program point;
• We can assume that P holds when reasoning about the code that follows the assert statement.

An interesting example in the literature where assertions are really needed to guide the prover is the verification of addition and multiplication of the Java smart card class Decimal [33].

Loop Specifications
For code that contains loops, extra annotations are almost always necessary for static verification. To explain this, we first present the Hoare logic rule for while loops.

4 Similar rules exist for calls with return values, parameters, etc.


{c ∧ I} S {I}
{I} while c do S {¬c ∧ I}

This rule features a loop invariant I: a predicate that is preserved by every iteration of the loop. To show that the invariant is preserved by every iteration of the loop, one has to show that: if the condition holds—and thus the loop body is executed once more—and if the invariant holds before the loop body is executed, then it also holds after the loop body has terminated. From this we can conclude that if the whole loop is executed in a state in which the loop invariant holds, then after termination of the loop, the loop invariant still holds, and in addition the negation of the loop condition holds.5 In general, such a loop invariant cannot be found automatically. Instead, the user is supposed to specify it. Specifying a loop invariant basically gives rise to three proof obligations:

• the precondition P of the function has to be sufficient to guarantee that when the loop starts, the loop invariant holds, e.g., if the loop is preceded by a statement S1, then P ⇒ wp(S1, I) has to hold;
• the loop body has to preserve the loop invariant, thus c ∧ I ⇒ wp(S, I);
• ¬c ∧ I has to imply the weakest precondition of the code after the loop body and the postcondition Q of the function, i.e., if the loop is followed by a statement S2, then ¬c ∧ I ⇒ wp(S2, Q) has to hold.

In Acsl, the user can specify not only the property that is preserved by every iteration of the loop (using the keyword loop invariant), but in addition there are also keywords loop assigns and loop variant. A loop assigns clause specifies the variables that may be modified by a loop iteration; thus all other variables implicitly are unchanged, and it is not necessary to add that explicitly to the loop invariant. A loop variant is used to prove termination of the loop, see Sect. 10.5. In Jml, the decreases clause has the same meaning as Acsl's loop variant.

Example 10.6

Figures 10.5 and 10.6 show two examples of non-trivial loop invariants. For completeness, the corresponding function contracts are defined in Fig. 10.7. Function thirdPower in Fig. 10.5 computes n³ without actually using the power function.

5 In fact, to reason about C/Java programs, variations of this rule exist, allowing to reason about loops that terminate abruptly, e.g., because of an exception or a return statement [104].

Fig. 10.5 Function thirdPower and its loop invariants

Its loop invariant describes the intermediate values for all local variables. Function search in Fig. 10.6 checks whether a given value occurs in an array. The loop invariant expresses that found is true if and only if the value was among the elements in the array that have been examined so far.

The loop invariants for the function search show a very common pattern for functions iterating over an array: all the elements that have been examined so far respect a certain property, and since the loop terminates when all the elements in the array have been examined, from this loop invariant and the negation of the loop condition we can conclude a property for all the elements in the array. The loop invariant restricting the range of the value of the loop variable 'i' (0
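A sketch of such a search function with its loop specifications in Acsl (a hypothetical reconstruction; the actual code in Fig. 10.6 may differ):

```c
#include <stdbool.h>

/*@ requires n >= 0 && \valid_read(a + (0 .. n-1));
  @ assigns \nothing;
  @ ensures \result <==> (\exists integer k; 0 <= k < n && a[k] == v);
  @*/
bool search(const int *a, int n, int v) {
    bool found = false;
    /*@ loop invariant 0 <= i <= n;                 // range of the loop variable
      @ loop invariant found <==>
      @   (\exists integer k; 0 <= k < i && a[k] == v);  // property of examined prefix
      @ loop assigns i, found;
      @ loop variant n - i;                         // proves termination
      @*/
    for (int i = 0; i < n; i++) {
        if (a[i] == v) {
            found = true;
        }
    }
    return found;
}
```

At loop exit, the negation of the loop condition gives i == n, so the second invariant specialises to exactly the postcondition: found holds iff v occurs somewhere in a[0..n-1].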