Frank de Boer Antonio Cerone (Eds.)
Software Engineering and Formal Methods 18th International Conference, SEFM 2020 Amsterdam, The Netherlands, September 14–18, 2020 Proceedings
Lecture Notes in Computer Science Founding Editors Gerhard Goos, Germany Juris Hartmanis, USA
Editorial Board Members Elisa Bertino, USA Wen Gao, China Bernhard Steffen , Germany
Gerhard Woeginger , Germany Moti Yung, USA
Formal Methods Subline of Lecture Notes in Computer Science Subline Series Editors Ana Cavalcanti, University of York, UK Marie-Claude Gaudel, Université de Paris-Sud, France
Subline Advisory Board Manfred Broy, TU Munich, Germany Annabelle McIver, Macquarie University, Sydney, NSW, Australia Peter Müller, ETH Zurich, Switzerland Erik de Vink, Eindhoven University of Technology, The Netherlands Pamela Zave, AT&T Laboratories Research, Bedminster, NJ, USA
12310
More information about this series at http://www.springer.com/series/7407
Editors Frank de Boer Informatica Centrum voor Wiskunde en Informatica (CWI) Amsterdam, The Netherlands
Antonio Cerone Department of Computer Science Nazarbayev University Astana, Kazakhstan
ISSN 0302-9743 ISSN 1611-3349 (electronic) Lecture Notes in Computer Science ISBN 978-3-030-58767-3 ISBN 978-3-030-58768-0 (eBook) https://doi.org/10.1007/978-3-030-58768-0 LNCS Sublibrary: SL1 – Theoretical Computer Science and General Issues © Springer Nature Switzerland AG 2020 Chapters 1, 7 and 8 are licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/). For further details see license information in the chapters. This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
This volume contains the proceedings of the 18th International Conference on Software Engineering and Formal Methods (SEFM 2020), which was originally planned to take place during September 14–18, 2020, in Amsterdam, The Netherlands (hosted by the Centrum Wiskunde & Informatica, CWI). Because of the COVID-19 pandemic, SEFM 2020 could not take place physically but had to be replaced by a virtual event (still held during September 14–18, 2020).

The general aim of the conference is to bring together leading researchers and practitioners from academia, industry, and government to advance the state of the art in formal methods, to facilitate their uptake in the software industry, and to encourage their integration within practical software engineering methods and tools.

There were 58 full paper submissions, which were reviewed for quality, correctness, originality, and relevance. Each submission was reviewed by four Program Committee members, and an online post-reviewing discussion, open to the entire Program Committee, was held to make the final decisions. The committee decided to accept 16 papers (a 27.6% acceptance rate). This volume contains the revised versions of those 16 papers, which cover a wide variety of topics, including testing, formal verification, program analysis, runtime verification, meta-programming, and software development and evolution. The papers address a wide range of systems, such as IoT systems, human-robot interaction in healthcare scenarios, navigation of maritime autonomous systems, and operating systems.

The conference program also featured three keynote talks by Paola Inverardi (University of L'Aquila, Italy), Roberto Di Cosmo (Paris Diderot University, France), and Eelco Visser (Delft University of Technology, The Netherlands).
This volume includes an extended abstract of Paola Inverardi's talk, titled "A Software Exoskeleton to Protect Ethics and Privacy of Users in the Digital World," and a full paper accompanying Eelco Visser's talk, titled "Multi-Purpose Syntax Definition with SDF3," co-authored by Luís Eduardo de Souza Amorim.

We would like to thank Paola Inverardi, Roberto Di Cosmo, and Eelco Visser for accepting our invitations to give keynote talks, and the authors who submitted their work to SEFM 2020. We are grateful to the members of the Program Committee and the external reviewers for providing timely and insightful reviews, as well as for their involvement in the post-reviewing discussions. We would also like to thank the SEFM Steering Committee for their advice and support, the workshop co-chairs Loek Cleophas (TU/e, The Netherlands) and Mieke Massink (ISTI, Italy), Jacopo Mauro (SDU, Denmark) for taking care of publicity, and Hans-Dieter Hiep and Benjamin Lion (CWI, The Netherlands) for setting up and maintaining the conference web pages. We would like to thank all people involved in SEFM 2020 for their contributions in these exceptional circumstances of the COVID-19 pandemic.
We greatly appreciated the convenience of the EasyChair system for handling the submission and review processes, and for preparing these proceedings. Finally, we gratefully acknowledge the technical support from CWI. July 2020
Frank de Boer Antonio Cerone
Organization
Program Co-chairs Frank de Boer Antonio Cerone
CWI, The Netherlands Nazarbayev University, Kazakhstan
Steering Committee Frank de Boer Radu Calinescu Antonio Cerone (Chair) Rocco De Nicola Peter Ölveczky Gwen Salaün Marjan Sirjani
CWI, The Netherlands University of York, UK Nazarbayev University, Kazakhstan IMT Lucca, Italy University of Oslo, Norway University of Grenoble Alpes, France Mälardalen University, Sweden
Program Committee Erika Abraham Wolfgang Ahrendt Alessandro Aldini Luís Soares Barbosa Maurice H. ter Beek Dirk Beyer Frank de Boer Ana Cavalcanti Antonio Cerone Alessandro Cimatti Marieke Huisman Alexander Knapp Tiziana Margaria Paolo Masci Jacopo Mauro Peter Müller Hans de Nivelle Catuscia Palamidessi Anna Philippou Ka I. Pun Grigore Rosu Gwen Salaün Augusto Sampaio
RWTH Aachen University, Germany Chalmers University of Technology, Sweden University of Urbino, Italy University of Minho, Portugal ISTI-CNR, Italy LMU Munich, Germany CWI, The Netherlands University of York, UK Nazarbayev University, Kazakhstan Fondazione Bruno Kessler, Italy University of Twente, The Netherlands Universität Augsburg, Germany Lero, Ireland National Institute of Aerospace (NIA), USA University of Oslo, Norway ETH Zurich, Switzerland Nazarbayev University, Kazakhstan Inria, France University of Cyprus, Cyprus Western Norway University of Applied Sciences, Norway University of Illinois at Urbana-Champaign, USA University of Grenoble Alpes, France Federal University of Pernambuco, Brazil
Ina Schaefer Gerardo Schneider Roberto Segala Marjan Sirjani Martin Steffen Meng Sun Silvia Lizeth Tapia Tarifa Simone Tini Elena Troubitsyna M. Birna van Riemsdijk Heike Wehrheim Gianluigi Zavattaro Peter Ölveczky
Technische Universität Braunschweig, Germany Chalmers University of Technology and University of Gothenburg, Sweden University of Verona, Italy Mälardalen University, Sweden University of Oslo, Norway Peking University, China University of Oslo, Norway University of Insubria, Italy KTH Royal Institute of Technology, Sweden University of Twente, The Netherlands University of Paderborn, Germany University of Bologna, Italy University of Oslo, Norway
Additional Reviewers Abbaspour Asadollah, Sara Amadini, Roberto Antonino, Pedro Attala, Ziggy Bagheri, Maryam Basile, Davide Baxter, James Bordis, Tabea Broccia, Giovanna Bugariu, Alexandra Castiglioni, Valentina Chimento, Jesus Mauricio Cledou, Guillermina Clochard, Martin Din, Crystal Chang Eilers, Marco Enoiu, Eduard Paul Filipovikj, Predrag Fontaine, Pascal Friedberger, Karlheinz Giallorenzo, Saverio Haltermann, Jan Hnetynka, Petr Holzner, Stephan Iyoda, Juliano Khamespanah, Ehsan Knüppel, Alexander Kouzapas, Dimitrios Krishna, Ajay
Kristensen, Lars König, Jürgen Lanotte, Ruggero Lathouwers, Sophie Lee, Nian-Ze Lemberger, Thomas Lu, Yuteng Madeira, Alexandre Mallozzi, Piergiuseppe Matheja, Christoph Mazzanti, Franco Miranda, Breno Monti, Raúl E. Mota, Alexandre Neves, Renato Nieke, Michael Oortwijn, Wytse Park, Daejun Pauck, Felix Paulson, Lawrence Rasouli, Peyman Richter, Cedric Runge, Tobias Safari, Mohsen Sankaranarayanan, Sriram Schlatte, Rudolf Sedaghatbaf, Ali Serwe, Wendelin Sewell, Thomas
Sharma, Arnab Spiessl, Martin Steffen, Bernhard Steffen, Martin Steinhöfel, Dominic Stolz, Volker Sun, Weidi Syeda, Hira Tschaikowski, Max
Turin, Gianluca Tveito, Lars Valencia, Frank van den Bos, Petra Vandin, Andrea Wendler, Philipp Windsor, Matt Zhang, Xiyue Zhang, Yi
A Software Exoskeleton to Protect Ethics and Privacy of Users in the Digital World (Abstract of a Keynote Talk)
Paola Inverardi Università dell’Aquila, L’Aquila, Italy
Abstract. In recent years there has been an increasing amount of interest in the impact that the digital society can have on the fundamental rights of individuals, citizens, and societies. Starting from the rise of the data economy up to the appearance of present and future AI-fueled autonomous systems, the level of attention has been lifted from privacy concerns to more general ethical ones [6, 7]. Although there is a general consensus on the vulnerability of users and societies, this perspective has so far been followed up only by the regulatory approach, which, by putting new regulations to work, notably the GDPR, has effectively enhanced the protection of users. In research, by contrast, the focus is mainly on AI and concerns system/software developers and companies, proposing codes of ethics and guidelines for the development of trustworthy systems in order to achieve transparency, accountability, and explainability of decisions [1, 5]. Therefore, despite the claim for a human-centric AI and the recommendation to empower the user, the user is left unsupported in her interactions with digital systems, beyond the basic choice of accepting or not accepting the interaction with a system, with all the consequences this might imply. From the case of privacy preferences in the app domain [12] to the more complex case of autonomous driving cars [4], the average user is unprotected and inadequately equipped in her interaction with the digital world. In the talk I will present the approach and preliminary results of the project EXOSOUL [2, 8–11], which stands on the side of users. EXOSOUL aims at equipping humans with an automatically generated exoskeleton, a software shield that protects and empowers them and their personal data in all interactions with the digital world, by mediating or discarding those interactions that would result in unacceptable or morally wrong behaviors according to the user's ethical and privacy preferences [3].

Keywords: Ethics · Privacy · Software exoskeleton
Supported by Università dell'Aquila.
References

1. European Commission: White paper on artificial intelligence. https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020-en.pdf
2. Autili, M., Di Ruscio, D., Inverardi, P., Pelliccione, P., Tivoli, M.: A software exoskeleton to protect and support citizen's ethics and privacy in the digital world. IEEE Access 7, 62011–62021 (2019). https://doi.org/10.1109/access.2019.2916203
3. Autili, M., Inverardi, P., Spalazzese, R., Tivoli, M., Mignosi, F.: Automated synthesis of application-layer connectors from automata-based specifications. J. Comput. Syst. Sci. 104, 17–40 (2019). https://doi.org/10.1016/j.jcss.2019.03.001
4. Awad, E., et al.: The moral machine experiment. Nature 563, 59–64 (2018). https://doi.org/10.1038/s41586-018-0637-6
5. European Commission: High-Level Expert Group on Artificial Intelligence: Ethics guidelines for trustworthy AI (2019)
6. Floridi, L.: Soft ethics and the governance of the digital. Philos. Technol. 31(1), 1–8 (2018). https://doi.org/10.1007/s13347-018-0303-9
7. Inverardi, P.: The European perspective on responsible computing. Commun. ACM 62(4), 64 (2019). https://doi.org/10.1145/3311783
8. Migliarini, P., Scoccia, G.L., Autili, M., Inverardi, P.: On the elicitation of privacy and ethics preferences of mobile users. In: 7th IEEE/ACM International Conference on Mobile Software Engineering and Systems (MOBILESoft 2020), Vision Track (2020)
9. Scoccia, G.L., Autili, M., Inverardi, P.: A self-configuring and adaptive privacy-aware permission system for Android apps. In: 1st IEEE International Conference on Autonomic Computing and Self-Organizing Systems (ACSOS 2020) (2020)
10. Scoccia, G.L., Fiore, M.M., Pelliccione, P., Autili, M., Inverardi, P., Russo, A.: Hey, my data are mine! Active data to empower the user. In: IEEE/ACM 42nd International Conference on Software Engineering, New Ideas and Emerging Results (ICSE-NIER 2020) (2020)
11. Scoccia, G.L., Malavolta, I., Autili, M., Di Salle, A., Inverardi, P.: Enhancing trustability of Android applications via user-centric flexible permissions. IEEE Trans. Softw. Eng. (2019). https://doi.org/10.1109/tse.2019.2941936
12. Scoccia, G.L., Ruberto, S., Malavolta, I., Autili, M., Inverardi, P.: An investigation into Android run-time permissions from the end users' perspective. In: 5th IEEE/ACM International Conference on Mobile Software Engineering and Systems (MOBILESoft 2018) (2018)
Contents
Multi-purpose Syntax Definition with SDF3 . . . . . . . . . . . . . . . . . . . . 1
  Luís Eduardo de Souza Amorim and Eelco Visser

Finding and Fixing a Mismatch Between the Go Memory Model and Data-Race Detector: A Story on Applied Formal Methods . . . . . . . . . . . . 24
  Daniel Schnetzer Fava

Formal Verification of COLREG-Based Navigation of Maritime Autonomous Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
  Fatima Shokri-Manninen, Jüri Vain, and Marina Waldén

End-to-End Verification of Initial and Transition Properties of GR(1) Designs in SPARK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
  Laura R. Humphrey, James Hamil, and Joffrey Huguet

Affine Systems of ODEs in Isabelle/HOL for Hybrid-Program Verification . . . 77
  Jonathan Julián Huerta y Munive

Interoperability and Integration Testing Methods for IoT Systems: A Systematic Mapping Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
  Miroslav Bures, Matej Klima, Vaclav Rechtberger, Xavier Bellekens, Christos Tachtatzis, Robert Atkinson, and Bestoun S. Ahmed

FRED: Conditional Model Checking via Reducers and Folders . . . . . . . . . . 113
  Dirk Beyer and Marie-Christine Jakobs

Difference Verification with Conditions . . . . . . . . . . . . . . . . . . . . . 133
  Dirk Beyer, Marie-Christine Jakobs, and Thomas Lemberger

A Formal Modeling Approach for Portable Low-Level OS Functionality . . . . . 155
  Renata Martins Gomes, Bernhard Aichernig, and Marcel Baunach

Model-Based Testing Under Parametric Variability of Uncertain Beliefs . . . . 175
  Matteo Camilli and Barbara Russo

Hoare-Style Logic for Unstructured Programs . . . . . . . . . . . . . . . . . . 193
  Didrik Lundberg, Roberto Guanciale, Andreas Lindner, and Mads Dam

Synthesis of P-Stable Abstractions . . . . . . . . . . . . . . . . . . . . . . . 214
  Anna Becchi, Alessandro Cimatti, and Enea Zaffanella

Runtime Verification of Contracts with Themulus . . . . . . . . . . . . . . . . 231
  Alberto Aranda García, María-Emilia Cambronero, Christian Colombo, Luis Llana, and Gordon J. Pace

Sound C Code Decompilation for a Subset of x86-64 Binaries . . . . . . . . . . 247
  Freek Verbeek, Pierre Olivier, and Binoy Ravindran

Statically Checking REST API Consumers . . . . . . . . . . . . . . . . . . . . 265
  Nuno Burnay, Antónia Lopes, and Vasco T. Vasconcelos

A Layered Implementation of DR-BIP Supporting Run-Time Monitoring and Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
  Antoine El-Hokayem, Saddek Bensalem, Marius Bozga, and Joseph Sifakis

Formal Verification of Human-Robot Interaction in Healthcare Scenarios . . . 303
  Livia Lestingi, Mehrnoosh Askarpour, Marcello M. Bersani, and Matteo Rossi

Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
Multi-purpose Syntax Definition with SDF3

Luís Eduardo de Souza Amorim¹ and Eelco Visser²
¹ Australian National University, Canberra, Australia
² Delft University of Technology, Delft, The Netherlands
[email protected]
Abstract. SDF3 is a syntax definition formalism that extends plain context-free grammars with features such as constructor declarations, declarative disambiguation rules, character-level grammars, permissive syntax, layout constraints, formatting templates, placeholder syntax, and modular composition. These features support the multi-purpose interpretation of syntax definitions, including derivation of type schemas for abstract syntax tree representations, scannerless generalized parsing of the full class of context-free grammars, error recovery, layout-sensitive parsing, parenthesization and formatting, and syntactic completion. This paper gives a high-level overview of SDF3 by means of examples and provides a guide to the literature for further details.

Keywords: Syntax definition · Programming language · Parsing

1 Introduction
A syntax definition formalism is a formal language to describe the syntax of formal languages. At the core of a syntax definition formalism is a grammar formalism in the tradition of Chomsky's context-free grammars [14] and the Backus-Naur Form [4]. But syntax definition is concerned with more than just phrase structure, and encompasses all aspects of the syntax of languages.

In this paper, we give an overview of the syntax definition formalism SDF3 and its tool ecosystem that supports the multi-purpose interpretation of syntax definitions. The paper does not present any new technical contributions, but it is the first paper to give a (high-level) overview of all aspects of SDF3 and serves as a guide to the literature.

SDF3 is the third generation in the SDF family of syntax definition formalisms, which were developed in the context of the ASF+SDF [5], Stratego/XT [10], and Spoofax [38] language workbenches. The first SDF [23] supported modular composition of syntax definition, a direct correspondence between concrete and abstract syntax, and parsing with the full class of context-free grammars enabled by the Generalized-LR (GLR) parsing algorithm [44,56]. Its programming environment, as part of the ASF+SDF Meta-Environment [40], focused on live development of syntax definitions through incremental and modular scanner and parser generation [24–26] in order to provide fast turnaround times during language development.

© The Author(s) 2020
F. de Boer and A. Cerone (Eds.): SEFM 2020, LNCS 12310, pp. 1–23, 2020. https://doi.org/10.1007/978-3-030-58768-0_1
The second generation, SDF2, encompassed a redesign of the internals of SDF without changing the surface syntax. The front-end of the implementation consisted of a transformation pipeline from the rich surface syntax to a minimal core (kernel) language [58] that served as input for parser generation. The key change of SDF2 was its integration of lexical and context-free syntax, supported by Scannerless GLR (SGLR) parsing [60,61], enabling composition of languages with different lexical syntax [12].

SDF3 is the latest member of the family and inherits many features of its predecessors. The most recognizable change is to the syntax of productions, which should make it more familiar to users of other grammar formalisms. Further, it introduces new features in order to support multi-purpose interpretations of syntax definitions.

The goals of the design of SDF3 are (1) to support the definition of the concrete and abstract syntax of formal languages (with an emphasis on programming languages), (2) to support declarative syntax definition so that there is no need to understand parsing algorithms in order to understand definitions [39], (3) to make syntax definitions readable and understandable so that they can be used as reference documentation, and (4) to support execution of syntax definitions as parsers, but also for other syntactic operations, i.e., to support multi-purpose interpretation based on a single source. The focus on multi-purpose interpretation is driven by the role of SDF3 in the Spoofax language workbench [38].

In this paper, we give a high-level overview of the features of SDF3 and how they support multi-purpose syntax definition. We give explanations by means of examples, assuming some familiarity of the reader with grammars. We refer to the literature for formal definitions of the concepts that we introduce.
Figure 1 presents the complete syntax definition of a tiny functional language (inspired by OCaml [42]), which we will use as running example without (necessarily) referring to it explicitly.
2 Phrase Structure
A programming language is more than a set of flat sentences. It is the structure of those sentences that matters. Users understand programs in terms of structural elements such as expressions, functions, patterns, and modules. Language designers, and the tools they build to implement a language, operate on programs through their underlying (tree) structure. The productions in a context-free grammar create the connection between the tokens that form the textual representation of programs and their phrase structure [14]. Such productions can be interpreted as parsing rules to convert a text into a tree. But SDF3 emphasizes the interpretation of productions as definitions of structure [39]. A sort (also known as non-terminal) represents a syntactic category such as expression (Exp), pattern match case (Case), or pattern (Pat). A production defines the structure of a language construct. For example, the production

  Exp.Add = Exp "+" Exp
  module fun
  imports lex
  context-free start-symbols Exp
  sorts Exp Case Bnd Pat
  context-free syntax
    Exp       = {bracket}
    Exp.Int   = INT
    Exp.Var   = ID
    Exp.Min   = [-[Exp]]
    Exp.Sub   = {left}
    Exp.Add   = {left}
    Exp.Eq    = {left}
    Exp.Fun   = [fun [ID*] -> [Exp]]
    Exp.App   = {left}
    Exp.Let   = < let in >
    Exp.IfE   = < if then else >
    Exp.IfT   = < if then >
    Exp.Match = < match with > {longest-match}
    Bnd.Bnd   =
    Case.Case = [| [Pat] -> [Exp]]
    Pat.PVar  = ID
    Pat.PApp  = {left}
    Pat       = {bracket}
  context-free priorities
    Exp.Min > Exp.App >
    {left: Exp.Sub Exp.Add} > Exp.Eq >
    Exp.IfE > Exp.IfT > Exp.Match >
    Exp.Fun > Exp.Let,
    Exp.App .> Exp.Min
  template options
    ID = keyword {reject}
    keyword -/- [a-zA-Z0-9]

  module lex
  lexical sorts ID
  lexical syntax
    ID = [a-zA-Z] [a-zA-Z0-9]*
  lexical restrictions
    ID -/- [a-zA-Z0-9]
  lexical sorts INT
  lexical syntax
    INT = [\-]? [0-9]+
  lexical restrictions
    INT -/- [0-9]
  context-free restrictions
    "-" -/- [0-9]
  lexical sorts AST EOF
  lexical syntax
    LAYOUT = [\ \t\n\r]
    LAYOUT = Com
    Com = "/*" (~[\*] | Ast | Com)* "*/"
    Ast = [\*]
    LAYOUT = "//" ~[\n\r]* ([\n\r] | EOF)
    EOF =
  lexical restrictions
    AST -/- [\/]
    EOF -/- ~[]
  context-free restrictions
    LAYOUT? -/- [\ \t\n\r]
    LAYOUT? -/- [\/].[\/]
    LAYOUT? -/- [\/].[\*]

  let // length of a list
      len = fun xs -> match xs with
            | nil -> 0
            | cons x xs -> 1 + len xs
  in len (cons 1 (cons 2 nil))

Fig. 1. Syntax of a small functional language in SDF3 and an example program.
defines that an addition expression is one alternative for the Exp sort and that it is the composition of two expressions. A production makes the connection with sentences by means of literals in productions. In the production above, the two expressions making up an addition are separated by a + operator. Finally, a production defines a constructor name for the abstract syntax tree structure of a program (Add in the production above). The pairs consisting of sort and constructor names should be unique within a grammar and can be used to identify productions. (Such explicit constructor names are new in SDF3 compared to SDF2.)

A set of such productions is a grammar. The productions of a grammar generate a set of well-formed syntax trees. For example, Fig. 2 shows a well-formed tree over the example grammar. The language defined by a grammar consists of the sentences obtained by taking the yields of those trees, where the yield of a syntax tree is the concatenation of its leaves. Thus, the sentence corresponding to the tree in Fig. 2 is (fun x -> x + 3) y.

The grammars of programming languages frequently feature lists, including lists of statements in a block, lists of field declarations in a class, and lists of parameters of a function. SDF3 supports direct expression of such list sorts by means of Kleene star and plus operators on sorts. In Fig. 1 the formal parameter list of a Fun is defined as ID*, a list of zero or more identifiers. Other kinds of lists include A+ (one or more As), {A sep}* (zero or more As separated by seps), and {A sep}+ (one or more As separated by seps). Lists with separators are convenient to model, for example, the arguments of a function as {Exp ","}*, i.e., a list of zero or more expressions separated by commas.

Fig. 2. Concrete syntax tree

Fig. 3. Abstract syntax tree
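The yield operation just described is easy to picture concretely. The following Python sketch is illustrative only: the tuple encoding of constructor nodes, the Par stand-in for the bracket production, and the space-separated whitespace handling are assumptions of this sketch, not part of SDF3.

```python
def yield_of(tree):
    """Concatenate the leaves of a syntax tree, left to right."""
    if isinstance(tree, str):      # a literal or lexical token
        return tree
    _constructor, children = tree  # a constructor node
    return " ".join(yield_of(c) for c in children)

# A tree shaped like the one in Fig. 2, with literals kept as leaves;
# "Par" (hypothetical name) stands in for the bracket production.
tree = ("App", [
    ("Par", ["(", ("Fun", ["fun", "x", "->",
        ("Add", [("Var", ["x"]), "+", ("Int", ["3"])])]), ")"]),
    ("Var", ["y"]),
])
print(yield_of(tree))  # ( fun x -> x + 3 ) y
```

Up to token spacing, this reproduces the sentence (fun x -> x + 3) y of Fig. 2.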
Abstract Syntax. Concrete syntax trees contain irrelevant details such as keywords, operator symbols, and parentheses (as identified by the bracket attribute on productions). These details are irrelevant since the constructor of a production of a node uniquely identifies the language construct concerned. Thus, from a concrete syntax tree we obtain an abstract syntax tree by omitting such irrelevant details. Figure 3 shows the abstract syntax tree obtained from the concrete syntax tree in Fig. 2. Abstract syntax trees can be represented by means of first-order terms in which a constructor is applied to a (possibly empty) sequence of sub-terms. For example, the abstract syntax tree of Fig. 3 is represented by the term

  App(Fun(["x"], Add(Var("x"), Int("3"))), Var("y"))
Note that lists are represented by sequences of terms between square brackets.

Signatures. A grammar is a schema for describing well-formed concrete and abstract syntax trees. That is, we can check that a tree is well-formed by checking that the subtrees of a constructor node have the right sort according to the corresponding production, and a parser based on a grammar is guaranteed to produce such well-formed trees. To further process trees after parsing, we can work on a generic tree representation such as XML or ATerms [6], or we can work with a typed representation. The schemas for such typed representations can be derived automatically from a grammar. For example, the Statix language for static semantics specification [3] uses algebraic signatures to describe well-formed terms. The following signature in Statix defines the algebraic signature of a selection of the constructors of the example language:

  signature
    sorts Exp
    constructors
      Fun : list(ID) * Exp -> Exp
      Add : Exp * Exp -> Exp
      App : Exp * Exp -> Exp
      Var : ID -> Exp
      Int : INT -> Exp
The SDF3 compiler automatically generates signatures for Statix [3], Stratego [10], and DynSem [57].
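The role of such a signature as a schema for well-formed terms can be sketched in a few lines of Python. This is not SDF3- or Statix-generated code; the encoding of terms as (constructor, arguments) pairs and the SIG table are illustrative assumptions.

```python
# Signature table mirroring the Statix signature above:
# constructor -> (argument sorts, result sort).
SIG = {
    "Fun": (["list(ID)", "Exp"], "Exp"),
    "Add": (["Exp", "Exp"], "Exp"),
    "App": (["Exp", "Exp"], "Exp"),
    "Var": (["ID"], "Exp"),
    "Int": (["INT"], "Exp"),
}

def sort_of(term):
    """Return the sort of a well-formed term; raise ValueError otherwise."""
    constructor, args = term
    arg_sorts, result_sort = SIG[constructor]
    if len(args) != len(arg_sorts):
        raise ValueError(f"{constructor}: wrong number of arguments")
    for arg, expected in zip(args, arg_sorts):
        if expected in ("ID", "INT"):   # lexical sorts: plain strings
            ok = isinstance(arg, str)
        elif expected == "list(ID)":    # list sorts: lists of strings
            ok = isinstance(arg, list) and all(isinstance(x, str) for x in arg)
        else:                           # context-free sorts: recurse
            ok = sort_of(arg) == expected
        if not ok:
            raise ValueError(f"{constructor}: bad argument {arg!r}")
    return result_sort

# The abstract syntax tree of Fig. 3: (fun x -> x + 3) y
term = ("App", [("Fun", [["x"], ("Add", [("Var", ["x"]), ("Int", ["3"])])]),
                ("Var", ["y"])])
print(sort_of(term))  # Exp
```

A parser derived from the grammar is guaranteed to produce terms that pass such a check by construction.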
3 Declarative Disambiguation
Multiple trees over a grammar can have the same yield. Or, vice versa, a sentence in the language of a grammar can have multiple trees. If this is the case, the sentence, and hence the grammar, is ambiguous. One strategy to disambiguate a grammar is to transform it to an unambiguous grammar that describes the same language, but has exactly one tree per sentence in the language. However, this may not be easy to do, may distort the structure of the trees associated with the grammar, and changes the typing scheme associated with the grammar. SDF3 supports the disambiguation of an ambiguous grammar by means of declarative disambiguation rules. In this section we describe disambiguation by means of associativity and priority rules. In the next section we describe lexical disambiguation rules.
Disambiguation by Associativity and Priority Rules. Many language reference manuals define the disambiguation of expression grammars by means of priority and associativity tables. SDF3 formalizes such tables as explicit associativity and priority rules over the productions of an ambiguous context-free grammar. While grammar formalisms such as YACC also define associativity and priority rules, these are defined in terms of low-level implementation details (e.g., choosing sides in a shift/reduce conflict). The SDF3 associativity and priority rules have a direct formal semantics that is defined independently of a particular implementation [53]. The semantics is defined by means of subtree exclusion, that is, disambiguation rules are interpreted by rejecting trees that match one of the subtree exclusion patterns generated by a set of disambiguation rules. If a set of rules is sound and complete (there is a rule for each pair of productions), then disambiguation is sound and complete, i.e., it assigns a single tree to each sentence. (Read the fine print in [53].)

A priority rule A.C1 > A.C2 defines that (the production identified by the constructor) A.C1 has higher priority than (the production identified by the constructor) A.C2. This means that (a tree with root constructor) A.C2 cannot occur as a left, respectively right, recursive child of (a tree node with constructor) A.C1 if A.C2 is right, respectively left, recursive. A left associativity rule A.C1 left A.C2 defines that A.C1 and A.C2 are mutually left associative. This means that A.C2 cannot occur as a right recursive child of A.C1. (Right associativity is defined symmetrically.) Figure 1 defines the disambiguation rules for the example language.
According to these rules, the expression a - b + c == d should be parsed as ((a - b) + c) == d (since Sub and Add are left associative and have higher priority than Eq), and the expression match a with | b -> c + d should be parsed as match a with | b -> (c + d) (since Add has higher priority than Match).

The semantics of priority shown above is particularly relevant for prefix and postfix operators. A prefix operator (such as Match) may occur as the right child of an infix operator (such as Sub), even if it has lower priority, since such a combination of productions is not ambiguous. For example, the expression a - match b with | c -> d has only one abstract syntax tree. This semantics is safe, i.e., it does not reject any sentences that are in the language of the underlying context-free grammar. However, with the rules defined so far the semantics is not complete. As an example, consider two of the trees for the sentence a - match b with | c -> d + e in Fig. 4. Both these trees are conflict free according to the rules above: a Match may occur as the right-hand child of a Sub, and Sub and Add are left associative. The problem is that the conflict between Match as a left child of Add is hidden by the Sub tree. To capture such deep conflicts, the priority rule involving Add, Sub, and Match is amended to require that a right-most occurrence of a production A.C2 in the left recursive argument of a production A.C1 is rejected if A.C1 > A.C2 (and symmetrically for left-most occurrences in right recursive arguments). Thus, the priority rules of Fig. 1 select the left tree of Fig. 4.
Multi-purpose Syntax Definition with SDF3
L. E. de Souza Amorim and E. Visser

  Sub(Var("a"), Match(Var("b"), [Case(PVar("c"), Add(Var("d"), Var("e")))]))

  Add(Sub(Var("a"), Match(Var("b"), [Case(PVar("c"), Var("d"))])), Var("e"))
Fig. 4. Concrete syntax trees for the expression a - match b with | c -> d + e.
The longest-match attribute of the Match production is a shorthand for deep priority conflicts for lists. The Match construct gives rise to nested pattern match clauses such as the following:

match a with
  | d -> match e with
           | f -> g + h
           | i -> j + k
The longest-match attribute disambiguates such nested lists by associating trailing cases with the nearest match statement. Afroozeh et al. [1] showed that the semantics of disambiguation in SDF2 [7,61] was not safe. They define a safe interpretation of disambiguation rules by means of a grammar transformation. Amorim and Visser [53] define a direct semantics of associativity and priority rules by means of subtree exclusion, including prefix and postfix operators, mixfix productions, and indirect recursion. They show that the semantics is safe and complete for safe and complete sets of disambiguation rules for expression grammars without overlap. They also discuss the influence of overlap on disambiguation of expression grammars. For example, in Fig. 1, the productions Min, Sub, and App have overlap. The expression x - y can be parsed as App(Var("x"), Min(Var("y"))) or as Sub(Var("x"), Var("y")). This is not an ambiguity that can be solved by means of safe associativity and priority rules. The indexed priority rule Exp.App <1> > Exp.Min solves this ambiguity by forbidding the occurrence of Min as the second argument of App. (The index is 0 based.) Amorim et al. show that deep conflicts are not just a theoretical artifact of grammars, but actually occur in real programs [52]. One possible implementation of disambiguation with deep conflicts is by means of data-dependent parsers. Amorim et al. show that such parsers can have near-zero overhead when compared to disambiguation by grammar rewriting [55]. Parenthesization. In the previous section we saw that parentheses, i.e. productions annotated with the bracket attribute, are omitted when transforming a
concrete syntax tree to an abstract syntax tree (Fig. 3). Furthermore, by using declarative disambiguation, the typing scheme for abstract syntax trees allows arbitrary combinations of constructors in well-formed abstract syntax trees. This is convenient, since it allows transformations on trees to create new trees without regard for disambiguation rules. Before formatting such trees (Sect. 5), parentheses need to be inserted in order to prevent creating a sentence that has a different (abstract) syntax tree when parsed. That is, we want the equation parse(format(t)) = t to hold for any well-formed abstract syntax tree. The advantage of declarative disambiguation rules is that they can be interpreted not only to disambiguate during parsing, but also to detect trees that need disambiguation. For example, without parenthesization the tree Add(Eq(Var("a"), Var("b")), Var("c")) would be formatted as a == b + c, which would be parsed as Add(Var("a"), Eq(Var("b"), Var("c"))). Parenthesization recognizes that the first tree has a priority conflict between Add and Eq and inserts parentheses around the equality expression, so that the tree is formatted as (a == b) + c, which has the original tree as its abstract syntax tree. The implementation of SDF3 in Spoofax supports parenthesization following the disambiguation semantics of Amorim and Visser [53].
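Conflict-driven parenthesization can be sketched as a formatter that wraps a subtree exactly when leaving it bare would violate a priority rule. This is a minimal sketch assuming the rules of Fig. 1 (Add and Sub bind tighter than Eq); associativity conflicts are omitted for brevity.

```python
# Assumed priority rules: Add > Eq, Sub > Eq.
HIGHER = {("Add", "Eq"), ("Sub", "Eq")}
OP = {"Add": "+", "Sub": "-", "Eq": "=="}

def fmt(t, parent=None):
    if isinstance(t, str):
        return t
    cons, l, r = t
    s = f"{fmt(l, cons)} {OP[cons]} {fmt(r, cons)}"
    if parent is not None and (parent, cons) in HIGHER:
        s = f"({s})"                     # priority conflict: parenthesize
    return s

assert fmt(("Add", ("Eq", "a", "b"), "c")) == "(a == b) + c"
assert fmt(("Eq", ("Add", "a", "b"), "c")) == "a + b == c"
```

The second assertion shows that parentheses are only inserted where a conflict would otherwise arise, so parse(format(t)) = t without over-parenthesizing.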
4 Lexical Syntax
The lexical syntax of a language concerns the lexemes, words, or tokens of the language and typically includes identifiers, numbers, strings, keywords, operators, and delimiters. In traditional parsers and parser generators, parsing is divided into a lexical analysis (or scanning) phase in which the characters of a program are merged into tokens, and a context-free analysis phase in which a stream of tokens is parsed into phrase structure. Inspired by Salomon and Cormack [45], SDF2 adopted character-level grammars using the single formalism of context-free productions to define lexical and context-free syntax, supported by scannerless parsing [60]. SDF3 has inherited this feature. Character-Level Grammars. In character-level grammars, the terminals of the grammar are individual characters. In SDF3, characters are indicated by means of character classes. For example, the definition of identifiers uses the character class [a-zA-Z0-9], comprising lower and upper case letters and digits. Tokens are defined using the same productions that we use for context-free phrase structure, except that it is not required to associate a constructor with a lexical production. For example, the syntax of identifiers is defined using the production ID = [a-zA-Z] [a-zA-Z0-9]*, i.e. an identifier starts with a letter, which is followed by zero or more letters or digits. In a production such as Exp.Let = "let" Bind* "in" Exp it appears that "let" and "in" are terminals. However, SDF3 defines such literals by means of a lexical production in which the literal acts as a sort, which is defined in terms of character classes. Thus, the use of the literal "let" implies a production "let" = [l] [e] [t]. SDF3 also supports case-insensitive literals; in this case, the literal 'let' implies a production 'let' = [lL] [eE] [tT].
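The expansion of a literal into character-class productions is mechanical; a minimal sketch (function name and output format are illustrative):

```python
# Expand a literal into one character class per character, as SDF3
# does for "let" (case sensitive) and 'let' (case insensitive).
def literal_production(lit, case_insensitive=False):
    def cc(c):
        return f"[{c.lower()}{c.upper()}]" if case_insensitive else f"[{c}]"
    quote = "'" if case_insensitive else '"'
    return f"{quote}{lit}{quote} = " + " ".join(cc(c) for c in lit)

assert literal_production("let") == '"let" = [l] [e] [t]'
assert literal_production("let", case_insensitive=True) == "'let' = [lL] [eE] [tT]"
```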
Lexical Disambiguation. Just like phrase structure, lexical syntax may be ambiguous, requiring lexical disambiguation. The root cause of lexical ambiguity is overlap between lexical categories. For example, an identifier ab overlaps with the prefix of a longer identifier abc, and let may be an identifier or a keyword. The two common lexical disambiguation policies are (1) prefer longest match, and (2) prefer one category over another. In scanner specification languages such as LEX [43] these policies are realized by (1) preferring the longest match and by (2) ordering the definitions of lexical rules and selecting the first rule that applies. This works well when recognizing tokens independently of the context in which they appear. In a character-level grammar that approach does not work, since tokenization may depend on the phrase structure context (see also the discussion on language composition below), and, due to the modularity of a syntax definition, there is no canonical order of lexical rules. Thus, lexical disambiguation is defined analogously to subtree exclusion for phrase structure in the previous section, by defining what is not allowed using follow restrictions and reject productions. We discuss an example of each. The expression ab can be a single identifier or the application of a to b, i.e. App(Var("a"), Var("b")). This ambiguity is solved by means of the follow restriction ID -/- [a-zA-Z0-9], which states that an identifier cannot be directly followed by a letter or digit. The expression if x then y can be an if-then expression, i.e. IfT(Var("x"), Var("y")), or it can be the application of the variable if to some other variables, i.e. App(App(App(Var("if"), Var("x")), Var("then")), Var("y")).
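A follow restriction can be read as a predicate on a candidate token boundary; a minimal sketch of ID -/- [a-zA-Z0-9] (helper names are assumptions):

```python
import re
import string

ID = re.compile(r"[a-zA-Z][a-zA-Z0-9]*")
RESTRICTED = set(string.ascii_letters + string.digits)   # ID -/- [a-zA-Z0-9]

def valid_id(text, start, end):
    """A candidate ID token is valid only if the character directly
    following it is not in the restricted class."""
    if not ID.fullmatch(text[start:end]):
        return False
    return end >= len(text) or text[end] not in RESTRICTED

assert valid_id("ab", 0, 2)        # ab is one identifier
assert not valid_id("ab", 0, 1)    # a may not be directly followed by b
```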
This ambiguity is solved by means of the reject productions ID = "if" {reject} and ID = "else" {reject}, which forbid the use of the keywords if and else as identifiers. Layout. Another aspect of lexical syntax is the whitespace characters and comments that can appear between tokens, which are known as ‘layout’ in SDF. Layout is defined by a lexical definition like that of any other lexical category. Module lex in Fig. 1 defines layout as whitespace, multi-line comments (delimited by /* and */), and single-line comments (starting with //). Multi-line comments can be nested, which enables commenting out code that itself contains comments; this is not supported by scanner generators based on regular expressions. Note the use of follow restrictions to ensure that an asterisk within a multi-line comment is not followed by a slash (which should be parsed as the end of the comment), and to characterize end-of-file as the empty string that is not followed by any character (which is in turn defined as the complement of the empty character class). What is special about layout is that it can appear between any two ordinary tokens. In a scanner-based approach layout tokens are simply skipped by the scanner, leaving only the tokens that matter for the parser. A character-level grammar needs to be explicit about where layout can appear. This would result in boilerplate code, as illustrated by the following explicit version of the Fun production:
syntax
  Exp-CF.Var = ID-CF
  Exp-CF.Add = Exp-CF LAYOUT?-CF "+" LAYOUT?-CF Exp-CF {left}
  ID-CF      = ID-LEX
  ID-LEX     = [\65-\90\97-\122] [\48-\57\65-\90\97-\122]*-LEX
  "+"        = [\43]
  LAYOUT?-CF =
  LAYOUT?-CF = LAYOUT-CF
  LAYOUT-CF  = LAYOUT-CF LAYOUT-CF {left}
  LAYOUT-CF  = LAYOUT-LEX
  LAYOUT-LEX = [\9-\10\13\32]
restrictions
  LAYOUT?-CF -/- [\9-\10\13\32]
  ID-LEX     -/- [\48-\57\65-\90\97-\122]
Fig. 5. Normalized syntax and restrictions for a selection of productions from Fig. 1.
Exp.Fun = "fun" LAYOUT? ID* LAYOUT? "->" LAYOUT? Exp
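The normalization that spares the grammar writer this boilerplate can be sketched as a function over production bodies; a minimal sketch (helper name assumed) producing the shapes shown in Fig. 5:

```python
# Inject optional layout between the symbols of a context-free
# production and mark sorts with the -CF suffix; quoted literals
# are left as-is, since they get their own character-level productions.
def inject_layout(symbols):
    out = []
    for i, sym in enumerate(symbols):
        if i > 0:
            out.append("LAYOUT?-CF")          # optional layout between symbols
        out.append(sym if sym.startswith('"') else sym + "-CF")
    return out

assert inject_layout(["Exp", '"+"', "Exp"]) == \
    ["Exp-CF", "LAYOUT?-CF", '"+"', "LAYOUT?-CF", "Exp-CF"]
```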
To avoid such boilerplate, the SDF3 compiler applies a transformation to productions in context-free syntax sections in order to inject optional layout [61]. Figure 5 shows the result of that normalization for a small selection of productions from Fig. 1. Note that in lexical productions (such as that for ID-LEX) no layout is injected, since the characters of tokens should not be separated by layout. Note also the use of -LEX and -CF suffixes on sorts to distinguish lexical sorts from context-free sorts. (This transformation is currently applied to the entire grammar, which may hinder grammar composition between modules specifying different layout.) Layout-Sensitive Syntax. In Sect. 3 we showed how associativity and priority rules can be used to disambiguate an ambiguous grammar. For example, we saw how longest match for Match ensures that a match case is always associated with the nearest match. Similarly, Fig. 1 disambiguates the dangling-else ambiguity between IfT and IfE such that an else branch is always associated with the closest if. An alternative approach to disambiguation is to take into account the layout of a program. For that purpose, SDF3 supports the use of layout constraints, which pose requirements on the two-dimensional shape of programs [17,54]. We illustrate layout constraints with layout-sensitive disambiguations of the Match and IfE productions in Figs. 6 and 7. The layout constraints in Fig. 6 require that the if and else keywords of the IfE production are aligned. The examples in the figure show how the else branch can be associated with either if by choosing the layout. In addition, the indent constraints require that the conditions and branches of the IfT and IfE constructs appear to the right of the if and else keywords. Figure 7 disambiguates the association of the match cases with a match by requiring that the
Exp.IfE = "if" exp1:Exp "then" exp2:Exp "else" exp3:Exp {
  layout(align "if" "else" &&
         indent "if" "then" && indent "if" exp1 &&
         indent "if" exp2 && indent "else" exp3)
}
Exp.IfT = "if" exp1:Exp "then" exp2:Exp {
  layout(indent "if" "then" && indent "if" exp1 && indent "if" exp2)
}

if a then
  if b then
    c
  else
    d

IfT( Var("a")
   , IfE(Var("b"), Var("c"), Var("d")) )

if a then
  if b then
    c
else
  d

IfE( Var("a")
   , IfT(Var("b"), Var("c"))
   , Var("d") )
Fig. 6. Layout-sensitive disambiguation of dangling-else.
Exp.Match = "match" Exp "with" cases:Case+ {
  layout(indent "match" "with" &&
         indent "match" exp &&
         align-list cases)
}

match a with
  | d -> match e with
           | f -> g
           | i -> j

Match( Var("a")
     , [ Case( PVar("d")
             , Match( Var("e")
                    , [ Case(PVar("f"), Var("g"))
                      , Case(PVar("i"), Var("j")) ] ) ) ] )

match a with
  | d -> match e with
           | f -> g
  | i -> j

Match( Var("a")
     , [ Case( PVar("d")
             , Match( Var("e")
                    , [ Case(PVar("f"), Var("g")) ] ) )
       , Case(PVar("i"), Var("j")) ] )
Fig. 7. Layout-sensitive disambiguation of longest match for nested match cases.
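The align and indent constraints can be read as predicates over (line, column) token positions; a minimal sketch, where the positions and the simplified indent rule (same line, or strictly to the right) are assumptions:

```python
def align(a, b):
    """Two tokens are aligned when they start in the same column."""
    return a[1] == b[1]

def indent(a, b):
    """b is indented relative to a: same line, or further right."""
    return b[0] == a[0] or b[1] > a[1]

# In the dangling-else example, an else in column 0 aligns with the
# outer if, while an else in column 2 would align with the inner if:
outer_if, inner_if, else_kw = (1, 0), (2, 2), (4, 0)
assert align(outer_if, else_kw) and not align(inner_if, else_kw)
assert indent(outer_if, inner_if)
```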
Fig. 8. Syntax-aware editor for the fun-query language with syntax highlighting, parse error recovery, error highlighting, and syntactic completion.
cases are aligned. Thus, one can obtain the non-longest match (the second example) without using parentheses. Syntax Highlighting. The Spoofax language workbench [38,64] generates a syntax-aware editor from a syntax definition. Based on the lexical syntax, it derives syntax highlighting for programs by assigning colors to the tokens in the syntax tree, as illustrated in Fig. 8. The default coloring scheme assigns colors to lexical categories such as keywords, identifiers, numbers, and strings. The coloring scheme can be adjusted in a configuration file by associating colors with sorts and constructors. Language Composition. SDF3 supports a simple module mechanism, which allows large syntax definitions to be divided into collections of smaller modules and allows the definition of libraries of reusable definitions. For example, the lex module provides a collection of common lexical syntax definitions. A module may extend the definition of syntactic categories of another module. This can be used, for example, to organize the syntax definition for a language as a collection of components (such as variables, functions, booleans, numbers) that each introduce constructs for a common set of syntactic categories (such as types and expressions). Another application of the module mechanism is to compose the syntax definitions of different languages into a composite language. For example, Fig. 9 defines a tiny query language in module query and its composition with the fun language of Fig. 1. The composition introduces the use of a query as an expression, and a quoted expression as a query identifier. The languages have a different lexical syntax, i.e. the keywords of the fun language are not reserved in the query language, and vice versa. Thus, from can be used as a variable in a fun expression, while it is a keyword in a query (see Fig. 8).
Language composition with SDF2/3 has been used for the embedding of domain-specific languages [12], for the embedding of query and scripting languages [9], and for the organization of composite languages such as AspectJ [11] and WebDSL [27,62]. A consequence of merging productions for sorts with the same name and of injecting layout between the symbols of a production is that the layout of composed languages is unified. It is future work to preserve the layout of composed languages.

module query
sorts Query
lexical sorts QID
lexical syntax
  QID = [a-zA-Z0-9]+
lexical restrictions
  QID -/- [a-zA-Z0-9]
context-free syntax
  Query.Select = < select from where >
  Cond.Eq =
template options
  QID = keyword {reject}
  keyword -/- [a-zA-Z0-9]

module fun-query
imports fun query
context-free syntax
  Exp.Query = Query
  QID.Exp = [~[Exp]]
  ID = [select] {reject}

Fig. 9. Composition of languages with different lexical syntax.
5 Formatting
Formatting is the process of mapping abstract syntax trees to text. This can be used to improve the layout of a manually written program, or to turn a generated or transformed abstract syntax tree into a program text. Formatting is preceded by parenthesization, which inserts parentheses such that parsing the formatted text preserves the tree structure (see Sect. 3). Template Productions. Formatting comes in two levels. The basic level of formatting, also known as ugly-printing, is concerned with inserting the ‘irrelevant’ notational details that were removed in the translation to abstract syntax. After ugly-printing, parsing the generated text should produce the original abstract syntax tree. This translation can be obtained from a grammar mechanically. For example, the Stratego/XT transformation tool suite featured a ‘pretty-print’ table generator [35] that formalized for each constructor a mapping to these notational details. The second level of formatting, also known as pretty-printing, is concerned with producing white space to make the generated program text readable. The Box language [8,34] provides abstractions for horizontal and vertical composition, and for horizontal (e.g. indentation) and vertical (line breaks) spacing. This is a useful intermediate representation for formatting, which allows the pretty-printer writer to abstract from an actual pretty-print algorithm. (Libraries for pretty-printing are built on the same principle [29].) Still, a mapping from abstract syntax trees to Box expressions requires human judgement and cannot be derived mechanically from a grammar. The pretty-print table generator
mentioned above featured heuristics for associating Box expressions with language constructs. However, in many cases, it was necessary to edit the table to produce useful results, creating a bidirectional update problem to reflect changes to the grammar. SDF3 solves this problem by means of template productions, originally motivated to support syntactic completion (see below) [63]. (Template productions are a signature feature of SDF3, as they changed the syntax of productions from defined non-terminal on the right in SDF and SDF2, to defined non-terminal on the left, and the template quotes have a distinct influence on the typography of syntax definitions.) A regular context-free grammar production (Sect. 2) such as

Exp.IfE = "if" Exp "then" Exp "else" Exp
combines sorts and literals. Sorts are identifiers referring to other productions and become the sub-terms of an abstract syntax tree node. Literals are quoted strings and are removed in the mapping to abstract syntax, needing to be restored during pretty-printing. Sorts and literals are implicitly separated by layout as discussed in Sect. 4. In a template production the usual quotation is inverted. Consider the template version of the IfE production in Fig. 10:

Exp.IfE = <
  if <Exp> then
    <Exp>
  else
    <Exp>
>

Fig. 10. Template production

The outer quotes (< and >) quote a literal piece of text. The inner quotes (e.g. <Exp>) are escapes to sorts. A template not only captures literals and sorts, but also captures a two-dimensional shape. For the purposes of parsing this shape is ignored. That is, whitespace between symbols is turned into optional layout, analogous to the transformation discussed in Sect. 4. (For the purpose of layout-sensitive parsing it would be interesting to interpret the layout in a template as layout constraints, but it is not easy to distinguish which layout should be enforced, and which layout is incidental.) For the purpose of pretty-printing, the two-dimensional shape is interpreted as horizontal and vertical composition and spacing. That is, newlines are interpreted as vertical space and spaces are interpreted as indentation (with respect to the first non-whitespace character of the template). The template in Fig. 11 shows how the spacing of list elements can be configured with whitespace in the separator:

Exp.Let = <
  let
  in
>

Fig. 11. Separator layout

Templates are translated to a transformation from abstract syntax terms to Box expressions. Thus, after every change to the grammar, the pretty-printer is automatically regenerated and up-to-date, without requiring a bidirectional update process. Plain productions with quoted literals can also be obtained automatically from template productions. The formatters derived from SDF3 templates have some limitations, which are partly due to (the interpretation of) the Box intermediate representation.
First, formatting is fairly rigid. It does not take into account the composition and size of expressions, but always formats a language construct in the same manner. Furthermore, it is not customizable with user preferences, as is customary in integrated development environments such as Eclipse. When formatting manually written programs to improve their layout, or when formatting a program after applying some transformation (e.g. a refactoring), it can be important to preserve the layout (comments and/or whitespace) of the original program. De Jonge and Visser [32] developed a layout-preserving formatting algorithm with heuristics for moving comment blocks. This algorithm is currently not integrated in the SDF3 tool suite. Completion. Formatting is also an issue when proposing and inserting syntactic completions in an editor. The first version of Spoofax [38] featured syntactic completion templates instructing the editor what to do on particular triggers, which redundantly specified syntactic patterns. Vollebregt et al. [63] introduced template productions with the goal of automatically generating completion templates and supporting a program completion workflow in the style of structured editors. Amorim et al. [51] generate placeholder syntax for all syntactic sorts in order to explicitly represent incomplete programs. Syntactic completion then becomes a matter of generating completion proposals for placeholders based on the productions of the grammar. The resulting editor behaves like a combination of text editor and structure editor, as illustrated in Fig. 8.
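Placeholder-based completion can be sketched as a lookup from a placeholder's sort to the productions of that sort; the production table below is an assumed toy fragment, not Spoofax's internal representation.

```python
# Toy production table: one template-like string per production.
PRODUCTIONS = {
    "Exp": ["<Exp> + <Exp>", "<Exp> == <Exp>",
            "if <Exp> then <Exp> else <Exp>", "<ID>"],
}

def proposals(placeholder_sort):
    """Completion proposals for a placeholder: one per production."""
    return PRODUCTIONS.get(placeholder_sort, [])

assert "if <Exp> then <Exp> else <Exp>" in proposals("Exp")
assert proposals("Unknown") == []
```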
6 Parsing
Finally, we discuss the parsing strategy of SDF3. Character-level grammars do not fit in restricted grammar classes such as LL or LR grammars; deciding which alternative to take may require an unbounded number of characters of lookahead [61]. Furthermore, only the full class of context-free grammars is closed under composition [28], i.e. the composition of two LL or LR grammars is not necessarily an LL or LR grammar. Thus, SDF3 uses a generalized parsing algorithm that can deal with the full class of context-free grammars. Lazy Parse Table Generation. The SDF3 compiler first transforms a modular syntax definition to a monolithic and normalized syntax definition, which makes layout and deep priority conflicts explicit in the grammar [53,61]. A static analysis checks whether all used sorts are defined and warns for missing associativity and priority rules. A parser generation algorithm is used to generate a shift/reduce parse table from the normalized grammar. The algorithm is based on SLR parse table generation [28] adapted to deal with shallow priority conflicts [59]. Follow restrictions are implemented by restricting the follow set of non-terminals in the parse table. Follow restrictions that are longer than one character are added as dynamic checks. The resulting table may contain shift/reduce conflicts.
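Implementing a one-character follow restriction by restricting follow sets can be sketched as set subtraction over the follow set consulted for reduce actions; the sets below are illustrative.

```python
import string

# Toy follow set for ID, and the restriction ID -/- [a-zA-Z0-9].
FOLLOW = {"ID": set("+-=() ") | set(string.ascii_lowercase)}
RESTRICTION = {"ID": set(string.ascii_letters + string.digits)}

def effective_follow(sort):
    """Follow set actually used for reduce actions after applying
    the follow restriction for the sort."""
    return FOLLOW[sort] - RESTRICTION.get(sort, set())

assert "a" not in effective_follow("ID")   # no reduce before a letter
assert "+" in effective_follow("ID")
```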
a + b == c + d

amb([ Add(Var("a"), amb([ Add(Eq(Var("b"), Var("c")), Var("d"))
                        , Eq(Var("b"), Add(Var("c"), Var("d"))) ]))
    , Add(amb([ Eq(Add(Var("a"), Var("b")), Var("c"))
              , Add(Var("a"), Eq(Var("b"), Var("c"))) ]), Var("d"))
    , Eq(Add(Var("a"), Var("b")), Add(Var("c"), Var("d")))
    ])
Fig. 12. Sentence and abstract syntax tree with (shared) ambiguities.
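The five alternatives packed into the amb nodes of Fig. 12 can be reproduced by exhaustively enumerating parses of the undisambiguated expression grammar; the following is a toy CYK-style enumerator with an assumed token representation, not SGLR itself.

```python
from functools import lru_cache

OPS = {"+": "Add", "==": "Eq"}

def parses(tokens):
    """All binary trees for an alternating operand/operator token list."""
    toks = tuple(tokens)

    @lru_cache(maxsize=None)
    def go(i, j):
        if j == i + 1:
            return [("Var", toks[i])]
        trees = []
        for k in range(i + 1, j - 1):
            if toks[k] in OPS:                 # split at each operator
                trees += [(OPS[toks[k]], l, r)
                          for l in go(i, k) for r in go(k + 1, j)]
        return trees

    return go(0, len(toks))

# Three operators over four operands give Catalan(3) = 5 trees,
# matching the five alternatives shared among the amb nodes:
assert len(parses(["a", "+", "b", "==", "c", "+", "d"])) == 5
```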
LR parse table generation is a non-local operation that requires the entire grammar, which means that separate compilation is not possible: if one module of the syntax definition is changed, the entire definition needs to be recompiled. This is a disadvantage for scenarios that depend on language extension [12,16]. Bravenboer and Visser developed a representation and algorithm for parse table composition that realized a form of separate compilation for syntax definitions [13]. However, the algorithm did not support cross-module priority declarations and was not adopted in practice. As a more pragmatic approach, Amorim et al. [52] adopted lazy parse table generation [26], which starts with an empty parse table and only generates those states that are needed at parse time. This ensures fast turnaround times during the development of syntax definitions. Scannerless Generalized LR Parsing with Error Recovery. The shift/reduce parse tables generated from SDF3 definitions are not deterministic, i.e. they may have shift/reduce conflicts due to proper ambiguities or the need for unbounded lookahead. To handle both these cases, SDF3 uses a Scannerless Generalized-LR (SGLR) parsing algorithm [60]. The GLR algorithm handles conflicts in the parse table by forking off separate parsers for each alternative of a conflict [44]. If the parser has encountered a genuine ambiguity, the parallel parsers will eventually end up in the same parse state, and the branches give rise to alternative parse trees. The result of parsing is a parse forest, a compact representation of all possible parse trees. A language engineer using SDF3 can inspect the ambiguities of a grammar by inspecting the (abstract) syntax trees with ambiguities, instead of inspecting shift/reduce conflicts. Figure 12 shows an abstract syntax tree with ambiguities for an expression in the example language using a syntax definition without disambiguation rules. Another reason for shift/reduce conflicts is the limited lookahead of the parser generator.
For example, consider parsing the expression a == b /* a comment */ + c. After reading the identifier b, the parser can reduce to create Eq(Var("a"), Var("b")), or it can shift, expecting to eventually parse some subexpression of Eq, i.e. resulting in a term of the form Eq(Var("a"), ?(Var("b"), ...)). This decision can only be made when parsing the + operator. But before the parser sees that operator, it first needs to process the comment. Forking the parser allows delaying the decision. Eventually only one of the parsers will survive and produce a tree without ambiguities.

A GLR parser becomes a scannerless parser by reading characters as tokens and handling lexical disambiguation such that invalid forks are prevented or killed as early as possible [60]. Follow restrictions are handled by means of a dynamic lookahead check on reductions. Reject productions are implemented by rejecting states that are reached with a reject production. That requires postponing the reduction from rejectable states until it is certain that no reject productions will appear.

The SGLR algorithm is extended to support parse error recovery and to produce a parse tree even if a program text contains syntactic errors [30,33,36]. This is important in interactive settings such as editors in an integrated development environment, in order to enable editor services such as syntax highlighting and type analysis for programs with errors, as arise during program development. Error recovery is realized by an extension of SDF3 with recovery productions, which are only used in case normal parsing fails. There are two main categories of recovery rules. Inspired by island grammars [31], so-called water productions turn normal tokens into layout, which allows skipping some tokens when they cannot be parsed otherwise. Productions such as ")" = {recover} allow the insertion of missing literals (or complete sorts). The SDF3 normalizer automatically generates a permissive grammar with recovery rules, but such rules can also be added manually. Error recovery is the basis for reporting syntax errors. Improving the localization and explanation of error messages is a topic for future work. An extension of SGLR to support incremental parsing based on the work of Wagner et al. [65] is under development [49].

module matching
language mpsd-sdf3
start symbol Exp

test match longest match [[
  match a with
    | b -> match c with
             | e -> f
             | g -> h
]] parse to [[
  match a with
    | b -> (match c with
              | e -> f
              | g -> h)
]]

Fig. 13. Testing longest match disambiguation of the match-with expression.

Testing.
Testing SDF3 syntax definitions is supported by the Spoofax Testing (SPT) language, a domain-specific language for testing various aspects of language definitions, including parsing [37]. An SPT test quotes a language fragment and specifies a test expectation. For testing syntax, the expectations are parse succeeds, parse fails, and parse to a specific term structure. Figure 13 illustrates the testing of disambiguation in SPT by specifying the disambiguated expression as parse result.
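An SPT-style "parse to" expectation can be mimicked outside Spoofax; a minimal sketch in which the parser is a stand-in (any function from text to term), not Spoofax's actual testing API:

```python
def parse_to(parser, fragment, expected):
    """An SPT-like expectation: the fragment must parse to the
    expected term."""
    return parser(fragment) == expected

# Stand-in parser mapping a trimmed fragment to a Var term:
toy_parser = lambda text: ("Var", text.strip())
assert parse_to(toy_parser, " x ", ("Var", "x"))
```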
7 Related Work
We have referred to previous and related work throughout this paper. The papers that we cited about particular aspects of SDF3 provide extensive discussions of related technical work, which is beyond the scope of this paper. Here we provide a couple of high-level pointers to related efforts. The design and implementation of SDF3 is motivated by its use in the Spoofax language workbench [38,64]. Erdweg et al. [18,19] give an overview of general concerns in the design and implementation of language workbenches. SDF3 is bootstrapped, i.e. the syntax of SDF3 has been defined in SDF3. Other significant applications of SDF3 are the NaBL2 [2] and Statix [3] languages for type system specification, the IceDust language for data modeling [20–22], and the FlowSpec language for data-flow analysis specification [50]. Many languages originally developed with SDF2 are being ported to SDF3, including the Stratego transformation language [10]. Several syntax definition languages share aims with SDF3, in particular regarding the support for language composition. The syntax definition sublanguage of the RASCAL meta-programming language [41] has a common root in SDF2. RASCAL has adopted GLL parsing [48] instead of GLR parsing. The syntax definition language of the Silver [66] attribute grammar system takes a different approach to language composition. Instead of relying on scannerless generalized parsing, it relies on context-aware scanners and restrictions on grammars in order to guarantee the absence of ambiguities in composed grammars [46]. Based on these restrictions it can support parse table composition for language composition [47]. The Eco editor [15] supports language composition using language boxes, where the editor keeps track of transitions between languages, avoiding the composition of grammars.
8 Conclusion
In this paper we have presented SDF3, a mature language for the definition of syntax. The design and implementation of SDF3 are based on many years of research and engineering, fed by the experience of numerous researchers, developers, and students. The multi-purpose interpretation of SDF3 specifications allows quick prototyping of language designs and enables testing these designs in a full-fledged environment with a syntax-aware editor. Acknowledgment. We would like to thank our numerous co-authors (see the References section) for their contributions to the SDF family of languages. We would like to thank Peter Mosses for comments on this paper.
References

1. Afroozeh, A., van den Brand, M., Johnstone, A., Scott, E., Vinju, J.: Safe specification of operator precedence rules. In: Erwig, M., Paige, R.F., Van Wyk, E. (eds.) SLE 2013. LNCS, vol. 8225, pp. 137–156. Springer, Cham (2013). https://doi.org/10.1007/978-3-319-02654-1_8
2. van Antwerpen, H., Néron, P., Tolmach, A.P., Visser, E., Wachsmuth, G.: A constraint language for static semantic analysis based on scope graphs. In: Erwig, M., Rompf, T. (eds.) Proceedings of the 2016 ACM SIGPLAN Workshop on Partial Evaluation and Program Manipulation, PEPM 2016, St. Petersburg, FL, USA, January 20–22, 2016, pp. 49–60. ACM (2016). https://doi.org/10.1145/2847538.2847543
3. van Antwerpen, H., Poulsen, C.B., Rouvoet, A., Visser, E.: Scopes as types. Proc. ACM Program. Lang. 2(OOPSLA), 1–30 (2018). https://doi.org/10.1145/3276484
4. Backus, J.W.: The syntax and semantics of the proposed international algebraic language of the Zurich ACM-GAMM conference. In: IFIP Congress, pp. 125–131 (1959)
5. van den Brand, M.G.J., et al.: The ASF+SDF meta-environment: a component-based language development environment. In: Wilhelm, R. (ed.) Compiler Construction, 10th International Conference, CC 2001, Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2001, Genova, Italy, April 2–6, 2001, Proceedings. Lecture Notes in Computer Science, vol. 2027, pp. 365–370. Springer (2001). https://doi.org/10.1016/S1571-0661(04)80917-4
6. van den Brand, M.G.J., de Jong, H.A., Klint, P., Olivier, P.A.: Efficient annotated terms. Softw.: Pract. Exp. 30(3), 259–291 (2000)
7. van den Brand, M.G.J., Scheerder, J., Vinju, J.J., Visser, E.: Disambiguation filters for scannerless generalized LR parsers. In: Horspool, R.N. (ed.) CC 2002. LNCS, vol. 2304, pp. 143–158. Springer, Heidelberg (2002). https://doi.org/10.1007/3-540-45937-5_12
8. van den Brand, M.G.J., Visser, E.: Generation of formatters for context-free languages. ACM Trans. Softw. Eng. Methodol. 5(1), 1–41 (1996). https://doi.org/10.1145/226155.226156
9. Bravenboer, M., Dolstra, E., Visser, E.: Preventing injection attacks with syntax embeddings. Sci. Comput. Program. 75(7), 473–495 (2010). https://doi.org/10.1016/j.scico.2009.05.004
10. Bravenboer, M., Kalleberg, K.T., Vermaas, R., Visser, E.: Stratego/XT 0.17. A language and toolset for program transformation. Sci. Comput. Program. 72(1–2), 52–70 (2008). https://doi.org/10.1016/j.scico.2007.11.003
11. Bravenboer, M., Tanter, É., Visser, E.: Declarative, formal, and extensible syntax definition for AspectJ. In: Tarr, P.L., Cook, W.R. (eds.) Proceedings of the 21st Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications, OOPSLA 2006, pp. 209–228. ACM (2006). https://doi.org/10.1145/1167473.1167491
12. Bravenboer, M., Visser, E.: Concrete syntax for objects: domain-specific language embedding and assimilation without restrictions. In: Vlissides, J.M., Schmidt, D.C. (eds.) Proceedings of the 19th Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications, OOPSLA 2004, pp. 365–383. ACM, Vancouver (2004). https://doi.org/10.1145/1028976.1029007
13. Bravenboer, M., Visser, E.: Parse table composition. In: Gašević, D., Lämmel, R., Van Wyk, E. (eds.) SLE 2008. LNCS, vol. 5452, pp. 74–94. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-00434-6_6
14. Chomsky, N.: Three models for the description of language. IRE Trans. Inf. Theory 2(3), 113–124 (1956). https://doi.org/10.1109/TIT.1956.1056813
15. Diekmann, L., Tratt, L.: Eco: a language composition editor. In: Combemale, B., Pearce, D.J., Barais, O., Vinju, J.J. (eds.) SLE 2014. LNCS, vol. 8706, pp. 82–101. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-11245-9_5
20
L. E. de Souza Amorim and E. Visser
16. Erdweg, S., Rendel, T., K¨ astner, C., Ostermann, K.: Sugarj: library-based syntactic language extensibility. In: Lopes, C.V., Fisher, K. (eds.) Proceedings of the 26th Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications, OOPSLA 2011, part of SPLASH 2011, Portland, OR, USA, October 22–27, 2011, pp. 391–406. ACM (2011). https://doi.org/10. 1145/2048066.2048099 17. Erdweg, S., Rendel, T., K¨ astner, C., Ostermann, K.: Layout-sensitive generalized parsing. In: Czarnecki, K., Hedin, G. (eds.) SLE 2012. LNCS, vol. 7745, pp. 244– 263. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-36089-3 14 18. Erdweg, S., et al.: The state of the art in language workbenches. In: Erwig, M., Paige, R.F., Van Wyk, E. (eds.) SLE 2013. LNCS, vol. 8225, pp. 197–217. Springer, Cham (2013). https://doi.org/10.1007/978-3-319-02654-1 11 19. Erdweg, S., et al.: Evaluating and comparing language workbenches: existing results and benchmarks for the future. Comput. Lang. Syst. Struct. 44, 24–47 (2015). https://doi.org/10.1016/j.cl.2015.08.007 20. Harkes, D., Groenewegen, D.M., Visser, E.: IceDust: incremental and eventual computation of derived values in persistent object graphs. In: Krishnamurthi, S., Lerner, B.S. (eds.) 30th European Conference on Object-Oriented Programming, ECOOP 2016, July 18–22, 2016, Rome, Italy. LIPIcs, vol. 56. Schloss Dagstuhl Leibniz-Zentrum fuer Informatik (2016). https://doi.org/10.4230/LIPIcs.ECOOP. 2016.11 21. Harkes, D., Visser, E.: Unifying and generalizing relations in role-based data modeling and navigation. In: Combemale, B., Pearce, D.J., Barais, O., Vinju, J.J. (eds.) SLE 2014. LNCS, vol. 8706, pp. 241–260. Springer, Cham (2014). https://doi.org/ 10.1007/978-3-319-11245-9 14 22. Harkes, D., Visser, E.: IceDust 2: derived bidirectional relations and calculation strategy composition. In: M¨ uller, P. (ed.) 
31st European Conference on ObjectOriented Programming, ECOOP 2017, June 19–23, 2017, Barcelona, Spain. LIPIcs, vol. 74. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2017). https://doi. org/10.4230/LIPIcs.ECOOP.2017.14 23. Heering, J., Hendriks, P.R.H., Klint, P., Rekers, J.: The syntax definition formalism SDF - reference manual. SIGPLAN Not. 24(11), 43–75 (1989). https://doi.org/10. 1145/71605.71607 24. Heering, J., Klint, P., Rekers, J.: Incremental generation of parsers. IEEE Trans. Softw. Eng. 16(12), 1344–1351 (1990) 25. Heering, J., Klint, P., Rekers, J.: Incremental generation of lexical scanners. ACM Trans. Program. Lang. Syst. 14(4), 490–520 (1992). https://doi.org/10.1145/ 133233.133240 26. Heering, J., Klint, P., Rekers, J.: Lazy and incremental program generation. ACM Trans. Program. Lang. Syst. 16(3), 1010–1023 (1994). https://doi.org/10.1145/ 177492.177750 27. Hemel, Z., Groenewegen, D.M., Kats, L.C.L., Visser, E.: Static consistency checking of web applications with WebDSL. J. Symb. Comput. 46(2), 150–182 (2011). https://doi.org/10.1016/j.jsc.2010.08.006 28. Hopcroft, J.E., Motwani, R., Ullman, J.D.: Introduction to Automata Theory, Languages, and Computation, 3rd edn. Addison-Wesley, Boston (2006) 29. Hughes, J.: The design of a pretty-printing library. In: Jeuring, J., Meijer, E. (eds.) AFP 1995. LNCS, vol. 925, pp. 53–96. Springer, Heidelberg (1995). https://doi. org/10.1007/3-540-59451-5 3
Multi-purpose Syntax Definition with SDF3
21
30. de Jonge, M., Kats, L.C.L., Visser, E., S¨ oderberg, E.: Natural and flexible error recovery for generated modular language environments. ACM Trans. Program. Lang. Syst. 34(4), 15 (2012). https://doi.org/10.1145/2400676.2400678 31. de Jonge, M., Nilsson-Nyman, E., Kats, L.C.L., Visser, E.: Natural and flexible error recovery for generated parsers. In: van den Brand, M., Gaˇsevi´c, D., Gray, J. (eds.) SLE 2009. LNCS, vol. 5969, pp. 204–223. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-12107-4 16 32. de Jonge, M., Visser, E.: An algorithm for layout preservation in refactoring transformations. In: Sloane, A., Aßmann, U. (eds.) SLE 2011. LNCS, vol. 6940, pp. 40–59. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-28830-2 3 33. de Jonge, M., Visser, E.: Automated evaluation of syntax error recovery. In: Goedicke, M., Menzies, T., Saeki, M. (eds.) IEEE/ACM International Conference on Automated Software Engineering, ASE 2012, Essen, Germany, September 3–7, 2012, pp. 322–325. ACM (2012). https://doi.org/10.1145/2351676.2351736 34. de Jonge, M.: A pretty-printer for every occasion. In: The International Symposium on Constructing Software Engineering Tools (CoSET2000). University of Wollongong, Australia (2000) 35. de Jonge, M.: Pretty-printing for software reengineering. In: 18th International Conference on Software Maintenance (ICSM 2002), Maintaining Distributed Heterogeneous Systems, 3–6 October, 2002, Montreal, Quebec, Canada, pp. 550–559. IEEE Computer Society (2002) 36. Kats, L.C.L., de Jonge, M., Nilsson-Nyman, E., Visser, E.: Providing rapid feedback in generated modular language environments: adding error recovery to scannerless generalized-LR parsing. In: Arora, S., Leavens, G.T. (eds.) Proceedings of the 24th Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications, OOPSLA 2009, pp. 445–464. ACM (2009). https://doi.org/10.1145/1640089.1640122 37. 
Kats, L.C.L., Vermaas, R., Visser, E.: Integrated language definition testing: enabling test-driven language development. In: Lopes, C.V., Fisher, K. (eds.) Proceedings of the 26th Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications, OOPSLA 2011, Part of SPLASH 2011, Portland, OR, USA, October 22–27, 2011, pp. 139–154. ACM (2011). https://doi.org/10.1145/2048066.2048080 38. Kats, L.C.L., Visser, E.: The Spoofax language workbench: rules for declarative specification of languages and IDEs. In: Cook, W.R., Clarke, S., Rinard, M.C. (eds.) Proceedings of the 25th Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications, OOPSLA 2010, pp. 444–463. ACM, Reno/Tahoe (2010). https://doi.org/10.1145/1869459.1869497 39. Kats, L.C.L., Visser, E., Wachsmuth, G.: Pure and declarative syntax definition: paradise lost and regained. In: Cook, W.R., Clarke, S., Rinard, M.C. (eds.) Proceedings of the 25th Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications, OOPSLA 2010, pp. 918–932. ACM, Reno/Tahoe (2010). https://doi.org/10.1145/1869459.1869535 40. Klint, P.: A meta-environment for generating programming environments. ACM Trans. Softw. Eng. Methodol. 2(2), 176–201 (1993). https://doi.org/10.1145/ 151257.151260 41. Klint, P., van der Storm, T., Vinju, J.J.: RASCAL: a domain specific language for source code analysis and manipulation. In: Ninth IEEE International Working Conference on Source Code Analysis and Manipulation, SCAM 2009, Edmonton, Alberta, Canada, September 20–21, 2009, pp. 168–177. IEEE Computer Society (2009). https://doi.org/10.1109/SCAM.2009.28
22
L. E. de Souza Amorim and E. Visser
42. Leroy, X., Doligez, D., Frisch, A., Garrigue, J., R´emy, D., Vouillon, J.: The OCaml system release 4.10 (2020) 43. Lesk, M.E.: Lex–a lexical analyzer generator. Tech. rep. CS-39. AT&T Bell Laboratories, Murray Hill, N.J. (1975) 44. Rekers, J.: Parser generation for interactive environments. Ph.D. thesis, University of Amsterdam, Amsterdam, The Netherlands (January 1992) 45. Salomon, D.J., Cormack, G.V.: Scannerless NSLR(1) parsing of programming languages. In: PLDI, pp. 170–178 (1989) 46. Schwerdfeger, A., Wyk, E.V.: Verifiable composition of deterministic grammars. In: Hind, M., Diwan, A. (eds.) Proceedings of the 2009 ACM SIGPLAN Conference on Programming Language Design and Implementation, PLDI 2009, Dublin, Ireland, June 15–21, 2009, pp. 199–210. ACM (2009). https://doi.org/10.1145/1542476. 1542499 47. Schwerdfeger, A., Van Wyk, E.: Verifiable parse table composition for deterministic parsing. In: van den Brand, M., Gaˇsevi´c, D., Gray, J. (eds.) SLE 2009. LNCS, vol. 5969, pp. 184–203. Springer, Heidelberg (2010). https://doi.org/10.1007/9783-642-12107-4 15 48. Scott, E., Johnstone, A.: GLL parsing. Electron. Notes Theor. Comput. Sci. 253(7), 177–189 (2010). https://doi.org/10.1016/j.entcs.2010.08.041 49. Sijm, M.P.: Incremental scannerless generalized LR parsing. In: Smaragdakis, Y. (ed.) Proceedings Companion of the 2019 ACM SIGPLAN International Conference on Systems, Programming, Languages, and Applications: Software for Humanity, SPLASH 2019, Athens, Greece, October 20–25, 2019, pp. 54–56. ACM (2019). https://doi.org/10.1145/3359061.3361085 50. Smits, J., Visser, E.: FlowSpec: declarative dataflow analysis specification. In: Combemale, B., Mernik, M., Rumpe, B. (eds.) Proceedings of the 10th ACM SIGPLAN International Conference on Software Language Engineering, SLE 2017, Vancouver, BC, Canada, October 23–24, 2017, pp. 221–231. ACM (2017). https:// doi.org/10.1145/3136014.3136029 51. 
de Souza Amorim, L.E., Erdweg, S., Wachsmuth, G., Visser, E.: Principled syntactic code completion using placeholders. In: van der Storm, T., Balland, E., Varr´ o, D. (eds.) Proceedings of the 2016 ACM SIGPLAN International Conference on Software Language Engineering, Amsterdam, The Netherlands, October 31–November 1, 2016, pp. 163–175. ACM (2016). https://doi.org/10.1145/2997364.2997374 52. de Souza Amorim, L.E., Steindorfer, M.J., Visser, E.: Deep priority conflicts in the wild: a pilot study. In: Combemale, B., Mernik, M., Rumpe, B. (eds.) Proceedings of the 10th ACM SIGPLAN International Conference on Software Language Engineering, SLE 2017, Vancouver, BC, Canada, October 23–24, 2017, pp. 55–66. ACM (2017). https://doi.org/10.1145/3136014.3136020 53. de Souza Amorim, L.E., Visser, E.: A direct semantics of declarative disambiguation rules. In: ACM TOPLAS (2020). under revision 54. de Souza Amorim, L.E., Steindorfer, M.J., Erdweg, S., Visser, E.: Declarative specification of indentation rules: a tooling perspective on parsing and pretty-printing layout-sensitive languages. In: 0005, D.P., Mayerhofer, T., Steimann, F. (eds.) Proceedings of the 11th ACM SIGPLAN International Conference on Software Language Engineering, SLE 2018, Boston, MA, USA, November 05–06, 2018, pp. 3–15. ACM (2018). https://doi.org/10.1145/3276604.3276607 55. de Souza Amorim, L.E., Steindorfer, M.J., Visser, E.: Towards zero-overhead disambiguation of deep priority conflicts. Programming Journal 2(3), 13 (2018). https://doi.org/10.22152/programming-journal.org/2018/2/13
Multi-purpose Syntax Definition with SDF3
23
56. Tomita, M.: An efficient context-free parsing algorithm for natural languages. In: IJCAI, pp. 756–764 (1985) 57. Vergu, V.A., N´eron, P., Visser, E.: DynSem: a DSL for dynamic semantics specification. In: Fern´ andez, M. (ed.) 26th International Conference on Rewriting Techniques and Applications, RTA 2015, June 29 to July 1, 2015, Warsaw, Poland. LIPIcs, vol. 36, pp. 365–378. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2015). https://doi.org/10.4230/LIPIcs.RTA.2015.365 58. Visser, E.: A family of syntax definition formalisms. In: van den Brand, M.G.J., Estrela, V.V. (eds.) ASF+SDF 1995. A Workshop on Generating Tools from Algebraic Specifications. Technical report P9504, Programming Research Group, University of Amsterdam (May 1995) 59. Visser, E.: A case study in optimizing parsing schemata by disambiguation filters. In: International Workshop on Parsing Technology (IWPT 1997), pp. 210–224. Massachusetts Institute of Technology, Boston (September 1997) 60. Visser, E.: Scannerless generalized-LR parsing. Tech. rep. P9707, Programming Research Group, University of Amsterdam (July 1997) 61. Visser, E.: Syntax definition for language prototyping. Ph.D. thesis, University of Amsterdam (September 1997) 62. Visser, E.: WebDSL: a case study in domain-specific language engineering. In: L¨ ammel, R., Visser, J., Saraiva, J. (eds.) GTTSE 2007. LNCS, vol. 5235, pp. 291– 373. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-88643-3 7 63. Vollebregt, T., Kats, L.C.L., Visser, E.: Declarative specification of template-based textual editors. In: Sloane, A., Andova, S. (eds.) International Workshop on Language Descriptions, Tools, and Applications, LDTA 2012, Tallinn, Estonia, March 31–April 1, 2012, pp. 1–7. ACM (2012). https://doi.org/10.1145/2427048.2427056 64. Wachsmuth, G., Konat, G., Visser, E.: Language design with the Spoofax language workbench. IEEE Softw. 31(5), 35–43 (2014). https://doi.org/10.1109/MS.2014. 100 65. 
Wagner, T.A., Graham, S.L.: Efficient and flexible incremental parsing. ACM Trans. Program. Lang. Syst. 20(5), 980–1013 (1998). https://doi.org/10.1145/ 293677.293678 66. Wyk, E.V., Bodin, D., Gao, J., Krishnan, L.: Silver: an extensible attribute grammar system. Sci. Comput. Program. 75(1–2), 39–54 (2010). https://doi.org/10. 1016/j.scico.2009.07.004
Finding and Fixing a Mismatch Between the Go Memory Model and Data-Race Detector

A Story on Applied Formal Methods

Daniel Schnetzer Fava
Department of Informatics, University of Oslo, Oslo, Norway
[email protected]
Abstract. Go is an open-source programming language developed at Google. In previous works, we presented formalizations for a weak memory model and a data-race detector inspired by the Go specification. In this paper, we describe how our theoretical research guided us in the process of finding and fixing a concrete bug in the language. Specifically, we discovered and fixed a discrepancy between the Go memory model and the Go data-race detector implementation—the discrepancy led to the under-reporting of data races in Go programs. Here, we share our experience applying formal methods on software that powers infrastructure used by millions of people.
1 Introduction
Go is an open-source programming language designed for concurrency. Developed at Google, the language has gained traction in the area of cloud computing [6], where it is used to implement various client-server applications and container management systems, such as Docker [12] and Kubernetes [2]. One of the language’s main features is light-weight threads, called goroutines, which are spawned during function invocation. Any function can be made to execute asynchronously by simply prepending the keyword go to the function’s name during invocation. Go’s approach to synchronization also stands out. Do not communicate by sharing memory; instead, share memory by communicating [8]—is a catchphrase among Go programmers. The language’s feature-mix encourages a style of programming where (1) variables are implicitly owned by goroutines, and (2) variables are shared when this ownership is transferred through direct communication. So, in contrast to locks, which favor synchronization via mutual exclusion, Go has channels, which typically enforce a happens-before relation [10] between a message sender and its receiver. The discipline prescribed by Go’s share by communicating slogan is not, however, enforced at compile time.¹ It is, therefore, possible for programs to

¹ There are good reasons why a type checker cannot enforce such a discipline without seriously restricting the language.
© Springer Nature Switzerland AG 2020
F. de Boer and A. Cerone (Eds.): SEFM 2020, LNCS 12310, pp. 24–40, 2020. https://doi.org/10.1007/978-3-030-58768-0_2
harbor data races. Since data races often lead to counterintuitive behavior, the Go programming language comes with a data-race detector built into its toolchain. The Go memory model is relaxed and its specification describes the behavior of well-synchronized programs. In [4], we gave a small-step operational semantics of a memory model inspired by Go’s. There, we proved the DRF-SC guarantee, which states that data-race free (DRF) program executions behave sequentially consistently (SC) under the proposed model. Given the importance of flagging data races, in [5] we explore the use of our semantics for the sake of data-race detection. Armed with these formalisms, we turned our attention to Go’s implementation. With that, we discovered that the Go data-race detector was not strictly abiding by the rules of the Go memory model specification. This oversight led to the under-reporting of data races in Go programs. We then proposed and implemented a fix in conjunction with the Go community. Here, we discuss how the theoretical modeling of the language helped us find and address this issue. In Sects. 2 and 3, we will visit the Go memory model and explore examples of synchronization via channel communication. Having covered this background, we discuss how the Go data-race detector is built into the language (Sect. 4). In Sect. 4.1, we show that the detector’s implementation inadvertently mismatched rules governing channel communication. We address the issue in Sect. 5 and share lessons we learned in Sect. 6.
2 Synchronization via Channel Communication
Two concurrent memory accesses constitute a data race if they reference the same memory location and at least one of the accesses is a write. Data races can be eliminated through synchronization, that is, the enforcement of an order between conflicting memory accesses. In Go, synchronization is performed via channel communication. Go channels assure FIFO communication from a sender to a receiver sharing the channel’s reference. Channels can be dynamically created and closed—their type and finite capacity are fixed upon creation. When attempting to receive from an empty channel, a thread blocks until, if ever, a value is made available by a sender. A thread also blocks when attempting to send on a channel that is full. According to the Go memory model specification [7], the following two main rules govern synchronization. Given a channel c with capacity C:

I. A send on c happens-before the corresponding receive from c completes.
II. The kth receive from c happens-before the (k + C)th send on c completes.

The first rule establishes a causal relationship between a sender and its communicating partner. In contrast, the second rule establishes a relationship between a sender and some past receiver, without there being any message transmission between the two goroutines. Note also that the second rule accounts for channel
capacity: a current sender is able to place a new message because some past receiver, by taking an older message out, has made space in the channel’s buffer. Figure 1a is an example of synchronization via rule (I), and Fig. 1b is an example via rule (II). Throughout the paper, we will follow the syntax in [4], which closely matches Go’s. The term c ← v, with the arrow pointing into c, stands for the sending of value v over channel c. Let ← c, with the arrow pointing away from c, stand for the reception of a value from the channel. Assuming a channel of capacity one, Fig. 1a is the classic message passing example, while Fig. 1b enforces mutual exclusion.
Fig. 1. Synchronization via channel communication (channels of capacity one).
In the message passing example, the goroutine T 0 writes to a shared variable z and, by sending a message over a channel, the routine transfers its implicit ownership of z. Goroutine T 1 blocks until a message is ready to be received. Once a message has been received, T 1 proceeds to load from z. This program is properly synchronized, which means T 1 necessarily loads the value of 42 as opposed to potentially observing an uninitialized variable value. Using the happens-before (HB) rules of the Go memory-model specification, we can show that the memory accesses are properly synchronized as follows:

  z := 42 hb c ← 0      via program order       (1)
  c ← 0 hb ← c          via channel rule (I)    (2)
  ← c hb load z         via program order       (3)
  z := 42 hb load z     via (1), (2), (3) and transitivity of HB.
While Fig. 1a and rule (I) account for direct communication, Fig. 1b relies on rule (II) and the use of channels as locks. The example in Fig. 1b involves two threads attempting to write to the same shared variable. Before writing, a thread sends a message onto a channel. Because the channel has capacity one, all subsequent attempts to send again will block until the prior message is received. Therefore, it is not possible for T 0 and T 1 to execute their critical sections at the same time. The send is thus analogous to acquiring a lock, and the receive to releasing the lock. Again, we can use the Go memory model to reason about
this example. Without loss of generality, assume T 0 sends its message first; then:

  z := 42 hb ← c by T 0          via program order       (4)
  ← c by T 0 hb c ← 0 by T 1     via channel rule (II)   (5)
  c ← 0 by T 1 hb z := 43        via program order       (6)
  z := 42 hb z := 43             via (4), (5), (6) and transitivity.
While mutual exclusion is ensured, we cannot ascertain the final value of z. If T 0 sends a message before T 1, then z equals 43; otherwise, z = 42. Note also that, in this example, rule (I) is obviated by the program order; therefore, the rule has no synchronization effect here.² The Go memory model is described plainly and succinctly in English [7]. The word “completes,” present in both rules (I) and (II), can easily be overlooked. By overlooking the distinction between an operation and its completion, the Go data-race detector over-synchronizes and, therefore, fails to report certain races. The bug, which we describe in detail in the next section, is related to the following question: Is it possible for the detector to account for the mutex paradigm (Fig. 1b) and, at the same time, observe the distinction between a channel operation and its completion?
3 The Go Memory Model: Every Word Counts
The completion of a channel operation, in addition to the operation itself, is an important part of the Go memory model. In rule (I), it is not the case that a send happens-before the corresponding receive. Instead, the send happens-before the completion of the corresponding receive. Similarly for rule (II), which involves a past receive and the completion of a current send. To illustrate, consider Fig. 2, where each vertical arrow represents the execution of a thread (flowing from top to bottom). Both the top and the bottom diagrams depict consecutive send sd and receive rv operations on a channel of capacity one—the operations are indexed so as to show their order of execution. According to the Go memory model, channel operations are related as shown in Fig. 2a. The operations are broken into two halves of a circle: the top is the operation and the bottom its completion. The arrows in the diagram represent the happens-before relation—arrows are labeled with the memory-model rule that justifies their existence. According to rule (I), the 0th send happens-before the completion of the 0th receive—this relation is captured by the arrow starting at the top half-circle on the far left (sd0) and ending at the bottom half-circle to the right (completion of rv0). The next arrow establishes the happens-before relation
² The relation c ← 0 by T 0 hb ← c by T 0 can be derived both by program order and by rule (I). Similarly for the send and receive operations performed by T 1.
between receive rv0 and send sd1 according to rule (II), and so forth. Note from Fig. 2a that an operation is related to its immediate predecessor. There is no chain of happens-before starting from the “distant” past. For example, although sd0 is related to the completion of rv0, and rv0 is related to the completion of sd1, it is not the case that sd0 and sd1 are related to each other.
Fig. 2. The Go memory model specification, every word counts.
Figure 2b captures an alternative formulation of the happens-before rules (I) and (II) where the word “completes” is left out. Sends and receives are not split into the operation and the operation’s completion. Instead, sends and receives happen-before each other. This formulation leads to a chain starting at the very first send and connecting every send and receive operation ever performed on the channel. From an application programmer’s perspective, this chain leads to an accumulation of happens-before information: after interacting with a channel, a goroutine’s behavior now depends not only on its communicating partner but on every thread that has previously interacted with the channel. From the point of view of data races, this alternate formulation leads to over-synchronization. The Go data-race detector’s implementation matches the behavior of Fig. 2b and, therefore, deviates from the Go memory model specification. Note that the over-synchronization on the part of the detector is not the result of careful deliberation, for example when false negatives are accepted in exchange for lower runtime overheads. Rather, the implementation springs from an interpretation of synchronization from the perspective of locks rather than of channels. As will be discussed in Sect. 5, addressing this issue not only eliminates false negatives but also yields lower runtime overhead.
4 The Go Data-Race Detector
By adding -race to the command line, a Go program can be compiled and run with data-race detection enabled. The Go data-race detector is based on TSan, the Thread Sanitizer library [9]. The library is part of the LLVM infrastructure [11] and was originally designed to find races in C/C++11 applications. When data-race detection is enabled, each thread becomes associated with an additional data structure. This data structure keeps track of the operations that are in happens-before from a thread’s point of view. In most data-race detectors, including TSan, this data structure is a vector clock (VC) [10]. Vector clocks offer a compact representation of the relative order of execution between threads. With this bookkeeping, data-race detectors are able to find synchronization issues in programs—where synchronization means the transfer of happens-before information between threads. In the setting of locks, a thread performs an acquire operation in order to “learn” the happens-before information stored in a lock. By performing a release operation, a thread deposits its happens-before information onto a lock. In the setting of channels, we can think of happens-before as being transferred via sends and receives.
Fig. 3. Snippets of Go’s send and receive operations from runtime/chan.go.
Figure 3 contains snippets from Go’s implementation of the send and receive operations. Unsurprisingly, Go implements a channel of capacity C as an array
of length C. This array is contained in a struct called hchan. Struct member sendx is the index where a new message is to be deposited, while recvx is the index of the next message to be retrieved. Function chanbuf takes a channel struct and an index—the function returns a pointer to the channel’s array at the given index. Note from lines 19 to 22 that a channel array is treated as a circular buffer. When data-race detection is enabled, each channel array entry becomes associated with a vector clock. Also, when detection is enabled, a send operation (Listing 1.1) generates calls to acquire and release—lines 11 to 14. The acquire causes the sender to “learn” the happens-before (HB) information associated with the channel entry at c.sendx. The release causes the thread’s HB information to be stored back into that entry.³ The receive operation is similarly implemented and shown in Listing 1.2. In light of the implementation described above, we now revisit the message passing and mutual exclusion examples of Sect. 2. In the case of message passing, a thread sends a message onto a channel of capacity one, then another thread receives this message before accessing a shared resource—see Fig. 1a. According to the data-race detector’s implementation, the channel array entry at index 0 observes an acquire followed by a release on behalf of the sender. Then, again, a sequence of acquire followed by release on behalf of the receiver. In effect, the happens-before information of the sender is transferred to the receiver: specifically, the release by T 0 followed by the acquire by T 1 places T 0’s write operation in happens-before relation with respect to T 1’s read operation. The message passing example of Fig. 1 is thus deemed properly synchronized by the Go data-race detector. We can reason about the mutual exclusion example of Fig. 1b in similar terms. A thread sends onto a channel, accesses a shared resource, and then receives from the channel.
With the receive operation, this thread deposits its happens-before information onto the channel—line 13 of Listing 1.2. The second thread then acquires this happens-before information when it sends onto the channel—line 12 of Listing 1.1. Again, the Go data-race detector’s implementation correctly deems the example as properly synchronized.
4.1 The Bug
Although the Go data-race detector behaves correctly on the message-passing and mutual-exclusion examples, the detector’s implementation does not reflect the Go memory model specification. The acquire/release sequence performed on behalf of send and receive operations follows the typical lock usage. Channel
3
In the implementation of the send operation, a message is moved from the sender’s buffer to a receiver’s buffer ep on line 16. The index c.sendx is incremented in line 19 and the increment wraps around based on the length of the array—lines 20 to 22. The number of elements in the array is incremented, the lock protecting the channel is unlocked and the function returns—lines 23 to 25.
Finding and Fixing a Mismatch
programming is, however, different from lock programming. The current implementation of the detector leads to an accumulation of happens-before information associated with channel entries. This monotonic growth of happens-before information, however, is not prescribed by the Go memory model. In the example that follows, we illustrate the mismatch between (1) the implementation of the data-race detector and (2) the memory model specification. We show how this mismatch leads to over-synchronization and the under-reporting of data races.
Fig. 4. Example that highlights a mismatch between the Go memory model and the Go data-race detector implementation. (Capacity of channel c equals one).
Let c in Fig. 4 be a channel of capacity one. The example is then a mix of mutual exclusion and message passing: T0 is using the channel as a lock in an attempt to protect its access to a shared variable,4 and we can interpret T1 as using the same channel to communicate with T2.5 Now, consider the interleaving in which T0 runs to completion, followed by T1, then T2—shown in Trace (7). Is the write to z by T0 in a race with the read of z by T2?

(c ← 0)T0  (z := 42)T0  (← c)T0  (c ← 0)T1  (← c)T2  (load z)T2    (7)
The original Go data-race detector does not flag these accesses as racy:6 T0 releases its happens-before (HB) by sending on the channel. This HB is stored in the vector-clock associated with c’s 0th array entry. The send by T1 performs an acquire followed by a release, at which point the VC associated with the entry contains the union of T0’s and T1’s happens-before. Finally, the receive by T2 performs an acquire and a release, causing T2 to learn the happens-before of T0 and T1. Formally, the data-race detector derives a happens-before relation between the write and the read as follows:

z := 42      hb  ← c by T0       via program order
← c by T0    hb  c ← 0 by T1     release by T0, acquire by T1
c ← 0 by T1  hb  ← c by T2       release by T1, acquire by T2
← c by T2    hb  load z          via program order
z := 42      hb  load z          via transitivity of HB

4 The send operation by T0 is analogous to acquire and the receive to release.
5 Recall that the mutual exclusion and message passing patterns were introduced in Fig. 1 and discussed in Sect. 2.
6 GitHub issue https://github.com/golang/go/issues/37355.
D. S. Fava
According to the Go memory model specification, however, the receive from c in T 0 is not in happens-before relation to the send in T 1. Instead, the receive is in happens-before relation to the completion of the send! Information about the write to z by thread T 0 is transmitted to T 1, but this information is only incorporated into T 1 after the thread has transmitted its message to T 2. Therefore, T 2 does not receive T 0’s happens-before information. In other words, according to the Go memory model, there is no chain of happens-before connecting T 0 to T 2. The trace captured by Eq. (7) is thus racy, with the race depicted in Fig. 5. Specifically, the race is captured by the absence of a path between the write to z in T 0 and the load of z in T 2.
Fig. 5. Partial order on events according to the Go memory model. The HB relation is represented by arrows labeled with the Go memory model rule justifying the arrow’s existence. The top part of the half-circle corresponds to a channel operation and the bottom to its completion.
The Go memory model calls for a mix between channel communication as described by Lamport [10] and lock programming. Lamport [10] was studying distributed systems in the absence of shared memory: the shared resources were the channels themselves, and the absence of races (of channel races) was related to determinism. In contrast, Go employs channels as a primitive for synchronizing memory accesses. In Go, some happens-before relations are forged solely between communicating partners—these relations are derived from rule (I), which is also present in [10]. Similar to lock programming, some happens-before relations are the result of an accumulation of effects from past interactions on a channel. This accumulation occurs when we incorporate rule (II), which is not present in [10]. So, while the language favors a discipline where an implicit notion of variable ownership is transferred via direct communication, as prescribed by rule (I), by incorporating rule (II), Go also supports the use of channels as locks.
5 The Fix: Capturing the Semantics of Channels
Repairing the Go data-race detector’s deviation from the memory model specification comes from acknowledging that a primitive different from acquire and release can better fit the semantics of synchronization via channel communication. We propose the primitive depicted in Fig. 6, which we call release-acquire-exchange or rea. Let Tt be the happens-before information of thread t, m be the channel entry where a message will be deposited or retrieved, and Cm be the happens-before information associated with m. The primitive is implemented with a thread releasing onto a place-holder and then acquiring from Cm. The happens-before in Cm is then overwritten with the HB information from the place-holder.7 We added this new synchronization primitive into TSan, the data-race detection library that powers the Go data-race detector. With the new primitive in place, the changes to the Go sources became trivial:8 they involved replacing sequences of acquire/release calls with a call to release-acquire-exchange.
Fig. 6. Semantics of “release-acquire-exchange,” a new primitive added to TSan.
Given the addition of rea into TSan, let us revisit trace (7). While the original implementation of the Go data-race detector did not flag this trace as racy, the updated version does. Given the detector’s updated implementation, we can reason about the race as follows. Let TT0, TT1, and TT2 be data structures storing the happens-before information of threads T0, T1, and T2. Let Cc[0] be the happens-before associated with the 0th array entry of channel c. We denote the write event to z as !z and, for simplicity, we represent happens-before information as a set of memory events. The race-detector state is then the tuple [TT0, TT1, TT2, Cc[0]], with initial state [{}, {}, {}, {}]. The data-race detector performs the following transitions as the program executes:
7 The place-holder is a variable local to a function in TSan, as opposed to an extra memory region allocated in Go.
8 Changes in Go: https://golang.org/cl/220419 and TSan: https://reviews.llvm.org/D76322.
  TT0     TT1     TT2     Cc[0]
[ {},     {},     {},     {}   ]  ⇒(c←0)T0
[ {},     {},     {},     {}   ]  ⇒(z:=42)T0    (8)
[ {!z},   {},     {},     {}   ]  ⇒(←c)T0       (9)
[ {!z},   {},     {},     {!z} ]  ⇒(c←0)T1      (10)
[ {!z},   {!z},   {},     {}   ]  ⇒(←c)T2       (11)
[ {!z},   {!z},   {},     {}   ]  ⇒(load z)T2   (12)
The write to z by T0 places !z into TT0—transition from Eq. (8) to (9). Sends and receives are interpreted according to their formal semantics in Fig. 6. The receive by T0 places the write event into the channel entry’s happens-before—Eqs. (9) and (10). The send by T1 places the write event into the thread’s happens-before and overwrites the channel entry’s happens-before with the empty set—Eqs. (10) and (11). The receive by T2 retrieves the empty happens-before information—Eqs. (11) and (12). Therefore, at the time T2 loads from the shared variable, the write to z by T0 is not in happens-before with respect to T2. In conclusion, the execution is racy. Note that the fix to the Go data-race detector does not invalidate the use of channels as locks. Without loss of generality, let the trace below be an execution of the mutual exclusion example of Fig. 1b.

(c ← 0)T0  (z := 42)T0  (← c)T0  (c ← 0)T1  (z := 43)T1  (← c)T1    (13)
The detector’s execution, from initial state [TT0, TT1, Cc[0]] = [{}, {}, {}], is

  TT0     TT1     Cc[0]
[ {},     {},     {}   ]  ⇒(c←0)T0
[ {},     {},     {}   ]  ⇒(z:=42)T0
[ {!z},   {},     {}   ]  ⇒(←c)T0
[ {!z},   {},     {!z} ]  ⇒(c←0)T1
[ {!z},   {!z},   {}   ]

Note that the event !z capturing the write by T0 is contained in TT1 before T1 attempts to write to z. In other words, the writes are ordered by happens-before and the execution is properly synchronized. Thus, the answer to the question raised at the end of Sect. 2, “is it possible to support the use of channels as locks (as in the mutex example) and still avoid over-synchronization?”, is yes. We implement the new synchronization primitive rea in TSan with one pass, as opposed to two passes, over the data structure storing happens-before information. Therefore, the updated data-race detector implementation provides
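Both walkthroughs can be replayed with a tiny set-based model of the detector state. This is a sketch using names of our own choosing; TSan itself uses vector clocks rather than event sets.

```go
// hb is happens-before information modelled as a set of events.
type hb map[string]bool

// rea: the thread acquires the channel entry's HB, and the entry is
// overwritten with the thread's pre-operation HB.
func rea(t, entry hb) (hb, hb) {
	old := hb{}
	for e := range t {
		old[e] = true
	}
	for e := range entry {
		t[e] = true
	}
	return t, old
}

// trace7 replays (c←0)T0 (z:=42)T0 (←c)T0 (c←0)T1 (←c)T2 (load z)T2
// and reports whether T1 and T2 end up holding the write event !z.
func trace7() (t1HasW, t2HasW bool) {
	t0, t1, t2, entry := hb{}, hb{}, hb{}, hb{}
	t0, entry = rea(t0, entry) // (c ← 0)T0: send
	t0["!z"] = true            // (z := 42)T0
	t0, entry = rea(t0, entry) // (← c)T0: receive; entry becomes {!z}
	t1, entry = rea(t1, entry) // (c ← 0)T1: T1 learns !z, entry reset to {}
	t2, entry = rea(t2, entry) // (← c)T2: entry was empty, T2 learns nothing
	return t1["!z"], t2["!z"]
}
```

Running trace7 confirms the reasoning above: T1 acquires !z from the channel entry, but T2 does not, so the final load of z is racy.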
better performance than the original sequence of acquire followed by release. Another consequence of our fix is a potential reduction in the memory footprint associated with data-race detection. These savings come from the fact that vector clocks associated with channel entries no longer accumulate as much happens-before information—this point was previously touched upon in Sect. 3, Fig. 2. We include a short experimental evaluation in Appendix A.
5.1 From Small-Step Operational Semantics to rea
The release-acquire-exchange primitive comes from our previous formalizations of Go channels. It is conceptually useful to distinguish between happens-before information transmitted on behalf of rule (I) versus rule (II). In our earlier formalization of a memory model inspired by the Go specification [4], a channel c is split into two: a forward and a backward one. The forward channel cf holds messages and thread-local information to be transmitted, as prescribed by rule (I), from a sender to its corresponding receiver. The backward channel cb, which flows from a prior receiver to a current sender, captures rule (II) of the memory model.9 In [4], threads or goroutines p⟨σ, t⟩ have a unique identifier p, contain thread-local information σ, and a term t corresponding to the program under execution. When it comes to data-race detection, thread-local information σ is composed of happens-before data. This data could be stored in a vector-clock or, more simply, it could be a set of read and write events that are in happens-before with respect to the thread. Synchronization, therefore, entails the exchange of thread-local information σ via channel communication.
Fig. 7. Send and receive reduction rules in the calculus of [4].
Configurations consist of the parallel composition of goroutines, memory events, and channels. The semantics of [4] is operational. We give the reduction rules for sends and receives in Fig. 7—other rules are omitted and can be found in the original paper. The let-construct in R-Rec is a binder for the local variable r in a term t. In the case of R-Rec, the let-construct allows t to refer to the value obtained when receiving from a channel.

9 As noted in [5], “the interplay between forward and backward channels can also be understood as a form of flow control. Entries in the backward channel’s queue are not values deposited by threads. Instead, [these entries] can be seen as tickets that grant senders a free slot in the communication channel.”
According to reduction rule R-Send, when a thread sends a value v, the thread’s local state σ is placed on the forward channel alongside v. The rule captures the placement of the message (v, σ) onto the forward channel as follows: if q2 is the content of the forward channel before transmission, (v, σ) :: q2 is the content after. The transmission of σ models rule (I) of the Go memory model: a receiver who receives (v, σ) will learn about the sender’s actions up to the given send operation. Besides transmitting, a sender also learns HB information in accordance with rule (II). Precisely, the (k + C)th sender obtains, from the backward channel, the thread-local state of the kth receiver. This is captured by the update σ′ = σ + σ″, with the state σ″ coming from the backward channel. Thus, if the contents of the backward channel were q1 :: σ″ before the send, the channel is left with q1 after the send. Note that the update to the sender state occurs on completion of the send operation: the update “occurs after” the sender has deposited its message onto the forward channel—concretely, the send transmits the thread state σ as opposed to the updated thread state σ′. When receiving, a goroutine obtains a value v as well as a state σ″ from a sender. As dictated by rule (I), the receiver updates its state given the corresponding sender state: σ′ = σ + σ″. The receiver also deposits its state onto the backward channel. Similar to R-Send, the original thread state σ is deposited, as opposed to the updated thread state σ′. For both reduction rules R-Send and R-Rec, the local thread state σ is deposited onto a channel, as opposed to the updated thread state σ′. This discipline creates a distinction between an operation and its completion. In effect, the reduction rules do not cause the over-synchronization observed by the Go data-race detector.
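The two rules can be sketched with explicit forward and backward queues. This is an illustrative model, not the calculus of [4] itself: the type and method names are ours, and thread-local state σ is modelled as an event set.

```go
// sigma models thread-local happens-before state as a set of events.
type sigma map[string]bool

func union(a, b sigma) sigma {
	out := sigma{}
	for e := range a {
		out[e] = true
	}
	for e := range b {
		out[e] = true
	}
	return out
}

// msg pairs a value with the sender's pre-update state (rule I).
type msg struct {
	v  int
	st sigma
}

// chanHB is the split channel: a forward queue of messages and a
// backward queue of past receiver states. The backward queue starts
// with C empty entries, so len(fwd)+len(back) == C is an invariant.
type chanHB struct {
	fwd  []msg
	back []sigma
}

// send models R-Send: deposit (v, σ) on the forward channel, then
// update the sender with σ'' taken from the backward channel (rule II).
// Note that the pre-update σ is the one transmitted.
func (c *chanHB) send(v int, st sigma) sigma {
	c.fwd = append(c.fwd, msg{v, st})
	s2 := c.back[0]
	c.back = c.back[1:]
	return union(st, s2)
}

// recv models R-Rec: take (v, σ'') from the forward channel, deposit
// the receiver's pre-update σ on the backward channel, and return the
// value together with the updated state σ + σ''.
func (c *chanHB) recv(st sigma) (int, sigma) {
	m := c.fwd[0]
	c.fwd = c.fwd[1:]
	c.back = append(c.back, st)
	return m.v, union(st, m.st)
}
```

Because both methods enqueue the pre-update state, an operation and its completion remain distinct, which is precisely what rules out the over-synchronization discussed above.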
5.2 Why Not acquire and release?
The formalization in [4] speaks of synchronization in terms of channel communication. Since TSan operates at the level of locks, we might be tempted to implement the reduction rules with acquire and release operations. The reduction rule R-Send could be implemented with a thread releasing its happens-before information into the forward queue, and then acquiring happens-before information from the backward queue. Similarly, R-Rec can be implemented with a release to the backward queue, followed by an acquire from the forward queue. One invariant of the semantics in [4] is that the number of elements in the forward and backward queues equals the capacity of the channel. Since a thread must first release its HB into the channel before acquiring from the channel, there would be more than C entries in the queues while a send or receive is in flight. In fact, when using acquire and release operations as primitives, the Go data-race detector would need to allocate an array of length C + 2 for a channel of capacity C. Given such an array, sends and receives can be implemented with acquire/release operations as shown on Listings 1.3 and 1.4. Recall that c.sendx and c.recvx are the indices into the array where the next message is
to be deposited and retrieved respectively. Recall also that chanbuf returns a pointer to a channel’s array at a given index.
Listing 1.3. Implementation of send.

idx := c.sendx + 1
if idx == C+2 {
	idx = 0
}
qf := chanbuf(c, c.sendx)
qb := chanbuf(c, idx)
racerelease(qf)
raceacquire(qb)

Listing 1.4. Implementation of receive.

idx := c.recvx - 1
if c.recvx == 0 {
	idx = C+1
}
qf := chanbuf(c, c.recvx)
qb := chanbuf(c, idx)
racerelease(qb)
raceacquire(qf)
Although correct, there are major downsides to the solution of Listings 1.3 and 1.4. First, it requires additional memory allocation. Second, because the Go runtime expects a channel of capacity C to be implemented with an array of length C, the solution would require intrusive changes. Third, from a timing perspective, in order to implement a single channel operation, the solution performs two passes over the data structure storing happens-before information—we want a solution that performs fewer passes. Compared to acquire and release, the release-acquire-exchange primitive requires no additional allocation in Go, involves minimal changes to the Go runtime, and has lower overhead.
6 Lessons Learned
When we started looking at TSan’s source code, our goal was to improve Go’s data-race detector by expressing synchronization in terms of channels as opposed to locks [5]. We began reading the source code of Go and TSan in November 2019. In January 2020, we started experimenting by compiling the projects from source and making modifications in order to gain experience and intuition. This tinkering led us to find, in early and mid February, a small Go compiler bug10 and a small performance bug in TSan.11 Shortly after, around late February, we found the bug described in this paper. Given our experience formalizing the calculus in [4], we could see similarities between our reduction rules and the Go implementation.12 The implementations of send and receive, however, stood out. The bug was thus found by inspection. We created a test (Fig. 4) to showcase what we believed was a discrepancy between the detector and the memory model. From there, we filed an issue on GitHub and started interacting with the Go community. With this interaction, which
10 https://github.com/golang/go/issues/37012.
11 https://reviews.llvm.org/D74831.
12 For example, the closing of channels in both [4] and in Go causes happens-before information to be deposited onto the channel, regardless of whether the channel is full.
went until May, an initial patch was iteratively improved until being accepted for future release. In this section, we collect insights drawn from our experience in both (1) formalizing aspects of the Go programming language and (2) interacting with the TSan and Go communities.

Models Do Not Have to Be Right, They Have to Be Useful. In [4], we developed a memory model based on the Go specification. Before embarking on studying the Go source code, we found ourselves at a crossroads. Since our model is not as relaxed as Go’s, more theoretical research remained to be done. We pondered whether to continue working on formalizations or whether to investigate how the current model fits the “real world.” Both avenues are interesting to us. By taking, for now, the second avenue, we learned that models do not have to be right, they have to be useful. Our memory model formalization in [4] is not the memory model of Go, but it is close enough to allow us to reason about Go and its implementation.

Mind the Gap. On one hand, we have the concept of a data race according to the synchronization rules of the Go memory model specification. The specification is expressed in English. On the other hand, we have the Go data-race detector implementation, with thousands of lines of code spanning different projects and repositories and involving at least three languages (Go, C/C++, and assembly). These are two ends of a spectrum. Our model was useful, in part, because of where it sits in this spectrum. When developing the model in [4], we followed the English text of the Go memory model specification very closely. Our model, however, is expressed in structural operational semantics—its rules form an executable implementation. Our calculus, therefore, forms a bridge between source code and the specification expressed in natural language.13

Bad News Is Good News. The effort in formalizing and proving a nontrivial property of a software system is often high. Before finding the issue described in this paper, we had been working on formalisms related to Go for over two years. This high barrier to entry is both good and bad. It is good, less obviously so, because it opens opportunities for collaboration between industry and academia. While industry excels at delivering software, academia can provide artifacts, such as formalisms and proofs, which are still not as commonly produced in industry.14
13 Our observation about the representational difference between specification and implementation is not new. The idea of bridging specification and implementation has been tackled on many fronts, for example [1].
14 Because of stigma, the “formal” qualifier has been de-emphasized when disseminating formal methods in industry [13]. This stance has shifted dramatically [3].
7 Conclusion
The bug described in this paper evaded skilled developers for about six years, nearly since the data-race detector was bolted onto the Go runtime. In this paper, we share how formal methods played an integral role in bringing the issue to light and giving it closure.

Acknowledgments. I would like to thank Martin Steffen for his feedback on this manuscript, Dmitry Vyukov for his feedback and guidance on incorporating the proposed changes into Go and TSan, Keith Randall for rebuilding the TSan library files that ship with Go, and everyone who gave constructive feedback during the code review process. I would also like to thank the reviewers for their comments on this manuscript, and reviewer 2 in particular.
A Memory Footprint
Here we illustrate how our fix to the Go data-race detector leads to a smaller memory footprint. Consider an in-place parallel sorting algorithm where an array is recursively split, up to some depth, in approximately half. Each region of the array is assigned to a thread for sorting. When a thread completes sorting, it signals its parent. The parent merges, in-place, the consecutive array regions previously assigned to its children.

Fig. 8. Number of VC entries associated with channels during the execution of an in-place parallel sorting algorithm: before (solid line) and after (dashed line) the introduction of release-acquire-exchange. (Axes: instructions executed, 0–20M; VC entries, 0–10K.)
We tracked the number of entries in the vector-clocks associated with channel array entries. Measurements of the number of VC entries were taken multiple times during the program’s execution. For ease of collecting and plotting the data, we modified TSan to call out to a reference data-race detector implemented
in Python.15,16 Figure 8 shows the number of VC entries before and after the fix to the data-race detector—meaning, with a race detector that performed an acquire followed by a release versus a race detector that implements the release-acquire-exchange primitive. The x-axis is the number of instructions executed; the y-axis is the number of vector-clock entries consumed so far in the execution. As the program makes progress, more entries accumulate in the vector-clocks associated with channel entries. This accumulation is much more accentuated before the fix to the data-race detector. In fact, for this workload, the fix led to a reduction of more than 30% in the number of VC entries after 12.5M instructions were executed.
References

1. Back, R., von Wright, J.: Refinement Calculus - A Systematic Introduction. Graduate Texts in Computer Science. Springer, Heidelberg (1998). https://doi.org/10.1007/978-1-4612-1674-2
2. Brewer, E.A.: Kubernetes and the path to cloud native. In: Ghandeharizadeh, S., Barahmand, S., Balazinska, M., Freedman, M.J. (eds.) Proceedings of the Sixth ACM Symposium on Cloud Computing, SoCC 2015, Kohala Coast, Hawaii, USA, August 27–29, 2015, p. 167. ACM (2015)
3. Cook, B.: Formal reasoning about the security of Amazon Web Services. In: Chockler, H., Weissenbacher, G. (eds.) CAV 2018. LNCS, vol. 10981, pp. 38–47. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-96145-3_3
4. Fava, D., Steffen, M., Stolz, V.: Operational semantics of a weak memory model with channel synchronization. J. Log. Algebr. Methods Program. 103, 1–30 (2019). An extended version of the FM 18 publication with the same title
5. Fava, D.S., Steffen, M.: Ready, set, Go! Data-race detection and the Go language. Sci. Comput. Program. 195, 102473 (2020)
6. Go Developer Survey (2019). https://blog.golang.org/survey2019-results
7. The Go Memory Model (2014). https://golang.org/ref/mem. Accessed 31 May; covering Go version 1.9.1
8. Share Memory by Communicating. The Go Blog (2010). https://blog.golang.org/codelab-share
9. Google: ThreadSanitizer algorithm (2015). https://github.com/google/sanitizers/wiki/ThreadSanitizerAlgorithm
10. Lamport, L.: Time, clocks, and the ordering of events in a distributed system. Commun. ACM 21(7), 558–565 (1978)
11. Lattner, C., Adve, V.: LLVM: a compilation framework for lifelong program analysis & transformation. In: International Symposium on Code Generation and Optimization, CGO 2004, pp. 75–86. IEEE (2004)
12. Merkel, D.: Docker: lightweight Linux containers for consistent development and deployment. Linux J. 2014(239), 2 (2014)
13. Newcombe, C., Rath, T., Zhang, F., Munteanu, B., Brooker, M., Deardeuff, M.: How Amazon Web Services uses formal methods. Commun. ACM 58(4), 66–73 (2015)

15 https://github.com/dfava/paper.go.mm.drd.
16 Because of differences in how vector clocks are allocated and managed, the memory gains reported by the reference data-race detector may be different from TSan’s.
Formal Verification of COLREG-Based Navigation of Maritime Autonomous Systems

Fatima Shokri-Manninen¹, Jüri Vain², and Marina Waldén¹

¹ Åbo Akademi University, Turku, Finland
{fatemeh.shokri,marina.walden}@abo.fi
² Tallinn University of Technology, Tallinn, Estonia
[email protected]
Abstract. Along with the very actively progressing field of autonomous ground and aerial vehicles, the advent of autonomous vessels has brought up new research and technological problems originating from the specifics of marine navigation. Autonomous ships are expected to navigate safely and avoid collisions following COLREG navigation rules. Trustworthy navigation of autonomous ships presumes applying provably correct navigation algorithms and control strategies. We introduce the notion of maritime game as a special case of Stochastic Priced Timed Game and model the autonomous navigation using UPPAAL STRATEGO. Furthermore, we use the refinement technique to develop a game model in a correct-by-construction manner. The navigation strategy is verified and optimized to achieve the goal of safely reaching the manoeuvre target points at a minimum cost. The approach is illustrated with a case study inspired by COLREG Rule 15.

Keywords: Verification · Refinement · Maritime autonomous systems · COLREG rules · Collision avoidance · Navigation · Safety · Optimization · Game theory · UPPAAL STRATEGO

1 Introduction
The demand for unmanned ships has risen, aiming at reducing operation costs due to a minimal crew on board and at improving safety at sea, while also promoting remote work. Autonomous ships are expected to make more and more decisions based on their current situation at sea without direct human supervision. This means that an autonomous ship should be able to detect other vessels and make appropriate adjustments to avoid collision while maintaining maritime traffic rules. However, the existence of a ‘virtual captain’ at the shore control centre (SCC) is still a must to perform critical or difficult operations [2], and there is a need for reconfirmation when inconsistent or corrupted commands are detected by the onboard system.

© Springer Nature Switzerland AG 2020
F. de Boer and A. Cerone (Eds.): SEFM 2020, LNCS 12310, pp. 41–59, 2020. https://doi.org/10.1007/978-3-030-58768-0_3
The connectivity between ships and the SCC has to guarantee sufficient communication for sensor monitoring and remote control [10] when SCC intervention is needed. This connectivity also plays an important role for the safety of operations concerning collision avoidance in remote-controlled scenarios, for transferring the data and receiving information regarding the decision from the SCC. Sub-second reaction time is, however, not critical for safe navigation in the maritime sector, as it takes up to minutes for a ship to change its course upon detection of another ship or an obstacle. In this paper, the goal is to model maritime autonomous systems so that the unmanned ships learn a safe and optimal strategy for navigation pursuing collision avoidance. One of the most critical safety issues in the development of autonomous vehicles and self-driving cars is their poor performance under adverse weather conditions, such as rain and fog, due to sensor failure [15]. However, when modelling the maritime specification, we do not take into account sensor inaccuracies and possible transmission errors, since there are standard sensor redundancy designs and error correction measures applied on modern vessels to ensure that ships notice each other in a timely manner. For safety assurance, a ship is able to communicate with another ship or shore via VHF radio, satellite services, etc. For an unambiguous navigation protocol, the International Maritime Organization (IMO) [11] published navigation rules to be followed by ships and other vessels at sea, called the Convention on the International Regulations for Preventing Collisions at Sea (COLREG). When developing an autonomous ship navigation system, quality assurance via tool-supported model-based control synthesis and verification is of utmost importance. UPPAAL STRATEGO [9] is a branch of the UPPAAL [6] synthesis and verification tool family. It uses machine learning and model checking techniques to synthesize optimal control strategies.
Hence, it is a good candidate control synthesis tool that satisfies the above-mentioned needs. In our research, we aim at adapting formal modelling with UPPAAL STRATEGO for verifying and synthesizing safe navigation of autonomous ships. As an additional contribution, we improve the autonomous ships’ navigation performance regarding safety and security while planning an optimal route and scheduling manoeuvres according to COLREG rules.
2 Related Work
There has been a variety of studies on autonomous ship navigation obeying COLREG rules. Among these, fuzzy logic [17], interval programming [7], and 2D grid maps [22] could be mentioned. However, the previous approaches do not deal with verification for safe navigation. Moreover, the (potentially) nondeterministic behaviour of autonomous ships, communication delays, sensor failure, and weather conditions are not considered in their models.
Recently, in the MAXCMAS project [23], COLREG rules have been implemented in the collision avoidance module (CAM) software, where a collision avoidance decision is generated and action is taken as soon as a collision risk is detected. In spite of their various simulation tools, verification methods are discussed only implicitly. Furthermore, to the best of our knowledge, our work is the first one that synthesizes a safe and optimal navigation strategy that also takes into account some of the weather conditions. There is a fair number of publications on autonomous navigation control synthesis methods and tools that rely on various sets of assumptions, for example continuous, discrete, and hybrid dynamics, as well as piecewise linear and non-linear processes [12,16,18]. The main issue of controller synthesis is the scalability of synthesis methods in the case of complex control objects. Hierarchical control architectures, e.g. in SCADA, address this issue. While low-level control typically has to tackle continuous (often nonlinear) processes, the upper control layers deal with an abstract representation which typically describes hybrid or discrete dynamics. In this work, we model the vessels’ dynamics on a high level of abstraction using a discrete state space and continuous time. Among the tools that are oriented to timed discrete-state models and timed-game-based control synthesis, UPPAAL STRATEGO has proven its relevance in several case studies, where optimal strategies have been generated using Statistical Model Checking (SMC) and machine learning. Examples include, for instance, adaptive cruise control [16], railway systems [13], and autonomous driving systems [4,5]. In [16] the authors synthesize a safe control strategy with the goal of maintaining a safe distance between vehicles, where one of them is considered to be uncontrollable by the given controller.
Railway control systems are modelled as a Stochastic Priced Timed Game in [13] using game theory, where a set of trains is considered as the environment, and the lights, points, and sections in the railway are assumed to be controllable. In [5] the authors also model a railway signalling system with autonomously moving blocks as a Stochastic Priced Timed Game, but in addition they consider stochastic delays in the communication. A safe and optimal driving strategy for the model is synthesised in UPPAAL STRATEGO. In [4], on the other hand, SMC has been used for formally modelling uncertainty in autonomous positioning systems. The safety of the position of a tram is proved with respect to the levels of uncertainty and possible hazards induced by onboard satellite positioning equipment. In our work, we introduce the notion of a maritime game for control synthesis that is based on the navigation specification of the ship, where weather conditions are integrated. We model the navigation problem as a special case of Stochastic Priced Timed Game with the goal of collision avoidance between two ships. Furthermore, we use the refinement technique [3] for a stepwise development of the model, avoiding complexity and ambiguity in the modelling.
F. Shokri-Manninen et al.

3 Case Study and Navigation Specification

3.1 Overview of the Case Study
When modelling navigation manoeuvres of autonomous ships, we focus on standard situations addressed in COLREG. As an example, let us consider a scenario where two ships have intersecting courses, as depicted in Fig. 1. In this example, in spite of the remote monitoring from the SCC, we assume that the ships also have autonomous navigation capability. According to Rule 15 of COLREG [19], when two power-driven vessels have intersecting courses with a risk of collision, the vessel which has the other on her own starboard (right) side shall keep out of the way and avoid crossing ahead of the other vessel. In this case the give-way vessel should adjust its speed and/or course to pass behind the approaching vessel; the adjustment will therefore be made to the starboard side. In the case depicted in Fig. 1, shipB should give way while shipA maintains its direction and speed.
Fig. 1. Autonomous navigation of ships
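The give-way test of Rule 15 can be sketched as a small predicate (illustrative only; the function names and the 112.5° starboard-sector bound, a common approximation of the crossing sector, are our assumptions, not part of the paper's model):

```python
def relative_bearing(own_heading_deg, bearing_to_other_deg):
    """Bearing of the other vessel measured clockwise from own heading, in [0, 360)."""
    return (bearing_to_other_deg - own_heading_deg) % 360.0

def must_give_way(own_heading_deg, bearing_to_other_deg):
    """Rule 15 (crossing): the vessel that has the other on her starboard side
    keeps out of the way. 'On the starboard side' is approximated here as a
    relative bearing in (0, 112.5) degrees."""
    rb = relative_bearing(own_heading_deg, bearing_to_other_deg)
    return 0.0 < rb < 112.5

# shipB heading north (000) sees shipA bearing 045: shipA is on shipB's
# starboard bow, so shipB must give way.
```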
The navigation control of shipB can choose to slow down instead of altering its path to pass shipA. In that case, the expected arrival time might not be as late as when following a redirected route. However, if for some reason shipA is slowing down, then the controller should navigate shipB safely to another route through a sequence of waypoints [1] (see Fig. 1).

3.2 Ship Waypoints (WP) and Path Plan
Formal Verification of Maritime Autonomous Systems

For safe navigation of vessels, we consider a set of waypoints along the route, which define the routing subgoals that the ship has to traverse during the maneuver. When a vessel plans the voyage from its current position to the next waypoint position, a course change may occur. In case of rerouting to a waypoint, a new heading has to be calculated. Figure 2 shows the heading relationship between the ship and a waypoint.
Fig. 2. Heading relation between ship and waypoint
Assume that (X0, Y0) is the initial position of the ship and (Xi, Yi) are the coordinates of the targeted waypoint; the bearing angle ψ of the waypoint from the ship is calculated as follows [1]:

θ′ = atan2(Yi − Y0, Xi − X0)   (1)

ψ = θ′ − θ   (2)
where θ is the heading of the ship and θ′ is the encountering angle of the waypoint from the vertical axis. Here, if the value of ψ becomes negative, then 2π (360°) is added to make it positive. To calculate the position of the ship after altering the course based on the new heading ψ, the following calculations should be performed:

X = X0 + V cos(ψ)
Y = Y0 + V sin(ψ)   (3)
where (X0, Y0) is the initial position of the ship, (X, Y) are the coordinates of the next position of the ship and V is the speed of the ship. In our scenario with two ships, assuming that (XA, YA) are the coordinates of shipA and (XB, YB) are the coordinates of shipB, the distance between the two ships is calculated as the Euclidean distance. A move may require an update of the position of either one or both ships following Eqs. 1, 2 and 3. After the update, the distance between the ships should be re-calculated to evaluate whether the risk of collision still remains.
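Equations 1–3 and the distance check can be sketched in Python as follows (an illustrative rendering; the function names and the radian convention are our choices, not taken from the paper's model):

```python
import math

def waypoint_bearing(x0, y0, xi, yi, theta):
    """Eqs. 1-2: angle theta' of the waypoint and bearing psi relative to
    the current heading theta; all angles in radians."""
    theta_p = math.atan2(yi - y0, xi - x0)
    psi = theta_p - theta
    if psi < 0:
        psi += 2 * math.pi          # keep psi positive, as in the text
    return psi

def next_position(x0, y0, v, psi):
    """Eq. 3: position after one step on the new heading psi at speed v."""
    return x0 + v * math.cos(psi), y0 + v * math.sin(psi)

def distance(pa, pb):
    """Euclidean distance between the two ships, used for the collision check."""
    return math.hypot(pa[0] - pb[0], pa[1] - pb[1])

psi = waypoint_bearing(0.0, 0.0, 3.0, 4.0, theta=0.0)
x, y = next_position(0.0, 0.0, 5.0, psi)     # lands on the waypoint (3, 4)
```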
3.3 Influence of Wind on the Ship Navigation
When the ship moves in the presence of wind, in addition to navigating along the true course (heading) it will also drift as a consequence of the wind; this drift is called leeway [24]. The leeway α is the angle between the heading (TC) and the drift track (CG). Figure 3 shows the leeway angle in the presence of wind.
Fig. 3. Leeway angle (α)
If the wind pressure comes from the port side, it deviates the ship's heading to the right and α is positive ('+'); in case of pressure from starboard, α is negative ('−'). The leeway is calculated as follows [24]:

α = K (VW / VE)² sin Qw   (4)
where VW is the speed of the wind, VE is the speed of the ship, Qw is the relative bearing of the wind and K is the leeway coefficient. After calculating the drift angle based on the wind conditions, the heading of the ship should be corrected periodically.
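Equation 4 can be rendered directly (an illustrative sketch; the sign handling for port/starboard wind and the sample coefficient value are our assumptions):

```python
import math

def leeway_angle(k, v_wind, v_ship, q_wind_deg, wind_from_port):
    """Eq. 4: alpha = K * (VW / VE)^2 * sin(Qw), with the sign chosen by
    the side the wind comes from (positive for port, negative for
    starboard, as in the text)."""
    alpha = k * (v_wind / v_ship) ** 2 * math.sin(math.radians(q_wind_deg))
    return alpha if wind_from_port else -alpha

# Example: K = 0.5, a 10 kn beam wind (Qw = 90 deg) on a 5 kn ship
# gives a drift angle of 0.5 * (10/5)^2 * sin(90 deg) = 2.0.
```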
4 Reinforcement Learning and Game Theory
To analyze the navigation options of autonomous ships and, more importantly, to verify the navigation decisions, especially in combination with other ships, we use game theory as a base formalism for modelling and for optimizing the games. Reinforcement learning (RL) is commonly used for solving Markov decision processes, where an agent interacts with the world and collects rewards [21]. RL is a powerful method for learning an optimal strategy for the agent by trial and error. UPPAAL STRATEGO [9] uses RL and model checking by combining the
Fig. 4. An example of a timed game automaton in the UPPAAL STRATEGO
techniques to synthesize optimal control strategies. Any move (modeled as a transition) which the agent chooses in its current state incurs a cost, e.g. fuel consumption (fc). In Fig. 4, the first time derivative of fc is denoted fc'. In this setting, the priced timed game [8] defines an infinite-state Markov (reward) decision process with the goal of finding a strategy that minimizes the cost of reaching the goal state. Reinforcement learning can be considered in the context of a game, often for one player (agent) or as a two-player game in which the other player is typically the environment. The agent playing against the environment tries to learn a policy that maximizes the agent's reward in the environment. Our ship navigation control model is based on a game G = (S, E), consisting of a set S of states and a set E of directed edges between those states, the state transitions. Since there are two players in the considered game, where the agent (player) plays against the environment (Agent vs. Env), we also have two types of transitions, controllable and uncontrollable. As can be seen from the example in Fig. 4, the driver, as the playing agent, can choose one of the two alternative roads, which are represented as controllable transitions (indicated by solid lines). However, the actual travel time depends on the intensity of traffic (environment) on a particular day, which is reflected in the upper time-bounds in the invariants of locations High road and Low road. Since the outgoing edges from these locations are uncontrollable (indicated by dashed lines), leaving these locations may occur at the latest at these time-bounds. Such a two-player (possibly antagonistic) game is represented by a tuple G = (S, →, ⇝, s0, Bad, Goal) where [13]:
– → ⊆ S × S: set of controllable transitions (Player).
– ⇝ ⊆ S × S: set of uncontrollable transitions (Environment).
– s0 ∈ S: initial state.
– Bad ⊆ S: set of states where the player loses the game.
– Goal ⊆ S: set of states where the player wins the game. We assume that the sets Bad and Goal do not intersect.
In game theory, a run is a finite or infinite sequence of states r = (s0, s1, s2, ...). In the case of a finite sequence of states, the player reaches a terminal state, which can be either a goal state, i.e. a winning state, or a bad state that
corresponds to the winning state of the adversary. We call a run safe if there is no bad state (s ∈ Bad) in the run. The player's strategy is a complete algorithm for playing the game, telling what move the player should make in every possible situation throughout the game. In a state transition system setting, a player decides, depending on its current state (s ∈ S) and strategy σ, which transition to execute next. Formally, a strategy for the player is a mapping σ : S → S such that ∀r ∈ ρ, r.s ∈ S : (r.s, σ(r.s)) ∈ E, where ρ is a set of runs such that from any state s of a run r in ρ, strategy σ chooses the next state reachable by an outgoing edge of s.

A strategy is called safe if, in the run, none of the outgoing transitions taken in a state (s ∈ S) leads to a bad state:

σ_safe = {(si, sj) | si, sj ∈ S ∧ (si, sj) ∈ E ∧ sj ∉ Bad}.

A strategy is feasible if it is a safe strategy and reaches the goal state (s ∈ Goal) in the run:

σ_feasible = {(si, sj) | (si, sj) ∈ σ_safe ∧ (∃s : s ∈ S ∧ s = sj ∧ s ∈ Goal)}.

Similarly we define a feasible run. A run r is feasible if it is finite of length |r|, safe and reaches a goal state (s ∈ Goal):

r_feasible = {r | (∀si : si ∈ r, i ∈ [1, |r| − 1] : si ∉ Bad) ∧ (∃sj : sj ∈ r ∧ j = |r| ∧ sj ∈ Goal)}.

A run is called optimal if it is feasible and reaches the goal state (s ∈ Goal) with a minimum cost:

r_optimal = {r | r ∈ r_feasible ∧ cost(r) ≤ min(ran(cost))},

where the cost function cost : ρ → R assigns a real-valued number to each run. A winning strategy is optimal if it is a safe and feasible strategy and there is a run ending up in a goal state (Goal) with a minimum cost:

σ_optimal = {(si, sj) | (si, sj) ∈ σ_feasible ∧ (∃r : r ∈ r_optimal ∧ si ∈ r ∧ sj ∈ r)}.
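The notions of safe, feasible and optimal runs can be made concrete on a toy transition system (the graph, costs and Dijkstra-style search below are our own illustration, not the maritime model):

```python
import heapq

# A small game graph: states, directed edges with costs, Bad and Goal sets.
EDGES = {
    "s0": [("s1", 1.0), ("s2", 4.0)],
    "s1": [("bad", 1.0), ("s2", 1.0)],
    "s2": [("goal", 1.0)],
}
BAD, GOAL = {"bad"}, {"goal"}

def is_safe(run):
    """A run is safe if it visits no bad state."""
    return all(s not in BAD for s in run)

def is_feasible(run):
    """A run is feasible if it is safe and ends in a goal state."""
    return is_safe(run) and run[-1] in GOAL

def optimal_run(start):
    """Cheapest feasible run: Dijkstra search over the safe part of the graph."""
    queue = [(0.0, [start])]
    while queue:
        cost, run = heapq.heappop(queue)
        if run[-1] in GOAL:
            return cost, run
        for nxt, c in EDGES.get(run[-1], []):
            if nxt not in BAD:           # a safe strategy never enters Bad
                heapq.heappush(queue, (cost + c, run + [nxt]))
    return None

# optimal_run("s0") -> (3.0, ["s0", "s1", "s2", "goal"])
```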
5 Maritime Game
To formally model the navigation problem, we formalise it as the game tuple GM = (S, →, ⇝, s0, Bad, Goal) where:

– S is the set of states of the ships. To guarantee the decidability of GM, we consider S to be a finite set. In principle, S consists of as many partitions as there are ships involved in the game. For the two ships in this paper we consider S = SA × SB, where
SA denotes the state component of shipA and SB that of shipB. Both SA and SB are composed of the same set of variables, and the state vector of a ship has the structure ⟨P, WP, H, HS, Vel⟩,
where the state variables and their domains are defined as follows:
• P ∈ {(x, y) | x ∈ X ∧ y ∈ Y}: Position of the ship in the 2D coordinate system. The position is updated periodically based on the heading and speed of the ship.
• WP ∈ {(xwp0, ywp0), (xwp1, ywp1), ..., (xwpn, ywpn)}: Ordered set of waypoints of the ship in the coordinate system. The next waypoint for a ship may change in case the ship changes course.
• H ∈ [0, 360]: Heading of the ship in the degree interval. The heading of the vessel is updated in case of rerouting.
• HS ∈ {initial, deviated}: Heading Status of the ship. Before changing course, the value of the heading status is initial. As soon as the heading is updated due to navigation to a waypoint, it is set to deviated. If the heading of the ship is deviated, the ship needs to perform a rerouting to go back to the initial path and original heading.
• Vel ∈ [0, 10]: Velocity of the vessel in the normalized integer interval. During navigation at sea, vessels can accelerate, decelerate or continue their voyage at the same speed. The discrete interval [0, 10] is a reasonable approximation for the speed, since we mainly consider big cargo ships with a cruising speed of about 12 knots. Moreover, the approximated speed has always been rounded up to the worst case.

In addition to the above state variables, we introduce new variables used in the model refinement:
◦ (Refinement) WW ∈ BOOL: Windy Weather condition as a boolean. If WW has the value true, the weather is windy. In the refinement step, we introduce this environmental condition, which affects the vessel's course: we consider that the heading of the ship drifts due to the wind pressure.
◦ (Refinement) WS ∈ VW × Qw: Wind Specification of the vessels. For the calculation of the drift angle by Eq. 4 when wind pressure is present, we need to know the wind speed (VW) and the angle of the wind (Qw) relative to the vessel.
Thus, at each time instant the state variables of the ships' state vectors acquire one value from their domain, as determined by the transition relations → and ⇝. We use the same navigation specifications for both ships in the game, with the difference that shipA does not contain the controllable navigation variables WP and HS that shipB has, nor the weather features WW and WS that affect the vessel. This is because we are interested in capturing the behaviour of shipB as an agent under different circumstances.

– →, ⇝: The transition relations are defined as → ⊆ S × G(V) × Act × S and ⇝ ⊆ S × G(V) × Pr × Act × S, where
• G(V) is the set of constraints in guards, where V denotes the set of integer and boolean variables.
• Act is a sequence of assignment actions with integer and boolean expressions. According to the player's preference, one of the enabled transitions is chosen in the current state. We define four functions that capture the effect of player actions:
◦ Update_Heading : (P, WP) → H is a function that calculates a new heading angle H from the ship's current position P and its next waypoint WP.
◦ Update_Speed : (Vel, [−2, 2]) → Vel is a function that assigns a new velocity to the vessel based on the current velocity Vel and a value in the interval [−2, 2]. This interval indicates an acceleration/deceleration of 0–2 speed units.
◦ Update_Position : (H, Vel, P) → P is a function that gives a new position for the ship given its previous position P, heading H and velocity Vel.
◦ (Refinement) Leeway_angle : (WS, Vel) → H is a function that assigns a calculated drift angle to the ship's heading H, depending on the wind specification WS and velocity Vel.
• Pr denotes the set of integer-valued probabilities.

In this two-player game, we model shipB with its controller as the player. As a consequence, all transitions for this player are considered to be controllable. However, we regard shipA as the Environment for shipB, with the same functionality except that shipA has some stochastic transitions with probability (Pr) in addition to normal transitions.
– s0 ∈ sB0 × sA0: the initial state, initialized with random speed and the initial positions and headings of the ships. For analyzing concrete incidents these initial values could be based on values from existing datasets.
– Bad: the state where the two ships reach the collision zone (see Fig. 1).
– Goal: the state that is reached when the two ships have passed each other within a safe distance.
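The state vector and the player's update functions can be sketched as follows (an illustrative Python rendering; the field names follow the state variables above, while the concrete step logic and clamping are our assumptions):

```python
import math
from dataclasses import dataclass

@dataclass
class ShipState:
    p: tuple          # position (x, y)
    wp: tuple         # next waypoint (x, y)
    h: float          # heading in [0, 360) degrees
    hs: str           # "initial" or "deviated"
    vel: int          # velocity in the normalized interval [0, 10]

def update_heading(s: ShipState) -> None:
    """Update_Heading: new heading from the current position and next waypoint."""
    s.h = math.degrees(math.atan2(s.wp[1] - s.p[1], s.wp[0] - s.p[0])) % 360.0
    s.hs = "deviated"

def update_speed(s: ShipState, delta: int) -> None:
    """Update_Speed: accelerate/decelerate by delta in [-2, 2], clamped to [0, 10]."""
    assert -2 <= delta <= 2
    s.vel = max(0, min(10, s.vel + delta))

def update_position(s: ShipState) -> None:
    """Update_Position: one discrete step along the current heading."""
    rad = math.radians(s.h)
    s.p = (s.p[0] + s.vel * math.cos(rad), s.p[1] + s.vel * math.sin(rad))
```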
6 Model Development in UPPAAL STRATEGO
We model the navigation problem1 as a Stochastic Priced Timed Game using the tool UPPAAL STRATEGO, where the controller of shipB should dynamically plan its maneuver while the opponent (shipA), moving according to its preset trajectory, forces shipB to change its route. In this game, we define the fuel consumption (fc) as the price to be minimized under the safe strategy. A change in the velocity of the ship is directly related to fc: the consumption of fuel increases if the ship slows down and speeds up again rather than changing its route, causing the price to increase. The goal is that the ships move to their target positions in a safe way (without the risk of a collision) while at the same time optimizing the fuel consumption.

1 The game model is found at: https://github.com/fshokri/Game-model.
To avoid ambiguity, we use refinement [3], which enables the system to be created in a stepwise manner, gradually adding details to the model and proving that each refinement step preserves the correctness of the previous steps. Models can be refined either via superposition refinement [14,20], where new features are added to the automaton, or by data refinement, where abstract features are replaced by more concrete ones. The current refinement process of the model consists of one abstract model and one refinement step using both superposition and data refinement. The abstract model presents the general view of safe navigation with given waypoints. In the refinement step, we introduce the weather conditions windy and clear for the ship navigation. We assume that strong wind from shipB's starboard increases fuel consumption when turning right. The two-step timed automata model introduced here should be seen as a modeling step towards an implementation.

6.1 An Abstract Model of Autonomous Ships
For synchronizing the state transitions of the two ships, shipB and shipA, the scheduler template is created with two channels for each ship: ShipBgo! and ShipAgo! as well as Update_B! and Update_A! (see Fig. 5). The first channel enables each ship to move, while the second one updates its position after the moving action. We define two functions in this automaton, namely initialize() and UpdateDistance(). The former initializes the two ships with their initial headings and positions in the coordinate system. The latter calculates the distance between the two ships after a movement. If the distance becomes smaller than the defined safe distance, the ships collide and the game is over. For restarting the game, we add one transition from state End to Init. In this template, we also add exponential rates, since the process may otherwise be delayed an arbitrarily long time in the worst case. The user-supplied rate (here 5) has been chosen as an expert estimate for unbounded delays, but it can be tuned according to a particular situation.
Fig. 5. The scheduler in UPPAAL STRATEGO
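The scheduler's round structure can be mimicked by a plain loop (a sketch; the channel names in the comments follow Fig. 5, while the move functions and the SAFE_DIST value are our illustrative assumptions):

```python
import math

SAFE_DIST = 5.0

def dist(pa, pb):
    return math.hypot(pa[0] - pb[0], pa[1] - pb[1])

def play_game(pos_b, pos_a, move_b, move_a, max_rounds=100):
    """Alternate ShipBgo!/ShipAgo! and Update_B!/Update_A!, then re-check
    the distance (UpdateDistance); a violation of the safe distance ends
    the game, mirroring the End state of the scheduler."""
    for _ in range(max_rounds):
        pos_b = move_b(pos_b)        # ShipBgo! then Update_B!
        pos_a = move_a(pos_a)        # ShipAgo! then Update_A!
        if dist(pos_b, pos_a) < SAFE_DIST:
            return "collision"
    return "safe"

# Two ships on parallel courses 10 units apart never violate SAFE_DIST:
# play_game((0, 0), (0, 10), lambda p: (p[0] + 1, p[1]),
#           lambda p: (p[0] + 1, p[1]))  ->  "safe"
```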
In the abstract model, the behaviour of shipB is depicted in Fig. 6, where the model is divided into ship (upper) and controller (lower) automata templates. In the shipB template, we model the different states of a ship that are reachable from the initial state (Moving), while in the controller we only consider events for giving permissions to take actions. Since shipB is the player in this game, all transitions for shipB and its controller are considered to be controllable.
Fig. 6. The abstract model of ShipB (upper) and ShipB controller (lower)
In Fig. 6, shipB starts its move in state Moving if it gets permission via ShipBgo?. If shipB already did an adjustment after detection, it periodically updates its position via the self-loop edge of state Moving, which is synchronized with the scheduler via channel Update_B? (see Fig. 5). In state Adjustment, shipB sends a request to the controller about detection (detection!) (see Fig. 6). If the controller detects a ship by its sensor (distance (

      Is_Init (C) => Sys_Init (S) and (not Is_Init (C)),
      others => Sys_Trans (C'Old, tick, S) and (not Is_Init (C)));

private

   function State_To_Input_Mapping (C : Controller) return Boolean with Ghost;
   function State_To_Output_Mapping (C : Controller) return Boolean with Ghost;

   subtype State_Num is Integer range 1 .. 7;

   type Controller is record
      State : State_Num := State_Num'Last;
      tick  : Boolean;
      S     : System;
   end record with
      Type_Invariant => (State_To_Input_Mapping (Controller) and
                         State_To_Output_Mapping (Controller));

end Traffic_Light;

Fig. 1. SPARK specification for a traffic light controller.
cannot arbitrarily manipulate its state. In the private part of the specification, Controller (lines 33–37) is a record that stores the controller’s internal state number, current input value(s), and current output value(s). The internal state number is always initialized to the last possible state number, i.e. the largest value for State_Num; this “controller initialization” state encodes the status of the controller before any inputs are received, and the only transition(s) out of this state are those allowed by ϕei . In this example, each input and output is of type Boolean. (Salty also allows for enumerations and integers, to be discussed later.) When there are multiple inputs or outputs, they are wrapped in a record of type Environment (not used here) or System (lines 4–6), respectively. There is only one input in this example, so it is not wrapped in a record, which is why there is no Environment record in Fig. 1. Since Controller is a Moore machine, for each internal state, there is exactly one set of output values produced in that state. Furthermore, recall that controllers synthesized from GR(1) specifications have the additional property that there is exactly one set of input values that brings the controller into each state. Controller therefore includes a type invariant (lines 36–37) that captures this property, where functions
L. R. Humphrey et al.
State_To_Input_Mapping and State_To_Output_Mapping evaluate whether the controller's state corresponds to the expected input and output value(s), respectively. Both of these functions are declared (lines 26–29) with aspect Ghost, indicating that they are intended mainly for proof purposes, i.e. they will create verification conditions related to type Controller but will not be executed in the actual program, unless it is specified during compilation that they should be. Executing ghost code is often used for debugging, e.g. to check an unproved property through testing. In this case, removing the ghost code can save significant memory, since State_To_Input_Mapping and State_To_Output_Mapping internally encode large lookup tables that have to store information on every state of the controller, which is often thousands or even millions of states for the largest example in our database. The logic for these functions is given in the body (not shown), but for instance when C.State = 1, State_To_Input_Mapping(C) returns True if and only if C.tick = False, since tick being false brings the system into state 1; and State_To_Output_Mapping(C) returns True if and only if C.S = System'(red => True, yellow => False, green => False), since red is true and yellow and green are false in state 1. The public function Is_Init (line 8) checks whether the controller is in its initialization state, i.e. no inputs have yet been received. The public function Env_Init (line 9) checks whether input(s) satisfy ϕie. It is implemented as an expression function, i.e. the implementation is given directly in the specification, because all terms needed to define it are visible in the public part of the specification. In this example, since ϕie = true in the Salty specification, this automatically generated function always simply returns True. The public function Sys_Init (lines 10–11) checks whether output(s) satisfy ϕis. It is implemented as an expression function for the same reason.
From the Salty specification, ϕsi = red ∧ ¬yellow ∧ ¬green, so this function returns the value of the expression S.red and not S.yellow and not S.green. But unlike Is_Init and Env_Init, it is marked with aspect Ghost because it is mainly used for proof and does not need to be executed. Is_Init and Env_Init are used for proof but are also callable in functional code. We chose to make these functions non-ghost functions for reasons related to the meaning of GR(1) specifications. Recall that GR(1) specifications have the form ϕe → ϕs . If ϕe ever becomes false, i.e. if the environment produces input value(s) that violate ϕe , then the specification as a whole is satisfied regardless of whether the system produces output value(s) that satisfy ϕs . In theory, the system could then produce arbitrary outputs and still satisfy the overall specification. In practice, we believe a user would generally want to know that the environment violated its specification, so that the user could either choose the system output value(s) explicitly or fall back to some other routine. Therefore, a user needs to be able to check inputs with Env_Init if Is_Init returns true, which is why both are callable. At the moment, they are not used in the code of the Move procedure. Public functions Env_Trans and Sys_Trans (lines 13–16) check whether the next set of input value(s) and output value(s) satisfy ϕte and ϕts , respectively. For the same reasons as above, Sys_Trans is a ghost function but Env_Trans is not. Note that Env_Trans has
End-to-End Verification of GR(1) Designs in SPARK
a precondition that the controller must not be in its initialization state, since ϕte can depend on both the current and next set of input value(s). This precondition is similarly necessary for Sys_Trans, since it can depend on both current and next input and output value(s). As with Env_Init and Sys_Init, the logic for these functions is synthesized from the Salty specification and implemented in the body (not shown), since they make use of input and output values stored in the Controller, whose fields are private. The public procedure Move (lines 18–23) transitions the Controller based on its current internally stored values (i.e. state number and most recent input and output values) and next set of input value(s), and it produces the next set of output value(s). It has a precondition that if Controller is in its initialization state (i.e. it has not yet received any inputs), inputs must satisfy ϕie ; otherwise they must satisfy ϕte . The aspect Contract_Cases specifies additional sets of preconditions paired with postconditions, where the set of all preconditions must be mutually exclusive and exhaustively cover the entire input space. The others keyword can be used to cover the set of all input conditions not covered in any explicit cases of the contract. Note that for the left-hand side of each case (i.e. left of =>), variable names refer to values before evaluation of the subprogram; for the right-hand side, they refer to values after evaluation. Therefore, the aspect Old can be used on the right to reference the value of a variable before evaluation of the subprogram. Combined with the previous precondition, Contract_Cases asserts that if the controller is in its initialization state, then after execution of Move, the first set of output value(s) produced should satisfy ϕis and the controller should no longer be in its initialization state. 
If the controller is not in its initialization state, then the output value(s) produced should satisfy ϕts , which is evaluated based on the most recent input and output values stored in C’Old, the next input value(s) just provided (in this case stored in tick), and the next output values just generated (in this case stored in the record System). This set of contract cases embodies our main proof goal, i.e. verification of system initial and transition properties from the original GR(1) specification. A fragment of the body of Move is shown in Fig. 2. Note that there are cases that can lead to Program_Error. This is because case statements require all possible cases to be covered, so we programmatically use others to cover all possible input combinations that would not be allowed due to ϕie or ϕte . In the traffic light example, these are unnecessary because all possible combinations of input values are allowed out of each state. In any case, SPARK will prove that such cases are not reachable if the preconditions of Move are met, i.e. if the environment satisfies its specification. We briefly note that Salty includes language features that can result in different constructs being used to represent inputs and outputs in synthesized controllers, including enumerations and integers. For instance, enumerations encode that an enumerated input or output can have exactly one of a set of values at a time, as in the traffic light being exactly one color as expressed in ϕis and part (1) of ϕts . In such cases, enumerations or integers can make both specifications and code more compact and easier to read and understand. As a technical aside,
procedure Move (C : in out Controller; tick : in Boolean; S : out System) is
begin
   case C.State is
      when 1 =>
         case tick is
            when False =>
               C.State := 1;
               C.S.red := True;
               C.S.yellow := False;
               C.S.green := False;
            when True =>
               C.State := 3;
               C.S.red := False;
               C.S.yellow := False;
               C.S.green := True;
            when others =>
               raise Program_Error;
         end case;
      ...
      when 7 =>
         case tick is
            when False =>
               C.State := 1;
               C.S.red := True;
               C.S.yellow := False;
               C.S.green := False;
            when True =>
               C.State := 2;
               C.S.red := True;
               C.S.yellow := False;
               C.S.green := False;
            when others =>
               raise Program_Error;
         end case;
   end case;
   C.tick := tick;
   S := C.S;
end Move;
Fig. 2. The body of the Move procedure.
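The shape of Move, a finite lookup from (state, input) to (next state, outputs), can be mirrored outside SPARK, e.g. in Python. This is an illustrative sketch: only the two states visible in Fig. 2 are encoded, and the dictionary-based representation is ours, not Salty's:

```python
# (state, tick) -> (next_state, (red, yellow, green)); only the
# transitions shown in Fig. 2 are included here.
TRANSITIONS = {
    (1, False): (1, (True, False, False)),
    (1, True):  (3, (False, False, True)),
    (7, False): (1, (True, False, False)),
    (7, True):  (2, (True, False, False)),
}

def move(state, tick):
    """Mirror of the Move procedure: unknown (state, input) pairs are
    rejected, like the 'raise Program_Error' alternatives in SPARK."""
    try:
        return TRANSITIONS[(state, tick)]
    except KeyError:
        raise RuntimeError("input not allowed by the environment assumptions")

# move(7, True) -> (2, (True, False, False)): from state 7 a tick moves
# the controller to state 2 with the light red.
```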
during synthesis, enumerations and integers are translated to a bit vector representation along with additional initial and transition specifications that encode properties inherent to these types, such as values being mutually exclusive and rules for addition and subtraction over integers. Once synthesis is complete, values are translated back to their original enumeration or integer representation.
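The general idea of encoding an enumeration as a bit vector and decoding it back can be illustrated as follows (our illustration only; Salty's actual encoding and the generated mutual-exclusion constraints are internal to the tool):

```python
import math

COLORS = ["red", "yellow", "green"]

def encode(values):
    """Map each enumeration value to a fixed-width bit vector."""
    width = max(1, math.ceil(math.log2(len(values))))
    table = {v: tuple((i >> b) & 1 for b in range(width))
             for i, v in enumerate(values)}
    return table, width

ENC, WIDTH = encode(COLORS)          # 3 values fit in 2 bits

def decode(bits):
    """Translate a bit vector back to its enumeration value after synthesis."""
    i = sum(b << k for k, b in enumerate(bits))
    return COLORS[i]

# Unused bit patterns (here, 0b11) must be excluded by extra constraints,
# which is part of what the added initial/transition specifications encode.
```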
3 Example Controller for a Team of UAVs
In order to evaluate the feasibility and utility of our approach, we collected GR(1) specifications from a variety of sources. This includes the Salty repository, which contains examples of GR(1) specifications for control of teams of unmanned air vehicles (UAVs) performing different missions. One of these encodes the rules for a “VIP Escort” mission, in which one UAV is designated as a “very important person” (VIP) that must always be “escorted” by a friendly surveillance UAV when it moves, and it must also be protected from an “enemy” UAV. The VIP and surveillance UAVs are controlled by the system, while the enemy UAV is controlled by the environment. The mission map contains regions that the UAVs can move between. “Escorting” consists of ensuring that 1) the VIP only enters regions previously visited by a surveillance UAV and 2) whenever the VIP changes regions, a surveillance UAV “tracks” it, i.e. moves with it between
regions. “Protection” consists of ensuring that the VIP is never in the same region as the enemy UAV. Additional rules include constraints and liveness requirements on how the UAVs can move between regions and which regions they start in.
Fig. 3. A VIP escort multi-UAV mission as simulated in OpenAMASE using a Salty-synthesized controller that issues commands to the UAVs through OpenUxAS.
A particular instantiation of this mission is shown in Fig. 3, as depicted in the open source Aerospace Multi-Agent Simulation Environment (OpenAMASE)2. It includes the VIP, two surveillance UAVs numbered 1 and 2, one enemy UAV, and five regions numbered 1 to 5. Environment inputs include the integer variable loce ∈ {1...5}, which encodes the current region of the enemy UAV, and Boolean variables sri for i = {1...5}, where sri is true if and only if region i has been visited at some point by a surveillance UAV. System outputs include integer variables locv, locs1, locs2 ∈ {1...5}, which indicate the current regions of the VIP and surveillance UAVs 1 and 2, and Boolean variables vTrack1 and vTrack2, which indicate whether surveillance UAVs 1 and 2 are executing a behavior to follow the VIP. Note that low-level control, e.g. waypoint planning and sensor steering, is implemented by the open source Unmanned Systems Autonomy Services (OpenUxAS)3. The controller synthesized by Salty implements the high-level decision logic, and OpenUxAS monitors the state of the controller and translates its current output values to a set of UAV commands that are simulated in OpenAMASE. For example, when locv changes value to 2, OpenUxAS commands the VIP to follow a path from its current region to region 2, or when vTrack1 changes from false to true, OpenUxAS commands surveillance UAV 1 to fly next to the VIP. The high-level controller synthesized

2 https://github.com/afrl-rq/OpenAMASE.
3 https://github.com/afrl-rq/OpenUxAS.
by Salty makes some assumptions about the low-level behaviors implemented by OpenUxAS, in this case, mainly that all UAVs move at the same speed and transition to new regions at the same time. A workflow for connecting Salty-implemented controllers with OpenUxAS and OpenAMASE is described in [7], and all of the scripts, configuration files, etc. needed to run this example are available in the Salty repository. For this mission, the GR(1) specifications for the environment are:

ϕie = (loce = 4) ∧ ¬sr1 ∧ ¬sr2 ∧ sr3 ∧ ¬sr4 ∧ sr5   (4)

ϕte = ⋀i∈{1...5} (((locs1 = i) ∨ (locs2 = i)) → sr′i) ∧   (5)
      ⋀i∈{1...5} ((¬(locs1 = i) ∧ ¬(locs2 = i) ∧ ¬sri) → ¬sr′i) ∧   (6)
      ⋀i∈{1...5} (sri → sr′i) ∧   (7)
      ¬(loc′e = 1) ∧ ¬(loc′e = 2)   (8)

ϕle = □♦¬(loce = 3) ∧ □♦¬(loce = 4) ∧ □♦¬(loce = 5).   (9)
These express that (4) the enemy UAV starts in region 4, and regions 3 and 5 start as surveilled; (5) a region is considered to be surveilled after either one of the surveillance UAVs is in it; (6) a previously unsurveilled region remains unsurveilled if neither surveillance UAV is in it; (7) once a region is surveilled, it remains surveilled; (8) the enemy UAV cannot go to regions 1 or 2; and (9) the enemy UAV must infinitely often not be in each of regions 3, 4, and 5. GR(1) specifications for the system are:

ϕ_s^i = (loc_v = 2) ∧ (loc_s1 = 3) ∧ (loc_s2 = 5) ∧ ¬vTrack1 ∧ ¬vTrack2   (10)

ϕ_s^t = □(¬(○loc_v = loc_v) → (vTrack1 ∨ vTrack2)) ∧   (11)
  ⋀_{i∈{1,2}} □(vTrack_i → (loc_si = loc_v)) ∧   (12)
  ⋀_{i∈{1…5}} □((loc_v = i) → ¬(loc_e = i)) ∧   (13)
  ⋀_{i∈{v,s1,s2}} □((loc_i = 1) → ○(loc_i = 1) ∨ ○(loc_i = 2) ∨ ○(loc_i = 3)) ∧   (14)
  ⋀_{i∈{v,s1,s2}} □((loc_i = 2) → ○(loc_i = 1) ∨ ○(loc_i = 2) ∨ ○(loc_i = 3)) ∧   (15)
  ⋀_{i∈{v,s1,s2}} □((loc_i = 3) → ⋁_{j∈{1…5}} ○(loc_i = j)) ∧   (16)
End-to-End Verification of GR(1) Designs in SPARK
71
  ⋀_{i∈{v,s1,s2}} □((loc_i = 4) → ○(loc_i = 3) ∨ ○(loc_i = 4) ∨ ○(loc_i = 5)) ∧   (17)
  ⋀_{i∈{v,s1,s2}} □((loc_i = 5) → ○(loc_i = 3) ∨ ○(loc_i = 4) ∨ ○(loc_i = 5))   (18)

ϕ_s^l = □♦(loc_v = 1) ∧ □♦(loc_v = 5).   (19)
These express that (10) the VIP starts in region 2, surveillance UAV 1 in region 3, and surveillance UAV 2 in region 5, with neither surveillance UAV tracking the VIP; (11) the VIP does not change regions unless a surveillance UAV is tracking it; (12) a surveillance UAV can only track the VIP if they are in the same region at the same time; (13) the VIP cannot be in the same region as the enemy UAV at the same time; (14) and (15) the VIP and surveillance UAVs can move from regions 1 or 2 to regions 1, 2, or 3; (16) the VIP and surveillance UAVs can move from region 3 to any other region; (17) and (18) the VIP and surveillance UAVs can move from regions 4 or 5 to regions 3, 4, or 5; and (19) the VIP must go to regions 1 and 5 infinitely often. We have chosen to describe this particular example in detail because during the process of extending Salty to produce SPARK implementations, we discovered a previously undetected problem with this example’s specification. As written, the specification is realizable and produces what appears at a glance to be a reasonable controller with 97 states. In fact, we had previously run this example with OpenAMASE, OpenUxAS, and a Python controller synthesized by Salty. However, we did not originally notice in the Python implementation that 34 of the controller’s 97 states do not have successors. Since no special logic was added to Salty to handle this situation, the generated SPARK code included empty case statement alternatives in the Move procedure. For example, in state 2 the case statement alternative is simply when 2 =>, with no statements in the body. This code failed to compile, since SPARK does not allow for fall-through behavior in case statements (nor does Ada); explicit statements are expected for each case statement alternative. 
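To make the failure mode concrete, here is a minimal Python sketch of a map-encoded transition relation like the one described for the Python backend. The state numbers, guard encoding, and `move` helper are all invented for illustration; the point is that a state whose successor list is empty raises no flag until it is actually reached at runtime:

```python
# Hypothetical sketch (not the actual Salty output) of a map-encoded
# GR(1) controller: each state maps to a list of
# (environment input valuation, next state) pairs. State 2 has an
# empty successor list; nothing detects this statically.
transitions = {
    1: [({"loc_e": 4}, 2)],
    2: [],  # no successors: a latent runtime error
}

def move(state, env_input):
    for guard, nxt in transitions[state]:
        if all(env_input.get(k) == v for k, v in guard.items()):
            return nxt
    # Reached when the successor list is empty or no guard matches:
    raise RuntimeError(f"state {state} has no successor for {env_input}")

assert move(1, {"loc_e": 4}) == 2  # a normal step succeeds
```

By contrast, the SPARK encoding made the same states syntactically visible as empty case alternatives, which the compiler rejects.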
In Python, this error went undetected; the transition relation for the controller is encoded as a map, and entries corresponding to states without successors simply had an empty list of “next state” values. We briefly note that if we had encoded the controller using a map in SPARK, the error would still have been detected through SPARK analysis rather than a syntactic check of the code; the case statement encoding was chosen for efficiency reasons to be discussed in Sect. 5. This error is the result of a subtlety of the semantics of GR(1) specifications. Recall that GR(1) specifications take the form ϕe → ϕs . Obviously a specification of this form is satisfied if ϕe and ϕs are both true, but it is also satisfied if ϕe is false. Also recall from the introduction that GR(1) specifications are interpreted in the context of a two-player game in which the environment takes its turn first and the system takes its turn second. The issue here is that the environment is able to take transitions that will necessarily cause it to violate ϕte in the next time step. Note that ϕte contains terms of the form ¬p, specifically
¬(loc_e = i) for i ∈ {1, 2} (8). Note that a term of the form ¬p is not the same as a term of the form ¬○p. The distinction is important. If the environment chooses p for the "next" time step, this does not violate ¬p in the "current" time step. However, once the next state is reached, p becomes the new "current" value, and ¬p will now be violated no matter what the environment chooses. Generally, specifications should follow the latter form ¬○p, which prohibits the environment from choosing p in the "next" time step. This was indeed an error in these specifications, so we changed ¬(loc_e = 1) ∧ ¬(loc_e = 2) to ¬○(loc_e = 1) ∧ ¬○(loc_e = 2) in (8). However, we also modified Salty to raise Program_Error in cases with no successors and checked that we were still able to prove ϕe → ϕs (since reaching these cases would require violating the precondition ϕe). Such cases amount to instances in which the precondition on Move would have to be violated, which is why we allow the user to execute Env_Trans as discussed in Sect. 2. We also plan to have Salty issue a warning when there are states with no successors, since such cases are likely unintended.
4 Results
To further evaluate the utility and feasibility of our approach, we pulled additional example GR(1) specifications from a variety of sources including Anzu, LTLMoP, TuLiP, Slugs, and Salty, all of which make their examples publicly available for download. GR(1) specifications in Salty format, synthesized SPARK packages, and SPARK analysis results for all of these examples are available on the Salty GitHub repository, including the traffic light example of the previous section. We note that while some examples are small and simple, e.g. demos along the lines of the traffic light example, there are many in our collection that are more realistic. For instance, Anzu has controller specifications for a generalized IBM buffer and an AMBA bus. LTLMoP and TuLiP have specifications for robot controllers that have been demonstrated on simulated and/or real robots. And Salty has specifications for controllers to coordinate the actions of teams of vehicles that have been demonstrated in simulation. Figure 4 shows the amount of time needed to analyze examples as a function of total number of transitions in the Moore machine representing the controller, with examples that could not be analyzed due to memory errors set to 1. Results were generated on a Linux VM given 24 GB RAM and 4 processors on a MacBook Pro with a 2.9 GHz Intel Core i9 with 32 GB RAM. We ran 40 examples in total. Results for 33 examples are plotted. On most examples with less than 4000 transitions, SPARK was able to completely verify/prove the synthesized code complies with its specification. Examples that had more than 4000 transitions (the 7 unplotted examples) required too much memory to analyze, resulting in errors when attempting to verify them in SPARK. Of examples with less than 4000 transitions, two resulted in errors. 
These two examples had abnormally large specifications consisting of approximately 1000 atomic propositions each, whereas most other examples with a similar number of transitions had 500 or less. Such cases occur, e.g. when systems include a
[Figure: log–log scatter plot of Time to Prove (s) versus Number of Transitions; series: Proven, Partial, Error.]
Fig. 4. Timing results for example SPARK controllers as a function of number of transitions in the controller. “Proven” examples were fully verified, “partial” examples were partially verified, and “error” examples were too big to analyze.
large number of inputs and/or outputs but have very tight specifications on how they can transition, leading to large specifications and therefore lengthy pre- and postconditions but relatively small controllers. Two examples with less than 4000 transitions could only be partially proven. One was relatively large, with about 2000 transitions. The other had arithmetic terms in its specification (Salty and Slugs support integer inputs and outputs); we are investigating why this example does not fully prove, since we feel that SPARK should be capable of fully proving this example.
5 Lessons Learned
Throughout the process of synthesizing and attempting to verify controllers in SPARK, we learned several lessons, both about SPARK and about some of the finer points of GR(1) specifications. In terms of encoding SPARK controllers for verification, we originally tried to mirror the approach taken in other Salty language targets by building a static lookup table for state transitions. To do this, we tried to create an array of Formal_Hashed_Maps indexed by State_Num, where keys were derived from environment input values and used to look up the next State_Num and corresponding system output values. Ghost functions consisting of nested quantified expressions were used to check that in each state, specification properties held using input and output values encoded by the current state and all states reachable as contained in the hashed maps. These functions comprised the postcondition of a function that initialized the controller’s lookup table. The body of the Move procedure simply retrieved the outputs and next state from the lookup table using its stored State_Num and Environment input. The public portion of the SPARK specification was largely unchanged. This approach was only able to prove the smallest of examples in a reasonable amount of time. While the use of formal containers was intuitive, they are more complex to reason about in terms of proof because they require reasoning about models of
the containers. Encoding the lookup table as a case statement is more straightforward, because for instance, it is “obvious” to the underlying solvers that state transitions are static and that a transition exists for every possible input, since case statements must be exhaustive. Encoding the Move procedure as a case statement still has some issues, mainly that (1) the generated code can be quite long, leading to memory errors when trying to prove the subprogram with SPARK, and (2) since the solvers prove all case statement alternatives at the same time and the number of case statement alternatives grows exponentially with the number of inputs, sometimes the solvers are not able to prove the postcondition. A solution to both problems could be to split the Move procedure into several smaller procedures. This would allow SPARK to apply modular analysis on several smaller procedures, thus enabling the proof on larger files. We are currently investigating ways to split up the procedure that does not accidentally create more difficulties for the underlying solvers. The process of encoding and analyzing controllers in SPARK did reveal some unexpected behaviors. First, as discussed in Sect. 3, there were two example controllers4 with specifications that resulted in states with no successors. As a result, these controllers contained empty case statements in the Move procedure. We had previously tested the Salty-synthesized controller in Python for the example in Sect. 3 and had not noticed the error, though it would have resulted in an unhandled runtime exception if one of the states without successors had been reached in the Python implementation. Second, a meta-analysis of SPARK timing results also revealed that other examples in our database did not have any inputs, i.e. they amounted to synthesizing a system independent of an environment. 
In those cases, we had specifications for a non-existent environment that were vacuous, and this caused SPARK to take an abnormally long amount of time to verify these controllers, given their relatively small size. These controllers did not have errors per se, but they were inefficiently encoded. We plan to modify Salty to handle such cases by removing the environment, functions over the environment, and all references to the environment in all pre- and postconditions. This greatly decreases verification time and also reduces the size and increases the efficiency of the code.
6 Conclusions
We were able to successfully use SPARK to verify safety and transition properties of moderately sized controllers synthesized by Salty from GR(1) specifications. Encoding the controllers and all of the annotations necessary for these controllers to prove automatically was relatively straightforward, and it was satisfying to be able to generate proofs using a single tool rather than having to use multiple tools to perform verification. Furthermore, the act of performing "end-to-end" verification with SPARK on such controllers was valuable because (1) it revealed a type of specification error in some examples that would result in runtime errors

⁴ Salty's vip_orig.salt and Anzu's arbiter.salt.
in other Salty target languages, and (2) it revealed cases in which controllers were inefficiently encoded, i.e. when there is no environment. In terms of future work, we can potentially improve the scalability of our approach by decomposing the Move procedure into several subprocedures, as discussed in the previous section. We are also interested in expressing and proving liveness properties. Liveness properties will be more challenging to verify because they necessarily require reasoning about future states beyond the “next” state. Verifying system liveness in SPARK will require something like encoding a lookahead buffer and showing that certain states will inevitably be reached when the environment satisfies its specification, which can itself also include liveness terms. This is likely to result in complex first-order formulas with alternating quantification over time, which are notoriously hard to handle in automated solvers, so discharging the resulting proof obligations may prove to be a challenge. To tackle this issue, collaboration with a model checker performing verification at the level of the input language might be more appropriate.
References

1. Alur, R., Moarref, S., Topcu, U.: Compositional synthesis of reactive controllers for multi-agent systems. In: Chaudhuri, S., Farzan, A. (eds.) CAV 2016. LNCS, vol. 9780, pp. 251–269. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-41540-6_14
2. Apker, T.B., Johnson, B., Humphrey, L.R.: LTL templates for play-calling supervisory control. In: AIAA Infotech@Aerospace. AIAA (2016)
3. Bloem, R., Jobstmann, B., Piterman, N., Pnueli, A., Yaniv, S.: Synthesis of reactive(1) designs. J. Comput. Syst. Sci. 78(3), 911–938 (2012)
4. Bloem, R., et al.: RATSY – a new requirements analysis tool with synthesis. In: Touili, T., Cook, B., Jackson, P. (eds.) CAV 2010. LNCS, vol. 6174, pp. 425–429. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-14295-6_37
5. Ehlers, R., Könighofer, R., Hofferek, G.: Symbolically synthesizing small circuits. In: IEEE Formal Methods in Computer-Aided Design (FMCAD), pp. 91–100. IEEE (2012)
6. Ehlers, R., Raman, V.: Slugs: extensible GR(1) synthesis. In: Chaudhuri, S., Farzan, A. (eds.) CAV 2016. LNCS, vol. 9780, pp. 333–339. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-41540-6_18
7. Elliott, T., Alshiekh, M., Humphrey, L.R., Pike, L., Topcu, U.: Salty – a domain specific language for GR(1) specifications and designs. In: International Conference on Robotics and Automation (ICRA), pp. 4545–4551. IEEE (2019)
8. Fainekos, G.E., Girard, A., Kress-Gazit, H., Pappas, G.J.: Temporal logic motion planning for dynamic robots. Automatica 45(2), 343–352 (2009)
9. Finucane, C., Jing, G., Kress-Gazit, H.: LTLMoP: experimenting with language, temporal logic and robot control. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1988–1993. IEEE (2010)
10. Guo, M., Tumova, J., Dimarogonas, D.V.: Cooperative decentralized multi-agent control under local LTL tasks and connectivity constraints. In: IEEE Conference on Decision and Control (CDC), pp. 75–80. IEEE (2014)
11. Hoang, D., Moy, Y., Wallenburg, A., Chapman, R.: SPARK 2014 and GNATprove. Int. J. Softw. Tools Technol. Transfer 17(6), 695–707 (2015)
12. Jobstmann, B., Galler, S., Weiglhofer, M., Bloem, R.: Anzu: a tool for property synthesis. In: Damm, W., Hermanns, H. (eds.) CAV 2007. LNCS, vol. 4590, pp. 258–262. Springer, Heidelberg (2007). https://doi.org/10.1007/978-3-540-73368-3_29
13. Kress-Gazit, H., Fainekos, G.E., Pappas, G.J.: Where's Waldo? Sensor-based temporal logic motion planning. In: IEEE International Conference on Robotics and Automation (ICRA), pp. 3116–3121. IEEE (2007)
14. Kupferman, O., Vardi, M.: Synthesizing distributed systems. In: IEEE Symposium on Logic in Computer Science, pp. 389–398. IEEE (2001)
15. Moy, Y.: Climbing the software assurance ladder – practical formal verification for reliable software. Electron. Commun. EASST 76 (2019)
16. Wang, A., Moarref, S., Loo, B.T., Topcu, U., Scedrov, A.: Automated synthesis of reactive controllers for software-defined networks. In: IEEE International Conference on Network Protocols (ICNP), pp. 1–6. IEEE (2013)
17. Wongpiromsarn, T., Topcu, U., Ozay, N., Xu, H., Murray, R.M.: TuLiP: a software toolbox for receding horizon temporal logic planning. In: International Conference on Hybrid Systems: Computation and Control (HSCC 2011), pp. 313–314. ACM (2011)
18. Xu, H., Topcu, U., Murray, R.M.: A case study on reactive protocols for aircraft electric power distribution. In: IEEE Conference on Decision and Control (CDC), pp. 1124–1129. IEEE (2012)
Affine Systems of ODEs in Isabelle/HOL for Hybrid-Program Verification

Jonathan Julián Huerta y Munive(B)

The University of Sheffield, Western Bank, Sheffield S10 2TN, UK
[email protected]
Abstract. We formalise mathematical components for solving affine and linear systems of ordinary differential equations in Isabelle/HOL. The formalisation integrates the theory stacks of linear algebra and analysis and substantially adds content to both of them. It also serves to improve extant verification components for hybrid systems by increasing proof automation, removing certification procedures, and decreasing the number of proof obligations. We showcase these advantages through examples.

Keywords: Hybrid systems · Formal verification · Proof assistants

1 Introduction
With the increased number of computers controlling physical mechanisms, also known as cyber-physical systems, proofs of their correctness become more relevant. An important approach is differential dynamic logic (dL) [21]. It is an extension of dynamic logic with inference rules to reason about flows and invariants of ordinary differential equations (ODEs). Numerous case studies apply it and its domain-specific proof assistant, KeYmaera X [14,16]. Despite other approaches to verification [1,2], we focus on dL-style deductive verification. Our recent dL-inspired components allow the verification of hybrid programs in the general-purpose proof assistant Isabelle/HOL [7,17,19]. Using a shallow embedding and Kleene algebras instead of dynamic logics, the implementation of these components makes them modular and extensible. Their modularity has been explored before in various ways; however, their extensibility for the benefit of proof performance has not yet been pursued. In particular, extending Isabelle's mathematical components for ordinary differential equations to specific classes promises significant benefits in this regard. Linear and affine systems of ODEs, for example, those described by linear (resp. affine) transformations, are among the simplest and most studied variants. They enjoy desirable mathematical properties like existence and uniqueness of solutions to their associated initial value problems (IVPs), and come with various methods for solving them. In particular, there is an explicit way to compute the

This work was funded by CONACYT's scholarship no. 440404.
© Springer Nature Switzerland AG 2020
F. de Boer and A. Cerone (Eds.): SEFM 2020, LNCS 12310, pp. 77–92, 2020. https://doi.org/10.1007/978-3-030-58768-0_5
general solution for their time-independent versions [10,22]. Although there is much work in extending both the ODE libraries [11–13] and the linear algebra libraries [6,23], there are no formalisations in Isabelle connecting both theory stacks and this reverberates in the verification process with our components. For instance, formalising existence and uniqueness results for affine and linear systems reduces the proofs that users have to supply. Also, where users have to find solutions to the time-independent versions, we can provide the general one. Thus, inspired by the deductive approach to verification of hybrid systems, our main contribution is the first formalisation of linear and affine systems of ODEs in a proof assistant by combining the theory stacks of linear algebra and ODEs. We add to this integration by extending these libraries with properties about operator norms, diagonal matrices, and derivatives involving matrix-vector multiplication. In addition, we provide evidence that the study and analysis of these systems with a proof assistant is feasible. Moreover, we extend the Kleene algebra verification components for hybrid systems by improving their tactics for checking if a function is a derivative of another one. We use these tactics to formalise the fact that all linear and affine systems of ODEs have unique solutions, and we certify the general solution for the time-independent case. In the cases where the linear transformation has a diagonalisable representation, we also prove lemmas that include a simpler representation of the general solution. Finally, we add proof automation for operations with the list-representation of n × n matrices. The Isabelle formalisation itself forms a major contribution of this paper. It adds new mathematical components to an important field of analysis and improves our verification components for hybrid systems. The formalisations are available in the reviewed Archive of Formal Proofs [18].
2 Affine Systems of ODEs
We first review the mathematical definitions and results for differential equations needed for our formalisation. Dynamical systems describe the time dependency of points in a state space S. Formally, they are monoid actions ϕ : T → S → S that satisfy

ϕ (t₁ + t₂) = ϕ t₁ ∘ ϕ t₂   and   ϕ 0 = id,

where the monoid (T, +, 0) represents time. A dynamical system is a flow or continuous if T = R or T = R⁺, the non-negative real numbers. Flows emerge from solutions to systems of ordinary differential equations, as explained below. In a system of ODEs X′ t = f (t, X t), the function f : T × S → Rⁿ is a vector field; it assigns a vector to each point in T × S with T ⊆ R and S ⊆ Rⁿ, it is continuous, and it suffices to describe the system [10,22]. An initial value problem then consists of a vector field f and an initial condition (t₀, s) ∈ T × S, where t₀ and s represent the initial time and state of the system. Therefore, a solution to this system is a continuously differentiable function X : T → S that satisfies X′ t = f (t, X t) for all t ∈ T. This function also solves the IVP if it
satisfies X t₀ = s. Finally, if for each s ∈ S there is a unique solution or trajectory ϕ_s^f : T → S to the IVP given by f and (0, s), then the equation ϕ t s = ϕ_s^f t defines the flow ϕ : T → S → S of f. Geometrically, the trajectory ϕ_s^f is the only curve in S that passes through s and is always tangential to f. Picard-Lindelöf's theorem guarantees local existence and uniqueness of solutions for some IVPs [10,22]. It requires the domain T × S of f to be open with (t₀, s) ∈ T × S, and f to be locally Lipschitz continuous in S. That is, there must be ε > 0 and ℓ ≥ 0 such that for all t ∈ B_ε(t₀) ∩ T and all s₁, s₂ ∈ B_ε(s) ∩ S,

‖f (t, s₁) − f (t, s₂)‖ ≤ ℓ‖s₁ − s₂‖,

where ‖−‖ denotes the Euclidean norm in Rⁿ and B_ε(t) = {τ | ‖τ − t‖ ≤ ε}. If these conditions are satisfied, then the theorem asserts the existence of an interval T_s ⊆ T where a unique local solution ϕ_s^f : T_s → S for the IVP exists, that is, (ϕ_s^f)′ t = f (t, ϕ_s^f t) and ϕ_s^f t₀ = s for all t ∈ T_s. If t₀ = 0 and T = ⋂_{s∈S} T_s, then the flow ϕ of f exists and is a monoid action [22]. An important class of vector fields with unique solutions are those representing affine systems of ODEs. They satisfy the equation

(ϕ_s^f)′ t = A t · ϕ_s^f t + B t,

for matrix-vector multiplication ·, n × n matrices A t and vectors B t, where A and B are continuous functions on T. Equally important are the corresponding linear systems, where B t = 0 for all t ∈ T. Affine systems of ODEs are Lipschitz continuous with respect to the operator norm ‖M‖op = Sup {‖M · s‖ | ‖s‖ = 1}, where M is a matrix with real coefficients and Sup denotes the supremum of a set. Indeed, with Lipschitz constant ℓ = Sup {‖A t‖op | t ∈ B_ε(s)},

‖(A t) · s₁ − (A t) · s₂‖ = ‖(A t) · (s₁ − s₂)‖ ≤ ‖A t‖op ‖s₁ − s₂‖ ≤ ℓ‖s₁ − s₂‖.

The constant ℓ exists by continuity of A and ‖−‖, and compactness of B_ε(s). Picard-Lindelöf thus guarantees a unique local solution for the associated IVPs.
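As a small numerical illustration (not part of the formalisation), the flow of the scalar linear ODE x′ = a·x is ϕ t s = e^{at}·s, and the monoid-action laws and tangency to the vector field can be checked directly; the values of a, s₀, t₁, t₂ below are arbitrary:

```python
import math

# The flow of the scalar linear ODE  x' = a*x  is  phi t s = exp(a*t)*s.
a = 0.8

def phi(t, s):
    return math.exp(a * t) * s

s0, t1, t2 = 3.0, 0.4, 1.1
# Monoid-action laws: phi (t1+t2) = phi t1 . phi t2  and  phi 0 = id.
assert abs(phi(t1 + t2, s0) - phi(t1, phi(t2, s0))) < 1e-9
assert phi(0.0, s0) == s0
# Tangency to the vector field: phi'(t) ~ a * phi(t) (central difference).
h = 1e-6
deriv = (phi(t1 + h, s0) - phi(t1 - h, s0)) / (2 * h)
assert abs(deriv - a * phi(t1, s0)) < 1e-4
```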
In particular, in the time-independent or autonomous case, where A and B are constant functions, their unique solutions are well characterised and globally defined. That is, flows ϕ for autonomous affine systems exist and satisfy

ϕ t s = exp (tA) · s + exp (tA) · ∫₀ᵗ (exp (−τA) · B) dτ,

where exp is the matrix exponential exp A = Σ_{i∈ℕ} (1/i!) Aⁱ. Computing such exponentials may be computationally expensive due to the iteration of matrix multiplication. Exceptions are diagonalisable matrices A, which are similar to a diagonal matrix D in the sense that there is an invertible P such that A = P⁻¹DP. For these matrices,

exp A = exp (P⁻¹DP) = P⁻¹(exp D)P,
where exp D on the right-hand side is diagonal and easy to characterise: its entries in the main diagonal are the exponentials of those in D. Therefore, when working with solutions of autonomous affine (or linear) systems, it is preferable to work with those in diagonal form.
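The identity exp (P⁻¹DP) = P⁻¹(exp D)P can be replayed numerically with a truncated power series; in the pure-Python sketch below, the matrices P and D are arbitrary illustrative choices, not taken from the paper:

```python
import math

# exp A = sum over i of A^i / i!, truncated after `terms` summands.
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(A, terms=30):
    n = len(A)
    acc = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    power = [row[:] for row in acc]
    for i in range(1, terms):
        power = mat_mul(power, A)
        acc = [[acc[r][c] + power[r][c] / math.factorial(i) for c in range(n)]
               for r in range(n)]
    return acc

# A = P^-1 D P with D = diag(1, 2); P chosen arbitrarily but invertible.
P, P_inv = [[1.0, 1.0], [0.0, 1.0]], [[1.0, -1.0], [0.0, 1.0]]
D = [[1.0, 0.0], [0.0, 2.0]]
A = mat_mul(mat_mul(P_inv, D), P)

lhs = mat_exp(A)
# exp D is diagonal with the exponentials of D's entries, so the
# similar form is cheap to build:
exp_D = [[math.exp(1.0), 0.0], [0.0, math.exp(2.0)]]
rhs = mat_mul(mat_mul(P_inv, exp_D), P)
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-9 for i in range(2) for j in range(2))
```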
3 Isabelle Components for Affine Systems of ODEs
We describe our Isabelle formalisation of the mathematical concepts outlined in Sect. 2. More specifically, we explain our addendum of definitions and lemmas for an integration of the existing libraries for ODEs and linear algebra. We finish with an instance of Picard-Lindelöf's theorem for affine and linear systems. As Isabelle only allows total functions, we formalise the type of vector fields as real ⇒ 'a ⇒ ('a :: real-normed-vector), which by currying is isomorphic to R × V → V, where V is a normed vector space over R. We then restrict domains and codomains of solutions using our definition in [19].

definition ivp-sols :: (real ⇒ 'a ⇒ ('a :: real-normed-vector)) ⇒ real set ⇒ 'a set ⇒ real ⇒ 'a ⇒ (real ⇒ 'a) set (Sols)
  where Sols f T S t₀ s = {X | X. (D X = (λt. f t (X t)) on T) ∧ X t₀ = s ∧ X ∈ T → S}
The first conjunct D X = (λt. f t (X t)) in the definiendum above translates to X′ t = f (t, X t); the second states that X :: real ⇒ 'a is a solution to the associated IVP, and the third that X maps elements of T into S. We use Rⁿ, with n ≥ 0, as our default vector space. It is formalised using Isabelle's type (real, 'n) vec (abbreviated as real^'n) of real-valued vectors of dimension n. Isabelle's HOL-Library builds this type using a bijection to the type of functions 'n ⇒ real with finite 'n. For s :: real^'n, the expression s$i denotes the ith coordinate of s. That is, $ is the bijection from real^'n to 'n ⇒ real. Its inverse is written with a binder χ that replaces λ-abstraction. Thus, χi. s$i = s and (χi. c)$i = c for all s :: real^'n and c :: real. Matrices are then vectors of vectors: an m × n matrix A has type real^'n^'m. The product of matrix A with vector s is denoted A ∗v s; the scaling of vector s by real number c is written c ∗R s. In Isabelle, a solution X to an affine system of ODEs with A :: real ⇒ real^'n^'n and B :: real ⇒ real^'n then satisfies the predicate D X = (λt. A t ∗v X t + B t) on T. We use a formalisation of Picard-Lindelöf's theorem from [19]. The locale picard-lindeloef groups its assumptions. If picard-lindeloef f T S t₀ holds, then T and S are open, t₀ ∈ T, s ∈ S, λt. f t s is continuous on T, and f is locally Lipschitz continuous. The context of the locale also contains the lemma picard-lindeloef.unique-solution, stating that any two functions solving an IVP

(D X = (λt. f t (X t)) on {t₀−−t})    X t₀ = s    X ∈ {t₀−−t} → S

are equal at t ∈ T. Here, {t₀−−t} is Isabelle notation for the set of all numbers between t and t₀, where t can be above or below t₀. Our following lemma then yields a generic instance of picard-lindeloef.unique-solution for affine systems.
lemma picard-lindeloef-affine:
  fixes A :: real ⇒ real^'n^'n
  assumes Ahyp: matrix-continuous-on T A
    and ⋀τ ε. τ ∈ T =⇒ ε > 0 =⇒ bdd-above {‖A t‖op | t. dist τ t ≤ ε}
    and Bhyp: continuous-on T B and open S
    and t₀ ∈ T and Thyp: open T is-interval T
  shows picard-lindeloef (λt s. A t ∗v s + B t) T S t₀
  proof
Assumptions Ahyp and Bhyp state that functions A and B are continuous. The second assumption requires that the image of B_ε(τ) for τ ∈ T under λt. ‖A t‖op is bounded above. The remaining ones are direct conditions of Picard-Lindelöf's theorem. Continuity in Ahyp is different from that in Bhyp because Isabelle's default norm for matrices A :: real^'n^'m is the Euclidean norm, not the operator norm from Sect. 2. Thus, for the lemma above, we formalise the Lipschitz continuity argument at the end of Sect. 2, starting with the following definition.

abbreviation op-norm :: ('a::real-normed-algebra-1)^'n^'m ⇒ real (‖-‖op)
  where ‖A‖op ≡ onorm (λx. A ∗v x)
Function onorm lives in Isabelle's HOL-Analysis library, and it is an alternative definition of the operator norm, onorm f = Sup {‖f x‖ / ‖x‖ | x ∈ V}. However, for many proofs, the definition of ‖−‖op in Sect. 2 is more convenient. Hence we formalise the equivalence as shown below.

lemma op-norm-def: ‖A‖op = Sup {‖A ∗v x‖ | x. ‖x‖ = 1}
  proof
We omit its proof because lack of automation for suprema in Isabelle/HOL makes it an 8-line script. We also show that ‖−‖op satisfies the norm axioms.

lemma op-norm-ge-0: 0 ≤ ‖A‖op
  using ex-norm-eq-1 norm-ge-zero norm-matrix-le-op-norm basic-trans-rules(23) by blast

lemma op-norm-zero-iff: (‖A‖op = 0) = (A = 0)
  unfolding onorm-eq-0[OF blin-matrix-vector-mult] using matrix-axis-0[of 1 A] by fastforce

lemma op-norm-triangle: ‖A + B‖op ≤ (‖A‖op) + (‖B‖op)
  using onorm-triangle[OF blin-matrix-vector-mult[of A] blin-matrix-vector-mult[of B]]
    matrix-vector-mult-add-rdistrib[symmetric, of A - B] by simp

lemma op-norm-scaleR: ‖c ∗R A‖op = |c| ∗ (‖A‖op)
  unfolding onorm-scaleR[OF blin-matrix-vector-mult, symmetric] scaleR-vector-assoc ..
With this norm, we can define continuity for time-dependent matrix functions and prove Lipschitz continuity.

definition matrix-continuous-on :: real set ⇒ (real ⇒ ('a::real-normed-algebra-1)^'n^'m) ⇒ bool
  where matrix-continuous-on T A = (∀t∈T. ∀ε>0. ∃δ>0. ∀τ∈T. |τ − t| < δ −→ ‖A τ − A t‖op ≤ ε)

assumes b² + a ∗ 4 > 0 and a ≠ 0
shows A a b = P (−ι1/a) (−ι2/a)⁻¹ ∗ (diag i. if i = 1 then ι1 else ι2) ∗ (P (−ι2/a) (−ι1/a))⁻¹
  proof
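As a quick numeric sanity check of the eigenvalues ι1 and ι2 appearing in the diagonalisation (illustrative only: the definition of A a b is elided above, and the companion form [[0, 1], [a, b]] is our assumption, chosen so that its characteristic polynomial λ² − bλ − a has exactly those roots):

```python
import math

# Assumed companion-form system matrix [[0, 1], [a, b]] with
# eigenvalues i1 = (b - sqrt(b^2 + 4a))/2 and i2 = (b + sqrt(b^2 + 4a))/2.
a, b = 2.0, 1.0
disc = math.sqrt(b * b + 4 * a)
i1, i2 = (b - disc) / 2, (b + disc) / 2

# Both values are roots of the characteristic polynomial l^2 - b*l - a.
for lam in (i1, i2):
    assert abs(lam * lam - b * lam - a) < 1e-12
# Their sum and product match the trace and determinant of [[0,1],[a,b]].
assert abs((i1 + i2) - b) < 1e-12       # trace = b
assert abs((i1 * i2) - (-a)) < 1e-12    # det = -a
```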
Integrating a computer algebra system directly into Isabelle, so that inputs and certification are done automatically, is beyond our research goals in this article. Although we omit the proof, it is a simple 4-line script thanks to the addition of our lemmas about standard matrix operations to Isabelle's simplifier. Finally, we use this diagonalisation to compute the general solution of the ODEs generated by A a b and instantiate local-flow-sq-mtx-linear to this result.

lemma mtx-hOsc-solution-eq:
  defines ι1 ≡ (b − sqrt (b² + 4∗a))/2 and ι2 ≡ (b + sqrt (b² + 4∗a))/2
  defines Φ t ≡ mtx (
    [ι2∗exp(t∗ι1) − ι1∗exp(t∗ι2), exp(t∗ι2) − exp(t∗ι1)] #
    [a∗exp(t∗ι2) − a∗exp(t∗ι1), ι2∗exp(t∗ι2) − ι1∗exp(t∗ι1)] # [])
  assumes b² + a ∗ 4 > 0 and a ≠ 0
  shows P (−ι2/a) (−ι1/a) ∗ (diag i. exp (t ∗ (if i = 1 then ι1 else ι2))) ∗ (P (−ι2/a) (−ι1/a))⁻¹ = (1/sqrt (b² + a ∗ 4)) ∗R (Φ t)
  proof

lemma local-flow-mtx-hOsc:
  defines ι1 ≡ (b − sqrt (b² + 4∗a))/2 and ι2 ≡ (b + sqrt (b² + 4∗a))/2
  defines Φ t ≡ mtx (
    [ι2∗exp(t∗ι1) − ι1∗exp(t∗ι2), exp(t∗ι2) − exp(t∗ι1)] #
    [a∗exp(t∗ι2) − a∗exp(t∗ι1), ι2∗exp(t∗ι2) − ι1∗exp(t∗ι1)] # [])
  assumes b² + a ∗ 4 > 0 and a ≠ 0
  shows local-flow ((∗V) (A a b)) UNIV UNIV (λt. (∗V) ((1/sqrt (b² + a ∗ 4)) ∗R Φ t))
  proof
Our matrix operation lemmas make the proofs of both results easy to tackle for the experienced Isabelle user. The last lemma yields an automated certification of the uniqueness and the monoid-action behavior of this flow. These results will be useful later in the verification of a simple hybrid program. Example 2 (Non-diagonalizable Matrix). To derive the equations for constantly accelerated motion in one dimension, we start with the ODE x‴ t = 0. This is
Affine Systems of ODEs in Isabelle/HOL for Hybrid-Program Verification
equivalent to the linear system

\[ \begin{pmatrix} x' t \\ v' t \\ a' t \end{pmatrix} = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix} \cdot \begin{pmatrix} x t \\ v t \\ a t \end{pmatrix}, \]

where x, v and a represent the position, velocity and acceleration of the motion. Although the matrix in this system is non-diagonalisable, it is nilpotent, as formalised below.

abbreviation mtx-cnst-acc :: 3 sq-mtx (K)
  where K ≡ mtx ([0,1,0] # [0,0,1] # [0,0,0] # [])

lemma powN-scaleR-mtx-cnst-acc: n > 2 =⇒ (t ∗R K)^n = 0
  proof
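The nilpotency property in powN-scaleR-mtx-cnst-acc can be illustrated numerically (our own check, independent of the Isabelle proof): every power of t ∗R K beyond the second vanishes.

```python
import numpy as np

# Numerical illustration of powN-scaleR-mtx-cnst-acc: t *R K is nilpotent,
# so (t *R K)^n = 0 for every n > 2.
t = 1.7
K = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
M = t * K
for n in (3, 4, 5):
    assert np.allclose(np.linalg.matrix_power(M, n), 0)
```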
We can use this fact to obtain the general solution and the kinematics equations for constantly accelerated motion with initial state s = (s$1, s$2, s$3).

lemma exp-mtx-cnst-acc: exp (t ∗R K) = ((t ∗R K)² /R 2) + (t ∗R K) + 1
  unfolding exp-def apply(subst suminf-eq-sum[of 2])
  using powN-scaleR-mtx-cnst-acc by (simp-all add: numeral-2-eq-2)

lemma exp-mtx-cnst-acc-vec-mult-eq: exp (t ∗R K) ∗V s =
  vector [s$3 ∗ t²/2 + s$2 ∗ t + s$1, s$3 ∗ t + s$2, s$3]
  proof
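These two lemmas can also be sanity-checked numerically (our illustration, with an arbitrary initial state): since t ∗R K is nilpotent, the exponential series truncates exactly after the quadratic term, and applying it to an initial state reproduces the kinematics equations.

```python
import numpy as np

# Since t*K is nilpotent, the exponential series truncates exactly:
# exp(t*K) = I + t*K + (t*K)^2/2. Applying it to an initial state
# reproduces the kinematics equations of exp-mtx-cnst-acc-vec-mult-eq.
t = 1.7
K = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
M = t * K
expM = np.eye(3) + M + M @ M / 2

s = np.array([2.0, 3.0, 4.0])   # s$1 position, s$2 velocity, s$3 acceleration
kinematics = np.array([s[2]*t**2/2 + s[1]*t + s[0],
                       s[2]*t + s[1],
                       s[2]])
assert np.allclose(expM @ s, kinematics)
```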
Here, vector is a function that turns lists into vectors. From this, a simple instantiation shows that the kinematics equations describe the flow of the ODE.

lemma local-flow-mtx-cnst-acc:
  local-flow ((∗V) K) UNIV UNIV (λt s. ((t ∗R K)² /R 2 + (t ∗R K) + 1) ∗V s)
  using local-flow-sq-mtx-linear[of K] unfolding exp-mtx-cnst-acc.
Throughout this section, the formalisation and proofs are relatively simple because our lemmas about matrix operations, once added to Isabelle's simplifier, improve proof automation.
7 Applications in Hybrid Program Verification
To illustrate an application of our formalisation, we use our Isabelle verification components for hybrid programs [17]. This approach starts with an algebra (K, +, ;, 0, 1, ∗) for simple while-programs that also supports a boolean subalgebra (B, +, ;, 0, 1, ¬) of tests, such as a Kleene algebra with tests or a modal Kleene algebra [4,15]. With the interpretation of elements of K as programs, + as nondeterministic choice, ; as sequential composition, ∗ as finite iteration, and 0 and 1 as the aborting and ineffective programs respectively, the equations

  if p then α else β = p; α + ¬p; β,
  while p do α = (p; α)∗; ¬p
J. J. Huerta y Munive
model the behaviour of while-programs. These algebras allow us to write correctness specifications via Hoare triples {−} − {−} or weakest liberal preconditions wlp [3,9]. This means that we can derive the rules of Hoare logic and/or those of the wlp-calculus. In Isabelle, this approach accelerates the verification process, as our Kleene algebra components automatically generate domain-specific conditions by handling the program structure without intervention from the user.

Moreover, these algebras have state transformer models where elements of the algebra are interpreted as functions of type S → P S for a given set S. In this setting, Kleisli composition (f ◦K g) s = ⋃{g s′ | s′ ∈ f s} interprets ;, + is pointwise union λs. f s ∪ g s, 0 is λs. ∅, 1 is the Kleisli unit ηS s = {s}, and ∗ is f∗ s = ⋃{fⁿ s | n ≥ 0}, where f⁰ = ηS and fⁿ⁺¹ = fⁿ ◦K f [19].

Given a finite set of program variables V, the isomorphism between Rⁿ and R^V allows us to work in the state transformer semantics of S ⊆ R^V, effectively giving us hybrid stores. Defining f[a → b] a = b and f[a → b] t = f t if t ≠ a, the function λs. {s[x → e s]} is a state transformer. It maps a store s ∈ S to the singleton of that store with variable x ∈ V updated to e s, for e : S → R. In particular, it models program assignments (x := e) s = {s[x → e s]}.

Similarly, for an interval U ⊆ T such that 0 ∈ U, the orbit map γϕ : S → P S defined by γϕ s = {ϕfs t | t ∈ U} is a state transformer. It sends each s ∈ S to the set of all the points in the trajectory ϕfs for the IVP induced by f and (0, s). However, for modelling boundary conditions, an alternative G-guarded version is better. For a predicate G : S → B, we use the evolution command state transformer

  (x′ = f & G) s = {ϕfs t | t ∈ U ∧ (∀τ ∈ ↓t. G (ϕfs τ))},

where "x′ =" is syntactic sugar to resemble ODEs, and ↓t = {τ ∈ U | τ ≤ t}. By adding assignments and evolution commands to the language of these algebras of programs, we get hybrid programs.
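The state transformer model above can be sketched executably. The following Python snippet (our own names, not the paper's Isabelle code) models stores as tuples with variables as positions; it shows Kleisli composition, choice, unit, abort, and assignments as functions S → P(S):

```python
# A small executable sketch (our own Python names, not the paper's Isabelle
# code) of the state transformer model: programs are functions S -> P(S),
# with stores modelled as tuples and variables as positions.
def kcomp(f, g):                 # Kleisli composition, interprets ;
    return lambda s: {t for s1 in f(s) for t in g(s1)}

def choice(f, g):                # pointwise union, interprets +
    return lambda s: f(s) | g(s)

eta = lambda s: {s}              # Kleisli unit, interprets 1
abort = lambda s: set()          # interprets 0

def assign(x, e):                # (x := e) s = {s[x -> e s]}
    return lambda s: {s[:x] + (e(s),) + s[x+1:]}

# Example: (x0 := x0 + 1) ; (x1 := 2 * x0) run on the store (3, 0)
prog = kcomp(assign(0, lambda s: s[0] + 1), assign(1, lambda s: 2 * s[0]))
assert prog((3, 0)) == {(4, 8)}
```

Evolution commands would fit the same shape, mapping a store to the (guarded) set of points on the trajectory through it.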
In particular, we also have correctness specifications for these commands:

  {λs. Q (s[x → e s])} x := e {Q},
  {λs ∈ S. ∀t ∈ U. (∀τ ∈ ↓t. G (ϕfs τ)) → Q (ϕfs t)} x′ = f & G {Q}.

The above definition of evolution commands requires uniqueness of the solution to the IVP X′ t = f (t, X t) and X 0 = s. For a more general definition where this is not needed, see [19]. Yet, affine and linear systems have unique solutions for specific IVPs. Thus, our formalisation of affine and linear systems is compositional with respect to the verification style described in [7,19].
8 Verification Examples
In this section, we verify two simple hybrid programs using the components of [19] and our formalisation of linear systems of ODEs. Both verifications follow directly from our results in Sect. 6.
Example 3 (Overdamped Door-Closing Mechanism). We use the system of ODEs

\[ \begin{pmatrix} x' t \\ y' t \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ a & b \end{pmatrix} \cdot \begin{pmatrix} x t \\ y t \end{pmatrix} = (A\,a\,b) \cdot \begin{pmatrix} x t \\ y t \end{pmatrix}, \]

with a = −k/m and b = −d/m, to model a damped harmonic oscillator as described in Example 1. The expression b² + 4·a dictates the behaviour of the system. If b² + 4·a > 0, the damping factor is too big and there is no oscillation. Otherwise, the oscillation continues. Overdamping is a desired property of some oscillators inside door mechanisms, where the engineer does not want the doors to slam or open on the opposite side. We use Isabelle's s$1 and s$2 to formalise respectively the position (x) and velocity (y) of one of these door-oscillators. We represent a closed door with the equation s$1 = 0. Hence, an open door immediately after being pushed by a person corresponds to the conjunction s$1 > 0 ∧ s$2 = 0. We can prove that once this happens, the door will never open on the opposite side, that is s$1 ≥ 0, if its oscillator is overdamped.

lemma overdamped-door:
  assumes b² + a ∗ 4 > 0 and a < 0 and b ≤ 0 and 0 ≤ t
  shows PRE (λs. s$1 = 0)
    HP (LOOP ((λs. {s. s$1 > 0 ∧ s$2 = 0}); (x´= (∗V) (A a b) & G on {0..t} UNIV @ 0)) INV (λs. 0 ≤ s$1))
    POST (λs. 0 ≤ s$1)
  apply(rule fbox-loopI, simp-all add: le-fun-def)
  apply(subst local-flow.fbox-g-ode-ivl[OF local-flow-mtx-hOsc[OF assms(1)]])
  using assms apply(simp-all add: le-fun-def fbox-def)
  unfolding sq-mtx-scaleR-eq UNIV-2 sq-mtx-vec-mult-eq
  by (clarsimp simp: overdamped-door-arith)
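A quick numerical experiment (ours; the guarantee itself comes only from the Isabelle proof) illustrates the verified property: sampling the flow of an overdamped oscillator started at s$1 > 0, s$2 = 0 never produces a negative position.

```python
import numpy as np

# Numerical experiment (ours; the guarantee comes from the Isabelle proof)
# illustrating the overdamped-door property: with b^2 + 4a > 0, a < 0 and
# b <= 0, a door started at s$1 > 0, s$2 = 0 never reaches s$1 < 0.
a, b = -1.0, -3.0                      # b^2 + 4a = 5 > 0: overdamped
A = np.array([[0.0, 1.0], [a, b]])
vals, V = np.linalg.eig(A)
Vinv = np.linalg.inv(V)
s0 = np.array([1.0, 0.0])              # open door, zero velocity

for t in np.linspace(0.0, 20.0, 501):
    s = (V @ np.diag(np.exp(t * vals)) @ Vinv).real @ s0
    assert s[0] >= -1e-9               # never opens on the opposite side
```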
Notation PRE P HP X POST Q is syntactic sugar for the Hoare triple {P}X{Q}, meaning that if the system starts satisfying precondition P, then after the execution of the hybrid program X, postcondition Q will hold. In the lemma above we assume a < 0 and b ≤ 0 because a = −k/m, b = −d/m and, in physics, the constants k, d and m are often positive. The condition t ≥ 0 guarantees the verification for a positive lapse of time. The hybrid program is the finite iteration of a discrete door-opening, modelled by the state transformer λs. {s. s$1 > 0 ∧ s$2 = 0}, followed by the ODE x′ t = (A a b) · (x t). The loop invariant of this iteration is the same as the desired postcondition. As we do not deal with boundary conditions, we use variable G for the guard of the evolution command. The first two lines in the proof of this lemma apply the Hoare rules for loops and evolution commands respectively. The remaining lines simplify the emerging proof obligations.

Example 4 (Automatic Docking). A space ship is aligned with its docking station d and approaching it with velocity v0 > 0. The ship needs to stop exactly at d and its current position is x0, where d > x0. In order to do this, the ship
calculates that it needs a constant deceleration of a = −v0²/(2(d − x0)). Its motion follows the system of Example 2,

\[ \begin{pmatrix} x' t \\ v' t \\ a' t \end{pmatrix} = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix} \cdot \begin{pmatrix} x t \\ v t \\ a t \end{pmatrix} = K \cdot \begin{pmatrix} x t \\ v t \\ a t \end{pmatrix}. \]
We formalise the position, velocity and acceleration of the ship with state s = (s$1, s$2, s$3) and its discrete behaviour with an assignment of s$3 to the value of the safe acceleration. Under these assumptions, we need to guarantee that the ship will stop (s$2 = 0) if and only if its position coincides with d (s$1 = d). The formalisation is shown below.

lemma docking-station-arith:
  assumes (d::real) > x and v > 0
  shows (v = v² ∗ t / (2 ∗ d − 2 ∗ x)) ←→ (v ∗ t − v² ∗ t² / (4 ∗ d − 4 ∗ x) + x = d)
  proof

lemma docking-station:
  assumes d > x0 and v0 > 0
  shows PRE (λs. s$1 = x0 ∧ s$2 = v0)
    HP ((3 ::= (λs. −(v0²/(2∗(d − x0))))); x´= (∗V) K & G)
    POST (λs. s$2 = 0 ←→ s$1 = d)
  apply(clarsimp simp: le-fun-def local-flow.fbox-g-ode[OF local-flow-sq-mtx-linear[of K]])
  unfolding exp-mtx-cnst-acc-vec-mult-eq
  using assms by (simp add: docking-station-arith)
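The arithmetic behind docking-station-arith can be spot-checked numerically (our illustration with arbitrarily chosen d, x0, v0 satisfying the assumptions):

```python
# Numerical spot-check of docking-station-arith with arbitrary d > x0, v0 > 0:
# under the chosen constant deceleration, the velocity vanishes exactly when
# the position reaches d.
d, x0, v0 = 10.0, 2.0, 4.0
a = -v0**2 / (2 * (d - x0))            # the computed safe deceleration
t_stop = -v0 / a                       # v(t) = v0 + a*t = 0
x_stop = x0 + v0*t_stop + a*t_stop**2/2
assert abs(x_stop - d) < 1e-9          # the ship stops exactly at the dock

for t in (0.5, 1.0, 2.0, 3.5, 5.0):    # at other times: moving iff not at d
    v_t = v0 + a * t
    x_t = x0 + v0*t + a*t**2/2
    assert (abs(v_t) < 1e-9) == (abs(x_t - d) < 1e-9)
```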
In the proof of this hybrid program, as before, the first line applies the Hoare rule for evolution commands. The second line simplifies the emerging proof obligation by calling the lemma docking-station-arith, which we proved separately.
9 Conclusion
We have developed new mathematical components for affine and linear systems of ODEs that improve a modular semantic framework for verification of hybrid programs based on Kleene algebras [19] in Isabelle. These extend the tactics of the framework and simplify the verification procedure by eliminating uniqueness and existence requirements for solutions to these systems of ODEs. As many systems in physics and engineering are linear, our work impacts a wide range of applications for our verification components. Furthermore, our extension showcases the advantages of using a general-purpose proof assistant. It demonstrates that our components can handle exponentiation and other transcendental functions beyond first-order real arithmetic, to which traditional deductive verification of hybrid programs is confined [21]. Our work is also an extension to Isabelle's HOL-Analysis library as it adds lemmas from linear algebra and the theory of ODEs. Previous formalisations in Isabelle/HOL intersect with our components in both fields, but none of them combines them. For instance, there are two formalisations of Jordan normal forms in Isabelle's Archive of Formal Proofs (AFP) [6,23]. They have been combined and made executable in their exported versions to Standard ML or Haskell [5,6]. An integration of this work and our verification components to
handle more than just diagonalisable matrices is a pursuable endeavour. On the other hand, there is much work in extending Isabelle’s libraries for ODEs [11–13]. The AFP contains a definition for bounded linear operators and a proof that linear systems expressed with these have unique solutions [13]. However, the affine version of this result has not yet been formalised and it requires further work to make it compatible with the type of vectors real ˆ n and our components. Yet, much work remains to make this approach widely-adoptable in current practice. The general solution for non-autonomous linear systems of ODEs using resolvent matrices remains to be formalised in a proof assistant. Also, our work can only certify diagonalisations and solutions, but the generation of these is left to the user. An alternative approach would automate our procedure in Example 1. That is, a computer algebra system (CAS) would obtain the solution (or diagonalisation) and another tool would generate the Isabelle theory with a certification of the solution provided. This is left for future work. Acknowledgements. The author wishes to thank the reviewers for their insightful comments. He also thanks Georg Struth, Harsh Beohar, Rayna Dimitrova, Kirill Bogdanov and Michael Foster for discussions.
References

1. Althoff, M., et al.: ARCH-COMP19 category report: continuous and hybrid systems with linear continuous dynamics. In: ARCH19, pp. 14–40 (2019)
2. Alur, R.: Formal verification of hybrid systems. In: EMSOFT 2011, pp. 273–278. ACM (2011)
3. Armstrong, A., Gomes, V.B.F., Struth, G.: Building program construction and verification tools from algebraic principles. Form. Asp. Comput. 28(2), 265–293 (2015). https://doi.org/10.1007/s00165-015-0343-1
4. Desharnais, J., Möller, B., Struth, G.: Algebraic notions of termination. Log. Methods Comput. Sci. 7(1) (2011)
5. Divasón, J., Aransay, J.: Gauss-Jordan algorithm and its applications. Archive of Formal Proofs (2014)
6. Divasón, J., Kunčar, O., Thiemann, R., Yamada, A.: Perron-Frobenius theorem for spectral radius analysis. Archive of Formal Proofs (2016)
7. Foster, S., Huerta y Munive, J.J., Struth, G.: Differential Hoare logics and refinement calculi for hybrid systems with Isabelle/HOL. In: Fahrenberg, U., Jipsen, P., Winter, M. (eds.) RAMiCS 2020. LNCS, vol. 12062, pp. 169–186. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-43520-2_11
8. Friedland, B., Director, S.W.: Control Systems Design: An Introduction to State-Space Methods. McGraw-Hill Higher Education, New York (1985)
9. Gomes, V.B.F., Struth, G.: Modal Kleene algebra applied to program correctness. In: Fitzgerald, J., Heitmeyer, C., Gnesi, S., Philippou, A. (eds.) FM 2016. LNCS, vol. 9995, pp. 310–325. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-48989-6_19
10. Hirsch, M.W., Smale, S., Devaney, R.L.: Differential Equations, Dynamical Systems, and Linear Algebra. Academic Press, Cambridge (1974)
11. Immler, F.: Formally verified computation of enclosures of solutions of ordinary differential equations. In: Badger, J.M., Rozier, K.Y. (eds.) NFM 2014. LNCS, vol. 8430, pp. 113–127. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-06200-6_9
12. Immler, F., Hölzl, J.: Numerical analysis of ordinary differential equations in Isabelle/HOL. In: Beringer, L., Felty, A. (eds.) ITP 2012. LNCS, vol. 7406, pp. 377–392. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-32347-8_26
13. Immler, F., Hölzl, J.: Ordinary differential equations. Archive of Formal Proofs (2012). https://www.isa-afp.org/entries/Ordinary_Differential_Equations.shtml
14. Jeannin, J., et al.: A formally verified hybrid system for safe advisories in the next-generation airborne collision avoidance system. STTT 19(6), 717–741 (2017). https://doi.org/10.1007/s10009-016-0434-1
15. Kozen, D.: Kleene algebra with tests. ACM TOPLAS 19(3), 427–443 (1997)
16. Loos, S.M., Platzer, A., Nistor, L.: Adaptive cruise control: hybrid, distributed, and now formally verified. In: Butler, M., Schulte, W. (eds.) FM 2011. LNCS, vol. 6664, pp. 42–56. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-21437-0_6
17. Huerta y Munive, J.J.: Verification components for hybrid systems. Archive of Formal Proofs (2019). https://www.isa-afp.org/entries/Hybrid_Systems_VCs.html
18. Huerta y Munive, J.J.: Matrices for ODEs. Archive of Formal Proofs (2020). https://www.isa-afp.org/entries/Matrices_for_ODEs.html
19. Huerta y Munive, J.J., Struth, G.: Predicate transformer semantics for hybrid systems: verification components for Isabelle/HOL (2019). arXiv:1909.05618
20. Huerta y Munive, J.J., Struth, G.: Verifying hybrid systems with modal Kleene algebra. In: Desharnais, J., Guttmann, W., Joosten, S. (eds.) RAMiCS 2018. LNCS, vol. 11194, pp. 225–243. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-02149-8_14
21. Platzer, A.: Virtual substitution & real arithmetic. In: Logical Foundations of Cyber-Physical Systems, pp. 607–628. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-63588-0_21
22. Teschl, G.: Ordinary Differential Equations and Dynamical Systems. AMS, Premstätten (2012)
23. Thiemann, R., Yamada, A.: Matrices, Jordan normal forms, and spectral radius theory. Archive of Formal Proofs (2015)
Interoperability and Integration Testing Methods for IoT Systems: A Systematic Mapping Study

Miroslav Bures1(B), Matej Klima1, Vaclav Rechtberger1, Xavier Bellekens2, Christos Tachtatzis2, Robert Atkinson2, and Bestoun S. Ahmed1,3

1 Department of Computer Science, FEE, Czech Technical University in Prague, Karlovo namesti 13, Prague, Czech Republic
[email protected]
2 Department of Electronic and Electrical Engineering, University of Strathclyde, 99 George Street, Glasgow, Scotland, UK
3 Department of Mathematics and Computer Science, Karlstad University, Universitetsgatan 2, 65188 Karlstad, Sweden
http://still.felk.cvut.cz
Abstract. The recent active development of Internet of Things (IoT) solutions in various domains has led to an increased demand for security, safety, and reliability of these systems. Security and data privacy are currently the most frequently discussed topics; however, other reliability aspects also need to be focused on to maintain smooth and safe operation of IoT systems. Until now, there has been no systematic mapping study dedicated to the topic of interoperability and integration testing of IoT systems specifically; therefore, we present such an overview in this study. We analyze 803 papers from four major primary databases and perform detailed assessment and quality check to find 115 relevant papers. In addition, recently published testing techniques and approaches are analyzed and classified; the challenges and limitations in the field are also identified and discussed. Research trends related to publication time, active researchers, and publication media are presented in this study. The results suggest that studies mainly focus only on general testing methods, which can be applied to integration and interoperability testing of IoT systems; thus, there are research opportunities to develop additional testing methods focused specifically on IoT systems, so that they are more effective in the IoT context.

Keywords: Internet of Things · Testing · Verification · Integration · Interoperability · Automated testing

1 Introduction
© Springer Nature Switzerland AG 2020
F. de Boer and A. Cerone (Eds.): SEFM 2020, LNCS 12310, pp. 93–112, 2020. https://doi.org/10.1007/978-3-030-58768-0_6

The Internet of Things (IoT) provides numerous advantages to its users in various application domains. However, extensive development of IoT systems in the
M. Bures et al.
last decade has led to a number of reliability and security challenges [12,31,33]. One of the challenges frequently reported by researchers as well as industry practitioners is the integration testing of IoT systems. In contemporary IoT projects, software developers work with network specialists and electronic experts for testing; however, these parties have different backgrounds and may be accustomed to using different methods of system testing (e.g., low-level testing vs. high-level functional testing). Moreover, different expectations may also play a role; for example, in a standard software system, lower layers (e.g., network or operating systems) are usually considered to be already tested and reliable; therefore, quality engineers focus on the application itself. In the case of an IoT system, the situation might differ and lower levels might also need to be tested properly. In addition, interoperability challenges are closely associated with integration testing; different devices using a variety of protocols need to cooperate in an appropriate manner, and this reliable cooperation has to be verified. Individual devices can have numerous versions and variants, which increases the difficulty of correct and seamless integration. Integration testing and interoperability testing of IoT systems are considered to overlap for several cases even though semantic differences and different definitions can be pointed out. However, because these terms overlap in their common usage, we decided to cover both interoperability and integration testing in the scope of this study. As mentioned earlier, there is an increased demand for more efficient interoperability and integration testing methods. 
Currently, the model-based testing (MBT) discipline naturally covers the area of integration testing through methods such as path-based testing [8,10,11], which is typical for E2E integration tests, or combinatorial [40] and constrained interaction testing [4], which is useful in unit integration testing and optimization of system configurations. Logically, in the recent period, researchers have attempted to tailor or apply formal verification and MBT techniques for IoT systems to increase system efficiency [3].

Interoperability and integration testing have significant importance in the IoT context, and mapping current methods for IoT integration testing would provide valuable information to researchers for IoT and industrial quality assurance. Unfortunately, no systematic mapping study has been conducted yet in the field of integration testing for IoT systems. Hence, we attempt to bridge this gap through this study. The contributions of this study are as follows:

1. It gives an overview of research and development activity in this field, identifying the active parties and individuals;
2. It also provides an overview of methods and approaches that are available for IoT integration testing;
3. It identifies research opportunities and discusses possible research directions.

This paper is organized as follows. Section 2 analyzes existing mapping studies and literature surveys in the fields of integration testing, IoT testing, quality assurance and IoT integration, which justifies the motivation of this study.
Section 3 explains the methodology used in this study, defines the research questions (RQs) to be answered, and the stages through which relevant studies are identified and analyzed. Section 4 presents the answers to individual RQs and related discussions. The last section presents the analysis of the possible threats to the validity of this study and concludes the paper.
2 Motivation and Related Work
The motivation of this study is twofold. The first is the importance of integration testing in the quality assurance process of IoT solutions [12,33], and the second is the fact that no previous systematic mapping study has addressed integration testing methods for IoT systems specifically.

In the field of general integration testing, there are several systematic literature surveys and mapping studies. In 2007, Rehman et al. published a survey of issues and available techniques for general software integration testing [45]. Their study summarizes and classifies a variety of integration testing approaches, covering the fields of MBT, test automation frameworks, and methodological aspects; it also provides a good overview of available approaches and concepts that can be used in the definition of a test strategy. However, the study focuses on general software integration testing and is not IoT-specific. Moreover, the study was published more than a decade ago; new techniques and approaches might be available now. Furthermore, modern integrated software applications may have changed as systems become more complex and demands for their real-time or almost real-time operation have increased. This will also be reflected in integration testing methods; therefore, a state-of-the-art survey is required.

A more recent study by Shashank et al. from 2010 also focuses on the field of integration testing of component-based software systems. However, the study, published as a conference paper, is limited in terms of its sample size; rather than an extensive classification, it provides an overview of available approaches and selected examples of approaches [49]. Despite the limited extent of the study, the brief classification of the state-of-the-art methods into established MBT and software verification categories provided in this study is valid. Another recent survey and analysis on model-based integration testing was conducted by Haser et al. in 2014 [27].
Essentially, this study is not limited to software systems; the authors discuss integration testing methods that can be applied to a broader scope of cyber-physical systems, which also covers the IoT domain. In the study, an extensive sample of 718 papers is analyzed, and conclusions are obtained for the defined research questions on software paradigms, system assessment types, and usage of non-functional requirements. However, the study is limited to model-based integration testing with limited scope of defined research questions. For the field of IoT-specific integration testing methods, a broader study is required. In the field of testing techniques that specifically focus on IoT systems and their specifics, a recent systematic mapping study by Ahmed et al. [3] focuses
on general aspects of quality and quality assurance techniques designed for IoT systems. The scope of this study is broader than the field of integration testing and covers topics such as security, privacy, construction of testbeds, general MBT, and formal verification techniques. Integration testing is not discussed in depth in this study due to its general scope, and from this viewpoint, overlap with the scope of this study is minimal. Another recent conference paper by Dias et al. briefly summarizes current testing tools for IoT systems; integration testing is included in the examined aspects of the problem [18]. However, the discussion is brief, and regarding the selected method in the study, all state-of-the-art methods in this field are not covered. In 2019, Cortes et al. conducted a mapping study on software testing methods used for IoT systems [15]. The study categorizes and analyses publications discussing general testing approaches used in IoT systems. Unfortunately, the discussion of integration testing is very brief in this paper. Another study by Garousi et al. focuses on the testing methods for embedded systems [23] (which may, to a certain extent, overlap with IoT systems discussed in this study). However, besides the fact that the field of embedded systems is not the same as the IoT field, the study focuses on general testing methods and approaches and does not concentrate on interoperability and integration testing specifically. The most frequently addressed quality aspects of IoT systems in the last five years are security and privacy [3]. This is also clear from the availability of published literature surveys and systematic mapping studies. A meta-survey summarizing and analyzing 32 available surveys on security, privacy, and defensive mechanisms of cyber-physical systems (including IoT) was recently published by Giraldo et al. [24]. 
The study provides a good overview of previous works and motivates the reader to find relevant literature sources related to security and privacy problems. Regarding the integration of IoT systems, a mapping study focusing on integration techniques and styles as well as related architectural aspects of integration was published by Cavalcante et al. [14]. However, this study does not discuss testing or quality assurance aspects of system integration. To summarize, no current systematic mapping study is dedicated to integration testing techniques for IoT systems, discussing these techniques in the context of IoT domain and from the viewpoint of IoT quality challenges, which are frequent subjects of various reports [12,31,33]. This study aims to provide the missing information in this specific field.
3 Methodology
This systematic mapping study follows the methodology recommendations provided by Kitchenham and Charters [34]. The process of collection and analysis of relevant studies is divided into the following six stages:
1. Research scope determination and definition of RQs to be answered in the study.
2. Search for potentially relevant papers, which includes establishment of a search strategy and acquisition of the identified papers.
3. Identification of truly relevant papers from the initial selection based on the title, abstract, full-text, and quality assessments, which includes performing snowball sampling of other relevant studies.
4. Data extraction from the remaining papers to allow further detailed analyses.
5. Classification of papers and analyses of the extracted data to answer defined RQs.
6. Validity evaluation and discussion of the possible limitations of the study.

The main stages of the methodology are depicted in Fig. 1 and described in this section.
Fig. 1. Stages of the systematic mapping study methodology followed in this study
In this study, we define seven RQs for analyzing the field of integration testing methods for IoT systems from various viewpoints:

– RQ 1: What is the research trend in this field in terms of the number of studies published in recent years?
– RQ 2: Which researchers are currently conducting active research in this field?
– RQ 3: Which publication media (journals and conferences) publish papers in the field of integration testing for IoT systems?
– RQ 4: What are the topics and subproblems currently being dealt with in the field of IoT integration and interoperability testing?
– RQ 5: Which testing techniques and approaches are used in this field?
– RQ 6: What are the current challenges and limitations in the field of IoT integration testing?
– RQ 7: What are the possible future research directions in this field?
We do not limit the study to a particular class or type of subproblems, or testing techniques. Hence, RQ 4 involves informal testing techniques as well as formal and MBT techniques.

To search for relevant papers in the field of integration testing for IoT systems, we decided to use the following four established publication databases: IEEE Xplore, ACM Digital Library, Springer Link, and Elsevier ScienceDirect. To verify the completeness of the search strings, we randomly selected a set of 30 papers as control samples, which discussed interoperability and integration testing issues of IoT systems. These control papers had to be present in the set of papers found using the search strings. After a couple of refinement cycles, a general search string was finally established as

('Integration Testing' AND IoT) OR ('Integration Testing' AND 'Internet of Things') OR ('Interoperability Testing' AND IoT) OR ('Interoperability Testing' AND 'Internet of Things')

where the expression in apostrophes denotes the exact string to be searched at the same time. The general search string was adapted to the particular notations used by individual databases. The timespan was determined to be from 2009 to 2019. Journal papers, book chapters, and conference papers were selected for download. In the initial stage, we also downloaded conference posters and popular magazine articles, which were subsequently filtered out. The number of initially downloaded papers is presented in Table 1, column Initial sample size.

Once the papers were downloaded, they were filtered in several steps. First, we excluded conference posters, papers shorter than two pages, and articles from popular magazines. Subsequently, two members of our research lab independently analyzed the paper title, abstract, and full text to assess whether the downloaded papers were relevant to the examined field.
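For illustration, the inclusion criterion encoded by the general search string can be sketched as a small predicate (our own helper for exposition, not tooling used in the study):

```python
import re

# A sketch (our own helper, not tooling used in the study) of the inclusion
# criterion encoded by the general search string above.
def matches_query(text: str) -> bool:
    t = text.lower()
    testing = "integration testing" in t or "interoperability testing" in t
    iot = "internet of things" in t or re.search(r"\biot\b", t) is not None
    return testing and iot

assert matches_query("Interoperability Testing of Internet of Things gateways")
assert not matches_query("Integration testing of enterprise web services")
```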
This process was conducted in parallel and the results were compared; in the case of mismatch of results, the relevance of the paper was assessed in a discussion until a consensus was reached. This was the case for 11% of the analyzed studies on average. The number of filtered papers for individual databases is presented in Table 1, column After filtering. In the next step, we followed the snowball sampling process; here, we analyzed other relevant papers and articles found in the references of the filtered papers, which were not already a part of the set of filtered papers. Studies and reports found during this sampling underwent the same filtering and assessment process as the downloaded set of papers; two lab members independently analyzed the title, abstract, and full text of the papers. The majority of the papers acquired by the snowballing process were obtained from the four major databases employed in this study (IEEE Xplore, ACM Digital Library, Springer Link, and Elsevier ScienceDirect) and two papers have been obtained from other databases. The described filtering process has been applied to the papers acquired by snowballing regardless of their source database.
Interoperability and Integration Testing Methods for IoT Systems
99
Those papers that were found relevant were added to the analyzed sample. The number of papers found in the individual databases after this step is presented in Table 1, column After snowball.

Table 1. Numbers of papers after filtering and snowball sampling.

Source              | Initial sample size | After filtering | After snowball
IEEE Xplore         | 384                 | 45              | 53
ACM Digital Library | 87                  | 10              | 12
Springer Link       | 199                 | 32              | 32
ScienceDirect       | 133                 | 15              | 16
Other databases     | 0                   | 0               | 2
Total               | 803                 | 102             | 115
During the data extraction and analysis phase, the extracted data were independently verified by a specialist, who analyzed the set of papers and matched them with the extracted metadata. A "two pairs of eyes" approach was adopted for paper classification: two specialists classified the papers independently; in the case of a mismatch, the particular cases were discussed, the papers were analyzed, and the final decision was made based on the discussion results. During this analysis, 8% of the papers underwent such discussion because of a classification mismatch. The final set after this phase contained 115 papers. The narrowed selection of papers was analyzed by publication year to answer RQ1 and by author names and affiliations to answer RQ2. Publication media were categorized by type (journal article, book chapter, conference paper, and workshop paper) and name to answer RQ3. Then, to answer RQ4, we classified the papers into the categories presented in Table 2. The categories are organized in two levels: main category and subcategories. Subsequently, full-text and detailed analysis of the paper content was used to answer RQs 5 to 7. The final set of 115 papers with their metadata, including abstract, category, source URL, source library, and BibTeX string, is available for download at http://still.felk.cvut.cz/iot-integration-testing/. In the folder, the list is available in CSV, OpenOffice spreadsheet, and MS Excel formats.
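The "two pairs of eyes" step described above can be sketched as follows. This is a minimal illustration with hypothetical paper IDs and category labels, not the actual tooling used in the study: papers on which the two classifications agree are accepted, and mismatches are routed to a discussion queue.

```python
def merge_classifications(reviewer_a, reviewer_b):
    """Merge two independent paper classifications (dicts: paper ID -> category).

    Returns the agreed category per paper and the list of papers whose
    classification mismatched and therefore requires a discussion.
    """
    agreed, to_discuss = {}, []
    for paper, category in reviewer_a.items():
        if reviewer_b.get(paper) == category:
            agreed[paper] = category
        else:
            to_discuss.append(paper)
    return agreed, to_discuss
```

The mismatch rate reported in the study (8%) corresponds to the share of papers ending up in the discussion list.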
4 Results

This section presents the results of the conducted analyses and the answers to the individual RQs. Answers to each RQ are provided in a separate subsection.

4.1 RQ1: Publication Trend in Time
In the recent decade, the number of publications discussing interoperability and integration testing issues of IoT systems has grown steadily, as shown by the data presented in Fig. 2.
M. Bures et al.

Table 2. Categories used in the paper classification

Main category                          | Category short name                | Category description
–                                      | IoT quality discussion             | Interoperability/integration testing included in a general IoT quality discussion
Testing methodology                    | Testing methodology, including     | General testing methodology including interoperability/integration testing as its part
Testing methodology                    | Focused testing methodology        | Methodology specially focused on interoperability/integration testing
Testing methodology                    | Formal techniques                  | Formal testing/verification techniques for interoperability/integration of IoT systems
Testing methodology                    | Testing methodology, applicable    | General testing methodology applicable to interoperability/integration testing
Testing methodology                    | Literature review                  | Literature review related to IoT testing methods, which also includes interoperability and integration aspects
Testing frameworks, tools and testbeds | Testing frameworks, supporting     | General testing framework directly supporting interoperability/integration testing
Testing frameworks, tools and testbeds | Test automation framework          | Specialized test automation framework directly supporting integration testing
Testing frameworks, tools and testbeds | Testbeds                           | Report on IoT testbed directly supporting interoperability/integration testing
Testing frameworks, tools and testbeds | Testing framework, applicable      | General testing framework applicable to interoperability/integration testing
Testing frameworks, tools and testbeds | Frameworks and tools overview      | Overview of testing frameworks and tools applicable to integration testing
Simulation frameworks                  | Simulation frameworks, applicable  | General IoT simulation frameworks applicable to interoperability/integration testing
Simulation frameworks                  | Simulation frameworks, supporting  | General IoT simulation framework supporting interoperability/integration testing
–                                      | Development frameworks             | IoT systems development framework/approach/standard including interoperability/integration testing
A more significant number of publications started to appear in 2014. The growth in publication numbers from 2016 to 2019 is almost constant. Extrapolating this trend, we can expect similar growth in publications discussing interoperability and integration testing of IoT systems in the following years.
Fig. 2. Number of publications by years
4.2 RQ2: Active Researchers
In the final set of analyzed relevant studies, eight authors emerged as actively publishing in the field of interoperability and integration testing of IoT systems: Brian Pickering (University of Southampton, UK), Bruno Lima (University of Porto, Portugal), Hamza Baqa (Institut Polytechnique Paris, France), Koray Incki (Ozyegin University, Turkey), Mengxuan Zhao (France Telecom), Michael Felderer (University of Innsbruck, Austria), Paul Grace (Aston University, UK), and Thomas Watteyne (Inria, France); each of them published three studies. No author in the analyzed set published more than three studies from 2009 to 2019; 29 authors published two studies, and 431 authors published one study. A total of 468 unique authors were found in the analyzed studies. This analysis also points to the relative heterogeneity of the research community and the absence of a research mainstream in this field. However, this is a contemporary situation and might change in the near future.

4.3 RQ3: Publication Media in IoT Integration Testing
During the analysis of the papers, we distinguished four main publication media types: journal article, conference paper, workshop paper, and book chapter. Papers in conference proceedings published in a book series (e.g., LNCS by Springer) were considered conference papers. Among the analyzed set, several papers were published at conferences aggregating parallel workshops; these were also considered conference papers. Most papers were published in conference proceedings (61%), followed by journal articles (22%), workshop papers (9%), and book chapters (9%). Figure 3 presents more details on the publication media types in the individual years of the analyzed period.

Fig. 3. Venue types by individual years

Starting with conference papers, the analyzed studies were published at a wide variety of conferences, spanning from established conferences in system testing (e.g., the IEEE Conference on Software Testing, Validation and Verification (ICST), the IFIP International Conference on Testing Software and Systems, and the IEEE International Conference on Software Quality, Reliability and Security (QRS)) to various forums related to IoT technology (e.g., the IEEE World Forum on Internet of Things (WF-IoT), the IEEE International Conference on Future Internet of Things and Cloud Workshops (FiCloudW), and the European Conference on Wireless Sensor Networks). However, the spectrum of conferences publishing papers on IoT integration and interoperability testing is rather heterogeneous, and apart from a few exceptions, we did not find a leading conference publishing more than three papers in the analyzed sample. The IEEE World Forum on Internet of Things (WF-IoT) published three papers, and the Global Internet of Things Summit (GIoTS) published two papers. The remainder of the analyzed papers were published at various unique conferences. Regarding the journals publishing IoT interoperability and integration testing studies, the situation is similar. Articles were published in a relatively wide spectrum of journals dedicated to computer systems, networks, software testing, and related areas. Three articles were published in IEEE Access and two in the International Journal on Software Tools for Technology Transfer. The remaining studies were published in various unique journals. The details can be found in the complete list of analyzed papers available at http://still.felk.cvut.cz/iot-integration-testing/. To summarize, publication media for integration and interoperability testing studies are relatively heterogeneous. Even though integration and interoperability testing is understood as an established discipline in industrial practice, in the research world no major journal or conference stands out as a venue publishing specifically in this field. This can be explained by the relative novelty of the field.
However, considering that the present industry calls for more effective and systematic methods for interoperability and integration testing, the research community will very likely react to these demands, and the situation will possibly change in the coming years.

4.4 RQ4: Topics and Subproblems Being Addressed
Figure 4 presents the classification of the analyzed relevant studies using the categories shown in Table 2. The complete list of individual papers assigned to each category is given at the link above. In the analyzed sample, two major groups were found: testing methodologies supporting or related to interoperability and integration testing (main category Testing methodology, with 31 papers in total), and testing frameworks and testing tools, including test automation tools and testbeds constructed for or supporting interoperability and integration testing of IoT systems (main category Testing frameworks, tools, and testbeds, with 46 papers in total). The analyzed set of papers also includes various IoT simulation frameworks applied to IoT interoperability and integration testing (main category Simulation frameworks, with 12 papers in total).
Fig. 4. Classification of analyzed studies
In the detailed categories, the largest number of analyzed studies discuss various IoT testbeds supporting integration testing (21 papers), followed by general IoT quality discussions (15 papers) and IoT testing methodologies applicable to integration and interoperability testing (15 papers). Interoperability and integration of IoT systems are discussed in 13 papers dedicated to IoT test automation frameworks. This topic is also the subject of 11 studies presenting various development frameworks for IoT solutions. On the contrary, the presence of formal methods in the analyzed papers is low; only two papers focus on this topic. Similarly, only five studies present a directly focused integration testing methodology. We analyze the used techniques and approaches in Sect. 4.5. Two of the analyzed papers were literature reviews relevant to the scope of this paper: a literature review dedicated to testing methods for embedded systems [23] and a study summarizing general testing methods for the IoT field [16].

4.5 RQ5: Used Testing Techniques and Approaches
In the studies relevant to interoperability and integration testing of IoT systems, a variety of testing techniques and approaches have been researched and applied, from formal verification [13] to informal crowdsourcing techniques comparable to exploratory testing [22]. In this study, by a testing approach we mean (1) a general approach to test design and test execution, e.g., formal verification, model-based testing, or informal testing, and (2) a generic approach based on test levels, such as unit testing, integration testing, or acceptance testing. By testing techniques, we mean techniques to create test cases, for instance, combinatorial or constrained interaction testing, path-based testing, and data-flow testing. Regarding established testing techniques, path-based testing using finite state machines (or analogous structures) as a system under test (SUT) model is discussed in five studies [5,7,20,25,55]. In addition, an SUT model based on a timed state machine has also been employed [35]. Datta et al. presented a prospective approach to semantic interoperability testing of IoT devices or services [17]. In their concept, they distinguish between syntactic and semantic interoperability to be verified during the tests. Semantic testing is also employed in a test automation framework proposed by Kim et al. [32]. Regarding established test case notations, the TTCN-3 standard by ETSI has been employed in six proposals [38,41,46,48,50,55]. Nevertheless, established testing techniques related to IoT integration testing are rare in the studied papers. In contrast, general testing approaches are discussed more intensely. The MBT approach in general is explicitly discussed in several studies [1,2,5,9,20,25,29,36], which mostly describe a general concept; a particular testing technique, namely path-based testing, is discussed in the studies by Aichernig et al. [5], Estivill-Castro et al. [20], and Grace et al. [25].
Suggestions of formal verification [13] and runtime verification [28] do appear; however, for integration and interoperability testing of IoT systems, these have to be further elaborated. Mutation testing has been used by Lu et al. for the verification of RFID devices [37]; this technique can be expected to be used in future works to verify the effectiveness of developed testing techniques. Other testing approaches include use case testing [51,52] and, practically, exploratory testing and error guessing [22]. Several studies suggest test strategies and approaches for IoT systems that consist of general test levels (e.g., unit testing, integration testing, and acceptance testing) and approach these test levels informally (e.g., testing of individual sensors, testing of integration, and security testing) [19,21,26,30,39,42–44,47,54,56]. These studies can be used as a basis for setting up a test strategy for an IoT system.
4.6 RQ6: Challenges and Limitations
After the analysis of the current studies on interoperability and integration testing and a comparison of the state-of-the-art methods with current industry demand, several conclusions can be drawn and several prospective research directions can be suggested. The research community in interoperability and integration testing of IoT systems seems rather heterogeneous; in the analyzed studies, there is no clear leading publication venue or author focusing on this topic. This heterogeneity can be explained by a combination of several factors:

1. The field of IoT testing and quality assurance is relatively novel; despite the active production of innovative IoT solutions in the last five years, research and development of IoT-specific testing methods is currently a developing field.

2. General methods from the field of integration testing might be considered satisfactory for testing IoT systems; thus, demand for IoT-specific interoperability and integration testing methods is not specially recognized in the research community.

3. In the research community, several research streams and subcommunities have been established, covering path-based testing, combinatorial interaction testing, constrained interaction testing, data-flow testing, and other individual basic testing techniques that can be combined to establish comprehensive integration testing methods. Hence, interoperability and integration testing itself is not considered a subject of primary research; instead, the focus is on primary testing approaches that can be employed in the interoperability and integration testing process.

In particular, the second and third points deserve further analysis and discussion.
Regarding the second point (a hypothetically low need to develop IoT-specific interoperability and integration testing methods, because general testing methods for these cases are already available), it is worthwhile to briefly analyze the situation in current IoT systems. Compared to standard software systems or relatively isolated proprietary cyber-physical systems not connected to the Internet, the situation in interconnected IoT systems differs in a number of respects. In these systems, a more extensive set of protocols on different network and application levels can be integrated together, and seamless integration has to be maintained. These protocols span from standardized protocols such as WiFi, Bluetooth, IEEE 802.15.4, Z-Wave, or ZigBee at the lower levels of the system; REST, MQTT, CoAP, or LWM2M at the higher levels [6]; to various proprietary protocols used by individual vendors. The proprietary protocols might also contain more defects than established standards, which makes smooth interoperability and integration of an IoT system more challenging. This situation increases the necessity to employ techniques that test the correct functionality of integration interfaces and interoperability under different configurations. It also leads us to suggest that current testing methods should be revised in the IoT context to increase their potential to detect relevant defects. The same applies to individual devices, where the level of standardization might be relatively low. Several attempts to standardize IoT devices and allow their interoperability have been made (e.g., ETSI GS CIM 009 or ISO/IEC 21823); however, no major standard is currently established, which results in significant integration and interoperability challenges. Therefore, the capability of previous interoperability and integration testing techniques should be revised at a minimum, and opportunities to create more effective approaches based on IoT system specifics have to be examined. These opportunities cover combinatorial and constrained interaction testing [4] as well as path-based testing and data-flow testing [53] techniques for integration testing (typically end-to-end integration tests). Regarding the third point, the argument that interoperability and integration testing itself might not be understood as a subject of primary research but rather as an application of primary testing approaches, two counter-arguments are worth mentioning. First, in general system testing research, integration testing is understood as a standalone research topic, as documented in previous mapping studies [27,49]; in particular, the study by Häser et al. documents the broad extent of studies dedicated to integration testing of software and cyber-physical systems [27]. Moreover, a finding of a recent study by Ahmed et al. [3] should be considered: even in the discussed primary testing approaches, such as path-based testing, combinatorial interaction testing, constrained interaction testing, or data-flow testing, few IoT-specific variants of these techniques have been published.
Hence, to summarize, the relative heterogeneity of IoT interoperability and integration testing approaches might be explained as a result of the relative novelty of the field. Further development of IoT-specific testing techniques to cover these areas is a prospective future research direction. We analyze potential research directions further in Sect. 4.7.

4.7 RQ7: Future Research Directions
Regarding interoperability and integration testing methods for IoT systems, several prospective future research directions can be discussed, considering the industrial needs and the specifics of IoT systems. Specific techniques for integration and interoperability testing of IoT systems have not yet been studied extensively in the literature. Such techniques might have been published under different names; for experts in the field, it might be easy to get an overall picture, but for testing practitioners and researchers from other fields, getting such a picture might be more difficult. The first future research area is handling the possible combinatorial explosion in integration testing when considering the possible configurations to test in large-scale IoT systems. When various devices, possibly varying in versions, are integrated in an IoT system, many different system configurations can be established, and flawless interoperability of the devices in these variants needs to be tested. The current combinatorial [40] and constrained [4] testing disciplines handle the problem on a general level. However, IoT-specific support regarding the modelling of the problem and the application of general combinatorial techniques to IoT integration testing, using specific metadata from an IoT system, might represent another prospective direction for future research. Another relevant field is testing the seamless integration of various devices in an IoT system operating with limited network connectivity. Transmission of data from sensors and between actuators operating in areas with weak network signal coverage might be disrupted during the system run. Hence, in such situations, the overall functionality of an IoT system should be checked for functional correctness and, if required, transactional processing of the data. To the best of our knowledge, current publications on testing such reliability focus on the lower levels of the system (typically the network layer), and systematic methods for such tests at the higher levels of an IoT system have yet to be provided. In addition, to make tests more effective and give testing practitioners better guidance on how to construct test cases, cross-over techniques between path-based testing [8] and combinatorial interaction testing [40] for testing of close APIs in IoT systems might be researched. Using specific information and metadata from the tested system usually helps focus the test cases more effectively, and this direction can also be explored in the case of IoT systems.
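To illustrate the combinatorial explosion of configurations and the potential of combinatorial techniques discussed above, the sketch below contrasts the exhaustive configuration space of a hypothetical IoT deployment with a greedily built pairwise-covering subset. The parameter names and values are invented for illustration, and the naive greedy loop only stands in for proper covering-array construction:

```python
from itertools import combinations, product

# Hypothetical configuration model of an IoT deployment (illustrative only).
OPTIONS = {
    "gateway_fw": ["1.0", "1.1", "2.0"],
    "protocol": ["MQTT", "CoAP", "LWM2M"],
    "sensor_hw": ["rev_a", "rev_b"],
    "radio": ["zigbee", "ble", "wifi"],
}

def pairs_of(config, names):
    """All (parameter, value) pairs exercised together by one configuration."""
    named = dict(zip(names, config))
    return {((p, named[p]), (q, named[q])) for p, q in combinations(names, 2)}

def greedy_pairwise(options):
    """Greedily select configurations until every pair of values is covered."""
    names = list(options)
    candidates = list(product(*options.values()))
    target = set().union(*(pairs_of(c, names) for c in candidates))
    chosen, covered = [], set()
    while covered != target:
        # Pick the configuration covering the most still-uncovered pairs.
        best = max(candidates, key=lambda c: len(pairs_of(c, names) - covered))
        chosen.append(best)
        covered |= pairs_of(best, names)
    return chosen
```

For this toy model, the exhaustive space already contains 3 × 3 × 2 × 3 = 54 configurations, while a pairwise-covering suite needs only around a dozen; dedicated covering-array tools produce smaller and constraint-aware suites than this greedy loop.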
5 Conclusion
In this study, we focused on the field of integration and interoperability testing of IoT systems. The motivation was twofold: the importance of this field in the current industry, and the fact that this specific area has not yet been covered by a focused, systematic literature mapping study. In the study, we analyzed 803 papers from four major primary databases, namely IEEE Xplore, ACM Digital Library, Springer Link, and Elsevier ScienceDirect, and followed the current established recommendations for conducting mapping studies by Kitchenham and Charters [34]. After a detailed assessment of the papers and a quality check, 115 papers were found to be relevant to the field. Our results suggest that there are currently general testing methods that can be applied to the field of integration and interoperability testing of IoT systems; therefore, there is a research opportunity to evolve more specific testing methods directly focused on IoT systems, which might work more effectively in the IoT context. On the other hand, a number of testing and test automation frameworks supporting interoperability and integration testing are already being created, and we can also find examples of individual testbeds supporting this field. There may be several concerns related to the validity of this study. The main concern may be the exclusion of some relevant papers from the list. This possible problem was mitigated by the multiple-stage paper filtering and snowballing process described in Sect. 3, which also includes a thorough validity check phase.
Another possible concern may be the inclusion of irrelevant papers in the scope, which was also mitigated by the methodology (see Sect. 3), following well-known methods for the selection criteria as well as the "two pairs of eyes" quality check. A limitation of this mapping study is that it analyzes papers published only in the four primary major databases (IEEE Xplore, ACM Digital Library, Springer Link, and Elsevier ScienceDirect) and does not involve other possible sources, such as Google Scholar, Scopus, researchgate.net, or arxiv.org, which might contain other relevant studies. Despite these possible limitations, several prospective research directions were suggested in this study.

Acknowledgement. This research is conducted as a part of the project TACR TH02010296 Quality Assurance System for the Internet of Things Technology. The authors acknowledge the support of the OP VVV funded project CZ.02.1.01/0.0/0.0/16 019/0000765 "Research Center for Informatics".
References

1. Ahmad, A., Bouquet, F., Fourneret, E., Le Gall, F., Legeard, B.: Model-based testing as a service for IoT platforms. In: Margaria, T., Steffen, B. (eds.) ISoLA 2016. LNCS, vol. 9953, pp. 727–742. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-47169-3_55
2. Ahmad, A., Bouquet, F., Fourneret, E., Legeard, B.: Model-based testing for internet of things systems. In: Advances in Computers, vol. 108, pp. 1–58. Elsevier (2018)
3. Ahmed, B.S., Bures, M., Frajtak, K., Cerny, T.: Aspects of quality in internet of things (IoT) solutions: a systematic mapping study. IEEE Access 7, 13758–13780 (2019)
4. Ahmed, B.S., Zamli, K.Z., Afzal, W., Bures, M.: Constrained interaction testing: a systematic literature study. IEEE Access 5, 25706–25730 (2017)
5. Aichernig, B.K., et al.: Learning a behavior model of hybrid systems through combining model-based testing and machine learning. In: Gaston, C., Kosmatov, N., Le Gall, P. (eds.) ICTSS 2019. LNCS, vol. 11812, pp. 3–21. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-31280-0_1
6. Al-Fuqaha, A., Guizani, M., Mohammadi, M., Aledhari, M., Ayyash, M.: Internet of things: a survey on enabling technologies, protocols, and applications. IEEE Commun. Surv. Tutor. 17(4), 2347–2376 (2015)
7. Amalfitano, D., Amatucci, N., De Simone, V., Riccio, V., Rita, F.A.: Towards a thing-in-the-loop approach for the verification and validation of IoT systems. In: Proceedings of the 1st ACM Workshop on the Internet of Safe Things, SafeThings 2017, pp. 57–63. ACM, New York (2017). https://doi.org/10.1145/3137003.3137007
8. Anand, S., et al.: An orchestrated survey of methodologies for automated software test case generation. J. Syst. Softw. 86(8), 1978–2001 (2013)
9. Arrieta, A., Sagardui, G., Etxeberria, L.: A model-based testing methodology for the systematic validation of highly configurable cyber-physical systems, October 2014
10. Bures, M.: PCTgen: automated generation of test cases for application workflows. In: Rocha, A., Correia, A.M., Costanzo, S., Reis, L.P. (eds.) New Contributions in Information Systems and Technologies. AISC, vol. 353, pp. 789–794. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-16486-1_78
11. Bures, M., Ahmed, B.S.: Employment of multiple algorithms for optimal path-based test selection strategy. Inf. Softw. Technol. 114, 21–36 (2019)
12. Bures, M., Cerny, T., Ahmed, B.S.: Internet of things: current challenges in the quality assurance and testing methods. In: Kim, K.J., Baek, N. (eds.) ICISA 2018. LNEE, vol. 514, pp. 625–634. Springer, Singapore (2019). https://doi.org/10.1007/978-981-13-1056-0_61
13. Camilli, M., Bellettini, C., Capra, L.: Design-time to run-time verification of microservices based applications. In: Cerone, A., Roveri, M. (eds.) SEFM 2017. LNCS, vol. 10729, pp. 168–173. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-74781-1_12
14. Cavalcante, E., et al.: On the interplay of internet of things and cloud computing: a systematic mapping study. Comput. Commun. 89, 17–33 (2016)
15. Cortés, M., Saraiva, R., Souza, M., Mello, P., Soares, P.: Adoption of software testing in internet of things: a systematic literature mapping. In: Proceedings of the IV Brazilian Symposium on Systematic and Automated Software Testing, pp. 3–11 (2019)
16. Cortés, M., Saraiva, R., Souza, M., Mello, P., Soares, P.: Adoption of software testing in internet of things: a systematic literature mapping. In: Proceedings of the IV Brazilian Symposium on Systematic and Automated Software Testing, SAST 2019, pp. 3–11. Association for Computing Machinery, New York (2019). https://doi.org/10.1145/3356317.3356326
17. Datta, S.K., Bonnet, C., Baqa, H., Zhao, M., Le-Gall, F.: Approach for semantic interoperability testing in internet of things. In: 2018 Global Internet of Things Summit (GIoTS). IEEE, June 2018. https://doi.org/10.1109/giots.2018.8534582
18. Dias, J.P., Couto, F., Paiva, A.C., Ferreira, H.S.: A brief overview of existing tools for testing the internet-of-things. In: 2018 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW), pp. 104–109. IEEE (2018)
19. Eckhart, M., Meixner, K., Winkler, D., Ekelhart, A.: Securing the testing process for industrial automation software. Comput. Secur. 85, 156–180 (2019). https://doi.org/10.1016/j.cose.2019.04.016
20. Estivill-Castro, V., Hexel, R., Stover, J.: Modeling, validation, and continuous integration of software behaviours for embedded systems. In: 2015 IEEE European Modelling Symposium (EMS), pp. 89–95, October 2015. https://doi.org/10.1109/EMS.2015.24
21. Felderer, M., Russo, B., Auer, F.: On testing data-intensive software systems. In: Biffl, S., Eckhart, M., Lüder, A., Weippl, E. (eds.) Security and Quality in Cyber-Physical Systems Engineering, pp. 129–148. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-25312-7_6
22. Fernandes, J., et al.: IoT lab: towards co-design and IoT solution testing using the crowd. In: 2015 International Conference on Recent Advances in Internet of Things (RIoT), pp. 1–6, April 2015. https://doi.org/10.1109/RIOT.2015.7104907
23. Garousi, V., Felderer, M., Karapıçak, Ç.M., Yılmaz, U.: Testing embedded software: a survey of the literature. Inf. Softw. Technol. 104, 14–45 (2018). https://doi.org/10.1016/j.infsof.2018.06.016
24. Giraldo, J., Sarkar, E., Cardenas, A.A., Maniatakos, M., Kantarcioglu, M.: Security and privacy in cyber-physical systems: a survey of surveys. IEEE Des. Test 34(4), 7–17 (2017)
25. Grace, P., Pickering, B., Surridge, M.: Model-driven interoperability: engineering heterogeneous IoT systems. Ann. Telecommun. 71(3–4), 141–150 (2015). https://doi.org/10.1007/s12243-015-0487-2
26. Guşeilă, L.G., Bratu, D., Moraru, S.: Continuous testing in the development of IoT applications. In: 2019 International Conference on Sensing and Instrumentation in IoT Era (ISSI), pp. 1–6 (2019)
27. Häser, F., Felderer, M., Breu, R.: Software paradigms, assessment types and non-functional requirements in model-based integration testing: a systematic literature review. In: Proceedings of the 18th International Conference on Evaluation and Assessment in Software Engineering, p. 29. ACM (2014)
28. Incki, K., Ari, I.: Democratization of runtime verification for internet of things. Comput. Electr. Eng. 68, 570–580 (2018). https://doi.org/10.1016/j.compeleceng.2018.05.007
29. Incki, K., Ari, I.: Observing interoperability of IoT systems through model-based testing. In: Fortino, G., et al. (eds.) InterIoT/SaSeIoT 2017. LNICST, vol. 242, pp. 60–66. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-93797-7_8
30. Kaiser, A., Hackel, S.: Standards-based IoT testing with open-source test equipment. In: 2019 IEEE 19th International Conference on Software Quality, Reliability and Security Companion (QRS-C), pp. 435–441 (2019)
31. Khan, M.A., Salah, K.: IoT security: review, blockchain solutions, and open challenges. Future Gener. Comput. Syst. 82, 395–411 (2018)
32. Kim, H., et al.: IoT-TaaS: towards a prospective IoT testing framework. IEEE Access 6, 15480–15493 (2018). https://doi.org/10.1109/access.2018.2802489
33. Kiruthika, J., Khaddaj, S.: Software quality issues and challenges of internet of things. In: 2015 14th International Symposium on Distributed Computing and Applications for Business Engineering and Science (DCABES), pp. 176–179. IEEE (2015)
34. Kitchenham, B., Charters, S.: Guidelines for performing systematic literature reviews in software engineering (2007)
35. Larsen, K.G., Legay, A., Mikučionis, M., Nielsen, B., Nyman, U.: Compositional testing of real-time systems. In: Katoen, J.-P., Langerak, R., Rensink, A. (eds.) ModelEd, TestEd, TrustEd. LNCS, vol. 10500, pp. 107–124. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-68270-9_6
36. Lima, B.: Automated scenario-based integration testing of time-constrained distributed systems. In: 2019 12th IEEE Conference on Software Testing, Validation and Verification (ICST). IEEE, April 2019. https://doi.org/10.1109/icst.2019.00060
37. Lu, A., Fang, W., Xu, C., Cheung, S.C., Liu, Y.: Data-driven testing methodology for RFID systems. Front. Comput. Sci. China 4(3), 354–364 (2010). https://doi.org/10.1007/s11704-010-0387-6
38. Makedonski, P., et al.: Test descriptions with ETSI TDL. Softw. Qual. J. 27(2), 885–917 (2018). https://doi.org/10.1007/s11219-018-9423-9
39. Medhat, N., Moussa, S., Badr, N., Tolba, M.F.: Testing techniques in IoT-based systems. In: 2019 Ninth International Conference on Intelligent Computing and Information Systems (ICICIS), pp. 394–401 (2019)
40. Nie, C., Leung, H.: A survey of combinatorial testing. ACM Comput. Surv. (CSUR) 43(2), 11 (2011)
Interoperability and Integration Testing Methods for IoT Systems
M. Bures et al.
FRed: Conditional Model Checking via Reducers and Folders

Dirk Beyer¹ and Marie-Christine Jakobs¹,²

¹ LMU Munich, Munich, Germany
² Department of Computer Science, TU Darmstadt, Darmstadt, Germany
Abstract. There are many hard verification problems that are currently only solvable by applying several verifiers that are based on complementing technologies. Conditional model checking (CMC) is a successful solution for cooperation between verification tools. In CMC, the first verifier outputs a condition describing the state space that it successfully verified. The second verifier uses the condition to focus its verification on the unverified state space. To use arbitrary second verifiers, we recently proposed a reducer-based approach, which constructs a conditional verifier from a reducer and a (non-conditional) verifier: the reducer translates the condition into a residual program that describes the unverified state space, and the verifier can be any off-the-shelf verifier (that does not need to understand conditions). Until now, only one reducer was available. But for a systematic investigation of the reducer concept, we need several reducers. To fill this gap, we developed FRed, a Framework for exploring different REDucers. Given an existing reducer, FRed allows us to derive various new reducers, which differ in their trade-off between the size and the precision of the residual program. For our experiments, we derived seven different reducers. Our evaluation on the largest and most diverse public collection of verification problems shows that we need all seven reducers to solve hard verification tasks that were not solvable before with the considered verifiers.
1 Introduction
Due to the undecidability of software verification, some hard verification tasks cannot be solved by a single verifier alone, even after more than 40 years of research on automatic software verification [31]. To increase the number of solvable tasks, one needs to combine the strengths of distinct verifiers. Several combinations [3,8,9,20,23,25,32,33,37] were proposed in the literature. One promising combination is conditional model checking (CMC) [9], which, unlike others, neither modifies the programs nor requires the combined techniques to know about each other.
Replication package available on Zenodo [12]. Funded in part by the Deutsche Forschungsgemeinschaft (DFG) – 418257054 (Coop).
© The Author(s) 2020. F. de Boer and A. Cerone (Eds.): SEFM 2020, LNCS 12310, pp. 113–132, 2020. https://doi.org/10.1007/978-3-030-58768-0_7
[Figure 1 depicts the construction as a data-flow diagram: the condition-generating verifier v1 analyzes the program and outputs a condition; the fold-reducer r, which consists of a reducer followed by a folder, takes the program and the condition and produces a residual program; the verifier v2 analyzes the residual program and reports the result. The fold-reducer r together with v2 forms the conditional verifier.]

Fig. 1. Reducer-based CMC configuration (v2 ◦ r) ◦ v1 with FRed
CMC works as follows: If the first verifier gives up on the verification task, it outputs a condition that describes the state space that it successfully verified. The (conditional) second verifier uses the condition of the first verifier to focus its work on the still-unverified state space. Note that one can easily extend the CMC approach to more than two verifiers by letting all verifiers generate conditions.

To easily construct conditional verifiers (i.e., verifiers that understand conditions) from existing off-the-shelf verifiers, a recent work proposed the concept of reducer-based CMC [13]. Instead of making a verifier aware of conditions, reducer-based CMC constructs a conditional verifier from an existing verifier by plugging a reducer in front of the verifier. The reducer is a preprocessor that, given the original program and the condition as input, translates the condition into a (residual) program, a format that is understandable by classic verifiers.

The construction of a reducer, and especially the proof of its soundness, is complex, and so far only one reducer exists. However, this reducer's translation is very precise and may therefore construct programs that are orders of magnitude larger than the original program. To solve this problem, and to support systematic experimentation with different reducers, we propose the formal framework FRed, which streamlines and simplifies the construction of new reducers from existing ones. Its underlying idea is to construct a new reducer r = F ◦ R, a so-called fold-reducer, by sequentially composing an existing reducer R with a folder F. A folder implements a heuristic that specifies how to modify the program constructed by the existing reducer. More concretely, a folder defines which program locations of the program constructed by the existing reducer are collapsed into a new location and, thus, specifies how to coarsen the program.
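To illustrate, the collapsing performed by a folder can be pictured as a graph quotient over the residual program's control-flow edges. The following is a minimal Python sketch, not FRed's actual implementation: the edge encoding and the heuristic fold_to_original are illustrative assumptions, and whether a coarsening preserves behavior depends on the chosen heuristic.

```python
# Illustrative sketch of a fold-reducer's folding step (not FRed's code).
# A residual program is modeled as a set of control-flow edges (loc, op, loc'),
# where a residual location is assumed to be a pair (original location, context).

def fold(residual_edges, heuristic):
    """Collapse residual locations according to a folding heuristic."""
    return {(heuristic(src), op, heuristic(dst))
            for (src, op, dst) in residual_edges}

def fold_to_original(loc):
    """Example heuristic: merge all copies of the same original location."""
    original_location, _context = loc
    return original_location

residual = {
    (("l0", 0), "N != 0", ("l1", 0)),   # two copies of the edge l0 -> l1 ...
    (("l0", 1), "N != 0", ("l1", 1)),   # ... produced by a precise reducer
    (("l1", 0), "i = -1", ("l2", 0)),
}
folded = fold(residual, fold_to_original)
print(sorted(folded))   # the two l0 -> l1 copies collapse into one edge
```

Coarser heuristics yield smaller programs but may re-include already-verified behavior; the identity heuristic keeps the precise residual program unchanged.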
However, to avoid false alarms, the specified coarsening must not add new program behavior. New conditional verifiers CV can be constructed with FRed according to the equation CV = V ◦ (F ◦ R), where r = (F ◦ R) is the fold-reducer composed of the existing reducer R and a folder F, V is an arbitrary verifier, and ◦ is the sequential composition operator. Figure 1 illustrates this construction in the context of reducer-based CMC. We used this construction to build 49 conditional verifiers, each combining the already existing reducer, one of seven folders, and one of seven verifiers. Our large experimental study revealed that using several reducers (with different folders) can make the overall verification more effective.

Contributions. We make the following contributions:

– We introduce FRed, a framework for the composition of new reducers from existing reducers and folding heuristics.
[Figure: control-flow automaton of an example program, with edges labeled N!=0, ¬(N!=0), and i=-1 leaving the initial location l0.]